id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2309.13399 | MBIR Training for a 2.5D DL network in X-ray CT | In computed tomographic imaging, model based iterative reconstruction methods
have generally shown better image quality than the more traditional, faster
filtered backprojection technique. The cost we have to pay is that MBIR is
computationally expensive. In this work we train a 2.5D deep learning (DL)
network to mimic MBIR quality image. The network is realized by a modified
Unet, and trained using clinical FBP and MBIR image pairs. We achieve the
quality of MBIR images faster and with a much smaller computation cost.
Visually and in terms of noise power spectrum (NPS), DL-MBIR images have
texture similar to that of MBIR, with reduced noise power. Image profile plots,
NPS plots, standard deviation, etc. suggest that the DL-MBIR images result from
a successful emulation of an MBIR operator. | Obaidullah Rahman, Madhuri Nagare, Ken D. Sauer, Charles A. Bouman, Roman Melnyk, Brian Nett, Jie Tang | 2023-09-23T15:21:28Z | http://arxiv.org/abs/2309.13399v1 | # MBIR Training for a 2.5D DL network in X-ray CT
###### Abstract
In computed tomographic imaging, model based iterative reconstruction methods have generally shown better image quality than the more traditional, faster filtered backprojection technique. The cost we have to pay is that MBIR is computationally expensive. In this work we train a 2.5D deep learning (DL) network to mimic MBIR-quality images. The network is realized by a modified Unet, and trained using clinical FBP and MBIR image pairs. We achieve the quality of MBIR images faster and with a much smaller computation cost. Visually and in terms of noise power spectrum (NPS), DL-MBIR images have texture similar to that of MBIR, with reduced noise power. Image profile plots, NPS plots, standard deviation, etc. suggest that the DL-MBIR images result from a successful emulation of an MBIR operator.
## 1 Introduction
X-ray computed tomography has become an important tool in applications such as healthcare diagnostics, security inspection, and non-destructive testing. The industry-preferred method of reconstruction is filtered backprojection (FBP), whose popularity is owed to its speed and low computational cost. Iterative methods such as model-based iterative reconstruction (MBIR) generally have better image quality than FBP and do better in limiting image artifacts [1, 2].
MBIR is a computationally expensive and potentially slow reconstruction method since it entails repeated forward projection of the estimated image and back projection of the sinogram residual error. Even with fast GPUs becoming the norm, MBIR may take minutes compared to an FBP reconstruction that can be performed in seconds. The computational cost and reconstruction time have been deterrents to wide adoption of MBIR.
In recent years, deep learning has made serious inroads into CT applications. It is applied in the sinogram domain, the image domain, and sometimes both. It has been applied to low-signal correction [3], image denoising [4, 5], and metal artifact reduction [6].
Ziabari et al. [7] showed that a 2.5D deep neural network, with proper training, can effectively learn a mapping from an FBP image to MBIR. In this paper we expand on their work and study the characteristics of the output of networks trained to simulate MBIR with a highly efficient neural network implementation.
## 2 Methods
We first train a deep neural network which, following [7], we call DL-MBIR. Our aim is to train the network to closely approximate MBIR images from FBP images. The training inputs are FBP images and the targets are the MBIR images reconstructed from the same data. Let \(X_{FBP}\) be the input to the network, \(X_{MBIR}\) the target, and let \(\sigma\) represent a hypothetical mapping such that \(\sigma:X_{MBIR}\to X_{FBP}\). Let \(f_{DL\_MBIR_{Z}}\) be the DL neural network with \(Z\) input channels. During the training phase:
\[\hat{f}_{DL\_MBIR_{Z}}=\operatorname*{argmin}_{f_{DL\_MBIR_{Z}}}\big{\|}f_{DL\_MBIR_{Z}}(X_{FBP})-X_{MBIR}\big{\|}_{2} \tag{1}\]
\(f_{DL\_MBIR_{Z}}\) can be thought of as the inverse of \(\sigma\), i.e., \(f_{DL\_MBIR_{Z}}=\sigma^{-1}\). During training, the weights of \(f_{DL\_MBIR_{Z}}\) are randomly initialized and then adjusted over many iterations using error backpropagation. Training stops once the number of iterations is exhausted or the convergence criterion is met.
For training, 4 pairs of clinical exams were selected, each pair consisting of an FBP image volume and the corresponding MBIR volume. Each image volume had about 200 slices, resulting in about 800 training image pairs. A modified version of Unet [8] was chosen as the network architecture. The learning rate was set to 0.0001 and 2 GPUs were used. Training and inference were done in TensorFlow/Keras. Training was run for 300 epochs. Three versions of DL-MBIR were trained: _DL-MBIR\({}_{1}\)_ with 1-channel inputs (i.e., a single axial slice), _DL-MBIR\({}_{3}\)_ with 3-channel inputs, and _DL-MBIR\({}_{5}\)_ with 5-channel inputs. Having adjacent slices in the input provides additional information to the DL network [7] and helps train it better. Figure 1 shows the DL architecture and the training setup.
Figure 1: DL architecture and training setup. This is a modified form of Unet. Layers on the left denote the contracting path, where the image is compressed toward the latent space while the number of feature channels increases. Layers on the right denote the expanding path, where features are decompressed from the latent space toward the corrected image while the number of feature channels decreases.
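The training pipeline described above can be summarized in a short sketch. This is a minimal, illustrative Keras outline of a modified U-Net trained on FBP/MBIR slice pairs with the L2 loss of Eq. (1); the learning rate follows the text, but the depth, filter counts, batch size, and data pipeline are assumptions rather than the paper's exact configuration, and `Z` selects the 1-, 3-, or 5-channel (2.5D) input.

```python
# Minimal sketch (not the authors' exact network): a small modified U-Net that maps
# Z-channel FBP slabs to single-slice MBIR-like targets, trained with the L2 loss of Eq. (1).
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_dl_mbir(Z=3, H=512, W=512):
    inp = layers.Input(shape=(H, W, Z))                       # Z adjacent axial FBP slices (2.5D input)
    # Contracting path: spatial size shrinks, feature count grows
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)
    # Expanding path with skip connections: features decompressed back to image space
    u2 = layers.UpSampling2D()(b)
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(layers.Concatenate()([u2, c2]))
    u1 = layers.UpSampling2D()(c3)
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(layers.Concatenate()([u1, c1]))
    out = layers.Conv2D(1, 1, padding="same")(c4)              # estimated MBIR-like center slice
    return Model(inp, out)

model = build_dl_mbir(Z=3)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")  # L2 objective
# x_fbp: (N, H, W, Z) slabs of adjacent FBP slices; y_mbir: (N, H, W, 1) matching MBIR slices
# model.fit(x_fbp, y_mbir, batch_size=4, epochs=300)
```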
## 3 Results
A cardiac FBP exam was run through the trained DL-MBIR networks. Inference time for each network was between 4 and 6 seconds, increasing with the number of input channels. The MBIR reconstruction of the same exam was also available. Figure 3 shows a comparison, for 4 slices - (a), (b), (c), and (d) - in the image volume, among the MBIR image, the FBP image, and the outputs of \(DL\)-\(MBIR_{Z}\), where \(Z=1,~{}3,~{}5\). Figure 4 shows, for the same slices, the differences between each image and the MBIR image. Figure 5 shows a profile plot comparing the \(DL\)-\(MBIR_{Z}\) and FBP images w.r.t. the MBIR images.
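The NPS comparison referenced here can be reproduced with the standard ensemble estimate from mean-subtracted regions of interest (ROIs). The sketch below is a generic illustration only: the ROI extraction, pixel sizes, and availability of repeated noise realizations are assumptions, not details taken from the paper.

```python
# Rough sketch of a 2D noise power spectrum (NPS) estimate from an ensemble of
# mean-subtracted ROIs, the metric used above to compare noise texture.
# Assumes `rois` has shape (n_rois, ny, nx) and pixel spacings dx, dy in mm.
import numpy as np

def nps_2d(rois, dx=0.5, dy=0.5):
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)      # remove the per-ROI mean (DC component)
    n, ny, nx = rois.shape
    dft = np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))    # center zero frequency
    nps = (dx * dy / (nx * ny)) * np.mean(np.abs(dft) ** 2, axis=0)
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))            # spatial frequencies (1/mm)
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=dy))
    return fx, fy, nps

# Example use (hypothetical arrays): compare FBP vs. DL-MBIR noise texture
# _, _, nps_fbp = nps_2d(fbp_rois); _, _, nps_dl = nps_2d(dl_mbir_rois)
```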
## 5 Conclusion
We trained a U-Net-based 2.5D DL network that effectively estimates MBIR results from FBP input images, at a computation cost significantly lower than that of MBIR. All metrics (NPS, PSNR, standard deviation, and profile plots) demonstrate that DL-MBIR images retain the key features of MBIR, including noise reduction and noise texture.
|
2306.00206 | Quantifying Representation Reliability in Self-Supervised Learning
Models | Self-supervised learning models extract general-purpose representations from
data. Quantifying the reliability of these representations is crucial, as many
downstream models rely on them as input for their own tasks. To this end, we
introduce a formal definition of representation reliability: the representation
for a given test point is considered to be reliable if the downstream models
built on top of that representation can consistently generate accurate
predictions for that test point. However, accessing downstream data to quantify
the representation reliability is often infeasible or restricted due to privacy
concerns. We propose an ensemble-based method for estimating the representation
reliability without knowing the downstream tasks a priori. Our method is based
on the concept of neighborhood consistency across distinct pre-trained
representation spaces. The key insight is to find shared neighboring points as
anchors to align these representation spaces before comparing them. We
demonstrate through comprehensive numerical experiments that our method
effectively captures the representation reliability with a high degree of
correlation, achieving robust and favorable performance compared with baseline
methods. | Young-Jin Park, Hao Wang, Shervin Ardeshir, Navid Azizan | 2023-05-31T21:57:33Z | http://arxiv.org/abs/2306.00206v2 | # Representation Reliability and
###### Abstract
Self-supervised pre-trained models extract general-purpose representations from data, and quantifying how reliable they are is crucial because many downstream models use these representations as input for their own tasks. To this end, we first introduce a formal definition of _representation reliability_: the representation for a given test input is considered to be reliable if the downstream models built on top of that representation can consistently generate accurate predictions for that test point. It is desired to estimate the representation reliability without knowing the downstream tasks a priori. We provide a negative result showing that existing frameworks for uncertainty quantification in supervised learning are not suitable for this purpose. As an alternative, we propose an ensemble-based method for quantifying representation reliability, based on the concept of _neighborhood consistency_ in the representation spaces across various pre-trained models. More specifically, the key insight is to use shared neighboring points as anchors to align different representation spaces. We demonstrate through comprehensive numerical experiments that our method is capable of predicting representation reliability with high accuracy.
## 1 Introduction
Self-supervised learning has opened the door to the development of general-purpose embedding functions, often referred to as _foundation models_, that can be used or fine-tuned for various downstream tasks (Jaiswal et al., 2020; Jing and Tian, 2020). These embedding functions are pre-trained on large corpora of different data modalities, spanning visual (Chen et al., 2020), textual (Brown et al., 2020), audio (Al-Tahan and Mohsenzadeh, 2021), and their combinations (Radford et al., 2021; Morgado et al., 2021), aimed at being general purpose and agnostic to the downstream tasks they may be utilized for. For instance, the recent surge in large pre-trained models such as CLIP (Radford et al., 2021) and ChatGPT (OpenAI, 2022) has resulted in the development of many prompt-based or dialogue-based downstream use cases, none of which are known a priori when the pre-trained model is being deployed.
Embedding functions learned through self-supervised learning do not always produce reliable outputs. For example, large language models can generate factually inaccurate information with a high level of confidence (Bommasani et al., 2021; Tran et al., 2022). With the increasing use of self-supervised learning to generate textual, visual, and audio content, unreliable embedding functions could have significant implications. Furthermore, given that these embedding functions are frequently employed as frozen backbones for various downstream use cases, adding more labeled downstream data may not improve the performance if the initial representation is unreliable. Therefore, _having notion(s) of **reliability/uncertainty** for such pre-trained models alongside their abstract embeddings would be a key enabler for their reliable deployment, especially in safety-critical settings_.
In this paper, we introduce a formal definition of representation reliability based on its impact on downstream tasks. Our definition pertains to a representation of a given test point produced by a pre-trained embedding function. If a variety of downstream tasks that build upon this representation consistently yield accurate results for the test point, we consider this representation reliable. Existing uncertainty quantification frameworks mostly focus on the supervised learning setting, where they rely on the consistency of predictions across various predictive models. We provide a counter-example showing that they cannot be directly applied to our context, as representations lack a ground truth for comparison. In other words, inconsistent predictions often indicate a high prediction uncertainty, but inconsistent representations do not necessarily imply unreliable representations. Hence, it is critical to align representation spaces in such a way that corresponding regions have consistent semantic meanings before comparing them.
To this end, we propose an ensemble-based approach for estimating representation reliability. We prove that a test point has a reliable representation if it has a reliable neighbor which remains consistently close to the test point, across multiple representation spaces generated by different embedding functions. Based on this theoretical insight, we select a set of embedding functions and reference data (e.g., data used for training the embedding functions). We then compute the number of consistent neighboring points in the reference data to estimate the representation reliability. The underlying reasoning is that a test point with more consistent neighbors is more likely to have a reliable and consistent neighbor. This reliable and consistent neighbor can be used to align different representation spaces that are generated by different embedding functions.
We conduct extensive numerical experiments to validate our approach and compare it with state-of-the-art out-of-distribution (OOD) detection measures. The results indicate that our approach consistently captures the reliability of representations and outperforms the baselines in terms of correlation with representation reliability. Moreover, we observe that our approach is more robust than the baselines with respect to the geometry of the representation space. Specifically, regardless of using Euclidean or cosine distance to compute consistent neighboring points, our approach shows a high correlation with representation reliability. On the other hand, the choice of distance measures significantly impacts the performance of baseline approaches and can even result in a negative correlation with representation reliability.
In summary, our main contributions are:
* We present a formal definition of representation reliability, which is measured by its impact on various downstream tasks. To the best of our knowledge, this is the first comprehensive study to investigate uncertainty in representation space.
* We provide a counter-example, showing that existing supervised learning frameworks for studying uncertainty cannot be directly applied to estimate representation reliability.
* We prove that identifying an anchor point that aligns various representation spaces is crucial in estimating representation reliability.
* Based on our theoretical findings, we introduce an ensemble-based approach that uses neighborhood consistency to measure representation reliability.
* We conduct comprehensive numerical experiments, showing that our metric can consistently capture representation reliability and outperforms baseline methods.
### Related Works
**Uncertainty Quantification in Supervised Learning.** Existing work on uncertainty quantification mostly focused on supervised learning settings. For example, Bayesian inference quantifies uncertainty by placing a prior distribution over model parameters, updating this prior distribution with observed data to obtain a posterior distribution, and examining the inconsistency of predictions derived from the posterior distribution (Neal, 1996; MacKay, 1992; Kendall and Gal, 2017; Depeweg et al., 2018). Since the posterior distribution may not have an analytical form, many approximating approaches have been introduced, including Monte Carlo dropout (Gal and Ghahramani, 2016), deep ensembles (Osband et al., 2018; Lakshminarayanan et al., 2017; Wen et al., 2020), and Laplace approximation (Daxberger et al., 2021; Sharma et al., 2021). In this paper, we focus on quantifying the uncertainty of representations and prove that standard supervised-learning frameworks cannot be directly applied to investigate representation uncertainty (see Section 3.2 for more details).
**Novelty Detection and Representation Reliability.** Self-supervised learning is increasingly used for out-of-distribution (OOD) detection. This approach involves training an embedding function and then computing an OOD score for a new test point based on its distance from the training data in the representation space (Lee et al., 2018; van Amersfoort et al., 2020; Tack et al., 2020; Mirzae et al., 2022). It is important to note that representation reliability and OOD detection are different concepts. Being in-distribution does not necessarily guarantee the reliability of a sample's representation, and vice versa. Recently, Ardeshir and Azizan (2022) introduced several empirical measures for quantifying representation uncertainty. To compare with this line of work, we conduct comprehensive numerical experiments (Section 4). The results suggest that our approach is more favorably correlated with the representation reliability compared with state-of-the-art OOD detection measures and the empirical metrics proposed in Ardeshir and Azizan (2022).
**Uncertainty-aware Representation Learning.** There is a growing body of research aimed at training robust self-supervised models using an embedding function that maps input points to distributions in the representation space as opposed to a single point (Vilnis and McCallum, 2015; Neelakantan et al., 2015; Karaletsos et al., 2015; Bojchevski and Gunnemann, 2017; Oh et al., 2018; Chen et al., 2020; Wu and Goodman, 2020; Zhang et al., 2021). These methods modify the neural network's architecture and introduce alternative training schemes. For example, the approach proposed by Zhang et al. (2021) requires an additional output (i.e., a temperature parameter), while the approach by Oh et al. (2018) necessitates the network to output means and variances of a mixture of Gaussian distributions. However, these methods primarily focus on the training scheme and adopt relatively simple (and heuristic) approaches for quantifying uncertainty. In contrast, we aim to avoid making any assumptions about the training process of the embedding function, while only necessitating black-box access to it. Furthermore, we provide a theoretical analysis of our method and explore the impact of representation reliability on the performance of downstream tasks.
We provide a more in-depth discussion about related works in Appendix E.
## 2 Background: Uncertainty Quantification in Supervised Learning
We recall a Bayesian-inference view of uncertainty in supervised learning. Consider a class of predictive models \(\mathcal{F}\), where each model \(f\) generates a predictive probability \(p(y|\mathbf{x},f)\) for an input variable \(\mathbf{x}\). In the Bayesian framework, a prior probability distribution \(p(f)\) is first introduced over \(\mathcal{F}\) and a posterior distribution is learned given a training dataset \(\mathcal{D}\): \(p(f|\mathcal{D})\propto p(\mathcal{D}|f)\cdot p(f)\). For a new test point \(\mathbf{x}^{*}\), its posterior predictive distribution is obtained by averaging the predictive probabilities over models:
\[p(\hat{y}^{*}|\mathbf{x}^{*},\mathcal{D})=\int p(\hat{y}^{*}|\mathbf{x}^{*},f)p(f| \mathcal{D})\text{d}f. \tag{1}\]
Since the posterior distribution does not have an analytical expression for complex neural networks, Monte Carlo approaches are often used to approximate (1). A special instance is deep ensembles (Lakshminarayanan et al., 2017), which train a set of neural networks with different random initializations of the learnable parameters \(\{\theta_{i}\}_{i=1}^{M}\). Assuming a uniform prior over functions and sampling models well-fitted to \(\mathcal{D}\) (i.e., treating the likelihood of different models as equal and dominant), the posterior predictive distribution can be approximated by
\[p(\hat{y}^{*}|\mathbf{x}^{*},\mathcal{D})\approx\frac{1}{M}\sum_{i=1}^{M}\big{[} \delta(\hat{y}^{*}-f_{\theta_{i}}(\mathbf{x}^{*}))\big{]}. \tag{2}\]
Finally, the uncertainty can be assessed by the _inconsistency_ (e.g., measured by variance) of the output across sampled functions:
\[U_{\text{supervised}}(\mathbf{x}^{*})\triangleq\mathsf{Var}\left(\hat{y}^{*}\mid\mathbf{x}^{*},\mathcal{D}\right)\approx\mathsf{Var}_{i\sim[M]}\left(f_{\theta_{i}}(\mathbf{x}^{*})\right). \tag{3}\]
When different models produce significantly different predictions, it suggests a higher level of uncertainty. Conversely, if multiple independently trained models map an input variable to similar outputs, the prediction can be considered certain.
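As a concrete illustration of Eqs. (2)-(3), the following sketch computes the deep-ensemble approximation of the predictive variance. The model list and inputs are placeholders, not a specific architecture from the paper.

```python
# Sketch of the deep-ensemble uncertainty in Eq. (3): the variance of point
# predictions f_{theta_i}(x*) across M independently trained models.
import numpy as np

def ensemble_uncertainty(models, x_star):
    # models: list of M callables, each mapping an input batch to predictions
    preds = np.stack([m(x_star) for m in models], axis=0)   # shape (M, batch, ...)
    mean = preds.mean(axis=0)                                # Monte Carlo estimate behind Eq. (2)
    var = ((preds - mean) ** 2).mean(axis=0)                 # U_supervised(x*) in Eq. (3)
    return mean, var
```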
## 3 Estimating Representation Reliability
In this section, we introduce a formal definition of representation reliability by examining its impact on downstream tasks. We provide a negative result showing that existing supervised learning uncertainty quantification frameworks (described in Section 2) are inadequate for capturing representation reliability. We present an ensemble-based method that examines the consistency of neighboring points in the representation space. Numerical experiments show that our method consistently exhibits a high correlation with representation reliability.
### Representation Reliability
Consider an embedding function \(h:\mathcal{X}\rightarrow\mathcal{Z}\) that maps a data point (e.g., an image) to the representation space \(\mathcal{Z}\). Here the representation space can be either the \(d\)-dimensional real space \(\mathbb{R}^{d}\) or the unit hyper-sphere \(\mathcal{S}^{d-1}\). Below, we provide a formal definition of _representation reliability_ by analyzing its impact on a collection of downstream tasks. Intuitively, a reliable representation should consistently lead to more accurate results across these downstream tasks.
**Definition 1**.: Suppose there is a collection of downstream prediction tasks \(\mathcal{T}\). Each task builds upon a common embedding function \(h\) and learns a predictive model \(g_{t}\circ h:\mathcal{X}\rightarrow\mathcal{Y}_{t}\). The representation reliability for a given test point \(\mathbf{x}^{*}\in\mathcal{X}\) is defined as
\[\mathsf{Reli}(\mathbf{x}^{*};h)\triangleq\mathbb{E}_{t\sim\mathcal{T}}\left[ \mathsf{Perf}\left(g_{t}\circ h(\mathbf{x}^{*}),y_{t}^{*}\right)\right] \tag{4}\]
where \(y_{t}^{*}\in\mathcal{Y}_{t}\) denotes the ground-truth label of \(\mathbf{x}^{*}\) under task \(t\), \(\mathsf{Perf}\) is a performance metric (e.g., accuracy or variance of predictive outputs), and the expectation is taken over downstream task \(t\) chosen uniformly at random from \(\mathcal{T}\).
The above definition assumes that the set of downstream tasks and ground-truth labels are accessible. However, it is important to note that in practical scenarios, this may not always be the case. Next, we discuss how to estimate the representation reliability solely based on the properties of the representations themselves, without relying on access to the downstream labels.
### First Attempt: Representation Consistency
Our first attempt is directly applying standard supervised-learning techniques (see Section 2) to estimate representation reliability. Recall that if multiple predictive models give different predictions for the same test point, then it is likely that their predictions are uncertain. One may wonder if they could apply the same idea to estimate representation reliability. We present a negative result, showing that even if different embedding functions produce completely different representations, their downstream predictions can still be consistent.
Figure 1: Illustration of our proposed method. Let \(\mathcal{Z}_{i}\) and \(\mathcal{Z}_{j}\) be representation spaces determined by embedding function \(h_{i}\) and \(h_{j}\). For a downstream task \(t\), a reliable neighboring point \(\mathbf{x}^{r}\) serves as an anchor for comparing different representations \(\mathbf{z}_{i}^{*}=h_{i}(\mathbf{x}^{*})\) and \(\mathbf{z}_{j}^{*}=h_{j}(\mathbf{x}^{*})\) of the test point \(\mathbf{x}^{*}\).
**Theorem 1**.: _Suppose the representation reliability in Definition 1 is evaluated via downstream regression tasks. Each task builds upon an embedding function \(h_{i}\) and applies the empirical risk minimization to find an optimal linear head \(g_{i,t}\). For any given \(A>0\), there exists a set of embedding functions \(h_{1},\cdots,h_{M}\) such that \(\mathsf{Var}_{i\sim[M]}\left(h_{i}(\mathbf{x}^{*})\right)\geq A\) but \(\mathsf{Var}_{i\sim[M]}\left(g_{i,t}\circ h_{i}(\mathbf{x}^{*})\right)=0\) for any downstream task \(t\). Here \(\mathsf{Var}_{i\sim[M]}\left(h_{i}(\mathbf{x}^{*})\right)\triangleq\mathbb{E}_{i \sim[M]}\left[\left\|h_{i}(\mathbf{x}^{*})-\mathbb{E}_{i^{\prime}\sim[M]}\left[h_{ i^{\prime}}(\mathbf{x}^{*})\right]\right\|_{2}^{2}\right]\)._
Proof.: See Appendix A.1.
The key insight behind our proof is that an input point's representation may not be unique (e.g., rotated spaces are in fact equivalent) and there is no ground truth to compare with. In other words, even if different embedding functions assign distinct representations to the same test point, downstream heads built on these embedding functions can also vary, ultimately leading to similar predictions.
### Proposed Method: Neighborhood Consistency
Unlike in supervised learning, where inconsistent predictions often indicate high uncertainty, having diverse embeddings does not necessarily imply that the representations are unreliable. This is because the representation space is different from the output space, and it does not have a ground truth to compare with. To address this issue, we propose the idea of using an "anchor" point to align different representation spaces. The anchor point serves as a bridge that transforms different representation spaces into the same space. We can formalize this intuition more rigorously in the following theorem. It states that if a test point has a reliable neighborhood consistently around it across all representation spaces, then its downstream predictions are certain.
**Theorem 2**.: _For a test point \(\mathbf{x}^{*}\), suppose there exists a **reliable** and **consistent** neighbor \(\mathbf{x}^{r}\) across all embedding functions \(h_{1},\cdots,h_{M}\), satisfying_
\[|y^{r}_{t}-g_{i,t}\circ h_{i}(\mathbf{x}^{r})| \leq\epsilon_{r}\;,\;\;\forall i\in[M] (\text{Reliability of }\mathbf{x}^{r}) \tag{5}\] \[\|h_{i}(\mathbf{x}^{r})-h_{i}(\mathbf{x}^{*})\|_{2} \leq\epsilon_{nb}\;,\;\forall i\in[M] (\text{Consistent neighbor}) \tag{6}\]
_where \(y^{r}_{t}\) is the ground-truth label of \(\mathbf{x}^{r}\) under downstream task \(t\); and \(g_{i,t}\) is the downstream classifier built upon embedding function \(h_{i}\) under task \(t\). Moreover, suppose \(g_{i,t}(\mathbf{z})=a(\mathbf{w}^{T}_{i,t}\mathbf{z}+b_{i,t})\) with a normalized weight \(\|\mathbf{w}_{i,t}\|_{2}=1\) and an 1-Lipschitz continuous activation function \(a(\cdot)\) (e.g., sigmoid and identity). Then for any downstream task \(t\):_
\[\mathsf{Var}_{i\sim[M]}\left(g_{i,t}\circ h_{i}(\mathbf{x}^{*})\right) \leq 2\left(1-\frac{1}{M}\right)(\epsilon_{nb}+\epsilon_{r})^{2}. \tag{7}\]
Proof.: See Appendix A.3.
The above theorem suggests that a reliable and consistent neighbor can serve as an anchor point for aligning different representation spaces (see Appendix B for more discussions). Nevertheless, in practice, identifying such a neighbor may be difficult without prior knowledge of the downstream tasks. Instead of searching for a single reliable and consistent point as the anchor, we draw a set of reference points, denoted as \(\mathbf{X}_{\text{ref}}=\{\mathbf{x}^{(l)}\}_{l=1}^{n}\), which can be the pre-training data for the embedding function. We then compute the number of consistent neighboring points in \(\mathbf{X}_{\text{ref}}\) and use it to estimate the representation reliability. The rationale behind this approach is that a test point with more consistent neighbors is more likely to have a reliable and consistent neighbor. This motivates the following definition for estimating representation reliability.
**Proposed Method:** Given a set of embedding functions \(h_{1},\cdots,h_{M}\) and a reference dataset \(\mathbf{X}_{\text{ref}}=\{\mathbf{x}^{(l)}\}_{l=1}^{n}\), we define **Neighborhood Consistency** (NC) of a test point \(\mathbf{x}^{*}\) as
\[\mathsf{NC}_{k}(\mathbf{x}^{*})=\frac{1}{M^{2}}\sum_{i<j}\mathsf{Sim}\left(k\text{- NN}_{i}\big{(}\mathbf{x}^{*}\big{)},\;k\text{-NN}_{j}\big{(}\mathbf{x}^{*}\big{)}\right) \tag{8}\]
where \(k\text{-NN}_{i}(\mathbf{x}^{*})\) is the index set of \(k\) nearest neighbors of \(h_{i}(\mathbf{x}^{*})\) among \(\{h_{i}(\mathbf{x})\mid\mathbf{x}\in\mathcal{D}_{\text{ref}}\}\); and \(\mathsf{Sim}\) is a measure of similarity between sets. We use _Jaccard Similarity_ as the metric for set similarity \(\mathsf{Sim}(\cdot,\cdot)\) inspired by a graph representation interpretation of our method (see Appendix C for more details).
We can use different distance metrics to find the nearest neighbors, depending on the geometry of the representation space. For example, we can use cosine distance for embeddings in a unit hyper-sphere and Euclidean distance for embeddings in real space.
The value of \(k\) involves a trade-off between incorporating more consistent neighbors and increasing the overall reliability of those neighbors. If we choose a large value of \(k\), it is more likely to find a consistent anchor point, but this anchor point may be less reliable. Conversely, if we choose a small value of \(k\), it may filter out a desirable anchor point but the reliability of the consistent neighbors will increase. To address this issue, we will conduct an ablation study to search for the optimal \(k\) in the next section.
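A direct implementation of Eq. (8) only needs the \(k\)-nearest-neighbor index sets of the test point in each embedding space and their pairwise Jaccard similarities. The sketch below is an illustration in plain NumPy: the embedding arrays are assumed to be precomputed, cosine or Euclidean distance can be selected, and the \(1/M^{2}\) normalization follows Eq. (8).

```python
# Sketch of neighborhood consistency (Eq. 8): Jaccard similarity of k-NN index sets
# of a test point across M embedding spaces, averaged over pairs with the 1/M^2 factor.
import numpy as np

def knn_indices(z_ref, z_star, k, metric="cosine"):
    if metric == "cosine":
        a = z_ref / np.linalg.norm(z_ref, axis=1, keepdims=True)
        b = z_star / np.linalg.norm(z_star)
        d = 1.0 - a @ b                          # cosine distance to every reference point
    else:                                        # Euclidean
        d = np.linalg.norm(z_ref - z_star, axis=1)
    return set(np.argsort(d)[:k])

def neighborhood_consistency(embeddings_ref, embeddings_star, k=100, metric="cosine"):
    # embeddings_ref[i]: (n, d) reference embeddings under h_i; embeddings_star[i]: (d,) test embedding
    M = len(embeddings_ref)
    nn_sets = [knn_indices(embeddings_ref[i], embeddings_star[i], k, metric) for i in range(M)]
    score = 0.0
    for i in range(M):
        for j in range(i + 1, M):
            inter = len(nn_sets[i] & nn_sets[j])
            union = len(nn_sets[i] | nn_sets[j])
            score += inter / union               # Jaccard similarity between k-NN index sets
    return score / (M ** 2)                      # normalization used in Eq. (8)
```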
## 4 Numerical Experiments
### Experiment Setup
**Embedding function.** We use the SimCLR approach (Chen et al., 2020) to pre-train ResNet-18 and ResNet-50 models (He et al., 2016) as the embedding functions, using a single NVIDIA V100 GPU. Moreover, we conducted our experiments on two pre-training datasets, CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), which consist of 50,000 training points and 10,000 test points. During the pre-training stage, we did not utilize any class label information. To construct the ensemble, we trained a total of \(M=10\) models on the same dataset with different initializations, following Lakshminarayanan et al. (2017). The code will be made available upon publication.
**Downstream task.** Recall that our definition of representation reliability (Definition 1) depends on a set of downstream tasks. To optimize the utilization of the original dataset's multi-class labels and reduce uncertainty arising from the downstream tasks themselves, we construct a set of binary classification tasks. Specifically, we define a downstream dataset \(\mathbf{X}_{\text{down}}=\mathbf{X}_{1}\cup\dots\cup\mathbf{X}_{C}\), where \(C\) denotes the total number of classes in the dataset, and \(\mathbf{X}_{i}\) contains the data with class label \(i\). The set of downstream tasks \(\mathcal{T}\) is composed of binary classification tasks that determine whether a data point's label is \(i\) or \(j\), where \(i\neq j\) and \(i,j\in[C]\), assuming the data belongs to \(\mathbf{X}_{i}\cup\mathbf{X}_{j}\). As a result, the total number of tasks is \(|\mathcal{T}|=C(C-1)/2\), with each data point being evaluated in \((C-1)\) tasks. We measure the performance of each downstream task based on the accuracy score: \(\mathsf{Perf}(\hat{y},y)=-\log|\hat{y}-y|\). Finally, we compute the representation reliability of a test point by averaging its values across multiple embedding functions.
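A hedged sketch of how the reliability in Definition 1 can be computed under this setup is shown below: for each class pair \((i,j)\), fit a linear head on frozen embeddings, score each test point with \(\mathsf{Perf}(\hat{y},y)=-\log|\hat{y}-y|\), and average over the \((C-1)\) tasks that involve the point's class; the same routine is then averaged over embedding functions. The use of scikit-learn's logistic regression as the linear head and the numerical clipping are illustrative assumptions, not details specified in the paper.

```python
# Sketch of Definition 1 under the pairwise binary-task setup described above.
# Assumes z_train/z_test are frozen embeddings (from one h_i) and y_train/y_test are class labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reliability_scores(z_train, y_train, z_test, y_test, eps=1e-6):
    classes = np.unique(y_train)
    perf_sum = np.zeros(len(z_test))
    perf_cnt = np.zeros(len(z_test))
    for a in range(len(classes)):
        for b in range(a + 1, len(classes)):
            i, j = classes[a], classes[b]
            tr = np.isin(y_train, [i, j])
            te = np.isin(y_test, [i, j])
            head = LogisticRegression(max_iter=1000).fit(z_train[tr], (y_train[tr] == j).astype(int))
            p = head.predict_proba(z_test[te])[:, 1]             # predicted probability of class j
            y = (y_test[te] == j).astype(float)
            perf = -np.log(np.clip(np.abs(p - y), eps, None))     # Perf(y_hat, y) = -log|y_hat - y|
            perf_sum[te] += perf
            perf_cnt[te] += 1
    return perf_sum / np.maximum(perf_cnt, 1)                     # average over the (C-1) tasks per point
```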
**Baseline.** We compare our proposed method (\(\mathsf{NC}_{k}\) in (8)) with state-of-the-art OOD detection scores and the empirical measures proposed in Ardeshir and Azizan (2022); a brief sketch of two of these baselines follows the list below:
* \(\mathsf{AvgDist}_{k}\) (Tack et al., 2020; Mirzae et al., 2022): the average of the \(k\) minimum distances from the test point to the reference data in the representation space. A lower value of \(\mathsf{AvgDist}_{k}\) indicates higher reliability.
* \(\mathsf{Norm}\) (Tack et al., 2020): the \(L_{2}\) norm of the representation \(\|h(\mathbf{x}^{*})\|_{2}\). A higher value of \(\mathsf{Norm}\) indicates higher reliability.
* LL (Ardeshir and Azizan, 2022): the log-likelihood of the data estimated by Gaussian mixture models, trained on the reference data. A higher value of LL indicates higher reliability.
* Feature Variance (FV): representation consistency measured by \(\mathsf{Var}_{i\sim[M]}\left(h_{i}(\mathbf{x}^{*})\right)\), as described in Theorem 1; this measure is extended from Ardeshir and Azizan (2022) to account for ensembles. A lower value of FV indicates higher reliability.
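For illustration only (not the original implementations), the following minimal sketch computes two of these scores, \(\mathsf{AvgDist}_{k}\) and FV, from precomputed embeddings; the array shapes and the use of Euclidean distance are assumptions.

```python
# Sketch of two baseline scores used above: AvgDist_k (mean of the k smallest distances
# to the reference data) and FV (feature variance across the M embedding functions).
import numpy as np

def avg_dist_k(z_ref, z_star, k=1):
    # z_ref: (n, d) reference embeddings; z_star: (d,) test embedding
    d = np.linalg.norm(z_ref - z_star, axis=1)        # Euclidean; cosine distance can be swapped in
    return np.sort(d)[:k].mean()                      # lower value -> higher estimated reliability

def feature_variance(embeddings_star):
    # embeddings_star: (M, d) representations of the same test point under M embedding functions
    mean = embeddings_star.mean(axis=0)
    return np.mean(np.sum((embeddings_star - mean) ** 2, axis=1))  # Var_{i~[M]}(h_i(x*))
```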
To reduce the computational burden, we randomly select \(5,000\) (i.e., \(10\%\)) pre-training data as the reference dataset \(\mathcal{D}_{\text{ref}}\). We repeat our experiments \(5\) times to report an error bar for each evaluation metric.
All baselines, except FV, are based on a single embedding function. For a fair comparison, we additionally consider the (point-wise) ensemble average of each score over different embedding functions for \(\mathsf{AvgDist}_{k}\), \(\mathsf{Norm}\), and LL. We report the best results obtained either from the scores computed individually for each function or from the ensemble average for those baseline methods.
We choose \(k=100\) for \(\mathsf{NC}_{k}\) and \(k=1\) for \(\mathsf{AvgDist}_{k}\) (see Section 4.2.2 for an ablation study about the choice of \(k\)). For our method and \(\mathsf{AvgDist}_{k}\), we test both cosine distance and Euclidean distance
as options for distance metric. In a similar manner, LL and FV are evaluated using both unnormalized \(h(\mathbf{x})\in\mathbb{R}^{d}\) and normalized representations \(h(\mathbf{x})/\|h(\mathbf{x})\|_{2}\in\mathcal{S}^{d-1}\).
**Evaluation metric.** To evaluate our method and baselines, we employ two metrics: (i) the Pearson correlation coefficient (PCC) between the reliability and its estimate,1 and (ii) the area under the receiver operating characteristic curve (AUROC). For (ii), we call a data point's representation reliable (positive) if the average accuracy is \(>90\%\) (i.e., the average prediction error is \(\leq 0.1\)).
Footnote 1: The negative score is used for uncertainty measures, such as \(\mathsf{AvgDist}_{k}\) and FV.
### Main Results
We evaluate the effectiveness of our proposed method (i.e., \(\mathsf{NC}_{100}\)) in capturing representation reliability. Table 1 indicates that neighborhood consistency \(\mathsf{NC}_{100}\) demonstrates a strong correlation with representation reliability compared with baselines. Additionally, our method is more robust to the choice of distance metric, while the baselines suffer a significant performance reduction under different choices of distance metric. Finally, FV shows no correlation with representation reliability, rendering it an inappropriate measure for evaluating representation reliability. This observation aligns with the theoretical results in Section 3.2.
#### 4.2.1 Robustness to the Choice of Distance Metric
In order to examine the robustness of our method, we investigate the impact of different distance metrics. While Euclidean distance is a natural choice in \(\mathbb{R}^{d}\) where the representation lies, we also explore the applicability of cosine distance for our method and baselines. This is motivated by the fact that many recent self-supervised models, including SimCLR, are trained based on cosine similarity (i.e., the distance between the normalized vectors).
We observe that baseline methods, such as \(\mathsf{AvgDist}_{k}\) and LL, suffer from a significant performance reduction when paired with Euclidean distance. This reduction can even lead to a _negative_ correlation with representation reliability. In contrast, our proposed method \(\mathsf{NC}_{k}\) consistently produces robust results regardless of the chosen distance metric. We further demonstrate these results with ResNet-18 in Figure 2 as well as Figure 5 in Appendix.
As observed in our results and Tack et al. (2020), points with a larger \(L_{2}\)-norm tend to exhibit higher reliability. This implies that reliable points are often located in the outer regions of the representation space, resulting in larger distances between points in those regions compared to points near the origin. Consequently, baseline metrics that rely on unnormalized feature vectors fail to capture reliability and may even show negative correlations, contrary to their speculations.
\begin{table}
\begin{tabular}{c c|c c c c|c c c c} \hline \hline \multirow{3}{*}{Method} & \multirow{3}{*}{distance / normalized} & \multicolumn{4}{c|}{ResNet-18} & \multicolumn{4}{c}{ResNet-50} \\ \cline{3-10} & & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c|}{CIFAR-100} & \multicolumn{2}{c}{CIFAR-100} & \multicolumn{2}{c}{CIFAR-100} \\ & & PCC & AUROC & PCC & AUROC & PCC & AUROC & PCC & AUROC \\ \hline \multirow{3}{*}{\(\mathsf{NC}_{\mathbf{100}}\)} & \multirow{3}{*}{cosine} & **0.630** & **0.787** & **0.440** & **0.562** & **0.440** & **0.743** & **0.440** & **0.611** \\ & & & 0.565 & 0.400 & 0.757 & 0.400 & 0.536 & 0.400 & 0.730 & 0.538 & \(\pm\) 0.000 & 0.493 & \(\pm\) 0.000 & 0.712 \(\pm\) 0.000 \\ \hline \multirow{3}{*}{\(\mathsf{AvgDist}_{1}\)} & \multirow{3}{*}{cosine} & 0.457 & 0.400 & 0.687 & 0.600 & 0.203 & 0.503 & 0.503 & 0.463 & \(\pm\) 0.000 & 0.684 & \(\pm\) 0.000 & 0.221 \(\pm\) 0.000 & 0.583 \(\pm\) 0.000 \\ & & Euclidean & -0.252 & 0.408 & 0.417 & 0.400 & -0.068 & 0.408 & 0.401 & 0.400 & -0.234 & \(\pm\) 0.000 & 0.422 & \(\pm\) 0.001 & -0.043 & \(\pm\) 0.000 \\ Norm & - & 0.470 & 0.657 & 0.233 & 0.588 & 0.449 & 0.651 & 0.203 & 0.574 & \\ \hline \multirow{3}{*}{LL} & \multirow{3}{*}{normalized} & 0.350 & 0.408 & 0.655 & 0.400 & 0.001 & 0.503 & 0.514 & 0.400 & 0.675 & \(\pm\) 0.001 & 0.079 \(\pm\) 0.000 & 0.537 \(\pm\) 0.000 \\ & & unnormalized & -0.114 & 0.508 & 0.502 & 0.403 & -0.107 & 0.400 & 0.479 & \(\pm\) 0.000 & -0.105 & \(\pm\) 0.000 & 0.493 & \(\pm\) 0.000 & -0.079 & \(\pm\) 0.000 \\ \cline{1-1} & & unnormalized & -0.065 & 0.411 & -0.398 & 0.344 & -0.051 & 0.448 & -0.359 & 0.351 & \(\pm\) 0.000 & -0.528 & \(\pm\) 0.000 \\ \cline{1-1} & & unnormalized & -0.528 & 0.313 & -0.411 & 0.330 & -0.514 & 0.315 & -0.388 & 0.343 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between neighborhood consistency (ours) and baselines. The highest scores for each evaluation metric are highlighted in bold. We run our experiments for multiple trials with random selections of \(\mathcal{D}_{\text{ref}}\) and report the standard deviation. As shown, our approach exhibits a high correlation with representation reliability and consistently outperforms baselines.
#### 4.2.2 Ablation Studies
**Trade-off on the number of neighbors (\(k\)).** As discussed in Section 3.3, the choice of \(k\) in (8) leads to a trade-off between having more consistent neighbors and preserving the overall reliability of those neighbors. In order to explore this trade-off, we conduct experiments with different values of \(k\in\{1,5,10,50,100,250,500\}\). The correlation between our proposed method and representation reliability is illustrated in Figure 3: it initially increases and then decreases as expected. We observe that the optimal performance is achieved with \(k\) between 50 and 100 for our method, while \(\mathsf{AvgDist}_{k}\) performs well only with very small \(k\).
**Effect of the number of ensemble members (\(M\)).** Figure 4 demonstrates that as the ensemble size increases, the evaluation metric becomes more robust and shows improved performance. Furthermore, even with just \(M=2\) ensemble members, our approach significantly outperforms other baselines. Overall, these results align with the trends presented by Lakshminarayanan et al. (2017), and highlight how neighborhood consistency effectively extends the deep ensemble approach from supervised learning to unsupervised representation learning.
## 5 Final Remarks and Limitations
Self-supervised learning is increasingly used for training general-purpose embedding functions that can be adapted to various downstream tasks. In this paper, we present a systematic study to evaluate the quality of representations assigned by the embedding functions. We introduce a mathematical definition of representation reliability, demonstrate that existing uncertainty frameworks in supervised learning do not capture representation reliability, derive an estimate for representation reliability, and validate our estimate through extensive numerical experiments.
Figure 2: Scatter plots for comparing the representation reliability with the proposed method (\(\mathsf{NC}_{100}\)) and with baseline (\(\mathsf{AvgDist}_{1}\)), under different distance metrics on CIFAR-10. Ours exhibits a higher correlation with the representation reliability and is more robust to the choice of distance metric.
There is a crucial need for future research to investigate and ensure the responsibility and trustworthiness of embedding functions. For example, representations should be interpretable and not compromise private information. Moreover, the embedding functions should exhibit robustness against adversarial attacks and incorporate a notion of uncertainty, in addition to their abstract representations. This work takes an initial step towards understanding the uncertainty of the representations. In the case where a downstream model fails to deliver a desirable output for a test point, the representation reliability can provide valuable insight into whether the mistake was due to unreliable representations or downstream heads.
There are several potential future directions that are worth further exploration. For example, our current method for estimating representation reliability uses a set of embedding functions to compute neighborhood consistency. It would be interesting to investigate whether our approach can be
Figure 4: Ablation over the number of ensemble functions (\(M\)). The performance of our proposed method improves as \(M\) increases. The figures report models using cosine distance.
Figure 3: Ablation over the number of neighboring points (\(k\)) for the proposed method (\(\text{NC}_{k}\)) and the baseline (\(\text{AvgDist}_{k}\)). The figures report models using cosine distance.
expanded to avoid the need for training multiple embedding functions. This could potentially be achieved through techniques such as MC dropout or adding random noise to the neural network parameters in order to perturb them slightly. Additionally, while we currently assess representation reliability through downstream prediction tasks, it would be valuable to investigate the generalization of our definition to encompass a broader range of downstream tasks.
|
2302.14784 | Evaluación del efecto del PAMI en la cobertura en salud de los adultos
mayores en Argentina | We conducted regression discontinuity design models in order to evaluate
changes in access to healthcare services and financial protection, using as a
natural experiment the age required to retire in Argentina, the moment in which
people are able to enroll in the free social health insurance called PAMI. The
dependent variables were indicators of the population with health insurance,
out-of-pocket health expenditure, and use of health services. The results show
that PAMI causes a high increase in the population with health insurance and
marginal reductions in health expenditure. No effects on healthcare use were
found. | Juan Marcelo Virdis, Fernando Delbianco, María Eugenia Elorza | 2023-02-28T17:36:48Z | http://arxiv.org/abs/2302.14784v1 | # Evaluacion del efecto del PAMI en la cobertura en salud de los adultos mayores en Argentina
###### Abstract
Investigaciones Economicas y Sociales del Sur, Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) - Universidad Nacional del Sur (UNS), San Andres 800, Bahia Blanca, Argentina. Departamento de Economia, UNS, San Andres 800, Bahia Blanca, Argentina. Instituto de Matematica de Bahia Blanca, CONICET-UNS, Av. Alem 1253, Bahia Blanca, Argentina
Author note
Juan Marcelo Virdis
[https://orcid.org/0000-0001-7118-9259](https://orcid.org/0000-0001-7118-9259)
Fernando Delbianco
[https://orcid.org/0000-0002-1560-2587](https://orcid.org/0000-0002-1560-2587)
Maria Eugenia Elorza
[https://orcid.org/0000-0003-1562-1363](https://orcid.org/0000-0003-1562-1363)
\({}^{*}\) Corresponding author. Contact information:
Instituto de Investigaciones Economicas y Sociales del Sur (CONICET-UNS), Departamento de Economia, Universidad Nacional del Sur.
San Andres 800, Bahia Blanca, Buenos Aires, Argentina (CP: 8000)
E-mail: [email protected]
Telephone: +54 291 4595138
## 1 Introduction
The Programa de Atencion Medica Integral (PAMI) is an institution whose study is relevant for evaluating the performance of the Argentine health system (SSA). Among other aspects, it provides health insurance to approximately five million people; its target population, older adults, is the group with the highest expected health expenditure; and, if current legislation is maintained, the demographic transition that Argentina is undergoing will cause the number of beneficiaries to increase, both in absolute terms and relative to the rest of the SSA.
One of the relevant aspects to evaluate is the impact of PAMI on the health coverage of its beneficiaries, understood as access to healthcare and financial protection against out-of-pocket health expenditure (GBS). This type of research can be carried out through different classes of regressions, using indicators of financial protection or household GBS as the dependent variable. The access dimension can be assessed using categorical variables that indicate whether households or individuals have used preventive and curative health services (Organizacion Mundial de la Salud (OMS) & Grupo del Banco Mundial, 2014). However, for both financial protection and access variables, the use of survey data can result in biased estimates, even when the data correspond to a probabilistic sample representative of the population. This is due to the difficulty of observing the health status of the observational units, even in surveys that include related questions. Several authors have found differences between self-reported and objectively assessed hypertension, diabetes, asthma, and heart disease (Mulyanto, Kringos & Kunst, 2019; Okura et al., 2004; Tenkorang et al., 2015). In addition, survey participants can be inconsistent when classifying their health across categories, and there is evidence that responses may differ according to the nature of the survey (verbal or written) or the order in which the questions are asked (Crossley & Kennedy, 2002). Controlling for health-related variables is very important because of their influence on the demand for health insurance and for health-related goods and services; otherwise, an endogeneity problem may arise in the estimation (Levy & Meltzer, 2008; Trujillo, 2003; Vera-Hernandez, 1999; Waters, 1999).
One solution to the endogeneity problem is the use of quasi-experimental techniques, which consist of forming, from survey data, a study group and a control group by selecting observational units that are identical in all their characteristics except for a treatment variable and an outcome variable (Chapin, 1938; Thistlethwaite & Campbell, 1960). In this way, it is possible to assess the existence of a causal relationship between them. Any other variable that differs between the study group and the control group will weaken the results, since uncontrolled variables may have influenced the outcome variable. In studies based on survey information, the variables that can be controlled for are limited by the data that can be collected through a questionnaire. For this reason, the question will always arise as to whether the estimates are biased by unobserved variables or by variables that are difficult to measure.
A quasi-experimental methodology that can attenuate the bias from uncontrolled variables is regression discontinuity design (RDD) analysis (Thistlethwaite & Campbell, 1960).1 RDD consists of observing the change in an outcome variable caused by a treatment, which is assigned based on a given value, or cutoff, of an observable continuous variable. Thus, observational units that exceed the cutoff have received the treatment, unlike those that have not. The advantage of RDD is that observations sufficiently close to the cutoff should be similar, which reduces the number of control variables needed.
Footnote 1: Other quasi-experimental methodologies used in the evaluation of causal effects can be found in Cunningham (2021).
This methodology has been applied to evaluate the effect of insurance on health coverage in different parts of the world. Bernal, Carpio & Klein (2017) found that a free insurance scheme in Peru increased access to health services but also generated higher GBS. Palmer et al. (2015) found that an insurance program for children under 6 years of age in Vietnam increased the use of outpatient and internal medicine services, with no change in GBS. Card, Dobkin & Maestas (2008b) and Card, Dobkin & Maestas (2009a) evaluated the impact of the United States Medical Care (MEDICARE) program on access to medical services and on the health outcomes of older adults. In these studies, changes in the outcome variables were evaluated when people reached 65 years of age, the point at which Americans who have worked 40 quarters can access MEDICARE free of charge. The authors found increases in the percentage of the population with insurance, in physician visits, and in other services. In addition, a 20% reduction in the mortality rate of severely ill patients was found from age 65 onward.
As in the studies described above, the legal rules governing access to PAMI make it possible to evaluate the causal effects of this insurance on health coverage through an RDD, using as the study group people who have recently reached retirement age and as the control group those who are close to doing so. These groups are likely to be sufficiently similar in many factors that remain unchanged at the moment of crossing the retirement-age threshold, including health status.
The general objective of this chapter is to evaluate the effect of PAMI on health coverage. As specific objectives, we propose to estimate the impact of PAMI on the use of medical services, on GBS, on indicators of financial protection against GBS, and on the demand for voluntary insurance. The results of this evaluation will make it possible to assess the effectiveness of PAMI as a financer of health services for older adults and to guide the agency's policy design regarding its service supply. The results could imply the need to expand the supply of services or to improve the access mechanisms that beneficiaries must use. The second section describes the data and presents the methodology used. The third section presents the results. The fourth section presents a discussion of the results.
## 2 Data and Methodology
The estimates presented in this chapter were produced with data from the Encuesta Nacional de Gastos de los Hogares (ENGHo) collected between 2017 and 2018 (Instituto Nacional de Estadistica y Censos (INDEC), 2020). The ENGHo is a survey that makes it possible to characterize household expenditure structures and the population through socioeconomic variables (INDEC, 2020). The survey was carried out between 2017 and 2018 on a sample designed through a three-stage procedure over households located in towns of 2,000 inhabitants or more.2 The sample comprised 44,922 dwellings, from which 21,547 responses were obtained, covering 68,725 inhabitants. Each household was associated with expansion factors that adjust statistical measurements for non-response, ineligible dwellings, and calibration against _benchmarks_ or known population totals (INDEC, 2020). The databases were obtained from the official INDEC website (2020). The observational unit is the household, since in the ENGHo databases it is impossible to know which household member accessed the health system or incurred GBS. This entails the task of associating an age, gender, and/or employment status with households made up of members with varying degrees of heterogeneity. To address this obstacle, we assume that the household is characterized by the variables collected for its household head.3 This assumption is grounded in the legislation regulating the Obras Sociales and PAMI, which establishes that each titular beneficiary of the health insurance may extend it to his or her family group (Ley n\({}^{\circ}\) 19.032, art. 2, 1971; Ley n\({}^{\circ}\) 23.660, art. 9, 1988). For this reason, the household head reaching the age required to access PAMI benefits could have an effect on the health insurance of all household members and, therefore, on their health coverage.
Footnote 2: For further details on the sample design, see INDEC (2020).
Footnote 3: In the ENGHo, the household head is the person considered as such by the other household members. Each household has only one head; therefore, there are as many heads as households (INDEC, 2022).
Below we describe the RDD in its non-parametric form, following the notation of Hahn, Todd & Klauw (2001). We define \(x_{i}\) as a binary variable whose effect on an outcome variable \(y_{i}\) we wish to evaluate. The variable \(x_{i}\) is called the treatment and indicates whether the \(i\)-th observation has received it. The value of the outcome variable can be expressed as
\[y_{i}=\alpha_{i}+x_{i}\beta_{i}\]
where \(\alpha_{i}\equiv y_{0i}\) and \(\beta_{i}\equiv y_{1i}-y_{0i}\). The identification of the observational units that have received the treatment is based on a continuous variable \(z_{i}\). At a cutoff \(z_{i}=z_{0}\), an exogenous event occurs indicating that some proportion of the observations has received the treatment \(x_{i}\). The RDDs carried out in this chapter use the variables presented in Table 1. The variable \(z_{i}\) is defined as the age of the household head minus the retirement age of the general regime, which is 65 years for men and 60 years for women (Ley n\({}^{\circ}\) 24.241, art. 19, 1993). One year was added to the retirement age of both sexes because there is a delay of three to six months to complete the administrative retirement procedure, after which it is possible to start the process of obtaining PAMI (Ambito Financiero, 2022; Instituto Nacional de Servicios Sociales para Jubilados y Pensionados (INSSJyP), 2022). The observed continuous variable is thus defined as:
\[z_{i}=\begin{cases}edad_{i}-61\ \ \text{if }sexo_{i}=1\\ edad_{i}-66\ \ \text{if }sexo_{i}=0\end{cases}\]
Two types of RDD exist in the literature: _sharp_ and _fuzzy_. In the _sharp_ design, all observations for which \(z_{i}\geq z_{0}\) receive the treatment. That is, \(x_{i}=f(z_{i})\), where \(z_{i}\) is continuous and \(f(z_{i})\) is discontinuous at a known value \(z_{0}\). In the _fuzzy_ design, \(x_{i}\) is a random variable with a conditional probability function \(f(z)\equiv E[x_{i}|z_{i}=z]=Pr[x_{i}=1|z_{i}=z]\), which presents a discontinuity at \(z_{0}\). The difference between these designs is that in the _sharp_ design the variable \(z_{i}\) deterministically indicates the observational units that have received the treatment, whereas in the _fuzzy_ design it indicates a change in the probability of receiving the treatment. Both designs share the following assumptions:
1. The limits \(x^{+}\equiv\lim_{z\to z_{0}^{+}}E[x_{i}|z_{i}=z]\) and \(x^{-}\equiv\lim_{z\to z_{0}^{-}}E[x_{i}|z_{i}=z]\) exist
2. \(x^{+}\neq x^{-}\)
These assumptions imply that it is possible to approach the cutoff point infinitesimally, both from above and from below, and that there is a discontinuity in the expected value of the treatment at \(z_{0}\). Non-parametrically, the treatment effect in a _fuzzy_ design can be defined as
\[\hat{\tau}_{RD}=E[\beta_{i}]=\frac{y^{+}-y^{-}}{x^{+}-x^{-}} \tag{1}\]
with the _sharp_ design being a particular case in which \(x^{+}=1\), \(x^{-}=0\) and \(\hat{\tau}=y^{+}-y^{-}\). The estimated parameter \(\hat{\tau}\) is the jump that would be observed in the outcome variable \(y\) if 100 % of the observational units had received the treatment \(x\).
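As a crude illustration of equation (1), the sketch below replaces the one-sided limits with sample means inside a fixed window around the cutoff; the actual estimations described later use local polynomial regression instead of raw means, and all names here are illustrative.

```
import numpy as np

def fuzzy_rd_wald(z, x, y, h=1.0):
    """Naive fuzzy RD estimate: (y+ - y-) / (x+ - x-), with one-sided means
    computed on observations within distance h of the cutoff (z = 0)."""
    z, x, y = map(np.asarray, (z, x, y))
    right = (z >= 0) & (z < h)
    left = (z < 0) & (z >= -h)
    return (y[right].mean() - y[left].mean()) / (x[right].mean() - x[left].mean())
```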
The treatment effect estimated from equation 1 will be correct as long as there is no other difference between the treatment and control groups that affects the outcome variable \(y\), which may be a strong assumption for the kind of effect this chapter seeks to measure. This is because, upon retirement, individuals may experience important changes in other variables relevant to the purchase of health insurance and the consumption of health goods and services, such as income level or employment status, the latter being important for the opportunity cost of time. In turn, these variables may differ between men and women as a result of gender differences in the labour market. This problem can be addressed by including covariates in the RDD which, according to previous research, can improve the precision of the results (Calonico, Cattaneo, Farrell and Rocio Titiunik 2019; Frolich 2007). For this reason, \(sexo_{i}\), \(lgasto_{i}\) and \(inac_{i}\) were included as covariates (see the descriptions in Table 1).
In addition to the discontinuity in the outcome variable, the existence and magnitude of a change in slope around the cutoff point was assessed. In this way, it is possible to observe whether the dynamics
\begin{table}
\begin{tabular}{l l} \multicolumn{2}{c}{Outcome variables} \\ \hline \(lgsalud_{i}\) & Natural logarithm of the household's per capita health expenditure. \\ \(pgsalud_{i}\) & Share of health expenditure in total household expenditure. \\ \(cat^{10\%}\) & Household that devoted more than 10 \% of its total expenditure to GBS (Yes = 1; No = 0). \\ \(cat^{25\%}\) & Household that devoted more than 25 \% of its total expenditure to GBS (Yes = 1; No = 0). \\ \(seg_{i}\) & The household head holds a health insurance plan (Yes = 1; No = 0). \\ \(segp_{i}\) & The household head holds a voluntary insurance plan (Yes = 1; No = 0). \\ \(segm_{i}\) & The household head holds two or more insurance plans (Yes = 1; No = 0). \\ \(farm_{i}\) & Some household member purchased pharmaceutical products (Yes = 1; No = 0). \\ \(equi_{i}\) & Some household member purchased therapeutic appliances or equipment (Yes = 1; No = 0). \\ \(smed_{i}\) & Some household member had a medical consultation (Yes = 1; No = 0). \\ \(odon_{i}\) & Some household member received dental services (Yes = 1; No = 0). \\ \(hosp_{i}\) & Some household member received hospital services (Yes = 1; No = 0). \\ \multicolumn{2}{c}{Treatment variable} \\ \hline \(pami_{i}\) & The household head is a PAMI beneficiary (Yes = 1; No = 0). \\ \multicolumn{2}{c}{Observable variable assigning the treatment} \\ \hline \(edad_{i}\) & Age of the household head. \\ \multicolumn{2}{c}{Covariates} \\ \hline \(lgasto_{i}\) & Natural logarithm of the household's per capita total expenditure. \\ \(sexo_{i}\) & Sex of the household head (Female = 1; Male = 0). \\ \(inac_{i}\) & Economically inactive household head (Yes = 1; No = 0). \\ \end{tabular}
\end{table}
Table 1: Variables used in the estimations of Section 3
between the outcome variables and the age of the household head change. This type of estimation is called a regression kink design (RKD) and, unlike the RDD, in which differences in the levels of the outcome variable around \(z_{0}\) are evaluated, the RKD estimates differences in the first derivative of the regression function.
Equation 1 is estimated with the methodology developed by Calonico, Cattaneo and Rocio Titiunik (2014). The authors propose estimating a weighted local polynomial regression (RPLP, for its Spanish acronym) of order 1 or 2 to approximate the expected values below and above the threshold \(z_{0}\) for an outcome variable \(y\). The RPLP is an extension of parametric estimation that allows fitting a smoothed curve for \(y=f(z)\) (Cleveland and Loader 1996). The regression function \(m(\cdot)\) is defined as
\[\hat{m}_{h}(z)=\begin{cases}\hat{\alpha}_{-}(z)\ \text{if}\ z<z_{0}\\ \hat{\alpha}_{+}(z)\ \text{if}\ z\geq z_{0}\end{cases}\]
where the estimated parameters \(\hat{\alpha}_{-}\) and \(\hat{\alpha}_{+}\) are obtained by solving the following least-squares problems
\[\hat{\mu}(z)=\begin{cases}\left(\hat{\alpha}_{-}(z),\hat{\beta}_{-}(z)\right)=\arg\,\min_{\alpha,\beta}\sum_{i=1}^{N}\mathbf{1}_{z_{i}<z}\left(y_{i}-\alpha-\sum_{q=1}^{p}\beta_{q}(z_{i}-z)^{q}\right)^{2}w(z_{i})\\ \left(\hat{\alpha}_{+}(z),\hat{\beta}_{+}(z)\right)=\arg\,\min_{\alpha,\beta}\sum_{i=1}^{N}\mathbf{1}_{z_{i}>z}\left(y_{i}-\alpha-\sum_{q=1}^{p}\beta_{q}(z_{i}-z)^{q}\right)^{2}w(z_{i})\end{cases}\]
where \(p\) is the order of the regression. The estimated effect is equal to
\[\hat{\tau}_{RD}=\hat{\mu}_{+}-\hat{\mu}_{-}\]
where
\[\hat{\mu}_{-}=\lim_{z\to z_{0}^{-}}\hat{m}_{h}(z)=\hat{\alpha}_{-}(z_{0})\]
\[\hat{\mu}_{+}=\lim_{z\to z_{0}^{+}}\hat{m}_{h}(z)=\hat{\alpha}_{+}(z_{0})\]
The confidence interval for \(\hat{\tau}_{RD}\) is given by
\[CI_{1-\alpha,n}^{rbc}=\left[\left\{\hat{\tau}_{RD}(h_{n})-\hat{b}_{n}\right\}\pm\Phi_{1-\frac{\alpha}{2}}^{-1}\sqrt{\hat{v}_{n}^{bc}}\right]\]
where the superscript \(rbc\) denotes a robust interval with bias corrections \(\hat{b}_{n}\) in the estimation of \(\hat{\tau}(h_{n})\) and in the estimation of the variance. The bias corrections are described in Calonico, Cattaneo and Rocio Titiunik (2014).
Estimating an RPLP requires determining: i) the order of the RPLP, ii) the bandwidth \(h\), that is, the observations \((y_{i},z_{i}):z_{i}\in[z_{0}-h;z_{0}+h]\) used in the RPLP, and iii) the weighting
function \(w(z_{i})\). Regarding the order, it is not advisable in RDD to fit polynomial regressions of order greater than 2. This is because the estimate of the difference in expected values to the left and right of the cutoff is sensitive to the order of the polynomial, and using high-order polynomials can increase the probability of a type 1 error, that is, finding statistical significance for a discontinuity that does not exist (Gelman and Imbens 2019). Regarding \(w(z_{i})\), non-parametric estimates can be biased when the support of the true curve is bounded, a bias that can be reduced by using a triangular weighting function (Cheng, Fan and Marron 1997):
\[w_{k}=1-\frac{|x_{k}-x_{i}|}{h} \tag{2}\]
Finally, RDDs using RPLP are sensitive to the chosen \(h\). In this regard, Calonico, Cattaneo and Farrell (2019) suggest estimating it as:
\[h=\left[\frac{(1+2q)\hat{v}_{n}^{bc}}{2(1+p-q)\hat{b}_{n}}\right]^{1/(2p+3)}n^ {-1/(2p+3)} \tag{3}\]
where \(p\) is the order of the RPLP, \(q\) is the order of the derivative (0 for RDD and 1 for RKD), \(\hat{v}_{n}^{bc}\) is the approximation of the variance and \(\hat{b}_{n}\) is the approximation of the bias of the discontinuity estimator. To assess the sensitivity of the results to the chosen specification, RDD and RKD estimates were computed using first- and second-order polynomials, with and without covariates. Since the retirement age does not unequivocally determine that people have become PAMI beneficiaries, all the estimations are of the _fuzzy_ type. The estimations were carried out with the RStudio package _Robust Data-Driven Statistical Inference in Regression-Discontinuity Designs_ (Calonico, Cattaneo, Farrell and Rocio Titiunik 2022).
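For intuition, the sketch below shows the point estimator behind these RPLP-based estimates in Python: a one-sided polynomial fit on each side of the cutoff weighted with the triangular kernel of equation (2). It deliberately omits the bandwidth selection, bias correction and robust variance that the package provides, and all function names are illustrative rather than the package's API.

```
import numpy as np

def side_intercept(z, y, h, p=1):
    """Intercept at z = 0 of a local polynomial fit of order p, weighted with
    the triangular kernel of equation (2), using one side of the cutoff only."""
    keep = np.abs(z) < h
    z, y = z[keep], y[keep]
    w = np.sqrt(1 - np.abs(z) / h)                # square roots of the weights
    Z = np.vander(z, N=p + 1, increasing=True)    # columns 1, z, ..., z^p
    beta, *_ = np.linalg.lstsq(Z * w[:, None], y * w, rcond=None)
    return beta[0]

def fuzzy_rd_estimate(z, x, y, h, p=1):
    """Point estimate of the fuzzy RD effect of equation (1) via RPLP intercepts."""
    z, x, y = map(np.asarray, (z, x, y))
    num = side_intercept(z[z >= 0], y[z >= 0], h, p) - side_intercept(z[z < 0], y[z < 0], h, p)
    den = side_intercept(z[z >= 0], x[z >= 0], h, p) - side_intercept(z[z < 0], x[z < 0], h, p)
    return num / den
```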
## 3 Results
Table 2 presents the descriptive statistics of the variables used in this chapter. 73 % of households have a head who holds some type of insurance, while 27 % have no health insurance. In addition, 20 % of household heads are PAMI beneficiaries, while household heads with voluntary insurance or more than one insurance plan represent 4.8 % and 3.2 %, respectively. The most frequently consumed health services were pharmaceutical products and medical consultations, recorded in 48.3 % and 37.2 % of households, respectively. Regarding GBS, the per capita mean is $ 909, which represents an average share of 4.8 % of total expenditure. Finally, the estimated catastrophic health expenditure (GCS) indicators show that 15.6 % of households devoted more than 10 % of their total expenditure to GBS, while 4.3 % devoted more than 25 % of their expenditure. |
2309.06019 | DSLOT-NN: Digit-Serial Left-to-Right Neural Network Accelerator | We propose a Digit-Serial Left-tO-righT (DSLOT) arithmetic based processing
technique called DSLOT-NN with aim to accelerate inference of the convolution
operation in the deep neural networks (DNNs). The proposed work has the ability
to assess and terminate the ineffective convolutions which results in massive
power and energy savings. The processing engine is comprised of low-latency
most-significant-digit-first (MSDF) (also called online) multipliers and adders
that processes data from left-to-right, allowing the execution of subsequent
operations in digit-pipelined manner. Use of online operators eliminates the
need for the development of complex mechanism of identifying the negative
activation, as the output with highest weight value is generated first, and the
sign of the result can be identified as soon as first non-zero digit is
generated. The precision of the online operators can be tuned at run-time,
making them extremely useful in situations where accuracy can be compromised
for power and energy savings. The proposed design has been implemented on
Xilinx Virtex-7 FPGA and is compared with state-of-the-art Stripes on various
performance metrics. The results show the proposed design presents power
savings, has shorter cycle time, and approximately 50% higher OPS per watt. | Muhammad Sohail Ibrahim, Muhammad Usman, Malik Zohaib Nisar, Jeong-A Lee | 2023-09-12T07:36:23Z | http://arxiv.org/abs/2309.06019v2 | # DSLOT-NN: Digit-Serial Left-to-Right Neural Network Accelerator
###### Abstract
We propose a Digit-Serial Left-tO-righT (DSLOT) arithmetic based processing technique called _DSLOT-NN_ with the aim of accelerating inference of the convolution operation in deep neural networks (DNNs). The proposed work has the ability to assess and terminate ineffective convolutions, which results in massive power and energy savings. The processing engine is comprised of low-latency most-significant-digit-first (MSDF) (also called _online_) multipliers and adders that process data from left to right, allowing the execution of subsequent operations in a digit-pipelined manner. The use of online operators eliminates the need for a complex mechanism for identifying negative activations, as the output digit with the highest weight is generated first, and the sign of the result can be identified as soon as the first non-zero digit is generated. The precision of the online operators can be tuned at run-time, making them extremely useful in situations where accuracy can be compromised for power and energy savings. The proposed design has been implemented on a Xilinx Virtex-7 FPGA and is compared with the state-of-the-art Stripes on various performance metrics. The results show the proposed design presents power savings, has a shorter cycle time, and approximately \(50\%\) higher OPS per watt.
Online arithmetic, most-significant-digit first, convolution neural network, CNN acceleration.
## I Introduction
In recent years, deep neural networks have shown impressive performance and are considered state-of-the-art classification algorithms, achieving near-human performance in applications including image processing, natural language processing, object detection, and bio-informatics [1, 2, 3]. The performance of DNNs is related to their computational complexity. It is commonly observed that the number of layers has a significant impact on a network's performance [4]. Specifically, a greater number of layers often results in superior feature extraction capabilities. However, deeper networks typically require a larger number of parameters and, consequently, more extensive computational resources and memory capacity to be effectively trained. The main computation is the multiply-accumulate (MAC) operation, which accounts for \(99\%\) of the total computations in convolution neural networks (CNNs) [5]. The arrangement of MAC units depends on the size and shape of the DNN. For example, the first DNN in the ImageNet challenge to surpass human-level accuracy, the ResNet model with \(152\) layers, requires \(11.3\) GMAC operations and \(60\) million weights [6]. As such, there exists a trade-off between the benefits of increased network depth and the associated costs in terms of model size and resource requirements.
### _Related Works_
The aforementioned challenges have led research toward domain-specific architectures that accelerate the computation of convolution operations in deep neural networks [7, 8]. Moreover, such designs perform the CNN inference in a layer-by-layer fashion, which substantially increases the flow of data to and from the external memory. During the past few years, there has been an emerging trend towards the implementation of DNN acceleration and evaluation designs using bit-serial arithmetic circuits [9, 10]. This trend is motivated by various factors: (1) the need to reduce the computational complexity and the required communication bandwidth, (2) the requirement of variable data precision across deep learning networks as well as across the layers of a single network, (3) the ease with which compute precision can be varied in bit-serial designs, simply by adjusting the number of compute cycles in a DNN model evaluation, and (4) the need to improve energy and resource utilization by detecting negative results early and terminating such ineffectual computations. Stripes [9] is considered among the pioneering works employing bit-serial multipliers instead of conventional parallel multipliers in their accelerator architecture to address challenges such
as power and throughput. In a similar context, UNPU [10] enhanced the Stripes architecture by incorporating look-up tables (LUTs) to store inputs that are reused multiple times during the computation of an input feature map.
Most modern CNNs use the rectified linear unit (ReLU) as an activation function, which filters the negative results of the convolution and replaces them with zero. Studies [11, 12, 13] show that about \(42\%\)-\(68\%\) of the activations in modern CNNs are negative, suggesting a significant waste of power on unnecessary computation. Most conventional CNN acceleration designs perform the ReLU activation separately, after the completion of the convolution operations. Recently, some researchers have proposed methods for early detection and termination of negative results [11, 12, 13]. Early detection of negative activations reduces computation and improves the energy requirements of the hardware designs. Existing solutions either involve special digit encoding schemes [12, 13] or sophisticated circuits [11] to predict if the result is negative. In [11], the algorithm requires significant software complexity to re-order the operations, limiting the deployment of such techniques.
In this research, we propose to use _Online arithmetic_ for the early detection of negative inputs to the ReLU activation function and the termination of ineffective convolutions. We develop online arithmetic-based multipliers and adders to perform the multiply-and-accumulate operation.
### _Organization and Specific Contributions of the Paper_
The specific contributions of this work are as follows:
* DNN accelerator design based on MSDF arithmetic scheme.
* A novel and straight-forward mechanism for the detection of negative activation during the computation of the convolution operation.
* Energy efficient design resulting in \(50\%\) higher OPS per watt compared to SIP [9].
The rest of the paper is organized as follows. Section II presents the details of proposed online arithmetic based convolution computation and early termination technique. The evaluation and results of the proposed methodology has been presented in Section III, followed by conclusion in Section IV.
## II Materials and Methods
A convolution layer processes an input image by applying \(M\) 3D kernels in a sliding window fashion. Typically, convolution layers in CNNs perform a series of multiply-accumulate (MAC) operations to compute the output feature maps. Each MAC operation involves multiplying corresponding elements of the kernel and input feature maps and summing up the results. The convolution operation carried out in a CNN layer can be outlined by a simple weighted sum or SOP equation as follows;
\[y_{ij}=\sum_{a=0}^{m-1}\sum_{b=0}^{m-1}w_{ab}x_{(i+a)(j+b)} \tag{1}\]
where \(y_{ij}\) is the \(ij^{th}\) output of layer \(l\), \(w\) is the kernel of dimensions \(m\times m\), and \(x\) represents the input of the convolution. It can be observed from the equation that for any \((i,j)\), the kernel \(w\) remains the same while the input changes according to the sliding window operation. This characteristic of the convolution creates the opportunity for weight stationarity in the dataflow architecture of convolution layers.
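For concreteness, a minimal NumPy sketch of this weighted sum is shown below; the single-channel setting and array shapes are assumptions for illustration only and do not reflect the accelerator's hardware dataflow.

```
import numpy as np

def conv2d_sop(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Evaluate equation (1): every output pixel y[i, j] is the sum of
    products of the m-by-m kernel w with the corresponding window of x."""
    m = w.shape[0]
    rows, cols = x.shape[0] - m + 1, x.shape[1] - m + 1
    y = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y[i, j] = np.sum(w * x[i:i + m, j:j + m])
    return y
```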
### _Online Arithmetic_
The online arithmetic is essentially a computing paradigm that works serially in most-significant digit first (MSDF) manner, i.e., the inputs are fed, and output is generated digit-by-digit from left-to-right (LR) [14]. Digit level pipelining stands out as a key feature of this computing paradigm, among several other characteristics. Since all the computation is done in LR manner, it is possible to pipeline subsequent operations at digit level i.e., as soon as first digit of the preceding operation is obtained, the succeeding operation regardless of data dependency, can start computation after a small fixed delay called _online delay_, denoted by \(\delta\) as shown in Fig. 1.
Owing to this property, the intermediate results need not be stored; rather, they are consumed in the successive computation, resulting in a decreased number of read/write operations from/to memory, hence low bandwidth requirements and consequent energy savings. In order to generate the output on the basis of partial input information, the online computation requires flexibility in computing digits. This is achieved by employing a redundant digit number system. To this end, a signed-digit (SD) redundant number system is usually employed, in which a radix-\(r\) representation uses more than \(r\) digit values to represent a given value. In this study, we use the symmetric radix-\(2\) redundant digit set \(\{-1,0,1\}\). For compatibility, the online modules use fractional numbers; this also simplifies the alignment of the operands. The first digit of the operand has a weight of \(r^{-1}\), and at a given iteration \(j\), the digit \(x_{j}\) is represented by two bits \(x^{+}\) and \(x^{-}\), and its numerical value is given by (2).
\[x_{j}=SUB(x^{+},x^{-}) \tag{2}\]
The input and outputs are given as (3) and (4) respectively.
\[x[j]=\sum_{i=1}^{j+\delta}x_{i}r^{-i} \tag{3}\]
\[z[j]=\sum_{i=1}^{j}z_{i}r^{-i}, \tag{4}\]
Fig. 1: Timing characteristics of online operation with \(\delta=3\).
where the square brackets represent the iteration index and the subscript denotes the digit index. A given online algorithm executes for \(n+\delta\) cycles. A single input digit is fed in each of the first \(n\) iterations, and after \(\delta\) cycles a single output digit is generated in each iteration.
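As a simple software illustration of this redundant MSDF representation (a sketch only, not the two-bit hardware encoding), the value accumulated after feeding a few signed digits can be computed directly from (3):

```
def sd_value(digits, r=2):
    """Value of an MSDF operand after its first digits, per equation (3):
    the sum over i of digit_i * r**(-i), with radix-2 digits in {-1, 0, 1}."""
    return sum(d * r ** -(i + 1) for i, d in enumerate(digits))

# The redundancy is what allows an online unit to commit to leading digits
# before the remaining inputs arrive: 0.25 is both (0, 1) and (1, -1).
assert sd_value([0, 1]) == sd_value([1, -1]) == 0.25
```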
#### Iii-A1 Online Multiplier (OLM)
In most CNN designs, the convolution during inference is carried out by multiplying a constant weight kernel with the input image in a sliding window fashion. This particular characteristic of CNNs suggests that the kernel matrix must be used multiple times for the convolution operation. In this context, an online multiplier that takes one operand in parallel and the other serially can be useful: the weight kernel is applied in parallel while the input is fed serially. In this study, we use the non-pipelined serial-parallel multiplier presented in [15] and depicted in Fig. 2(a). The multiplier generates its output in MSDF fashion after an online delay of \(\delta=2\) cycles. The serial input and output in each cycle are represented as (3) and (4) respectively, while the constant weight is represented as:
\[Y[j]=Y=-y_{0}\cdot r^{0}+\sum_{i=1}^{n}y_{i}r^{-i} \tag{5}\]
Further derivations related to the recurrence and selection function of the serial-parallel online multiplier can be found in [15].
#### Iii-A2 Online Adder (OLA)
Since the multipliers used in this study generate their outputs in an MSDF fashion, an adder with similar capability is needed to compute the sum-of-product (SOP). In this context, a digit-serial online adder that takes both its inputs and generates its output in an MSDF fashion, is employed. This enables digit-level pipelining in the proposed SOP design and also helps in the early determination and subsequently termination of negative activations. The online adder with an online delay of \(\delta=2\), follows a simple construction as presented in Fig. 2(b). Further details and relevant derivations can be found in [16].
### _Proposed Design_
This section details the architecture of the proposed DSLOT-NN based on online computation units with early termination capability. The arrangement of computation units in the processing engine (PE) of DSLOT-NN and the techniques for terminating ineffectual convolutions (resulting as negative) are discussed.
#### Iii-B1 Processing Engine and DSLOT-NN Architecture
The architecture of the proposed DSLOT-NN is presented in Fig. 4. Each PE, presented in Fig. 3 contains \(k\times k\) online serial-parallel multipliers followed by a reduction tree to generate one output pixel. The input pixel is fed serially while the kernel pixel is fed in parallel, depicted by the thickness of the arrows in Fig. 4. The arrangement of PEs is done in such a way that the outputs of the \(4\) PEs will directly be fed to the ensuing pooling layer. It is worth noting that the architecture presented in Fig. 4 is designed for a CNN with single input feature map. A similar approach can be followed for a CNN with multiple input feature maps. A generic representation of the DSLOT-NN is also presented in the following sections.
Each multiplier in the PE is responsible for the multiplication of one pixel in the convolution window with the corresponding pixel in the same feature map of the convolution kernel. Therefore, all the \((k\times k)\) pixels are processed in parallel. The number of cycles required for a PE to generate its output can be calculated as follows
\[\begin{split} Num_{Cycles}=\delta_{\times}+\delta_{+}\times\lceil log _{2}(k\times k)\rceil+\\ \delta_{+}\times\lceil log_{2}(N)\rceil+p_{out}\end{split} \tag{6}\]
where \(\delta_{\times}\) and \(\delta_{+}\) are the online delays of online multiplier and adder respectively, \(\lceil log_{2}(k\times k)\rceil\) is the number of reduction tree stages required to generate the SOP of the \(k\times k\) multipliers, \(\lceil log_{2}(N)\rceil\) is the number of reduction tree stages required to add the SOP results of \(N\) input feature maps, and \(p_{out}\) is the precision of the SOP result. \(p_{out}\) is calculated as follows.
\[p_{out}=p_{out}^{Mult}+\lceil log_{2}(k\times k)\rceil \tag{7}\]
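The cycle count can be checked numerically; the sketch below simply evaluates (6) and (7), with default argument values taken from the configuration reported later in the paper (all names are local to this example).

```
from math import ceil, log2

def pe_cycles(delta_mul=2, delta_add=2, k=5, n_maps=1, p_out_mult=16):
    """Evaluate equations (6) and (7): cycles needed for one output pixel."""
    p_out = p_out_mult + ceil(log2(k * k))            # eq. (7): 16 + 5 = 21
    return (delta_mul                                 # online delay of the multipliers
            + delta_add * ceil(log2(k * k))           # reduction tree over k*k products
            + delta_add * ceil(log2(max(n_maps, 1)))  # reduction over feature maps
            + p_out)

print(pe_cycles())  # 2 + 2*5 + 0 + 21 = 33 cycles for a 5x5 kernel, one feature map
```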
#### Iii-B2 Early Termination of Negative Computations
Most CNN accelerator designs put emphasis on the faster or efficient generation of the sum-of-product (SOP), but only a few works discuss the possibility of early assessment of negative values
Fig. 3: Processing engine architecture
Fig. 2: Basic Components (a) Online Serial-Parallel Multiplier [15], where \(x\) is the serial input and \(Y\) is the parallel output, (b) Online Adder [16]
for the activation layer (ReLU). The early determination of negative activations is a challenging problem in accelerators based on conventional arithmetic. For instance, a bit-serial multiplier takes the _multiplicand_ in parallel while the _multiplier_ is processed serially. In each iteration a partial product is generated and stored in a register, then shifted into the appropriate position before being added to the other partial products to obtain the final product. Typically a series of adders, such as carry-save adders, ripple-carry adders, etc., is employed to perform this reduction. In convolution, another level of reduction is required to get the output pixel. Furthermore, yet another level of reduction is needed, if there is more than one input feature map, to compute the SOP. In conventional bit-serial multipliers, the determination of the most significant bit and the identification of the result's polarity require waiting until all partial products have been generated and added to the previous partial sums. The few works that aim at early detection of negative activations use either a digit encoding scheme or an estimation technique [12, 13, 17].
The challenge of early detection and termination of negative activations can be addressed by the intrinsic ability of online arithmetic to generate output digits in an MSDF manner. The proposed design supports the termination of negative activation computation in \(p\) cycles, where \(p<\mathbb{N}\), and \(\mathbb{N}\) is the number of cycles to compute complete result. This is done by observing and comparing the output digits. The process of detecting the negative activations and subsequently terminating the relevant computation is summarized in Algorithm 1.
```
1:\(z^{+}[j]\), \(z^{-}[j]\) bits
2:for \(j:=1\) to \(Num_{Cycles}\) do
3:\(z^{+}[j]\gets z^{+}[j]\)\(\frown z^{+}_{j}\)
4:\(z^{-}[j]\gets z^{-}[j]\)\(\frown z^{-}_{j}\)
5:if \(z^{+}[j]<z^{-}[j]\) then
6: Terminate
7:else
8: Continue
9:endif
10:endfor
```
**Algorithm 1** Early detection and termination of negative activations
The ReLU unit is equipped with registers to store the redundant output bits \(z^{+}[j]\) and \(z^{-}[j]\), which are the positive and negative output digits representing the SOP in redundant number representation. During each iteration, the new digits are concatenated, indicated by "\(\frown\)" in Algorithm 1, with the previously accumulated digits, and as soon as the value of \(z^{+}[j]\) goes below the value of \(z^{-}[j]\), indicating a negative output, a termination signal is generated by the control unit and the computation of the SOP is terminated. Fig. 4 shows the block diagram of the proposed DSLOT-NN considering one input feature map. This simple procedure of early negative detection can save up to \(45-50\%\) of the computation cycles for a convolution operation that results in a negative number, which in turn yields an energy-efficient design. According to (6), the number of cycles required by the proposed design to process one convolution is found to be \(33\), where \(\delta_{\times}=\delta_{+}=2\), \(k=5\), \(N=1\), and \(p_{out}=21\) considering the bit growth in the reduction tree stages. Here \(p_{out}\) is calculated by (7) with \(p_{out}^{Mult}=16\) and \(\lceil log_{2}(k\times k)\rceil=5\), for \(k\times k=5\times 5=25\) as the convolution kernel dimensions.
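A behavioural sketch of this early-termination logic is given below; it is software only, the digit streams and their source are assumed, and the actual hardware comparator and control unit are not modelled.

```
def relu_with_early_termination(z_plus_bits, z_minus_bits):
    """Mimic Algorithm 1: z+ and z- are the positive and negative MSDF digit
    streams of the SOP. Once the accumulated z+ value falls below the
    accumulated z- value, the final result is guaranteed to be negative, so
    ReLU can output 0 and the remaining cycles can be skipped."""
    acc_plus = acc_minus = 0.0
    cycles = 0
    for j, (bp, bm) in enumerate(zip(z_plus_bits, z_minus_bits), start=1):
        cycles = j
        acc_plus += bp * 2.0 ** -j     # append digit j (weight 2^-j)
        acc_minus += bm * 2.0 ** -j
        if acc_plus < acc_minus:       # negative activation detected
            return 0.0, cycles         # ReLU output, cycles actually spent
    return acc_plus - acc_minus, cycles
```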
#### Iii-B3 General DSLOT-NN Design
A general extension of the proposed DSLOT-NN for larger networks is presented in Fig. 5. The number of PEs in a processing block (PB) depend upon the number of input feature maps for a particular convolution layer in a CNN. This generic architecture can be repeated multiple times depending upon the number of output feature maps if more parallelism is required.
The PBs are responsible for the computation of one of the pixels belonging to a pooling (or maxpooling) window. In Fig. 5, we presented an example of a \(2\times 2\) pool window hence the 4 PBs. Each PB consists of multiple PEs followed by an online adder tree. The number of PEs in a PB represents
Fig. 4: DSLOT-NN block diagram
the input tiling and it has a range of \((1,N)\), where \(N\) is the number of input feature maps. The output digits of the adder tree are forwarded to a simple comparator circuit to perform the detection of negative activations for ReLU. The structure of a PE is presented in Fig. 3.
Section III contains further details on the experiments conducted to determine the amount of clock cycles and the subsequent energy savings achieved due to the early detection and the termination of the negative activations.
## III Experimental Results
To show the effectiveness of DSLOT-NN, both in terms of latency and of the early determination of negative activations, we consider a pre-trained CNN as shown in Fig. 6.
As an initial study, we opt to accelerate only the first three layers, i.e., convolution, ReLU and maxpooling, as presented in Fig. 7. With one input feature map, and generating one output pixel after \(2\times 2\) maxpooling, we employ the configuration of DSLOT-NN shown in Fig. 5. Four PEs, each equipped with \(25\) multipliers and a reduction tree, compute in parallel the sum-of-products of the convolution windows shown in different colors in Fig. 7. The rectified linear unit (ReLU) operation has been integrated as an inherent characteristic of the design, whereby each PE in the system detects the sign of its output. In the event of a negative sign detection, further computation is terminated following Algorithm 1. An experiment was conducted on the MNIST handwritten digit classification database [18].
### _Results and Analysis of the Proposed Early Negative Detection and Termination_
During inference with the proposed design, it is found that, on average, \(12.5\%\) of output pixels result in negative values for each MNIST test set image. Fig. 8 presents a graphical representation of the average percentage of negative activations in each MNIST class. This is calculated by counting the number of negative activations resulting from each convolution performed on the MNIST test set. The reason for only \(12.5\%\) negative activations, compared to the statistics reported in studies such as [11, 12, 13], is mainly that those works report statistics for popular DNNs such as VGG-16, AlexNet, ResNet50, etc., while the proposed work uses a relatively simple CNN design. Another reason is that, for the proposed implementation, the adopted CNN was trained and implemented without a bias term. In CNN architectures handling the MNIST database, a substantial number of activations are rendered negative because a large number of input pixels are zero, owing to the large black regions in the images; in most networks trained on MNIST, the bias terms are usually very small, and mostly negative, values. Therefore, the absence of the bias term in the proposed CNN implementation produces fewer negative values. This can also be addressed by exploiting the sparsity in the input feature maps, which can lead to significant computational savings in terms of the number of cycles required to calculate an entire convolution. For simplicity, a randomly selected batch of 1000 images (100 images per class) from the MNIST test set was used.
The average number of computation cycles saved per digit class can be seen in Fig. 9, where the x-axis represents the digit classes in the MNIST database and the y-axis represents the average percentage of computation cycles
Fig. 5: General DSLOT-NN architecture
Fig. 8: Average number of negative output activations (\(\%\)) (after CNN layer) per image in each MNIST digit class
Fig. 6: CNN for MNIST handwritten digit classification
Fig. 7: Simultaneous computation of first three layers of the CNN
saved during the convolution computation using the proposed design.
### _FPGA Implementation_
For comparison, we consider the bit-serial inner product units (SIP) from Stripes [9], presented in Fig. 10, for a similar configuration as the proposed design. The SIP unit design is extended to perform 8-bit multiplication and subsequently the SIP processing engines are designed for computing the (\(k\times k\)) convolution. This results in a similar configuration as the proposed design presented in Fig. 4.
A detailed description of the SIP design is presented in Fig. 11. The partial product generator (PPG) presented in Fig. 11(a) is the AND-gate array responsible for generating the partial products for the multiplication of a pixel of the kernel matrix with the corresponding input pixel fed in a bit-serial manner. Here \(w[0],w[1],\ldots w[n]\) represent the bits of an \(n\)-bit kernel pixel, while \(x[i]\) is the input bit at iteration \(i\). This input bit is ANDed with the \(n\) bits of the kernel pixel to generate the \(i^{th}\) partial product. For a fair comparison, \((k\times k)\) PPGs are used in the SIP design, whose outputs, the \((k\times k)\) partial products, are forwarded to a reduction tree which generates the sum of these partial products. This reduction tree is followed by an accumulator which accumulates the incoming sum of partial products (SOPP) by shifting and adding the previous sum with the incoming SOPP. This process is iterated \(n\) times, keeping the input and kernel precision the same (\(n\)).
The critical path of the SIP design can be represented by the following equation
\[t_{SIP}=t_{AND}+5\times t_{CPA-8}+t_{CPA-21} \tag{8}\]
Similarly, the critical path of the proposed design can be calculated as the sum of the critical path of online multiplier and the subsequent reduction tree.
\[t_{OLM}=t_{[2:1]MUX}+t_{[3:2]Adder}+t_{CPA-4}+t_{SELM}+t_{XOR} \tag{9}\]
The critical path of an online adder (OLA) is found to be
\[t_{OLA}=2\times t_{FA}+t_{FF} \tag{10}\]
Therefore, the critical path for the reduction tree can be calculated as the product of the number of stages and \(t_{OLA}\). So, the critical path of the proposed DSLOT-NN can be calculated as
\[t_{DSLOT}=t_{OLM}+5\times t_{OLA} \tag{11}\]
The input and weight are represented in 8-bit fixed point. However, for the proposed design, the 8-bit fixed-point representation is converted to a redundant representation. The effect of the input precision on the model accuracy is not considered in the scope of this work. SIP uses a simple implementation for multiplication where the weight bits are fed in parallel and ANDed with the input, which is fed serially. Both the SIP and the proposed design have been implemented on a Virtex-7 FPGA and the results of the implementation are presented in Table I. In terms of area, the proposed design has marginally higher consumption than SIP in terms of look-up tables (LUTs). The proposed design shows savings in power consumption. In particular, the design has \(9.1\%\) lower power and \(33.22\%\) lower energy consumption than SIP, respectively. The proposed design also has a critical path approximately \(48.6\%\) shorter than that of SIP. Besides the significant improvement in critical path delay, in this implementation the experiments were conducted on an FPGA, and the primary issues considered within the scope of this work were the challenge of early termination of negative activations and the resulting computational efficiency. The results of performance density in terms of \(GOPS/W\) showcase the effectiveness of the proposed method. Moreover, in future works, more experiments
Fig. 11: SIP design, (a) Partial product generator (PPG), (b) Overall SIP design
Fig. 10: A general bit-serial inner product unit (SIP) [9]
Fig. 9: Average number of computation cycles (\(\%\)) saved per class in MNIST hand-written digit classification database
on professional design tools will be conducted, where various design optimizations, including timing optimization, will be included to assess the robustness and flexibility of the proposed design. The effect of early termination is observed in the significant improvement in the performance of the proposed design: DSLOT-NN has approximately \(49.7\%\) higher \(OPS/W\) than SIP.
The number of LUTs used by the proposed design is slightly higher than for the SIP design. In particular, the proposed design uses \(56.86\%\) more LUTs than the SIP design. However, given the lower critical path delay and dynamic power, and the energy and computation savings owing to the early detection and termination of negative activations, the results show that the proposed design has superior overall performance compared to the SIP design.
Although the proposed design has been tested on a relatively simple and small benchmark, the general design presented in Fig. 5 shows the overall scheme of the implementation, which can handle arbitrary kernel sizes and numbers of input feature maps to construct convolution layers of various dimensions for any given network and database.
## IV Conclusion
In this paper we presented DSLOT-NN, which utilizes online (MSDF) arithmetic operators for the acceleration of convolution layers in deep neural networks. Online arithmetic offers various benefits including shorter latency, variable precision and digit-level pipelining. We implemented a mechanism to detect and terminate the ineffective convolutions, which resulted in power savings and increased performance. In particular, the proposed design has approximately \(50\%\) higher performance compared to the state-of-the-art approach for convolution computation. In future work, we plan to analyze the behavior of online arithmetic in DNN acceleration with variable input and kernel precision in inter-layer as well as intra-layer settings. Furthermore, the sparsity in the inputs and kernels will also be exploited to further improve the performance and energy efficiency of the proposed design.
|
2309.17114 | UXsim: An open source macroscopic and mesoscopic traffic simulator in
Python -- a technical overview | This note describes a technical overview of UXsim, an open source
macro/mesoscopic traffic simulator in pure Python programming language. UXsim
is based on Kinematic Wave model (more specifically, mesoscopic version of
Newell's simplified car-following model) and dynamic user optimum-like route
choice principle, which are well established methodology in the transportation
research field. It can compute dynamical network traffic flow and have basic
visualization and analysis capability. Furthermore, users can implement their
own models and control methods into the simulator by using Python, thanks to
the flexibility of the language. The simulator and its codes are freely
available at https://github.com/toruseo/UXsim under the MIT license. | Toru Seo | 2023-09-29T10:16:28Z | http://arxiv.org/abs/2309.17114v2 | # UXsim: An open source macroscopic and mesoscopic traffic simulator in Python--a technical overview
###### Abstract
This note describes a technical overview of UXsim, an open source macro/mesoscopic traffic simulator in pure Python programming language. UXsim is based on Kinematic Wave model (more specifically, mesoscopic version of Newell's simplified car-following model) and dynamic user optimum-like route choice principle, which are well established methodology in the transportation research field. It can compute dynamical network traffic flow and have basic visualization and analysis capability. Furthermore, users can implement their own models and control methods into the simulator by using Python, thanks to the flexibility of the language. The simulator and its codes are freely available at [https://github.com/torusco/UXsim](https://github.com/torusco/UXsim) under the MIT license.
## 1 Introduction
Vehicular traffic flow plays essential roles in today's civilization. However, it faces several critical issues such as congestion, accidents, and environmental burden. Macroscopic traffic simulation is important to understand and manage dynamic urban-scale vehicular traffic flow. Mesoscopic traffic simulation is a type of macroscopic traffic simulation, in which some microscopic nature of traffic is incorporated in order to enhance simulation capability while keeping computational efficiency high.
_UXsim_ is a new open source macroscopic and mesoscopic traffic simulator developed by the author. It is written in pure Python and its common libraries. Thus, it can be used and modified flexibly by users. Although Python's computational efficiency is not so high, UXsim can compute large-scale traffic phenomena fairly effectively thanks to the use of the macro/mesoscopic model.
This note describes a technical overview of UXsim, namely, its simulation logic with simple examples. The details are thoroughly explained in Seo (2023), a Japanese book on general macroscopic traffic flow theory and simulation. The main functions of UXsim are as follows:
* Dynamic network traffic simulation with a given network and time-dependent OD demand.
* Implementation of traffic management schemes (e.g., traffic signals, inflow control, route guidance, congestion pricing).
* Basic analysis of simulation results (e.g., trip completion rate, total travel time, delay), and their export to pandas.DataFrame and CSV files.
* Visualization of simulation results (e.g., time-space diagram, MFD, network traffic animation).
Just for information, the origin of the name "UXsim" is as follows. "U" stands for uroboros, a mythical snake that embodies some essence of network traffic congestion. "X" means the position coordinates, which is the most important state variable in the employed mesoscopic traffic flow model. "sim" signifies, of course, simulation. UXsim and its codes are freely available at [https://github.com/toruseo/UXsim](https://github.com/toruseo/UXsim) under the MIT license.
## 2 Simulation logic
### Models
#### 2.1.1 Overview
The simulator is based on Kinematic Wave (KW) model (Lighthill and Whitham, 1955; Richards, 1956) and dynamic user optimum-like route choice principle, which are well established and common methodology in the transportation research field. More specifically, it is constructed by combining the following models.
Link/Vehicle model determines traffic dynamics on links. The mesoscopic version of Newell's simplified car-following model (Newell, 2002), also known as the X-model (Laval and Leclercq, 2013), is employed for this purpose. This is equivalent to the KW model with a triangular fundamental diagram. In this model, vehicles travel as fast as possible while maintaining a safe, speed-dependent spacing and headway. Thus, it can reproduce traffic congestion and queuing phenomena fairly accurately.
Node model determines inter-link transfer of traffic. The mesoscopic version of the incremental node model (Flotterod and Rohde, 2011; Flotterod, 2016) is employed for this purpose. This model can reproduce merging and diverging traffic in a manner consistent with KW model.
Route choice model determines network-level decision of travelers. Dynamic user optimum (Kuwahara and Akamatsu, 2001), also known as the reactive assignment, with stochasticity and delay is employed for this purpose. In this model, travelers tend to choose the shortest path to the destination that minimizes instantaneous travel time. However, instantaneous travel time is highly volatile, so a kind of inertia is added to traveler's decision making process by incorporating stochasticity and delay.
#### 2.1.2 Details
For reference, important model formulas are presented in this section. Users do not necessarily need to understand these contents to use UXsim, but understanding them would be useful for advanced usage and customization of UXsim. For further details, please refer to Seo (2023) or the original articles mentioned in Section 2.1.1.
Regarding the Link/Vehicle model, the driving behavior of a platoon consisting of \(\Delta n\) vehicles on a link is expressed as
\[X(t+\Delta t,n)=\min\left\{\begin{array}{l}X(t,n)+u\Delta t,\\ X(t+\Delta t-\tau\Delta n,n-\Delta n)-\delta\Delta n\end{array}\right\}, \tag{1}\]
where \(X(t,n)\) denotes the position of platoon \(n\) at time \(t\), \(\Delta t\) denotes the simulation time step width, \(u\) denotes the free-flow speed of the link, \(\delta\) denotes the jam spacing of the link, and \(\tau\) denotes the reaction time of vehicles.
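A minimal Python sketch of this update rule is given below; the function and argument names are illustrative and are not UXsim's internal API.

```
def x_model_step(x_now, x_leader_delayed, u, dt, delta, dn):
    """One step of equation (1) for a platoon of dn vehicles: advance at the
    free-flow speed u unless the time-shifted position of the leading platoon,
    X(t + dt - tau*dn, n - dn), forces the congested branch, which keeps a jam
    spacing of delta per vehicle. Pass float('inf') when there is no leader."""
    return min(x_now + u * dt, x_leader_delayed - delta * dn)
```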
The Node model is computed by the following algorithm (a minimal Python sketch is given after the list):
1. Let \(i\) be time step number
2. Select incoming link \(l\) with probability \(\alpha_{l}/\sum_{l}\alpha_{l}\), where \(\alpha_{l}\) is the merging priority parameter of link \(l\). If all incoming links have been selected, go to step 6.
3. Select the vehicle that exists at the end of link \(l\). Hereafter, the vehicle is denoted as \(n\). If there is no such vehicle, go back to step 2.
4. Let \(o_{n}^{i}\) be the outgoing link that vehicle \(n\) wants to enter. Check whether link \(o_{n}^{i}\) has sufficient (larger than \(\delta\Delta n\)) vacant space at its starting position, and do the following: * If yes, transfer vehicle \(n\) from link \(l\) to link \(o_{n}^{i}\). * If no, vehicle \(n\) cannot move, so do nothing.
5. Go back to step 2.
6. Increment time step (\(i:=i+1\)) and go back to step 2.
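A rough Python sketch of steps 2-5 for a single time step is shown below; the callables standing in for the simulator's data structures (vehicle_at_end, has_vacancy, transfer) and the attribute desired_next_link are assumptions for illustration only.

```
import random

def node_transfer_step(incoming_links, merge_priority, vehicle_at_end, has_vacancy, transfer):
    """Weighted, without-replacement sweep over incoming links (step 2), moving
    at most one waiting vehicle per link into its desired outgoing link if that
    link has vacant space at its start (steps 3-4)."""
    remaining = list(incoming_links)
    while remaining:
        weights = [merge_priority[l] for l in remaining]
        link = random.choices(remaining, weights=weights, k=1)[0]
        remaining.remove(link)
        vehicle = vehicle_at_end(link)
        if vehicle is None:
            continue
        out_link = vehicle.desired_next_link
        if has_vacancy(out_link):
            transfer(vehicle, link, out_link)
        # otherwise the vehicle stays put for this time step
```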
The Route choice model is computed by the following steps. Let \(b_{o}^{z,i}\) be a dummy variable that is 1 if link \(o\) is a part of the shortest path from every nodes to destination \(z\) on time step \(i\), and 0 otherwise. Shortest path search is performed every \(\Delta i_{B}\) time steps. When shortest path search is performed, we update \(B_{o}^{z,i}\), an attractiveness of link \(o\) for vehicles with destination \(z\) on time step \(i\), as
\[B_{o}^{z,i}=(1-\lambda)B_{o}^{z,i-\Delta i_{B}}+\lambda b_{o}^{z,i}, \tag{2}\]
where \(\lambda\) is a given weight. We assume that the initial \(B_{o}^{z,0}\) is equal to \(b_{o}^{z,0}\). Finally, outgoing link \(o_{n}^{i}\) of vehicle \(n\) is determined as \(o\) with probability \(B_{o}^{z,i}/\sum_{o}B_{o}^{z,i}\), where \(z\) is the destination of \(n\).
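The core of this behaviour can be sketched as follows; the dictionary-based bookkeeping and the default weight value are assumptions for illustration, not UXsim's implementation.

```
import random

def update_attractiveness(B_prev, b_latest, lam=0.5):
    """Equation (2): blend the previous attractiveness B with the 0/1 indicator
    b from the latest shortest-path search, per link, for one destination."""
    return {link: (1 - lam) * B_prev[link] + lam * b_latest[link] for link in B_prev}

def choose_next_link(candidates, B):
    """Pick the outgoing link with probability B_o / sum(B_o), as in the text."""
    return random.choices(candidates, weights=[B[o] for o in candidates], k=1)[0]
```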
### Implementation
The implementation of the models is summarized in the following diagrams. Fig. 1 shows the static structure of UXsim as a class diagram (based roughly on the Unified Modeling Language). Each three-row rectangle represents a class. The top row denotes the class name, the second row denotes key variables, and the third row denotes key functions.
Similarly, Fig. 2(a) shows the dynamic computational flow of an entire simulation of UXsim as an activity diagram, and Fig. 2(b) shows that of an instance of the Vehicle class in UXsim. For the details, please see the code.
### Key inputs
Some of the key inputs for simulation are as follows:
* Reaction time of vehicles \(\tau\). This value determines the simulation time step width. This is a global parameter.
* Platoon size for mesoscopic simulation \(\Delta n\). This value determines the simulation time step width. This is a global parameter.
* Route choice model parameters: shortest path update interval \(\Delta i_{B}\) and weight value \(\lambda\).
* Lists of nodes and links. They define the network structure.
* Parameters of each link. For example, length, free flow speed \(u\), jam density \(\kappa\), and merging priority parameter \(\alpha\).
* Parameters of each node. For example, position and traffic signal setting.
* Demand. For example, origin, destination, and departure time of each vehicle are specified.
Roughly estimating, the computational cost of the simulation is proportional to the total number of vehicles, and inversely proportional to \(\tau\) and \(\Delta n^{2}\).
### Analysis
UXsim has several built-in analysis and visualization functions. Examples are as follows:
* Overall traffic analysis (e.g., total travel time)
* OD-level traffic analysis (e.g., travel time, delay)
* Link-level traffic analysis (e.g., traffic volume, delay, dynamic traffic states)
* Output simulation and analysis results to file or pandas dataframe
* Time-space diagram of trajectories of each link
* Time-space diagram of traffic states (i.e., flow, density, and speed) of each link
* Cumulative counts of each link
* Macroscopic fundamental diagram (MFD) (Geroliminis and Daganzo, 2007)
* Animation of network traffic state dynamics
* Animation of vehicle flow in network
Figure 1: Class diagram of UXsim (translated and modified from Seo (2023))
Figure 2: Activity diagrams of UXsim (translated and modified from Seo (2023))
## 3 Examples
### Gridlock and its prevention
Simulation of a simple gridlock congestion is shown as an example. The scenario setting is illustrated in Fig. 3. There is one circular road, and the demands are arranged in such a way that they interfere with each other. In this way, the beginning of some traffic jams and the end of others may engage, resulting in a gridlock condition.
Such a simulation can be executed by Code 1. The contents of the CSV files loaded at lines 16 and 17 are shown in Fig. 4. The code and CSV files are included in the repository as demos_and_examples/example_05_gridlock_and_prevention.py.
```
1 from uxsim import *
2 import pandas as pd
3 
4 if __name__ == "__main__":
5     # Define simulation
6     W = World(
7         name="",
8         deltan=5,
9         print_mode=1, save_mode=1, show_mode=0,
10         random_seed=0
11     )
12 
13     # Define scenario
14     # import CSV files
15     W.load_scenario_from_csv("dat/uroboros_nodes.csv",
16         "dat/uroboros_links.csv", "dat/uroboros_demand.csv")
17 
18     W.finalize_scenario()
```
Figure 3: Gridlock scenario (translated and modified from Seo (2023))
Some of the outputs of this simulation are shown in Fig. 5. According to the vehicle trajectories shown in Fig. 5(a), we can see that traffic was free-flowing until 1200 s, when only one direction of traffic demand was traveling. However, after the second demand started traveling, traffic congestion immediately occurred at the merging nodes W and E. Each queue generated by the congestion extended quickly and reached the head of the other queue. Then, gridlock happened. The MFD in Fig. 5(b) clearly shows a typical gridlock phenomenon. Other outputs such as gif animations are also useful to confirm the dynamics of gridlock phenomena.
Now let us consider how to prevent the gridlock by implementing proper traffic management. For example, because the merging nodes are the sources of the queues that triggered the gridlock, we can prevent it by increasing the merging priority of the links in the circular road. This can be done by inserting Code 2 at line 17 of Code 1: here, the merging priority parameters of links NE and SW are increased from the default value 0.5 to 2. In the real world, this kind of management can be executed
Figure 4: Contents of CSV files defining scenario
by doing ramp metering, signal control, and perimeter control.
**Code 2:** Gridlock prevention management
\begin{tabular}{c c} \hline \hline & W.get\_link("NE").merge\_priority = 2 \\
2 & W.get\_link("SW").merge\_priority = 2 \\ \hline \hline \end{tabular}
The results are shown in Fig. 6. It is obvious that gridlock is completely prevented. The circular road was almost always free-flowing, and the MFD state also stayed in the free-flowing or critical regime. Although some congestion occurred at the entry links (this can be confirmed by analyzing other outputs), such congestion quickly diminishes once the traffic in the circular road finishes traveling. Thus, this kind of traffic management is very effective in preventing traffic congestion and gridlock with little or no loss to anyone.
### Sioux Falls network
Sioux Falls network is a popular network for testing and benchmarking in the transportation research field (Transportation Networks for Research Core Team, 2021). Fig. 7 shows simulation results by UXsim. A qualitatively reasonable traffic pattern on the network was obtained by considering traffic dynamics and route choice. The specification of the simulation scenario is as follows:
* Simulation duration: 7200 s
Figure 5: Gridlock simulation
* Total number of vehicles: 34690 veh
* Number of links: 76
* Total road length: 314000 m
* Time step width \(\Delta t\): 5 s
* Platoon size \(\Delta n\): 5 veh
* Route choice update interval \(\Delta i_{B}\Delta t\): 600 s
The computation time was about 16 s using Windows 10 computer with 3.79 GHz CPU and 32 GB RAM.
The repository [https://github.com/torusco/UXsim](https://github.com/torusco/UXsim) also contains other detailed examples, such as a highway bottleneck, traffic signal control, and scenario generation by script, along with detailed documentation.
|
2301.13550 | Linear Jacobi-Legendre expansion of the charge density for machine
learning-accelerated electronic structure calculations | Kohn-Sham density functional theory (KS-DFT) is a powerful method to obtain
key materials' properties, but the iterative solution of the KS equations is a
numerically intensive task, which limits its application to complex systems. To
address this issue, machine learning (ML) models can be used as surrogates to
find the ground-state charge density and reduce the computational overheads. We
develop a grid-centred structural representation, based on Jacobi and Legendre
polynomials combined with a linear regression, to accurately learn the
converged DFT charge density. This integrates into a ML pipeline that can
return any density-dependent observable, including energy and forces, at the
quality of a converged DFT calculation, but at a fraction of the computational
cost. Fast scanning of energy landscapes and producing starting densities for
the DFT self-consistent cycle are among the applications of our scheme. | Bruno Focassio, Michelangelo Domina, Urvesh Patil, Adalberto Fazzio, Stefano Sanvito | 2023-01-31T10:59:26Z | http://arxiv.org/abs/2301.13550v2 | Linear Jacobi-Legendre expansion of the charge density for machine learning-accelerated electronic structure calculations
###### Abstract
As the go-to method to solve the electronic structure problem, Kohn-Sham density functional theory (KS-DFT) can be used to obtain the ground-state charge density, total energy, and several other key materials' properties. Unfortunately, the solution of the Kohn-Sham equations is found iteratively. This is a numerically intensive task, limiting the possible size and complexity of the systems to be treated. Machine-learning (ML) models for the charge density can then be used as surrogates to generate the converged charge density and reduce the computational cost of solving the electronic structure problem. We derive a powerful grid-centred structural representation based on the Jacobi and Legendre polynomials that, combined with a linear regression built on a data-efficient workflow, can accurately learn the charge density. Then, we design a machine-learning pipeline that can return energy and forces at the quality of a converged DFT calculation but at a fraction of the computational cost. This can be used as a tool for the fast scanning of the energy landscape and as a starting point to the DFT self-consistent cycle, in both cases maintaining a low computational cost.
## I Introduction
As the main workhorse in electronic structure calculations, density functional theory (DFT) [1; 2] is today the most widely used method to compute materials properties. Its success derives from the favourable trade-off between computational overheads and accuracy, even when using simple approximations for the exchange and correlation energy functional [2; 3; 4; 5]. The central quantity in DFT is the electron charge density that, in principle, gives access to the ground-state properties [1], and of particular interest, to the ground-state total energy. In practice, however, the DFT functional is never minimized directly by using the charge density [6], but rather by solving a self-consistent set of single-particle equations, known as the Kohn-Sham (KS) equations [2]. This procedure effectively imposes a computational bottleneck and although large-scale calculations can be performed [7; 8], the typical system routinely simulated by DFT rarely reaches a few hundred atoms.
Machine learning (ML) has recently emerged as a surrogate for solving DFT KS equations and possibly replacing them [9; 10; 11]. For instance, trained ML models can be used as predictors for properties such as the energy gaps [12; 13; 14], superconducting critical temperatures [15; 16; 17; 18; 19], thermodynamic stability [20], topological invariant [21; 22], just to name a few. These models learn a direct map between the structure/composition and the target property, thus avoiding one or many computationally expensive calculations. Using ML for such mapping comes at the cost of accuracy, transferability, physical insight and the need for a large volume of high-quality training data, usually obtained through these very same computationally expensive calculations or, more rarely, from experimental sources [23; 24].
For tasks such as structure prediction [25; 26; 27; 28], phase diagrams evaluation [29; 30; 31], molecular dynamics [32; 33; 34], and, more generally, materials discovery [35; 36; 37; 38] one requires fast access to accurate energy, forces and stress tensor of the system investigated. Machine learning interatomic potentials (ML-IAPs) are developed to this end, bridging the gap between _ab initio_ methods and empirical force fields. The several strategies proposed to date implement a diversity of structural representations and learning algorithms [39; 40; 41; 42; 43; 44] to design ML-IAPs attaining accuracies close to that of DFT at a small fraction of the computational cost [43]. The performance of these models is not only a product of the representation of the atomic structure and the ML algorithm, but also the volume, quality, and diversity of the data play a fundamental role [43; 45]. In general, the construction of ML-IAPs requires campaigns of DFT calculations, whose extension and quality depend on the problem at hand (e.g., the number of species present in a given compound) and the range of applicability of the potential (e.g., the temperature range).
A radically different use of ML consists of improving the theory at its core instead of targeting the DFT outputs. For instance, ML can be used to numerically design new energy density functionals, effectively producing exchange and correlation energies [46; 47; 48; 49; 50; 51], and kinetic energy densities [52; 53; 54; 55]. These strategies, in general, seek to find more accurate approximations to the DFT energy, going beyond the current approximations [56], or to eliminate the need of introducing the KS construct by replacing the self-consistent KS equations with a direct minimization of the functional [6].
Unfortunately, although promising, these approaches are still far from obtaining a "universal" functional, treating all systems on an equal footing [56]. Note also that the construction of novel ML functionals requires results obtained at the wave-function quantum-chemistry level, a highly computationally expensive task.
In the same spirit, an alternative way to include ML in the DFT workflow is to construct models to directly predict the converged target DFT quantities, namely the Hamiltonian [57], the wavefunctions [58; 59] and the electron density [60; 61; 62; 63; 64; 65]. The goal here is not that of improving the functional, but to reduce or completely eliminate the number of iterative steps needed to solve the KS equations. There are two main approaches used to predict the electronic charge density, \(n(\mathbf{r})\), through ML. One possibility is to expand \(n(\mathbf{r})\) over a local-orbital basis set and learn the expansion coefficients by ML. The completeness of such an expansion, the basis set details, and the size of the training data limit the accuracy of the ML model [60; 61] and may introduce errors intrinsic to the particular representation [62]. Also, the approach is not transferable, namely a different ML model must be constructed for any different basis set.
The second approach considers the real-space representation of \(n(\mathbf{r})\), which is written over a grid in Cartesian space. This is a more "natural" representation available in any DFT code. Its main advantage is that the value of the electron density at a grid point is rotationally invariant with respect to the external potential, namely with respect to the position of the surrounding nuclei. As such, one can construct ML models that predict \(n(\mathbf{r})\) one grid point at a time, using as descriptors the local atomic neighbourhood of any given grid point (within some chosen cutoff radius). The success of such a grid-based approach largely depends on the chosen representation for the local environment and the learning algorithm. Usually, a single DFT calculation results in tens of millions of grid points so that the generation of abundant training data appears like an easy computational task. However, in a single calculation, there is data redundancy and little diversity (a narrow distribution of external potentials is explored), so multiple configurations for the same systems are usually considered. Then, one typically constructs large neural networks with millions of weights to be learned [63; 64], resulting in generally heavy models with little transferability.
Here our main focus is to transform such a grid-based approach into a lightweight tool that can be universally applied to DFT calculations. This is achieved by drastically reducing the computational overheads while reaching extremely high accuracies. In particular, we introduce a novel grid-centred representation of the atomic structure based on the Jacobi and Legendre (JL) polynomials, which were previously proposed to construct efficient ML force fields [66]. The JL representation is used to build a linear regression for the charge density, where the many-body contributions of different orders are separated. This results in a very compact model with a few coefficients to be trained on a small subset of the total number of grid points available. For the sake of brevity, we call such a class of models Jacobi-Legendre charge density models (JLCDMs). The efficiency and accuracy of our scheme are demonstrated for a range of molecules and solids, including benzene, aluminium, molybdenum, and two-dimensional MoS\({}_{2}\). In particular, we show that the KS self-consistent cycle can be bypassed completely in calculating fully converged total energies and forces. Our method is implemented to work with the widely used Vienna _ab initio_ simulation package (VASP) [67; 68].
## Results and Discussion
Figure 1 provides a schematic view of the construction of a JLCDM. Given an atomic configuration, the space is subdivided into a Cartesian grid, and the atomic environment (the position of the atoms) of each grid point is described by an expansion of JL polynomials. A selected number of such expansions forms the training set of a linear regression model that predicts the charge density over the entire grid. Finally, this is used as the converged ground-state density to evaluate the energy and forces or as a starting point for self-consistent KS-DFT calculations.
### Linear expansion of the charge density
The charge density, \(n(\mathbf{r})\), at a grid point \(\mathbf{r}_{g}\) can be separated into many-body contributions as
\[n(\mathbf{r}_{g})=n^{(1)}(\mathbf{r}_{g})+n^{(2)}(\mathbf{r}_{g})+n^{(3)}( \mathbf{r}_{g})+\ldots+n^{(n)}(\mathbf{r}_{g}) \tag{1}\]
where \(n^{(m)}\) is the \(m\)th-body (\(m\)B) term of the expansion. Thus, \(n^{(1)}(\mathbf{r}_{g})\) encodes the atomic contributions to the charge density at \(\mathbf{r}_{g}\), \(n^{(2)}\) is the contribution from atom pairs, \(n^{(3)}\) is the contribution from atoms triplets, etc. Equation (1) can then be rewritten as
\[n(\mathbf{r}_{g})=\sum_{i}n_{i}^{(1)}(\mathbf{r}_{g})+\sum_{i\neq j}n_{ij}^{( 2)}(\mathbf{r}_{g})+\sum_{i\neq j,i\neq k,j\neq k}n_{ijk}^{(3)}(\mathbf{r}_{ g})+\ldots \tag{2}\]
where the sums over the \(i\),\(j\),\(k\ldots\) indexes run over the atoms neighbouring the grid point at \(\mathbf{r}_{g}\) up to the cutoff distance, \(r_{\text{cut}}\). The assumption that the electron density at one point is determined mostly by the external potential generated by the closest atoms follows from the wave mechanics' locality principle [69].
The atomic configurations required by each contribution in the expansion are expressed through a local representation that here we generally call "fingerprint". The fingerprints should be: (i) invariant by translations, (ii) invariant by global rotations of the atoms in the reference frame of the grid point, (iii) invariant to changes in the coordinate system, (iv) invariant to permutations
of the atomic indices. Furthermore, they should provide a continuous map of the atomic neighbourhood, i.e., small changes in the atomic structure must reflect small changes in the fingerprints. Finally, the fingerprints should be uniquely determined [70] and computationally cheap.
Following closely reference [66], we expand the one-body contribution, \(n_{i}^{(1)}\), using the distances between the grid point and the atomic neighbourhood as
\[n_{i}^{(1)}(\mathbf{r}_{g})=\sum_{n=1}^{n_{\text{max}}}a_{n}^{Z_{i}}\widetilde {P}_{n}^{(\alpha,\beta)}\left(\cos\left(\pi\frac{r_{ig}-r_{\text{min}}}{r_{ \text{cut}}-r_{\text{min}}}\right)\right) \tag{3}\]
\[\widetilde{P}_{n}^{(\alpha,\beta)}(x)=\begin{cases}P_{n}^{(\alpha,\beta)}(x)- P_{n}^{(\alpha,\beta)}(-1)&\text{for}\;-1\leq x\leq 1\\ 0&\text{for}\;x<-1\end{cases} \tag{4}\]
with \(P_{n}^{(\alpha,\beta)}\) being the Jacobi polynomial of order \(n\). Here, \(r_{ig}=|\mathbf{r}_{i}-\mathbf{r}_{g}|\) is the distance between the grid point \(g\) at \(\mathbf{r}_{g}\) and the \(i\)th atom \(i\) at \(\mathbf{r}_{i}\), \(r_{\text{cut}}\) is the radius cutoff, \(r_{\text{min}}\) is a distance shift parameter in the range \((-\infty,r_{\text{cut}})\). The degree of the expansion is set by \(n\) with the sum running in the interval \([1,n_{\text{max}}]\), while \(\alpha\) and \(\beta\) control the shape of the polynomial with \(\alpha,\beta>-1\). The expansion coefficients \(a_{n}^{Z_{i}}\) in Eq. (3) depend on the atomic species considered. As defined in Eq. (4), the "vanishing Jacobi polynomials" smoothly vanish at the cutoff radius without needing an additional _ad-hoc_ cutoff function.
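As a concrete illustration, the short Python sketch below evaluates the one-body features of Eqs. (3)-(4) for a single neighbour of a grid point. It is a minimal example of ours, not the authors' implementation: function names are chosen by us, \(P_{n}^{(\alpha,\beta)}\) is evaluated with `scipy.special.eval_jacobi`, and the hyperparameter values in the example call are simply taken from the Al one-body row of Table 3.

```python
import numpy as np
from scipy.special import eval_jacobi


def vanishing_jacobi(n, alpha, beta, x):
    # "Vanishing" Jacobi polynomial of Eq. (4): P_n^{(a,b)}(x) - P_n^{(a,b)}(-1),
    # so that it goes to zero at x = -1, i.e. at the cutoff radius.
    return eval_jacobi(n, alpha, beta, x) - eval_jacobi(n, alpha, beta, -1.0)


def one_body_features(r_ig, r_cut, r_min, n_max, alpha, beta):
    # Features multiplying the coefficients a_n^{Z_i} of Eq. (3) for one neighbour
    # at distance r_ig from the grid point; neighbours beyond r_cut contribute zero.
    if r_ig >= r_cut:
        return np.zeros(n_max)
    x = np.cos(np.pi * (r_ig - r_min) / (r_cut - r_min))
    return np.array([vanishing_jacobi(n, alpha, beta, x) for n in range(1, n_max + 1)])


# Example with the Al one-body hyperparameters of Table 3:
features = one_body_features(r_ig=2.1, r_cut=4.08, r_min=-0.74,
                             n_max=15, alpha=7.87, beta=3.62)
```

Summing such vectors over all neighbours of a given species gives the one-body block of the design-matrix row associated with the grid point.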
The terms forming the two-body contribution, \(n_{ij}^{(2)}\), can be uniquely written as a function of two distances, \(r_{ig}\) and \(r_{jg}\), and the cosine of the subtended angle at \(g\), \(\hat{\mathbf{r}}_{ig}\cdot\hat{\mathbf{r}}_{jg}\). We then expand the distances over the vanishing Jacobi polynomials and the angle over Legendre polynomials. The expansion can then be written as,
\[n_{ij}^{(2)}(\mathbf{r}_{g})=\sum_{n_{1},n_{2}=1}^{n_{\text{max}}}\sum_{l=0}^{l_{\text{max}}}a_{n_{1}n_{2}l}^{Z_{i}Z_{j}}\widetilde{P}_{n_{1}ig}^{(\alpha,\beta)}\widetilde{P}_{n_{2}jg}^{(\alpha,\beta)}P_{l}^{ijg}, \tag{5}\]
where we have used the shorthand notations
\[\widetilde{P}_{nig}^{(\alpha,\beta)}=\widetilde{P}_{n}^{(\alpha,\beta)}\left( \cos\left(\pi\frac{r_{ig}-r_{\text{min}}}{r_{\text{cut}}-r_{\text{min}}} \right)\right),\]
and \(P_{l}^{ijg}=P_{l}(\hat{\mathbf{r}}_{ig}\cdot\hat{\mathbf{r}}_{jg})\), \(P_{l}\) is the Legendre polynomial, \(\hat{\mathbf{r}}_{pg}=(\mathbf{r}_{p}-\mathbf{r}_{g})/r_{pg}\), and \(l\) defines the Legendre expansion degree with the sum running in the interval \([0,l_{\text{max}}]\). As in the one-body case, the expansion coefficients \(a_{n_{1}n_{2}l}^{Z_{i}Z_{j}}\) depend on the pair of atomic species considered. The Jacobi indices \(n_{1}\) and \(n_{2}\), and the atom indices \(i\) and \(j\) are symmetric under the simultaneous swap, therefore if \(Z_{i}=Z_{j}\) only terms \(n_{1}\geq n_{2}\) should be considered.
Notice that, in the \(m\)-body expansion for \(m>1\), angular information enters via a pairwise dot product of unit vectors joining the atoms to the grid point. The unit vectors are ill-defined when the distance of the grid point from the atom approaches zero and creates a discontinuity in the fingerprints. Assuming that the atomic contribution (1B term) to the charge density dominates at very small distances from the nucleus, we can introduce a double-vanishing Jacobi polynomial in place of the simple vanishing one for all the \(m\)-body expansions with \(m>1\) as given in Eqs. (7) and (8). The double-vanishing Jacobi polynomials are defined as
\[\overline{P}_{n}^{(\alpha,\beta)}(x)=\widetilde{P}_{n}^{(\alpha,\beta)}(x)- \frac{\widetilde{P}_{n}^{(\alpha,\beta)}(1)}{\widetilde{P}_{1}^{(\alpha,\beta )}(1)}\widetilde{P}_{1}^{(\alpha,\beta)}(x)\text{ for }n\geq 2 \tag{6}\]
with \(x=\cos\left(\pi\frac{r_{ig}-r_{\text{min}}}{r_{\text{cut}}-r_{\text{min}}}\right)\). Equation (5) now reads
Figure 1: Illustration of the workflow used to construct a JLCDM predicting the converged DFT ground-state charge density and the associated observables. (Step 1) The procedure starts with an atomic distribution and the mapping of the space over a Cartesian grid. (Step 2) Each grid point is associated with a local atomic environment described by the Jacobi-Legendre expansion. Such expansion is used to construct a linear model (Step 3) that, once trained, accurately predicts the charge density of the grid point. After computing the charge density over the entire grid, this is used to perform DFT calculations (Step 4). For instance, the total energy and the atomic forces can be easily obtained by using a few steps of frozen-density KS-DFT instead of the full self-consistent cycle.
\[n_{ij}^{(2)}(\mathbf{r}_{g})=\sum_{n_{1},n_{2}=2}^{n_{\max}}\sum_{l=0}^{l_{\max}}a_{n_{1}n_{2}l}^{Z_{i}Z_{j}}\overline{P}_{n_{1}ig}^{(\alpha,\beta)}\overline{P}_{n_{2}jg}^{(\alpha,\beta)}P_{l}^{ijg} \tag{7}\]
with \(n_{1},n_{2}\geq 2\). Generally, an \(m\)-body cluster centred on the grid point \(g\) can be uniquely defined by \(m\) distances and the \(m(m-1)/2\) angles subtended at \(g\). Using the recipe from Eqs. (3) and (7), the \(m\)-body expansion can then be written by associating a Jacobi polynomial to each distance and a Legendre polynomial to each angle. For instance, the three-body contribution \(n_{ijk}^{(3)}\) is of the form
\[\begin{split} n_{ijk}^{(3)}(\mathbf{r}_{g})=\sum_{n_{1},n_{2},n_ {3}=2}^{n_{\max}}\sum_{l_{1}l_{2}l_{3}}^{l_{\max}}a_{n_{1}n_{2}n_{3}l_{1}l_{2} l_{3}}^{Z_{i}Z_{j}Z_{k}}\times\\ \times\overline{P}_{n_{1}ig}^{(\alpha,\beta)}\overline{P}_{n_{2} jg}^{(\alpha,\beta)}\overline{P}_{n_{3}kg}^{(\alpha,\beta)}P_{l_{1}}^{ijg}P_{l_{2}}^{ ikg}P_{l_{3}}^{jkg}\end{split} \tag{8}\]
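The two-body term can be evaluated along the same lines. The sketch below, again our own illustration rather than the authors' code, builds the feature tensor of Eq. (7) for one pair of neighbours, combining double-vanishing Jacobi polynomials for the two distances (Eq. (6)) with a Legendre polynomial for the angle subtended at the grid point; `scipy.special.eval_legendre` is assumed for \(P_{l}\), and all names are ours.

```python
import numpy as np
from scipy.special import eval_jacobi, eval_legendre


def double_vanishing_jacobi(n, alpha, beta, x):
    # Eq. (6): subtract a multiple of the n = 1 vanishing polynomial so the result
    # is zero both at the cutoff (x = -1) and on top of an atom (x = 1).
    def vanishing(m, y):
        return eval_jacobi(m, alpha, beta, y) - eval_jacobi(m, alpha, beta, -1.0)
    return vanishing(n, x) - vanishing(n, 1.0) / vanishing(1, 1.0) * vanishing(1, x)


def two_body_features(r_i, r_j, r_g, r_cut, r_min, n_max, l_max, alpha, beta):
    # Feature tensor whose entries multiply the coefficients a_{n1 n2 l}^{Z_i Z_j}
    # of Eq. (7) for one pair of neighbours (i, j) of the grid point at r_g.
    d_i = np.linalg.norm(r_i - r_g)
    d_j = np.linalg.norm(r_j - r_g)
    if d_i >= r_cut or d_j >= r_cut:
        return np.zeros((n_max - 1, n_max - 1, l_max + 1))
    cos_ij = np.dot(r_i - r_g, r_j - r_g) / (d_i * d_j)
    x_i = np.cos(np.pi * (d_i - r_min) / (r_cut - r_min))
    x_j = np.cos(np.pi * (d_j - r_min) / (r_cut - r_min))
    jac_i = np.array([double_vanishing_jacobi(n, alpha, beta, x_i) for n in range(2, n_max + 1)])
    jac_j = np.array([double_vanishing_jacobi(n, alpha, beta, x_j) for n in range(2, n_max + 1)])
    leg = np.array([eval_legendre(l, cos_ij) for l in range(l_max + 1)])
    return np.einsum('a,b,c->abc', jac_i, jac_j, leg)


# Example call with the Al two-body hyperparameters of Table 3:
feats = two_body_features(np.array([1.2, 0.0, 0.0]), np.array([0.0, 1.5, 0.0]), np.zeros(3),
                          r_cut=4.08, r_min=0.0, n_max=6, l_max=6, alpha=5.87, beta=1.75)
```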
Using this charge density expansion at each grid point, we can generate a linear representation of the charge density in the expansion coefficients. Therefore, we can learn the ground state charge density by using linear regression, as
\[n^{\text{DFT}}(\mathbf{r}_{g})\simeq\sum_{i}\mathbf{a}^{Z_{i}}\mathbf{P}_{ig} +\sum_{ij}\mathbf{a}^{Z_{i}Z_{j}}\mathbf{P}_{ijg}+\ldots \tag{9}\]
In the next section, we will demonstrate the prediction power of our JLCDM for a benzene molecule, for periodic solids such as aluminium (Al) and molybdenum (Mo), and a two-dimensional material MoS\({}_{2}\). We will also demonstrate the extrapolation power of JLCDM for previously unknown phases of Al and MoS\({}_{2}\). Finally, we will show that the charge density predicted by our model can be fed back into popular DFT codes to accurately calculate the total energy and forces at a fraction of the typical numerical cost.
### Grid-point sampling strategy
We start our analysis by discussing the construction of an appropriate training set for our JLCDM, which is truncated at the 2-body order since this is already enough for extremely accurate predictions. Previously published works [63, 64] have trained large neural networks over the entire grid-point mesh, typically containing a few million density values. Here we show that this is not necessary since there is significant redundancy in the information, and often the inclusion of the entire density in the training set has just the effect of producing an unbalanced ensemble. This is easy to see in the case of molecules, where most grid points are situated far away from the molecule and, by sitting in a vacuum, possess similar vanishing small charge density. For this reason, we implement a sampling strategy that allows us to use only a small fraction of the grid points but includes more diverse atomic arrangements.
In practice, our simple sampling scheme consists in assigning to a point \(\mathbf{r}\) in space a probability of selection based on the value of the charge density, \(n(\mathbf{r})\), at that point. The probability of selection is given by a normal distribution of the inverse of the charge density, namely \(\exp\bigl{[}-(1/n(\mathbf{r}))^{2}/2\sigma^{2}\bigr{]}\). This choice gives more importance to grid points presenting large electron densities, while low-density regions will contribute little to the training set. The parameter \(\sigma\) controls how sharp or broad this probability distribution is, a tool that helps us to select grid points closer or farther away from the charge density maxima. Such a targeted sampling technique is accompanied by uniform sampling across the unit cell, which guarantees that enough diversity is maintained in the training set. As a result, we can construct an accurate model trained with just about 0.1% of the available training points (see the Methods section for more details). Note that our sampling strategy is not limited to linear charge density expansions. The same can be used as an efficient way to train even neural network models, resulting in much smaller models attaining the same or higher accuracy.
### Accuracy of the models
We now discuss the accuracy that can be reached by the JLCDMs for both molecules and solids. Figure 2(a) displays the parity plot of the charge density at the grid points for the 30 atomic configurations contained in the test set of the benzene molecule. These have been obtained from Ref. [62] by molecular dynamics at 300 K and performing DFT calculations on each sampled geometry. For benzene, our 1B+2B JLCDM contains 1,572 coefficients trained over 6,000 density-grid points, out of the 5,832,000 available per atomic configuration over the 30 configurations used for training and another 30 for testing. The test-set mean absolute error (MAE) achieved is 0.000260 \(\text{e\AA}^{-3}\). Such error corresponds to 0.011% of the maximum density, meaning that the charge density obtained by the JLCDM is very close to that of a well-converged DFT calculation. Note that the MAE on the total electron count is 0.025. The model and sampling hyperparameters are reported in Table 3 and Table 4, respectively, in the Methods section.
In panel (b) of Fig. 2, we present a planar isosurface of the difference between the charge density obtained with JLCDM and the converged DFT charge density, while panel (c) shows a line scan in the same plane of the two charge densities and their difference. As expected, the absolute error is larger in the region closer to the nuclei, where the charge density is maximised. However, no emerging pattern indicates that the JLCDM is biased against any particular local atomic configuration. Importantly, the error, as the density, vanishes for positions far from the molecule. Our constructed JLCDM performs
better than previously published models [63], despite being trained on a tiny fraction of the data and containing only 1,572 trainable parameters.
Next, we move to metallic solids, aluminium and molybdenum. Aluminum is a benchmark system chosen for comparison with previously published models [61, 63, 64]. Its electronic structure features a very delocalized charge density, so as a second example, we also consider an early transition metal, Mo, which presents a higher degree of charge localization. In constructing the JLCDM, we use the same density sampling procedure as used for the benzene molecule. See the Methods section for details.
In the case of Al, we train and test over 10 configurations obtained from _ab initio_ molecular dynamics (AIMD). We find that a 1B+2B JLCDM with only 120 trainable coefficients gives us a MAE of 0.000534 \(\text{e\AA}^{-3}\), on par with previously published deep neural networks [63, 64]. Most importantly, our model extrapolates better, as we will show in the next section. Figure 2(d) shows the parity plot for the Al test set, demonstrating the accuracy achieved. Similarly to the case of benzene, the difference between the ML predictions and the DFT charge density does not present any clear error pattern, see Fig. 2(e), except for the expected increase close to the nuclei. In general, the charge density error for Al is found to be 10 times smaller than that found for benzene, as one can see from the line plot of Fig. 2(f).
Similar results are also obtained for Mo, where a JLCDM with 812 trainable parameters returns a MAE and a RMSE of 0.001974 \(\text{e\AA}^{-3}\) and 0.002820 \(\text{e\AA}^{-3}\), respectively, see the Supplemental Materials (SM) for details. In contrast to benzene and aluminium, the charge density error appears to have a radial distribution centred around each atom with a minimum error in the interstitial region. The maximum absolute error over the test set in this case is only \(\sim\)0.06 \(\text{e\AA}^{-3}\), and it is found over a small set of grid points.
Finally, we focus on two-dimensional MoS\({}_{2}\), which helps us to demonstrate the capability of our JLCDM to extrapolate to previously unseen phases. Two-dimensional MoS\({}_{2}\) can be found in multiple polymorphs, both semiconducting and metallic. Also, for MoS\({}_{2}\), we use the same charge-density sampling procedure adopted
Figure 2: Analysis of the performance of the JLCDM. Panel (a) displays the parity plot for the benzene test set together with MAE, RMSE, and \(R^{2}\) metrics. Panel (b) displays the isosurface of the difference plot between the fully converged DFT ground-state density and that predicted by the model for a selected benzene test configuration. Here we show the plane containing the molecule. (c) DFT and JLCDM-predicted charge density for benzene computed along the line indicated in the inset. The plot also reports their difference, with values provided on the right-hand side scale (red). Panel (d) displays the parity plot for the aluminium test set. Panel (e) displays the isosurface of the difference between the fully converged DFT ground-state density and that predicted by the model for a selected aluminium test configuration. The slice shows the basal plane of the supercell (\(z=0.0\) Å). (f) DFT and ML charge density for aluminium computed along the line indicated in the inset. The plot also reports their difference, with values provided on the right-hand side scale (red). The planes chosen in panels (c) and (f) are the same as those in (b) and (e), respectively.
for benzene, Al and Mo. See the Methods section. However, this time we train and test the model on different phases; namely, the training set is constructed using atomic configurations of the 1H and 1T phases while we test our prediction on the 1T\({}^{\prime}\) phase. The 1H phase is formed by sandwiched hexagonal layers of S-Mo-S in a Bernal stacking, while the 1T phase presents a rhombohedral arrangement [71]. As the free-standing 1T phase is unstable, a spontaneous lattice distortion in the \(x\) direction creates the 1T\({}^{\prime}\) one [72, 71], which is depicted in the inset of Figure 3(a). The three polymorphs present completely different electronic structures. The 1H phase is semiconducting with a 1.58 eV theoretical energy gap, while the 1T phase is metallic [73]. In contrast, the 1T\({}^{\prime}\) polymorph has a topological gap (0.08 eV) induced by spin-orbit coupling [74], while it remains a semi-metal in the absence of spin-orbit coupling interaction.
In order to train our JLCDM, we use 10 AIMD (at 300 K) configurations each for the 1H and 1T phases, while the test set is made of ten 1T\({}^{\prime}\) AIMD (at 300 K) snapshots. Figure 3(a) shows the parity plot for all three polymorphs, namely for the training and test set. By visual inspection, one can notice that the error slightly increases for the 1T\({}^{\prime}\) phase, but the JLCDM still performs extremely well, displaying a MAE and a RMSE of 0.002725 \(\text{e\AA}^{-3}\) and 0.008080 \(\text{e\AA}^{-3}\), respectively. Also, the JLCDM remains compact, with 2,346 trainable parameters in this case. The charge density difference isosurface plot, see Fig. 3(b), tells us that the error tends to be larger in the region around the Mo ions pointing towards the S atoms. This feature is somewhat expected since the bonding structure of the three phases is different, trigonal prismatic for 1H, octahedral for 1T, and a distorted lattice for 1T\({}^{\prime}\). The line density plot of Fig. 3(c) further shows that the JLCDM slightly overestimates the charge density surrounding the Mo atoms. However, it is worth noting that the error is small, \(<2\%\), so the JLCDM-predicted charge density for the unseen 1T\({}^{\prime}\) phase is still of high quality, namely, the JLCDM can be used to explore new phases.
### JLCDM performance on the DFT total energy and forces
In the previous section, we have shown that the charge density predicted by our JLCDM is close to the DFT converged one. Now we show that the energy and forces corresponding to such charge density are close to the corresponding converged values, with the average error matching those of state-of-the-art machine-learning force fields.
This is demonstrated by constructing the KS Hamiltonian corresponding to the JLCDM-predicted charge density. The band energy contribution to the total energy, \(E_{\text{band}}=\sum_{i}f(\epsilon_{i})\epsilon_{i}\), is obtained by summing up the occupied KS eigenvalues, \(\epsilon_{i}\) [\(f(\epsilon_{i})\) is the occupation number], which are computed by diagonalizing the KS Hamiltonian. The remaining contributions to the total energy are obtained directly from the JLCDM electron density. Such a scheme is implemented in the VASP code, where an iterative matrix-diagonalization procedure requires performing five non-self-consistent iterations to compute the KS eigenvalues and eigenvectors, i.e., the charge density is not updated during these iterations. As given by such a procedure, the total energy yielded by the JLCDM-predicted charge density may be lower than the KS-DFT ground state energy.
The MAE and RMSE metrics of the calculated energy and forces are given in Table 1, while Fig. 4 shows the error distributions as box and violin plots. Aluminium presents the narrowest total-energy error spread, with values
Figure 3: Analysis of the performance of the JLCDM for MoS\({}_{2}\). Panel (a) shows the parity plot between JLCDM-predicted and DFT charge density for the three MoS\({}_{2}\) polymorphs, 1H, 1T, and 1T\({}^{\prime}\). In this case, the 1H and 1T phases are used for training, while the model is tested on the 1T\({}^{\prime}\). All the error metrics shown (\(R^{2}\), MAE, and RMSE) correspond to the test set. The inset depicts a snapshot of 1T\({}^{\prime}\)-MoS\({}_{2}\). Panel (b) displays the difference between JLCDM and DFT for 1T\({}^{\prime}\)-MoS\({}_{2}\) over the plane of the monolayer (\(z=c/2\) Å). (c) shows the charge density profile for JLCDM and DFT along the path highlighted with a dashed line in panel (b). The difference between densities can be read on the right-hand side scale (red) of panel (c).
ranging from \(-0.11\) meV/atom to \(-0.02\) meV/atom and with a mean error at \(-0.05\) meV/atom. This is then followed by Mo, with a total-energy error spread between \(0.12\) meV/atom and \(0.33\) meV/atom with a mean error at \(0.20\) meV/atom, and then benzene, with a total-energy error between \(1.24\) meV/atom and \(4.67\) meV/atom with a mean error at \(4.02\) meV/atom. Finally, the unseen \(1\)T\({}^{\prime}\) phase of MoS\({}_{2}\) returns an error range of \(-15.60\) meV/atom to \(-4.34\) meV/atom and a mean error of \(-8.06\) meV/atom. These errors are all very competitive with those achieved by linear ML force fields constructed with a comparable range of parameters [75].
Next, we investigate the ability of our JLCDM to perform on systems never seen before. Our test is constructed for Al, for which we were able to build the best model, and consists in computing the total energy and forces of a series of 256-atom supercells taken from Ref. [64]. This dataset contains 10 configurations corresponding to solid Al at 298 K and 10 configurations of both solid and liquid Al at its melting temperature of 933 K. The JLCDM used here is the same one discussed before, which produced the results in Figure 2(d)-(f), trained over 32-atom supercells for solid Al at 300 K. Table 2 summarizes our results. The error on the total energy and forces slightly increases when considering systems in the same conditions but different cell sizes, namely comparing the 32-atom and the 256-atom supercells for solid Al at 300 K and 298 K, respectively. In any case, the MAE remains below 1 meV/atom for the total energy and below 0.025 eV/Å for the forces. As the structures tested become increasingly different from those used for training (data at 933 K) the error grows further, reaching 35.062 meV/atom and 0.164 eV/Å in the liquid phase.
In order to put our results in perspective, neural network models (\(\sim\)10\({}^{6}\)-10\({}^{7}\) trainable weights) using the bispectrum components to describe the local environments reach a MAE of 123.29 meV/atom over the liquid phase, when trained on high-temperature solid structures only [64]. This means that, on the same test, our JLCDM outperforms neural networks by a factor of four, despite consisting of only 120 trainable parameters and being trained on only 0.1% of the charge density points. The neural network error is then reduced to 13.04 meV/atom only when the training is performed on both high-temperature solids and liquids [64]. Certainly, we could systematically improve
| # atoms | Condition | Total energy MAE (meV/atom) | Total energy RMSE (meV/atom) | Forces MAE (eV/Å) | Forces RMSE (eV/Å) |
| --- | --- | --- | --- | --- | --- |
| 32 | solid (300 K) | 0.046 | 0.054 | 0.007 | 0.009 |
| 256 | solid (298 K) | 0.843 | 0.908 | 0.025 | 0.031 |
| 256 | solid (933 K) | 6.976 | 7.526 | 0.068 | 0.862 |
| 256 | liquid (933 K) | 35.062 | 36.498 | 0.164 | 0.203 |

Table 2: Performance of the JLCDM for Al, trained over 32-atom supercells at room temperature, against 256-atom supercells at various conditions. The configurations for the 256-atom supercells are taken from Refs. [64, 76], and the test error is computed over 10 samples for each condition.
Figure 4: Box and violin plots for the error on the total energy (a) and the forces (b) computed from the JLCDM-predicted charge density. The fully converged DFT values provide the ground truth. The insets show a magnified version of the results for Al and Mo, whose distributions are very narrow on the global scale. The associated absolute mean values are reported in Table 1. The boxes extend from the first to the third quartile, with the line in the middle marking the median. The whiskers extend to 1.5 times the box length.
| System | Total energy MAE (meV/atom) | Total energy RMSE (meV/atom) | Forces MAE (eV/Å) | Forces RMSE (eV/Å) |
| --- | --- | --- | --- | --- |
| Benzene | 4.021 | 4.065 | 0.031 | 0.046 |
| Al | 0.046 | 0.054 | 0.007 | 0.009 |
| Mo | 0.203 | 0.212 | 0.019 | 0.024 |
| 1T\({}^{\prime}\)-MoS\({}_{2}\) | 8.058 | 8.845 | 0.078 | 0.104 |

Table 1: JLCDM performance metrics on the task of predicting total energy and forces. These are obtained through non-self-consistent DFT using the JLCDM-predicted charge density. The force error is computed over all atoms.
the JLCDM by adding more distorted supercells to our training set or by including both solid and liquid configurations at 933 K. However, here, we wish to point out that the smooth description of the local environment allows us to achieve very competitive accuracy (35 meV/atom for liquid Al at 933 K) even for such a compact model.
## IV Conclusion
Inspired by the recently developed Jacobi-Legendre potentials [66], we have designed a grid-based many-body linear expansion of the charge density, where the local external potential is described by Jacobi and Legendre polynomials. The method, combined with a charge-density targeted sampling strategy, produces highly accurate charge densities despite being constructed over an extremely limited number of trainable coefficients. We have demonstrated the efficacy of the JLCDM for diverse examples, namely a benzene molecule, solid and liquid Al, solid Mo and different phases of 2D MoS\({}_{2}\). In all cases, simple two-body JLCDMs accurately predict the charge density and can be transferred to different phases not originally included in the training set. For instance, training over the 1H and 1T phases of 2D MoS\({}_{2}\) is enough to predict the charge density of the 1T\({}^{\prime}\) phase, and so is the case for liquid Al, whose density can be constructed from a model trained over solid-state configurations at room temperature. The JLCDM-predicted densities can then be used to compute total energy and forces, achieving accuracy comparable to state-of-the-art machine learning force fields and, in some cases, even to fully converged DFT calculations.
As it stands, the methodology introduced here could readily be used in a diverse set of applications where a fast screening of the energy landscape at the DFT level is desirable. Applications such as crystal structure prediction, phase diagram construction, reaction path search, and other computer-intensive tasks could be greatly accelerated by using JLCDM-predicted charge densities as the starting point of DFT calculations. In addition, the predicted charge density can be easily employed as the starting density for the fast evaluation of materials' properties, such as the band structure, charge transfer, electrical polarization, and topology, and even the starting density for computationally expensive hybrid functional calculations.
## V Methods
### DFT calculations and dataset generation
All single-point and _ab initio_ molecular dynamics (AIMD) calculations are performed using density functional theory (DFT) [1; 2] as implemented in the Vienna _ab initio_ simulation package (VASP) [67; 68]. Exchange and correlation interactions are treated by the generalized gradient approximation (GGA) [3] with the Perdew-Burke-Ernzerhof (PBE) [4] exchange and correlation functional. We use the projector augmented wave (PAW) [77] pseudopotentials. Single-point self-consistent calculations are performed with a 600 eV kinetic-energy cutoff for the plane-wave expansion, and the Brillouin zone is sampled over a \(k\)-point density of 12 /Å\({}^{-1}\). AIMD runs are performed with a 2 fs time-step, and the Nosé-Hoover thermostat [78; 79; 80] maintains the \(NVT\) ensemble. All AIMD runs are at least 4 ps long, and snapshots are taken from the simulation's last 3 ps. For benzene and 2D MoS\({}_{2}\), sufficient vacuum space, at least 15 Å, is included in the non-periodic directions so as to avoid spurious interactions between periodic images.
### Benzene data generation
Data for benzene are extracted from the dataset available at [http://quantum-machine.org/datasets/](http://quantum-machine.org/datasets/) [62]. For the training set, we randomly select 30 snapshots from a MD run at 300 K and 400 K, available in the "benzene_300K-400K.tar.gz" file, and for the test set, 30 snapshots are randomly sampled from MD at 300 K, available in "benzene_300K-test.tar.gz". The charge density for the selected snapshots is then calculated using VASP with the settings described above. Using 600 eV as the kinetic-energy cutoff for the plane-wave expansion, this results in the charge density being represented over a \(180\times 180\times 180\) grid (5,832,000 grid points).
### Al, Mo, and 2D MoS\({}_{2}\) data generation
For Al, Mo and MoS\({}_{2}\), we randomly extract snapshots from AIMD runs at 300 K. For Al, we use a \(2\times 2\times 2\) conventional _fcc_ supercell containing 32 atoms, while a \(3\times 3\times 3\) conventional _bcc_ supercell containing 54 atoms describes Mo. A \(3\times 3\times 1\) supercell is used for the 1H and 1T phases of MoS\({}_{2}\) (27 atoms), while for the \(1T^{\prime}\), we consider a \(4\times 2\times 1\) supercell (48 atoms). For Al and Mo, we extract 10 snapshots for training and 10 for testing. For MoS\({}_{2}\), we extract 10 snapshots for each phase, with the 1H and 1T structures used for training and 1T\({}^{\prime}\) for testing. The charge densities are represented over a \(140\times 140\times 140\) grid (2,744,000 grid points) for Al, a \(160\times 160\times 160\) grid (4,096,000 grid points) for Mo, a \(160\times 160\times 300\) grid (7,680,000 grid points) for MoS\({}_{2}\) 1H and 1T, and a \(216\times 192\times 300\) grid (12,441,600 grid points) for 1T\({}^{\prime}\)-MoS\({}_{2}\).
In order to investigate the transferability of the JLCDM for Al, we use the snapshots reported in Ref. [64], as available in [76]. These are 256-atom Al supercells whose charge density has been recalculated with VASP. The energy cutoff for these is lowered to 360 eV so as to match the same real-space grid used in Ref. [64],
\(200\times 200\times 200\) (8,000,000 grid points), and only the \(\Gamma\)-point is used to sample the BZ.
### DFT calculations with fixed charge density
In order to use the ML charge density to compute total energies and forces, we use KS-DFT while keeping the charge density fixed and using the same settings as specified for the data generation. The ML charge density is kept constant at each step of an iterative diagonalization of the Kohn-Sham Hamiltonian. In particular, the Kohn-Sham eigenstates and eigenvalues are optimized during five steps with no updates to the charge density.
While using PAW pseudopotentials, one is required to provide the augmentation on-site occupancies at the start of a calculation. For Al, we ignore one-centre correction terms evaluated on the radial support grid, a strategy that allows us to use the charge-density predictions for unknown structures or arbitrary sizes. For the other systems, we reuse the DFT-computed one-centre occupation terms together with our ML charge density to start the new calculations for configurations in the test set. In the future, the augmentation occupancies can also be learned with a scheme similar to the one designed here. This will allow the use of the ML charge density as a starting point for DFT calculations of any structure.
### Model training and hyperparameter optimization
We fit the linear models by using singular value decomposition to find the pseudo-inverse of \(A\) solving the matrix equation, \(A\hat{x}=\hat{b}\), for the coefficients \(\hat{x}\). Training and inference are performed using the Ridge class (with \(\alpha=0\)) from the scikit-learn library [81].
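As an illustration of this step, the snippet below solves the same least-squares problem in the two equivalent ways mentioned in the text; the feature matrix `A` and targets `b` are random stand-ins included only so the snippet runs, and the choice of not fitting an intercept is our assumption.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
A = rng.normal(size=(6000, 120))   # JL feature matrix: one row per sampled grid point
b = rng.normal(size=6000)          # converged DFT density at those points (placeholder)

# Least-squares solution of A x = b through the SVD-based pseudo-inverse:
coeffs_svd = np.linalg.pinv(A) @ b

# The same fit through scikit-learn's Ridge with alpha = 0 (ordinary least squares):
model = Ridge(alpha=0.0, fit_intercept=False)
model.fit(A, b)
coeffs_ridge = model.coef_         # agrees with coeffs_svd up to numerical noise
```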
Hyperparameter optimization is performed through Bayesian optimization using Gaussian Processes (gp_minimize), as implemented in the scikit-optimize library [82]. This is done solely on part of the training set. For the Al and Mo JLCDMs, 8 training snapshots are used for training and the remaining 2 for validation. For benzene, 27 are used for training, and 3 for validation. On MoS\({}_{2}\), we take one training snapshot for each phase (1H and 1T) as the validation set, and the remaining training snapshots are used for training. The optimization targets the minimization of the mean absolute error (MAE). Table 3 shows the hyperparameters for each model.
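A schematic version of this search is sketched below. The search-space bounds and the dummy objective are placeholders of ours; in practice the objective would rebuild the JL features with the proposed hyperparameters, refit the linear model on the training snapshots, and return the validation MAE.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

space = [Integer(4, 30, name="n_max"),
         Integer(0, 12, name="l_max"),
         Real(-1.5, 0.5, name="r_min"),
         Real(-0.9, 8.0, name="alpha"),
         Real(-0.9, 8.0, name="beta")]

def objective(params):
    n_max, l_max, r_min, alpha, beta = params
    # Placeholder objective: stands in for rebuilding the JL features, refitting the
    # linear model on the training snapshots, and returning the validation MAE.
    return 1e-4 * ((n_max - 15) ** 2 + (l_max - 6) ** 2)

result = gp_minimize(objective, space, n_calls=50, random_state=0)
best_hyperparameters = result.x
```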
### Grid sampling
The grid points included in the training set are selected by randomly sampling the real-space charge density according to a combination of uniform sampling and targeted sampling on the grid. Targeted sampling is performed by assigning to each grid point, \(\mathbf{r}_{g}\), a probability \(P\), given by a normal distribution of the inverse of the charge density at that grid point:
\[P(\mathbf{r}_{g})=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(1/n(\mathbf{r}_ {g}))^{2}}{2\sigma^{2}}\right) \tag{10}\]
Targeted sampling is combined with uniform sampling across the simulation cell, composing the training data for each snapshot. The number of grid points sampled through targeted and uniform sampling is manually tuned to better sample the features of each example's charge density. The hyperparameter \(\sigma\) is also tuned manually for each example. Table 4 shows the parameters used for sampling and the percentage of the available grid points used to train the models. As shown in the Results section, our model requires a very modest data set size compared to other grid-based approaches present in the literature while attaining accurate predictions.
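A possible implementation of the combined sampling is sketched below; it is our own reading of Eq. (10) (the Gaussian prefactor cancels once the weights are normalised), with function and argument names chosen by us.

```python
import numpy as np

def sample_grid_points(density, sigma, n_targeted, n_uniform, seed=0):
    # `density`: flattened DFT charge density on the real-space grid.
    rng = np.random.default_rng(seed)
    safe = np.maximum(density, 1e-12)            # avoid dividing by exactly zero
    # Targeted sampling, Eq. (10): a normal distribution of the inverse density,
    # which favours grid points with a large charge density.
    weights = np.exp(-(1.0 / safe) ** 2 / (2.0 * sigma ** 2))
    weights /= weights.sum()
    targeted = rng.choice(density.size, size=n_targeted, replace=False, p=weights)
    # Uniform sampling across the cell keeps low-density regions represented.
    uniform = rng.choice(density.size, size=n_uniform, replace=False)
    return np.union1d(targeted, uniform)
```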
## Data availability
The data used to train and test the models (DFT charge density, structure files, and trained models) is available at <zenodo link>. Scripts for calculating the Jacobi-Legendre grid-based linear expansion are available at <github link>.
| System | \(\sigma\) | Uniform sampling (%) | Data used (%) |
| --- | --- | --- | --- |
| Benzene | 90 | 50 | 0.10 |
| Al | 40 | 60 | 0.50 |
| Mo | 30 | 40 | 0.12 |
| 1T\({}^{\prime}\)-MoS\({}_{2}\) | 40 | 60 | 0.05 |

Table 4: Sampling hyperparameters and percentage of used data out of the total data available. The percentage of uniform sampling is taken out of the percentage of used data.
| System | Body | \(r_{\text{cut}}\) | \(n_{\text{max}}\) | \(l_{\text{max}}\) | \(r_{\text{min}}\) | \(\alpha\) | \(\beta\) | # features |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Benzene | 1b | 2.80 | 27 | - | \(-0.78\) | 7.00 | 0.00 | 1572 |
| Benzene | 2b | 2.80 | 12 | 5 | 0.00 | 7.00 | 0.00 | 1572 |
| Al | 1b | 4.08 | 15 | - | \(-0.74\) | 7.87 | 3.62 | 120 |
| Al | 2b | 4.08 | 6 | 6 | 0.00 | 5.87 | 1.75 | 120 |
| Mo | 1b | 4.04 | 20 | - | \(-1.09\) | 4.02 | 5.46 | 1812 |
| Mo | 2b | 4.04 | 12 | 11 | 0.00 | \(-0.08\) | 2.38 | 1812 |
| 2D MoS\({}_{2}\) | 1b | 4.76 | 18 | - | \(-0.93\) | 6.72 | 6.97 | 2346 |
| 2D MoS\({}_{2}\) | 2b | 4.76 | 11 | 10 | 0.00 | 5.07 | 2.69 | 2346 |

Table 3: Optimized hyperparameters and corresponding feature size for each model generated.
## Author Contributions
S.S. conceived the idea of a machine learning model for the charge density as starting guess for DFT calculations. M.D. developed the Jacobi-Legendre representation of the grid points and the many-body linear expansion of the charge density. B.F. and U.P. implemented the grid-based JL representation and linear model. B.F. performed all DFT calculations, ML training, and testing and implemented related code for results analysis. S.S. and A.F. supervised the work. All authors contributed to discussions, writing, and revision of the manuscript.
###### Acknowledgements.
This work was supported by Sao Paulo Research Foundation (FAPESP) (Grants no. 2021/12204-6, 2019/04527-0, and 2017/02317-2), and by the Irish Research Council Advanced Laureate Award (IRCLA/2019/127). We acknowledge the DJEI/DES/SFI/HEA Irish Centre for High-End Computing (ICHEC) and Trinity Centre for High Performance Computing (TCHPC) for the provision of computational resources. We acknowledge support from ICHEC via the academic flagship program (Project Number - EuroCC-AF-3). We acknowledge NVIDIA Academic Hardware Grant Program for providing graphics processing units.
|
2309.07307 | Using Pi-Calculus Names as Locks | Locks are a classic data structure for concurrent programming. We introduce a
type system to ensure that names of the asynchronous pi-calculus are used as
locks. Our calculus also features a construct to deallocate a lock once we know
that it will never be acquired again. Typability guarantees two properties:
deadlock-freedom, that is, no acquire operation on a lock waits forever; and
leak-freedom, that is, all locks are eventually deallocated.
We leverage the simplicity of our typing discipline to study the induced
typed behavioural equivalence. After defining barbed equivalence, we introduce
a sound labelled bisimulation, which makes it possible to establish equivalence
between programs that manipulate and deallocate locks. | Daniel Hirschkoff, Enguerrand Prebet | 2023-09-13T20:51:52Z | http://arxiv.org/abs/2309.07307v1 | # Using \(\pi\)-Calculus Names as Locks
###### Abstract
Locks are a classic data structure for concurrent programming. We introduce a type system to ensure that names of the asynchronous \(\pi\)-calculus are used as locks. Our calculus also features a construct to deallocate a lock once we know that it will never be acquired again. Typability guarantees two properties: deadlock-freedom, that is, no acquire operation on a lock waits forever; and leak-freedom, that is, all locks are eventually deallocated.
We leverage the simplicity of our typing discipline to study the induced typed behavioural equivalence. After defining barbed equivalence, we introduce a sound labelled bisimulation, which makes it possible to establish equivalence between programs that manipulate and deallocate locks.
## 1 Introduction
The \(\pi\)-calculus is an expressive process calculus based on the notion of name, in which name-passing is the primitive notion of interaction between processes. Processes of the \(\pi\)-calculus have been used to represent several aspects of programming, like data structures, protocols, or constructs such as functions, continuations, objects, and references. The \(\pi\)-calculus also comes with a well-developed theory of behavioural equivalence. This theory can be exploited to reason about contextual equivalence in programming languages, by translating programs as \(\pi\)-calculus processes.
In this work, we follow this path for locks, a basic data structure for concurrent programming. We study how \(\pi\)-calculus names can be used to represent locks. We show that the corresponding programming discipline in the \(\pi\)-calculus induces a notion of behavioural equivalence between processes, which can be used to reason about processes manipulating locks. This approach has been followed to analyse several disciplines for the usage of \(\pi\)-calculus names: linearity [16], receptiveness [26], locality [17], internal mobility [25], functions [24, 6], references [8, 22].
It is natural to represent locks in \(A\pi\), the asynchronous version of the \(\pi\)-calculus [2, 10]. A lock is referred to using a \(\pi\)-calculus name. It is represented as an asynchronous output: the release of the lock. Dually, an input represents the acquire operation on some lock.
In this paper, we introduce \(\pi\ell\mathrm{w}\), a version of the asynchronous \(\pi\)-calculus with only lock names. Two properties should be ensured for names to be used as locks: first, a lock can appear at most once in released form. Second, acquiring a lock entails the obligation to release it. For instance, process \(\ell_{1}(x).\,(\overline{\ell_{1}}\langle x\rangle\mid\overline{\ell_{2}} \langle x\rangle)\) has these properties: the process acquires lock \(\ell_{1}\), then releases it, together with lock \(\ell_{2}\). We remark that this process owns lock \(\ell_{2}\), which is released after \(\ell_{1}\) is acquired. We show that a simple type system can be defined to guarantee the two properties mentioned above.
When manipulating locks, it is essential to prevent the program from getting stuck in a state where a lock needs to be acquired but cannot be released. Consider the following process:
\[P_{\mathsf{dl}}\quad\stackrel{{\mathrm{def}}}{{=}}\quad\ell_{1} (x).\,(\overline{\ell_{1}}\langle x\rangle\mid\overline{\ell_{2}}\langle x \rangle)\quad|\quad\ell_{2}(y).\,(\overline{\ell_{1}}\langle y\rangle\mid \overline{\ell_{2}}\langle y\rangle).\]
The subprocess on the left needs to acquire lock \(\ell_{1}\), which is owned by the other subprocess, and symmetrically: this is a deadlock. Our type system rules out processes that exhibit this kind of cyclic dependency between locks. This is achieved by controlling parallel composition: two processes in parallel can share at most one lock name. Process \(P_{\mathsf{dl}}\) thus cannot be typed, because names \(\ell_{1}\) and \(\ell_{2}\) are shared between the two subprocesses. The acyclicity property enjoyed by typable processes yields deadlock-freedom.
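To connect this with a conventional programming setting, the sketch below is an informal Python threading analogue of \(P_{\mathsf{dl}}\); it is only our illustration and not part of the calculus. The main thread pre-acquires both locks to model that each subprocess already holds the pending release of one of them; each worker then blocks trying to acquire the other lock, so no release can ever happen (the timeouts are only there so the example terminates).

```python
import threading

l1, l2 = threading.Lock(), threading.Lock()
l1.acquire()   # the right-hand subprocess owns the release of l1
l2.acquire()   # the left-hand subprocess owns the release of l2

def left():
    # mirrors l1(x).(l1<x> | l2<x>): acquire l1, then release l1 and l2
    if l1.acquire(timeout=1):
        l1.release(); l2.release()

def right():
    # mirrors l2(y).(l1<y> | l2<y>): acquire l2, then release l1 and l2
    if l2.acquire(timeout=1):
        l1.release(); l2.release()

ta, tb = threading.Thread(target=left), threading.Thread(target=right)
ta.start(); tb.start(); ta.join(); tb.join()
print(l1.locked(), l2.locked())   # True True: the circular wait of P_dl
```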
To avoid situations where a lock is in released state and cannot be accessed, \(\pi\ell\)w also features a construct to deallocate a lock, called _wait_, inspired from [13]. Process \(\ell((x)).P\) waits until no acquire is pending on lock \(\ell\), at which point it deallocates \(\ell\), reading the final value stored in \(\ell\) as \(x\). The reduction rule for wait is
\[(\nu\ell)\big{(}\overline{\ell}\langle v\rangle\;\mid\;\ell((x)).P\big{)}\;\rightarrow\;P\{v/x\} \tag{1}\]
one for \(\pi\ell\) not only because it takes wait into account, but also because it makes it possible to transmit the obligation of releasing or deallocating a lock via another lock. For instance, it is possible, depending on the type of \(\ell\), that in process \(\ell(\ell^{\prime}).P\), the continuation \(P\) has the obligation not only to release lock \(\ell\), but also to deallocate \(\ell^{\prime}\), or release \(\ell^{\prime}\), or both.
To define typed barbed equivalence in \(\pi\ell\), written \(\simeq\), we must take into account deadlock-freedom, which has several consequences. First, we observe complete processes: intuitively, computations in \(\pi\ell\) make sense only for such processes, and a context interacting with a process should not be able to block a computation by never performing some release operation. Second, all barbs are always observable in \(\pi\ell\). In other words, if \(\ell\) is a free name of a complete typable process \(P\), then \(P\) can never lose the ability to release \(\ell\). This is in contrast with barbed equivalence in the \(\pi\)-calculus, or in CCS, where the absence of a barb can be used to observe behaviours. We therefore adopt a stronger notion of barb, where the value stored in a lock, and not only the name of the lock, can be observed.
The ideas behind \(\simeq\) are used to define \(\simeq_{w}\), typed barbed equivalence in \(\pi\ell\)w. A challenge when defining typed bisimilarity in \(\pi\ell\)w is to come up with labelled transitions corresponding to the reduction in (1). Intuitively, if \(P\xrightarrow{\ell(v)}P^{\prime}\) (\(P\) deallocates \(\ell\) and continues as \(P^{\prime}\)), we must make sure that this transition is the last interaction at \(\ell\). We define a typed LTS to handle name deallocation, and show that bisimilarity is sound for barbed equivalence in \(\pi\ell\)w.
Paper outline.We study \(\pi\ell\) in Section 2. We first expose the essential ideas of our deadlock-freedom proof in CCS\(\ell\), a simple version of the Calculus of Communicating Systems [19] with lock names. After extending these results to \(\pi\ell\), we define barbed equivalence for \(\pi\ell\), written \(\simeq\). We provide a labelled semantics that is sound for \(\simeq\), and present several examples of behavioural equivalences in \(\pi\ell\). In Section 3, we add the wait construct, yielding \(\pi\ell\)w. We show how to derive leak-freedom, and define a labelled semantics, building on the ideas of Section 2. We discuss related and future work in Section 4.
## 2 \(\pi\ell\), a Deadlock-Free Asynchronous \(\pi\)-Calculus
We present deadlock-freedom in the simple setting of CCS\(\ell\) in Section 2.1. This approach is extended to handle higher-order locks in \(\pi\ell\) (Section 2.2). We study behavioural equivalence in \(\pi\ell\) in Section 2.3.
### CCS\(\ell\): Ensuring Deadlock-Freedom using Composition
CCS\(\ell\) is a simplification of \(\pi\ell\), to present the ideas underlying the type system and the proof of deadlock-freedom. CCS\(\ell\) is defined as an asynchronous version of CCS with acquire and release operations. We postulate the existence of an infinite set of _lock names_, written \(\ell,\ell^{\prime},\ell_{1},\ldots\), which we often simply call names. CCS\(\ell\) processes are defined by the following grammar:
\[P\::=\ \ell.P\ \big{|}\ \overline{\ell}\ \big{|}\ (\nu\ell)P\ \big{|}\ P_{1}\ \big{|}\ P_{2}.\]
\(\overline{\ell}\) is the release of lock \(\ell\). Process \(\ell.P\) acquires \(\ell\) and then acts as \(P\)--we say that \(P\) performs an _acquire on \(\ell\)_. There is no **0** process in CCS\(\ell\), intuitively because we do not take into consideration processes with no lock at all. Restriction is a binder, and we write \(\text{fln}(P)\) for the set of free lock names in \(P\). If \(\mathbb{S}=\{\ell_{1},\ldots,\ell_{k}\}\) is a set of lock names, we write \((\nu\mathbb{S})P\) for \((\nu\ell_{1})\ldots(\nu\ell_{k})P\).
The definition of structural congruence, written \(\equiv\), and reduction, written \(\rightarrow\), are standard. They are given in Appendix A.1. Relation \(\Rightarrow\) is the transitive reflexive closure of \(\rightarrow\).
Type System.To define the type system for CCS\(\ell\), we introduce typing environments. We use \(\gamma\) to range over sets of lock names. We write \(\gamma_{1}\#\gamma_{2}\) whenever \(\gamma_{1}\cap\gamma_{2}=\emptyset\). We write \(\gamma,\ell\) for the set \(\gamma\uplus\{\ell\}\): the notation implicitly imposes \(\ell\notin\gamma\).
_Typing environments_, written \(\Gamma\), are sets of such sets, with the additional constraint that these should be pairwise disjoint. We write \(\Gamma=\gamma_{1},\ldots,\gamma_{k}\), for \(k\geq 1\), to mean that \(\Gamma\) is equal to \(\{\gamma_{1},\ldots,\gamma_{k}\}\), with \(\gamma_{i}\#\gamma_{j}\) whenever \(i\neq j\). The \(\gamma_{i}\)s are called the _components_ of \(\Gamma\) in this case, and \(\operatorname{dom}(\Gamma)\), the domain of \(\Gamma\), is defined as \(\gamma_{1}\cup\cdots\cup\gamma_{k}\). We write \(\Gamma_{1}\#\Gamma_{2}\) whenever \(\operatorname{dom}(\Gamma_{1})\cap\operatorname{dom}(\Gamma_{2})=\emptyset\).
As for components \(\gamma\), the notation \(\Gamma,\gamma\) stands for a set (of sets) that can be written as \(\Gamma\uplus\{\gamma\}\). Using these two notations together, we can write \(\Gamma,\gamma,\ell\) to refer to a typing environment containing a component that contains \(\ell\). We sometimes add parentheses, writing e.g. \(\Gamma,(\gamma,\ell,\ell^{\prime})\), to ease readability.
The typing judgement is of the form \(\Gamma;\mathbb{R}\vdash P\), where \(\mathbb{R}\) is a set of lock names. If \(\Gamma;\mathbb{R}\vdash P\), then \(\mathbb{R}\) is the set of locks owned by \(P\), that must be released. Moreover any component \(\gamma\) of \(\Gamma\) intuitively corresponds to a subprocess of \(P\) that only accesses the names in \(\gamma\). Here, accessing a lock name \(\ell\) means either releasing \(\ell\) or performing an acquire on \(\ell\), or both. The typing rules are as follows:
[The display of the typing rules was garbled in the source and could not be reconstructed; the only rule name legible is \(\mathsf{Acq\text{-}C}\), the rule for acquire prefixes discussed below, presumably accompanied by rules for release, restriction and parallel composition.]
In rule \(\operatorname{\mathsf{Acq-C}}\), operator flatten has the effect of merging all components in a typing environment into a single component. In particular, if \(\Gamma=\{\gamma_{1},\ldots,\gamma_{k}\}\), then flatten\((\Gamma)\) stands for \(\gamma_{1}\uplus\cdots\uplus\gamma_{k}\). Intuitively, the causal dependency introduced by the prefix \(\ell.P\) makes every lock accessed by \(P\) causally dependent on \(\ell\), so that all these locks must end up in a single component together with \(\ell\).
\(\{\gamma_{12}\};\{\ell_{2}\}\vdash P_{3}\). Crucially, components \(\{\ell_{1}\}\) and \(\{\ell_{2}\}\) are not merged in the second derivation for the composition to be possible. Using similar ideas, we can define a typable process made of three parallel components \(P_{1},P_{2},P_{3}\) sharing a single lock, say \(\ell\), as long as each of the \(P_{i}\) uses its own locks besides \(\ell\)._
_We can derive \(\{\gamma_{12}\};\emptyset\vdash P_{4}\) with \(P_{4}\stackrel{{\text{def}}}{{=}}\ell_{1}.\ell_{2}.\,(\overline{ \ell_{2}}\mid\overline{\ell_{1}})\). We observe that \(P_{4}\mid P_{4}\) cannot be typed, although \(P_{4}\mid P_{4}\) is 'no more deadlocked' than \(P_{4}\) alone._
The typing rules enforce \(\mathbb{R}\subseteq\text{dom}(\Gamma)\) when deriving \(\Gamma;\mathbb{R}\vdash P\). We say that \(\ell\) is _available_ in process \(P\) if \(P\) contains a release of \(\ell\) which is not under an acquire on \(\ell\) in \(P\). Intuitively, when \(\Gamma;\mathbb{R}\vdash P\) is derivable, \(P\) is a well-typed process in which all lock names in \(\mathbb{R}\) are available in \(P\). The type system thus guarantees a linearity property on the release of names in \(\mathbb{R}\). However, lock names are not _linear names_ in the sense of [16], since there can be arbitrarily many acquire operations on a given lock. When all free lock names are available in \(P\), i.e. \(\Gamma;\operatorname{fin}(P)\vdash P\), we say that \(P\) is _complete_.
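For instance, in \(\ell.\,\overline{\ell}\mid\overline{\ell}\) the rightmost release of \(\ell\) is available, since it does not occur under an acquire on \(\ell\); whereas in \(\ell.\,\overline{\ell}\) alone no release of \(\ell\) is available, because the only release occurs under an acquire on \(\ell\), so this process cannot be complete.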
**Lemma 2**.: _The type system enjoys invariance under \(\equiv\) and subject reduction: \((i)\) If \(\Gamma;\mathbb{R}\vdash P\) and \(P\equiv P^{\prime}\), then \(\Gamma;\mathbb{R}\vdash P^{\prime}\). \((ii)\) If \(\Gamma;\mathbb{R}\vdash P\) and \(P\to P^{\prime}\), then \(\Gamma;\mathbb{R}\vdash P^{\prime}\) and \(\operatorname{fin}(P^{\prime})=\operatorname{fin}(P)\)._
Deadlock-Freedom. Intuitively, a deadlock in CCS\(\ell\) arises from an acquire operation that cannot be performed. We say that a _terminated process_ is a parallel composition of release operations, possibly under some restrictions. A process that contains at least one acquire and cannot reduce is a _stuck process_. So in particular \(\ell.\,\overline{\ell}\) is stuck; the context may provide a release of \(\ell\), triggering the acquire on \(\ell\). On the other hand, if \(P\) is a stuck process and complete, then \(P\) is _deadlocked_: intuitively, the context cannot interact with \(P\) in order to trigger an acquire operation of \(P\). Process \(P_{\text{dl}}\) from Section 1 is an example of a deadlock. We show that a complete process can only reduce to a terminated process, avoiding deadlocks.
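A further illustration (a classic crossed-acquisition scenario, introduced here only as an example) is the process

\[P_{\times}\ \stackrel{\text{def}}{=}\ \ell_{1}.\,\ell_{2}.\,(\overline{\ell_{1}}\mid\overline{\ell_{2}})\ \mid\ \ell_{2}.\,\ell_{1}.\,(\overline{\ell_{1}}\mid\overline{\ell_{2}})\ \mid\ \overline{\ell_{1}}\ \mid\ \overline{\ell_{2}}.\]

If the left component acquires \(\ell_{1}\) and the right component acquires \(\ell_{2}\), the resulting process is stuck but not terminated: each component waits for the lock held by the other. Accordingly, the type system rejects \(P_{\times}\): its two prefixed components are lock-connected (cf. the remark after Lemma 3 below) and they share the two names \(\ell_{1}\) and \(\ell_{2}\), so by Lemma 3 their parallel composition cannot be typed, and neither can \(P_{\times}\).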
The proof of deadlock-freedom for CCS\(\ell\) provides the structure of the proofs for deadlock-freedom in \(\pi\ell\) and leak-freedom in \(\pi\ell\mathrm{w}\). It relies on a progress property: a complete typable process either is terminated or can reduce. We first present some lemmas related to the absence of cyclic structures in CCS\(\ell\).
**Lemma 3** (Lock-connected processes).: _We say that \(P\) is lock-connected if \(\Gamma;\mathbb{R}\vdash P\) implies \(\Gamma=\Gamma^{\prime},\gamma\) for some \(\Gamma^{\prime},\gamma\), with \(\operatorname{fin}(P)\subseteq\gamma\). In this situation, we also have \(\{\gamma\};\mathbb{R}\vdash P\). If \(P\) and \(Q\) are lock-connected and \(\operatorname{fin}(P)\cap\operatorname{fin}(Q)\) contains at least two distinct names, then \(P\mid Q\) cannot be typed._
The property in Lemma 3 does not hold if \(P\) and \(Q\) are not lock-connected: take for instance \(P=Q=\ell_{1}.\,\overline{\ell_{1}}\mid\ell_{2}.\,\overline{\ell_{2}}\), then we can derive \(\{\{\ell_{1}\},\{\ell_{2}\}\};\emptyset\vdash P\mid Q\). By the typing rule Acq-C, any process of the form \(\ell.\,P\) is lock-connected. A typical example of a lock-connected process is \(\ell_{1}.\,(\overline{\ell_{1}}\mid\overline{\ell_{2}})\mid\ell_{2}.\,( \overline{\ell_{2}}\mid\overline{\ell_{3}})\): here \(\gamma=\{\ell_{1},\ell_{2},\ell_{3}\}\). Processes similar to this one are used in the following lemma.
**Lemma 4** (No cycle).: _We write \(P\xleftrightarrow{\ell}Q\) when \(\ell\in\operatorname{fin}(P)\cap\operatorname{fin}(Q)\). Suppose there are \(k>1\) pairwise distinct names \(\ell_{1},\ldots,\ell_{k}\), and processes \(P_{1},\ldots,P_{k}\) such that \(\ell_{i}.P_{i}\xleftrightarrow{\ell_{i}}\ell_{i+1}.P_{i+1}\) for every \(i\), indices being taken modulo \(k\). Then the composition \(\ell_{1}.P_{1}\mid\cdots\mid\ell_{k}.P_{k}\) cannot be typed._
We construct a graph having one vertex for each of the \(Q_{j}\)s. We draw an edge between \(Q_{j}\) and \(Q_{j^{\prime}}\) when \(\ell_{j}\) is available in \(Q_{j^{\prime}}\). By the reasoning we just made, each vertex is related to at least one other vertex. So the graph necessarily contains a cycle. We can apply Lemma 4 to derive a contradiction.
We make two remarks about the construction of the graph. First, two \(Q_{j}\)s may start with an acquire at the same name. The corresponding vertices will have edges leading to the same \(Q_{j^{\prime}}\), and the construction still works. Second, if there is only one \(Q_{j}\), then the available release of \(\ell_{j}\) can synchronise with \(Q_{j}\).
By Lemma 5, we have that any typable process is not deadlocked. Thus, by subject reduction, we can prove deadlock-freedom.
**Proposition 6** (Deadlock-freedom).: _If \(\Gamma;\mathbb{R}\vdash P\) and \(P\Rightarrow P^{\prime}\), then \(P^{\prime}\) is not deadlocked._
**Remark 7**.: _As \(\mathrm{CCS}\ell\) is finite, deadlock-freedom ensures that no acquire operation waits forever in a complete typable process, and every complete process reduces to a terminated process: if \(\Gamma;\mathsf{fin}(P)\vdash P\), then \(P\Rightarrow(\nu\widetilde{\ell})\prod_{i}\overline{\ell_{i}}\) where the \(\ell_{i}\)s are pairwise distinct._
### \(\pi\ell\): Deadlock-Freedom for Higher-Order Locks
Syntax and Operational Semantics of \(\pi\ell\). \(\pi\ell\) extends \(\mathrm{CCS}\ell\) with the possibility to store _values_, which can be either booleans or locks, in locks. In this sense, \(\pi\ell\) features higher-order locks. Processes in \(\pi\ell\) are defined as follows:
\[P\quad::=\quad\ell(\ell^{\prime}).\,P\ \ \big|\ \ \overline{\ell}\langle v\rangle\ \ \big|\ \ (\nu\ell)P\ \ \big|\ \ P_{1}\mid P_{2}\ \ \big|\ \ \mathbf{0}\ \ \big|\ \ [v=v^{\prime}]P_{1},P_{2}.\]
\(v,v^{\prime}\) denote _values_, defined by \(v\ ::=\ \ell\ \big|\ \mathsf{b}\), where \(\mathsf{b}\ ::=\ \mathsf{tt}\ \big|\ \mathsf{ff}\) is a boolean value. In addition to \(\ell,\ell^{\prime}\ldots\), we also sometimes use \(x,y\ldots\) to range over lock names, to suggest a specific usage, as in, e.g., \(\ell(x).\,P\).
Process \(\overline{\ell}\langle\ell^{\prime}\rangle\) is a _release of \(\ell\)_, and \(\ell(\ell^{\prime}).\,P\) is an _acquire on \(\ell\)_; we say in both cases that \(\ell\) is the _subject_ (or that \(\ell\) occurs in subject position) and \(\ell^{\prime}\) is the _object_. Restriction and the acquire prefix act as binders, giving rise to the notion of bound and free names. As in \(\mathrm{CCS}\ell\), we write \(\mathsf{fin}(P)\) for the set of free lock names of \(P\). \(P\{v/\ell\}\) is the process obtained by replacing every free occurrence of \(\ell\) with \(v\) in \(P\). We say that an occurrence of a process \(Q\) in \(P\) is _guarded_ if the occurrence is under an acquire prefix, otherwise it is said _at top-level_ in \(P\). Additional operators w.r.t. \(\mathrm{CCS}\ell\) are the inactive process, \(\mathbf{0}\), and value comparison: \([v=v^{\prime}]P_{1},P_{2}\) behaves like \(P_{1}\) if values \(v\) and \(v^{\prime}\) are equal, and like \(P_{2}\) otherwise.
Structural congruence in \(\pi\ell\) is defined by adding the following axioms to \(\equiv\) in \(\mathrm{CCS}\ell\):
\[P\mid\mathbf{0}\,\equiv\,P\qquad\quad(\nu\ell)\mathbf{0}\,\equiv\,\mathbf{0}\qquad\quad[v=v]P_{1},P_{2}\,\equiv\,P_{1}\qquad\quad[v=v^{\prime}]P_{1},P_{2}\,\equiv\,P_{2}\ \text{ if }v\neq v^{\prime}\]
The last axiom above cannot be used under an acquire prefix: see Appendix A.3 for the definition of \(\equiv\). _Execution contexts_ are defined by \(E\ ::=\ [\cdot]\ \big|\ E\mid P\ \big|\ (\nu\ell)E\). The axiom for reduction in \(\pi\ell\) is:
\[\overline{\ell}\langle v\rangle\ \mid\ \ell(\ell^{\prime}).\,P\ \rightarrow\ P\{v/\ell^{\prime}\}\]
\(\Rightarrow\) is the reflexive transitive closure of \(\rightarrow\). Labelled transitions, written \(P\xrightarrow{\mu}P^{\prime}\), use actions \(\mu\) defined by \(\mu\ ::=\ \ell(v)\ \big|\ \overline{\ell}\langle v\rangle\ \big|\ \overline{\ell}(\ell^{\prime})\ \big|\ \tau\), and are standard [26]--we recall the definition in Appendix A.3.
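To illustrate the reduction rule, here is a small worked reduction; the lock name \(c\) and the value \(\mathsf{tt}\) are arbitrary and chosen only for the example:

\[\overline{\ell}\langle\mathsf{tt}\rangle\ \mid\ \ell(x).\,(\overline{c}\langle x\rangle\mid\overline{\ell}\langle x\rangle)\ \ \rightarrow\ \ \overline{c}\langle\mathsf{tt}\rangle\mid\overline{\ell}\langle\mathsf{tt}\rangle.\]

The acquire consumes the release of \(\ell\), the received value \(\mathsf{tt}\) is substituted for \(x\) in the continuation, and a new release of \(\ell\) becomes available for further acquires.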
The type system. We enforce a sorting discipline for names [18], given by \(V::=\text{bool}\mid L\) and \(\Sigma(L)=V\): values, which are stored in locks, are either booleans or locks. We consider that all processes we write obey this discipline, which is left implicit. This means for instance that when writing \(\overline{\ell}\langle v\rangle\), \(\ell\) and \(v\) have appropriate sorts; and similarly for \(\ell(\ell^{\prime}).P\). In \([v=v^{\prime}]P_{1},P_{2}\), we only compare values with the same sort.
The typing judgement is written \(\Gamma;\mathbb{R}\vdash P\), where \(\Gamma\) and \(\mathbb{R}\) are defined like for \(\text{CCS}\ell\). We adopt the convention that if \(v\) is a boolean value, then \(\gamma,v\) is just \(\gamma\), and similarly, \(\gamma,\ell\) is just \(\gamma\) if the sort of \(\ell\) is \(\mathsf{bool}\). The operation \(\Gamma_{1}\bullet\Gamma_{2}\) is the same as for \(\text{CCS}\ell\).
The typing rules for \(\pi\ell\) are presented in Figure 1. Again, writing \(\mathbb{R},\ell\) in a rule imposes \(\ell\notin\mathbb{R}\), otherwise the rule cannot be applied. Similarly, the notation \(\gamma,\ell,\ell^{\prime}\) is only defined when \(\gamma\#\{\ell,\ell^{\prime}\}\) and \(\ell\neq\ell^{\prime}\). The rule for release covers a lock containing either a lock or a boolean value: in the latter case, using the convention above, the conclusion of the rule is \(\{\{\ell\}\};\{\ell\}\vdash\overline{\ell}\langle\mathsf{b}\rangle\). In the rules for release and acquire, the subject and the object of the operation should belong to the same component. In \(\text{CCS}\ell\), only prefixing yields such a constraint.
In the rule for value comparison, we do not impose \(\{v,v^{\prime}\}\subseteq\text{dom}(\Gamma)\). A typical example of a process that uses name comparison is \([\ell_{1}=\ell_{2}]\overline{\ell}\langle\mathsf{tt}\rangle,\overline{\ell}\langle\mathsf{ff}\rangle\): in this process, \(\ell_{1}\) and \(\ell_{2}\) intuitively represent no threat of a deadlock.
Before presenting the properties of the type system, we make some comments on the discipline it imposes on \(\pi\)-calculus names when they are used as locks.
**Remark 8** (An acquired lock cannot be stored).: _In \(\pi\ell\), the obligation to release a lock cannot be transmitted. Accordingly, \(\ell^{\prime}\notin\mathbb{R}=\{\ell\}\) in the rule for release, and a process like \(\ell(\ell^{\prime}).\overline{\ell_{1}}\langle\ell\rangle\) cannot be typed. We return to this point after Proposition 11._
**Remark 9** (Typability of higher-order locks).: _Locks are a particular kind of names of the asynchronous \(\pi\)-calculus (\(A\pi\)). Acquiring a lock that has been stored in another lock boils down to performing a communication in \(A\pi\). We discuss how such communications can occur between typed processes._
_In rule 1, \(\ell\) and \(\ell^{\prime}\) must belong to the same component of \(\Gamma\). So intuitively, if a process contains \(\overline{\ell}\langle\ell^{\prime}\rangle\), this release is the only place where these locks are used 'together'. A reduction involving a well-typed process containing this release therefore looks like_
\[(\overline{\ell}\langle\ell^{\prime}\rangle\mid P)\ \ \ |\ \ \ (\ell(x).Q\mid Q^{ \prime})\ \ \ \rightarrow\ \ \ \ P\ |\ Q\{\ell^{\prime}/x\}\ |\ Q^{\prime}.\]
_Parentheses are used to suggest an interaction between two processes; \(\overline{\ell}\langle\ell^{\prime}\rangle\mid P\) performs the release, and \(\ell(x).Q\mid Q^{\prime}\) performs the acquire. Process \(P\), which intuitively is the continuation of the release, may use locks \(\ell\) and \(\ell^{\prime}\), but not together, and similarly for \(Q^{\prime}\). For instance we may have \(P=P_{\ell}\mid P_{\ell^{\prime}}\), where \(\ell^{\prime}\) does not occur in \(P_{\ell}\), and vice-versa for \(P_{\ell^{\prime}}\). Note also that \(\ell^{\prime}\) is necessarily fresh for \(\ell(x).Q\): otherwise, typability of \(\ell(x).Q\) would impose \(\ell\) and \(\ell^{\prime}\) to be in the same component, which would forbid the parallel composition with \(\overline{\ell}\langle\ell^{\prime}\rangle\)._

Figure 1: Typing rules for \(\pi\ell\)
_Depending on how \(P,Q\) and \(Q^{\prime}\) are written, we can envisage several patterns of usages of locks \(\ell\) and \(\ell^{\prime}\). A first example is ownership transfer (or delegation): \(\ell^{\prime}\notin\operatorname{fin}(P)\), that is, \(P\) renounces usage of \(\ell^{\prime}\). \(\ell^{\prime}\) can be used in \(Q\). Note that typing actually also allows \(\ell^{\prime}\in\operatorname{fin}(Q^{\prime})\), i.e., the recipient already knows \(\ell^{\prime}\)._
_A second possibility could be that \(\ell\) is used linearly, in the sense that there is exactly one acquire on \(\ell\). In this case, we necessarily have \(\ell\notin\operatorname{fin}(P)\cup\operatorname{fin}(Q^{\prime})\)--note that a release of \(\ell\) is available in \(Q\), by typing. Linearity of \(\ell\) means here that exactly one interaction takes place at \(\ell\). After that interaction, the release on \(\ell\) contained in \(Q\) is inert, in the sense that no acquire can synchronise with it. We believe that this form of linearity can be used to encode binary session types in an extended version of \(\pi\ell\), including variants and polyadicity, along the lines of [15, 4, 5]._
The type system for \(\pi\ell\) satisfies the same properties as in CCS\(\ell\) (Lemma 2): invariance under structural congruence, merging components and subject reduction. We also have progress and deadlock-freedom:
**Lemma 10** (Progress).: _Suppose \(\Gamma;\operatorname{fin}(P)\vdash P\), and \(P\) is not structurally equivalent to \(\mathbf{0}\). Then_
* _either there exists_ \(P^{\prime}\) _such that_ \(P\to P^{\prime}\)_,_
* _or_ \(P\equiv(\nu\widetilde{\ell})\big{(}\prod_{i}\overline{\ell_{i}}\langle v_{i}\rangle\big{)}\) _where the_ \(\ell_{i}\)_s are pairwise distinct._
Like in CCS\(\ell\), a deadlocked process in \(\pi\ell\) is defined as a complete process that is stuck.
**Proposition 11** (Deadlock-freedom).: _If \(\Gamma;\mathbb{R}\vdash P\) and \(P\Rightarrow P^{\prime}\), then \(P^{\prime}\) is not deadlocked._
The proof of deadlock-freedom is essentially the same as for CCS\(\ell\). The reason is that although the object part of releases plays a role in the typing rules, it is not relevant to establish progress (Lemma 10). This is the case because in \(\pi\ell\), it is not possible to store an acquired lock in another lock (Remark 8).
It seems difficult to extend the type system in order to allow processes that transmit the release obligation on a lock. This would make it possible to type-check, e.g., process \(\ell(\ell^{\prime})\). \(\overline{\ell_{1}}\langle\ell\rangle\), that does not release lock \(\ell\) but instead stores it in \(\ell_{1}\). Symmetrically, a process accessing \(\ell\) at \(\ell_{1}\) would be in charge of releasing both \(\ell_{1}\) and \(\ell\). In such a framework, a process like \((\nu\ell_{1})(\overline{\ell_{1}}\langle\ell\rangle\mid\ell(x).\overline{\ell }\langle x\rangle)\) would be deadlocked, because the inert release \(\overline{\ell_{1}}\langle\ell\rangle\) contains the release obligation on \(\ell_{1}\). The type system in Section 3 makes it possible to transmit the obligation to perform a release (and similarly for a wait).
**Remark 12**.: _Similarly to Remark 7, we have that \(\Gamma;\operatorname{fin}(P)\vdash P\) implies \(P\Rightarrow(\nu\widetilde{\ell})\big{(}\prod_{i}\overline{\ell_{i}}\langle v_{i}\rangle\big{)}\) where the \(\ell_{i}\)s are pairwise distinct. As a consequence, the following holds: if \(\Gamma;\operatorname{fin}(P)\vdash P\), then for any \(\ell\in\operatorname{fin}(P)\), \(P\Rightarrow\xrightarrow{\mu}\), where \(\mu\) is a release of \(\ell\). This statement would be more meaningful if infinite computations were possible in \(\pi\ell\). We leave the investigation of such an extension of \(\pi\ell\) for future work._
### Behavioural Equivalence in \(\pi\ell\)
We introduce typed barbed equivalence (\(\simeq\)) and typed bisimilarity (\(\approx\)) for \(\pi\ell\). We show that \(\approx\) is a sound technique to establish \(\simeq\), and present several examples of (in)equivalences between \(\pi\ell\) processes.
#### 2.3.1 Barbed Equivalence and Labelled Semantics for \(\pi\ell\)
A typed relation in \(\pi\ell\) is a set of quadruples of the form \((\Gamma,\mathbb{R},P,Q)\) such that \(\Gamma;\mathbb{R}\vdash P\) and \(\Gamma;\mathbb{R}\vdash Q\). When a typed relation \(\mathcal{R}\) contains \((\Gamma,\mathbb{R},P,Q)\), we write \(\Gamma;\mathbb{R}\vdash P\mathcal{R}Q\). We say that a typed relation \(\mathcal{R}\) is symmetric if \(\Gamma;\mathbb{R}\vdash P\mathcal{R}Q\) implies \(\Gamma;\mathbb{R}\vdash Q\mathcal{R}P\).
Deadlock-freedom has two consequences regarding the definition of barbed equivalence in \(\pi\ell\), noted \(\simeq\). First, only complete processes should be observed, because intuitively a computation in \(\pi\ell\) should not be blocked by an acquire operation that cannot be executed.
Second, Proposition 11 entails that all weak barbs in the sense of \(\mathrm{A}\pi\) can always be observed in \(\pi\ell\). In \(\mathrm{A}\pi\), a weak barb at \(n\) corresponds to the possibility to reduce to a process in which an output at channel \(n\) occurs at top-level. We need a stronger notion of barb, otherwise \(\simeq\) would be trivial. That behavioural equivalence in \(\pi\ell\) is not trivial is shown for instance by the presence of non-determinism. Consider indeed process \(P_{c}\stackrel{{\mathrm{def}}}{{=}}(\nu\ell)\big{(}\,\ell(x).\,(\overline{c}\langle x\rangle\mid\overline{\ell}\langle x\rangle)\mid\ell(y).\,\overline{\ell}\langle\mathsf{ff}\rangle\mid\,\overline{\ell}\langle\mathsf{tt}\rangle\,\big{)}\). Then \(P_{c}\Rightarrow\overline{c}\langle\mathsf{tt}\rangle\) and \(P_{c}\Rightarrow\overline{c}\langle\mathsf{ff}\rangle\) (up to the cancellation of an inert process of the form \((\nu\ell)\overline{\ell}\langle\mathsf{b}\rangle\)). We therefore include the object part of releases in barbs. We write \(P\downarrow_{\overline{\ell}\langle v\rangle}\) if \(P\xrightarrow{\overline{\ell}\langle v\rangle}\), and \(P\downarrow_{\overline{\ell}(\ell^{\prime})}\) if \(P\xrightarrow{\overline{\ell}(\ell^{\prime})}\). We use \(\eta\) to range over barbs, writing \(P\downarrow_{\eta}\); the weak version of the predicate, defined as \(\Rightarrow\downarrow_{\eta}\), is written \(P\Downarrow_{\eta}\).
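For concreteness, here are reduction sequences exhibiting the two behaviours of \(P_{c}\), obtained with the reduction rule above and the definition of \(P_{c}\) just given (the final inert process is kept explicit):

\[\begin{array}{rcl}P_{c}&\rightarrow&(\nu\ell)\big{(}\,\overline{c}\langle\mathsf{tt}\rangle\mid\overline{\ell}\langle\mathsf{tt}\rangle\mid\ell(y).\,\overline{\ell}\langle\mathsf{ff}\rangle\,\big{)}\ \rightarrow\ (\nu\ell)\big{(}\,\overline{c}\langle\mathsf{tt}\rangle\mid\overline{\ell}\langle\mathsf{ff}\rangle\,\big{)}\ \equiv\ \overline{c}\langle\mathsf{tt}\rangle\mid(\nu\ell)\overline{\ell}\langle\mathsf{ff}\rangle,\\[4pt]P_{c}&\rightarrow&(\nu\ell)\big{(}\,\ell(x).\,(\overline{c}\langle x\rangle\mid\overline{\ell}\langle x\rangle)\mid\overline{\ell}\langle\mathsf{ff}\rangle\,\big{)}\ \rightarrow\ (\nu\ell)\big{(}\,\overline{c}\langle\mathsf{ff}\rangle\mid\overline{\ell}\langle\mathsf{ff}\rangle\,\big{)}\ \equiv\ \overline{c}\langle\mathsf{ff}\rangle\mid(\nu\ell)\overline{\ell}\langle\mathsf{ff}\rangle.\end{array}\]

In the first sequence the leftmost acquire grabs the initial release \(\overline{\ell}\langle\mathsf{tt}\rangle\); in the second, the middle acquire grabs it first and re-releases \(\ell\) with value \(\mathsf{ff}\).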
**Definition 13** (Barbed equivalence in \(\pi\ell\), \(\simeq\)).: _A symmetric typed relation \(\mathcal{R}\) is a typed barbed bisimulation if \(\Gamma;\mathbb{R}\vdash P\,\mathcal{R}\,Q\) implies the three following properties:_
1. _whenever_ \(P,Q\) _are complete and_ \(P\to P^{\prime}\)_, there is_ \(Q^{\prime}\) _s.t._ \(Q\Rightarrow Q^{\prime}\) _and_ \(\Gamma;\mathbb{R}\vdash P^{\prime}\mathcal{R}Q^{\prime}\)_;_
2. _for any_ \(\eta\)_, if_ \(P,Q\) _are complete and_ \(P\downarrow_{\eta}\) _then_ \(Q\Downarrow_{\eta}\)_;_
3. _for any_ \(E,\Gamma^{\prime},\mathbb{R}^{\prime}\) _s.t._ \(\Gamma^{\prime};\mathbb{R}^{\prime}\vdash E[P]\) _and_ \(\Gamma^{\prime};\mathbb{R}^{\prime}\vdash E[Q]\)_, and_ \(E[P],E[Q]\) _are complete, we have_ \(\Gamma^{\prime};\mathbb{R}^{\prime}\vdash E[P]\,\mathcal{R}E[Q]\)_._
Typed barbed equivalence, _written_\(\simeq\)_, is the greatest typed barbed bisimulation._
**Lemma 14** (Observing only booleans).: _We use \(o,o^{\prime},\dots\) for lock names that are used to store boolean values. We define \(\simeq_{o}\) like \(\simeq\) in Definition 13, but restricting the second clause to barbs of the form \(\downarrow_{\overline{o}\langle\mathsf{b}\rangle}\) and \(\Downarrow_{\overline{o}\langle\mathsf{b}\rangle}\). Relation \(\simeq_{o}\) coincides with \(\simeq\)._
To define typed bisimilarity, we introduce _type-allowed transitions_. The terminology means that we select among the untyped transitions those that are fireable given the constraints imposed by types.
**Definition 15** (Type-allowed transitions).: _When \(\Gamma;\mathbb{R}\vdash P\), we write \([\Gamma;\mathbb{R};P]\xrightarrow{\mu}[\Gamma^{\prime};\mathbb{R}^{\prime};P ^{\prime}]\) if \(P\xrightarrow{\mu}P^{\prime}\) and one of the following holds:_
1. \(\mu=\tau\)_, in which case_ \(\mathbb{R}^{\prime}=\mathbb{R}\) _and_ \(\Gamma^{\prime}=\Gamma\)_;_
2. \(\mu=\overline{\ell}\langle v\rangle\)_, in which case_ \((\gamma,\ell,v)\in\Gamma\) _for some_ \(\gamma\)_, and_ \(\mathbb{R}^{\prime},\ell=\mathbb{R}\)_,_ \(\Gamma^{\prime}=\Gamma\)_;_
3. \(\mu=\overline{\ell}(\ell^{\prime})\)_, in which case_ \(\Gamma=\Gamma_{0},(\gamma,\ell)\) _for some_ \(\Gamma_{0},\gamma\)_,_ \(\Gamma^{\prime}=\Gamma_{0},(\gamma,\ell,\ell^{\prime})\)_, and we have_ \(\mathbb{R}^{\prime},\ell=\mathbb{R},\ell^{\prime}\)_;_
4. \(\mu=\ell(v)\)_, in which case there are_ \(\Gamma_{0},\mathbb{R}_{0}\) _s.t._ \(\Gamma_{0};\mathbb{R}_{0}\vdash P\mid\overline{\ell}\langle v\rangle\)_, and_ \(\Gamma^{\prime}=\Gamma_{0},\mathbb{R}^{\prime}=\mathbb{R}_{0}\)_._
In item 3, \(\ell\) is removed from the \(\mathbb{R}\) component, and \(\ell^{\prime}\) is added: it is \(P^{\prime}\)'s duty to perform the release of \(\ell^{\prime}\), the obligation is not transmitted. An acquire transition involving a higher-order lock merges two distinct components in the typing environment: if \([\Gamma_{0},(\gamma,\ell),(\gamma^{\prime},\ell^{\prime});\mathbb{R};P]\xrightarrow{\ell(\ell^{\prime})}[\Gamma^{\prime};\mathbb{R}^{\prime};P^{\prime}]\) (item 4 above), then \(\Gamma^{\prime}=\Gamma_{0},(\gamma\uplus\gamma^{\prime}\uplus\{\ell,\ell^{\prime}\})\) and \(\mathbb{R}^{\prime}=\mathbb{R},\ell\) (and in particular \(\ell\notin\mathbb{R}\)).
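As a small worked instance of item 2 (recall that boolean values are dropped from components, and that \(\{\{\ell\}\};\{\ell\}\vdash\overline{\ell}\langle\mathsf{b}\rangle\)), the release \(\overline{\ell}\langle\mathsf{tt}\rangle\) has the type-allowed transition

\[[\{\{\ell\}\};\{\ell\};\overline{\ell}\langle\mathsf{tt}\rangle]\ \xrightarrow{\overline{\ell}\langle\mathsf{tt}\rangle}\ [\{\{\ell\}\};\emptyset;\mathbf{0}],\]

where the release obligation on \(\ell\) is discharged by the transition. Dually, by item 4, the forwarder \(\ell(x).\,\overline{\ell}\langle x\rangle\) (typed with \(\{\{\ell\}\};\emptyset\), see Example 19 below) can perform \(\ell(\mathsf{tt})\), moving to \(\overline{\ell}\langle\mathsf{tt}\rangle\) and taking on the obligation to release \(\ell\).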
**Lemma 16** (Subject Reduction for type-allowed transitions).: _If \([\Gamma;\mathbb{R};P]\xrightarrow{\mu}[\Gamma^{\prime};\mathbb{R}^{\prime};P ^{\prime}]\), then \(\Gamma^{\prime};\mathbb{R}^{\prime}\vdash P^{\prime}\)._
**Definition 17** (Typed bisimilarity, \(\approx\)).: _A typed relation \(\mathcal{R}\) is a typed bisimulation if \(\Gamma;\mathbb{R}\vdash\mathcal{P}\mathcal{R}Q\) implies that whenever \([\Gamma;\mathbb{R};P]\xrightarrow{\mu}[\Gamma^{\prime};\mathbb{R}^{\prime};P ^{\prime}]\), we have_
1. _either_ \(Q\xrightarrow{\hat{\mu}}Q^{\prime}\) _and_ \(\Gamma^{\prime};\mathbb{R}^{\prime}\vdash P^{\prime}\mathcal{R}Q^{\prime}\) _for some_ \(Q^{\prime}\)_,_
2. _or_ \(\mu\) _is an acquire_ \(\ell(v)\)_,_ \(Q\ |\ \overline{\ell}\langle v\rangle\Rightarrow Q^{\prime}\) _and_ \(\Gamma^{\prime};\mathbb{R}^{\prime}\vdash P^{\prime}\mathcal{R}Q^{\prime}\) _for some_ \(Q^{\prime}\)_,_
_and symmetrically for the type-allowed transitions of \(Q\)._
Typed bisimilarity, _written_ \(\approx\)_, is the largest typed bisimulation._
We write \(\Gamma;\mathbb{R}\vdash P\approx Q\) when \((\Gamma,\mathbb{R},P,Q)\in\,\approx\). If \(\Gamma;\mathbb{R}\vdash P\approx Q\) does not hold, we write \(\Gamma;\mathbb{R}\vdash P\not\approx Q\), and similarly \(\Gamma;\mathbb{R}\vdash P\not\simeq Q\) for \(\simeq\).
Proposition 18 below states that relation \(\approx\) provides a sound proof technique for \(\simeq\). The main property to establish this result is that \(\approx\) is preserved by parallel composition: \(\Gamma_{0};\mathbb{R}_{0}\vdash P\approx Q\) implies that for all \(T\), whenever \(\Gamma;\mathbb{R}\vdash P\mid T\) and \(\Gamma;\mathbb{R}\vdash Q\mid T\), we have \(\Gamma;\mathbb{R}\vdash P\mid T\approx Q\mid T\).
**Proposition 18** (Soundness).: _For any \(\Gamma,\mathbb{R},P,Q\), if \(\Gamma;\mathbb{R}\vdash P\approx Q\), then \(\Gamma;\mathbb{R}\vdash P\simeq Q\)._
The main advantage in using \(\approx\) to establish equivalences for \(\simeq\) is that we can reason directly on processes, even if they are not complete.
#### 2.3.2 Examples of Behavioural Equivalence in \(\pi\ell\)
**Example 19**.: _We discuss some equivalences for \(\simeq\)._
_The equivalence \(\{\{\ell\}\};\emptyset\vdash\ell(x).\,\overline{\ell}\langle x\rangle\simeq \mathbf{0}\), which is typical of \(A\pi\), holds in \(\pi\ell\). This follows directly from the definition of typed bisimilarity, and soundness (Proposition 18)._
_We now let \(P\stackrel{{\mathrm{def}}}{{=}}\ell(x).\,(\overline{\ell_{0}}\langle\mathsf{tt}\rangle\mid\overline{\ell}\langle x\rangle)\) and \(Q\stackrel{{\mathrm{def}}}{{=}}\overline{\ell_{0}}\langle\mathsf{tt}\rangle\), and consider whether we can detect the presence of a 'forwarder' at \(\ell\) when its behaviour is interleaved with another process. \(P\) and \(Q\) have different barbs--they are obviously not complete. It turns out that \(\{\{\ell,\ell_{0}\}\};\{\ell_{0}\}\vdash\ell(x).\,(\overline{\ell_{0}}\langle\mathsf{tt}\rangle\mid\overline{\ell}\langle x\rangle)\not\simeq\overline{\ell_{0}}\langle\mathsf{tt}\rangle\). Indeed, let us consider the context_
\[E\ \stackrel{{\mathrm{def}}}{{=}}\ [\cdot]\ \mid\ \ell_{0}(y).\,w(\_).\,(\overline{w}\langle\mathsf{tt}\rangle\mid\overline{\ell_{0}}\langle y\rangle)\ \mid\ w^{\prime}(\_).\,(\overline{w^{\prime}}\langle\mathsf{tt}\rangle\mid\overline{\ell}\langle v\rangle)\ \mid\ \overline{w}\langle\mathsf{ff}\rangle\mid\overline{w^{\prime}}\langle\mathsf{ff}\rangle,\]
_where \(w,w^{\prime}\) are fresh names and \(v\) is a value of the appropriate sort. We have \(E[Q]\Rightarrow Q^{\prime}\) with (…)_
**Example 21**.: _Consider the following processes:_
\[\begin{array}{rcl}P_{1}&=&(\nu\ell_{1})\big{(}\,\ell_{1}.\,\ell_{2}.\,(\overline{ \ell_{1}}\mid\overline{\ell_{2}})\,\mid\,\ell(x).\,\ell_{1}.\,x.\,(\overline{ \ell_{1}}\mid\overline{x}\mid\overline{\ell}\langle x\rangle)\,\mid\,\overline {\ell_{1}}\mid\overline{\ell_{2}}\big{)}\\ P_{2}&=&(\nu\ell_{1})\big{(}\,\ell_{1}.\,\ell_{2}.\,(\overline{\ell_{1}}\mid \overline{\ell_{2}})\,\mid\,\ell(x).\,x.\,\ell_{1}.\,(\overline{\ell_{1}} \mid\overline{x}\mid\overline{\ell}\langle x\rangle)\,\mid\,\overline{\ell_{1 }}\mid\overline{\ell_{2}}\big{)}\end{array}\]
_Here we use a CCS-like syntax, to ease readability. This notation means that acquire operations are used as forwarders, i.e., the first component of \(P_{1}\) and \(P_{2}\) should be read as \(\ell_{1}(y_{1}).\,\ell_{2}(y_{2}).\,(\overline{\ell_{1}}\langle y_{1}\rangle\mid\overline{\ell_{2}}\langle y_{2}\rangle).\) Moreover, the two releases available at top-level are \(\overline{\ell_{1}}\langle\mathsf{tt}\rangle\mid\overline{\ell_{2}}\langle\mathsf{tt}\rangle\), and similarly for \(\overline{x}\langle\mathsf{tt}\rangle\) (the reasoning also holds if \(\ell_{1}\) and \(\ell_{2}\) are higher-order locks)._
_In the pure \(\pi\)-calculus, \(P_{1}\) and \(P_{2}\) are not equivalent, because \(\ell_{2}\) can instantiate \(x\) in the acquire on \(\ell\). We can show \(\{\{\ell_{2},\ell\}\};\{\ell_{2}\}\vdash P_{1}\approx P_{2}\) in \(\pi\ell\), because the transition \(\xrightarrow{\ell(\ell_{2})}\) is ruled out by the type system._
## 3 \(\pi\ell\mathrm{w}\), a Leak-Free Asynchronous \(\pi\)-Calculus
### Adding Lock Deallocation
\(\pi\ell\mathrm{w}\) is obtained from \(\pi\ell\) by adding the _wait construct_\(\ell((\ell^{\prime})).\,P\) to the grammar of \(\pi\ell\). As announced in Section 1, the following reduction rule describes how wait interacts with a release:
\[(\nu\ell)\big{(}\,\overline{\ell}\langle v\rangle\,\mid\,\ell((\ell^{\prime})).\,P\,\big{)}\ \rightarrow\ P\{v/\ell^{\prime}\}\qquad\text{if }\ell\notin\mathrm{fin}(P)\]
The wait instruction deallocates the lock. The continuation may use \(\ell^{\prime}\), the final value of the lock. We say that \(\ell((\ell^{\prime}))\) is a _wait on \(\ell\)_, and \(\ell^{\prime}\) is bound in \(\ell((\ell^{\prime})).\,P\).
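For instance, if the final value stored in the lock is the boolean \(\mathsf{tt}\) and \(c\) is some other lock (both chosen arbitrarily for the example), the rule above gives

\[(\nu\ell)\big{(}\,\overline{\ell}\langle\mathsf{tt}\rangle\,\mid\,\ell((x)).\,\overline{c}\langle x\rangle\,\big{)}\ \rightarrow\ \overline{c}\langle\mathsf{tt}\rangle:\]

the lock \(\ell\) is deallocated together with its restriction, and its final content is passed to the continuation of the wait.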
Types in \(\pi\ell\mathrm{w}\), written \(\mathsf{T},\mathsf{T}^{\prime},\ldots\), are defined by \(\mathsf{T}::=\,\mathsf{bool}\,\big{|}\,\langle\mathsf{T}\rangle_{rw}\), and _typing hypotheses_ are written \(\ell:\mathsf{T}\). In \(\ell:\langle\mathsf{T}\rangle_{rw}\), \(rw\) is called the _usage_ of \(\ell\), and \(r,w\in\{0,1\}\) are the release and wait _obligations_, respectively, on lock \(\ell\). So for instance a typing hypothesis of the form \(\ell:\langle\mathsf{T}\rangle_{10}\) means that \(\ell\) must be used to perform a release and cannot be used to perform a wait. An hypothesis \(\ell:\langle\mathsf{T}\rangle_{00}\) means that \(\ell\) can only be used to perform acquire operations. This structure for types makes it possible to transmit the wait and release obligations on a given lock name via higher-order locks.
Our type system ensures that locks are properly deallocated. In contrast to \(\pi\ell\), this allows acquired locks to be stored without creating deadlocks. For example, a process like \((\nu\ell_{1})(\overline{\ell_{1}}\langle\ell\rangle\mid\ell(x).\,\overline{ \ell}\langle x\rangle)\) is deadlocked if \(\ell_{1}\) stores the release obligation of \(\ell\); however, it cannot be typed as it lacks the wait on \(\ell_{1}\). Adding a wait, e.g. \(\ell_{1}((\ell)).\,\overline{\ell}\langle v\rangle\) removes the deadlock.
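To see how the wait removes the deadlock, consider the repaired process, written with a fresh bound name \(y\) for the wait (an \(\alpha\)-renamed version of the form given above) and an arbitrary value \(v\) of the sort stored in \(\ell\):

\[(\nu\ell_{1})\big{(}\,\overline{\ell_{1}}\langle\ell\rangle\mid\ell(x).\,\overline{\ell}\langle x\rangle\mid\ell_{1}((y)).\,\overline{y}\langle v\rangle\,\big{)}\ \rightarrow\ \ell(x).\,\overline{\ell}\langle x\rangle\mid\overline{\ell}\langle v\rangle\ \rightarrow\ \overline{\ell}\langle v\rangle.\]

The wait on \(\ell_{1}\) deallocates \(\ell_{1}\) and retrieves the stored lock \(\ell\), whose release then enables the remaining acquire.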
Typing environments have the same structure as in Section 2, except that components \(\gamma\) are sets of typing hypotheses instead of simply sets of lock names. \(\mathrm{dom}(\Gamma)\) is defined as the set of lock names for which \(\Gamma\) contains a typing hypothesis. We write \(\Gamma(\ell)=\mathsf{T}\) if the typing hypothesis \(\ell:\mathsf{T}\) occurs in \(\Gamma\).
We reuse the notation for composition of typing environments. \(\Gamma_{1}\bullet\Gamma_{2}\) is defined like in Section 2.1, using the \(\mathsf{connect}\) operator, to avoid cyclic structures in the sharing of lock names. Additionally, when merging components, we compose typing hypotheses. For any \(\ell\), if \(\ell:\langle\mathsf{T}_{1}\rangle_{r_{1}w_{1}}\in\Gamma_{1}\) and \(\ell:\langle\mathsf{T}_{2}\rangle_{r_{2}w_{2}}\in\Gamma_{2}\), the typing hypothesis for \(\ell\) in \(\Gamma_{1}\bullet\Gamma_{2}\) is \(\ell:\langle\mathsf{T}\rangle_{(r_{1}+r_{2})(w_{1}+w_{2})}\), and is defined only if \(\mathsf{T}=\mathsf{T}_{1}=\mathsf{T}_{2}\), \(r_{1}+r_{2}\leq 1\) and \(w_{1}+w_{2}\leq 1\).
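For instance, composing a hypothesis \(\ell:\langle\mathsf{bool}\rangle_{10}\) from \(\Gamma_{1}\) with \(\ell:\langle\mathsf{bool}\rangle_{01}\) from \(\Gamma_{2}\) yields \(\ell:\langle\mathsf{bool}\rangle_{11}\), one side carrying the release obligation and the other the wait obligation; composing two copies of \(\ell:\langle\mathsf{bool}\rangle_{10}\) is instead undefined, since \(r_{1}+r_{2}>1\).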
The typing rules are given in Figure 2. The rules build on the rules for \(\pi\ell\), and rely on usages to control the release and wait obligations. In particular, the set \(\mathbb{R}\) in Figure 1 corresponds to the set of locks whose usage is of the form \(1w\) in this system. To type-check an acquire, we can have usage 00, but also 01, as in, e.g., \(\ell(\ell^{\prime}).\,(\overline{\ell}\langle\ell^{\prime}\rangle\mid\ell((x)).\,P)\). In rule Rel-w, we impose that all typing hypotheses in \(\Gamma_{00}\) (resp. \(\gamma_{00}\)) have the form \(\ell:\langle\mathsf{T}\rangle_{00}\).
Several notions introduced for the type system of Section 2 have to be adapted in the setting of \(\pi\ell\)w. While in Section 2 we simply say that a lock \(\ell\) is available, here we distinguish whether a release of \(\ell\) or a wait on \(\ell\) is available. If \(P\) has a subterm of the form \(\ell((x)).Q\) that does not occur under a binder for \(\ell\), we say that a wait on \(\ell\) is _available_ in \(P\). If \(\overline{\ell}\langle v\rangle\) occurs in some process \(P\) and this occurrence is neither under a binder for \(\ell\) nor under an acquire on \(\ell\), we say that a release of \(\ell\) is _available_ in \(P\). In addition, a release of \(\ell\) (resp. wait on \(\ell\)) is available in \(P\) also if \(P\) contains a release of the form \(\overline{\ell_{0}}\langle\ell\rangle\), which does not occur under a binder for \(\ell\), and if \(\ell\)'s type is of the form \(\langle\mathsf{T}\rangle_{1w}\) (resp. \(\langle\mathsf{T}\rangle_{r1}\)).
Like in \(\pi\ell\), a deadlocked process in \(\pi\ell\)w is a complete process that is stuck. The notion of complete process has to be adapted in order to take into account the specificities of \(\pi\ell\)w. First, the process should not be stuck just because a restriction is missing in order to trigger a name deallocation. Second, we must consider the fact that release and wait obligations can be stored in locks in \(\pi\ell\)w. As a consequence, when defining complete processes in \(\pi\ell\)w, we impose some constraints on the free lock names of processes.
In \(\pi\ell\)w, we say that \(\Gamma\)_is complete_ if for any \(\ell\in\operatorname{dom}(\Gamma)\), either \(\Gamma(\ell)=\langle\mathsf{bool}\rangle_{10}\) or \(\Gamma(\ell)=\langle\langle\mathsf{T}\rangle_{00}\rangle_{10}\) for some \(\mathsf{T}\). To understand this definition, suppose \(\Gamma\vdash P\) with \(\Gamma\) complete. Then we have, for any free lock name \(\ell\) of \(P\): \((i)\) the release of \(\ell\) is available in \(P\); \((ii)\) this release does not carry any obligation; \((iii)\) the wait on \(\ell\) is _not_ available in \(P\). The latter constraint means that if a \(P\) contains a wait on some lock, then this lock should be restricted.
The notion of leak-freedom we use is inspired from [13]. In our setting, a situation where some lock \(\ell\) is released and will never be acquired again can be seen as a form of memory leak. We say that \(P\)_leaks_\(\ell\) if \(P\equiv(\nu\ell)(P^{\prime}\mid\overline{\ell}\langle v\rangle)\) with \(\ell\notin\operatorname{fin}(P^{\prime})\). \(P\)_has a leak_ if \(P\) leaks \(\ell\) for some \(\ell\), and is _leak-free_ otherwise.
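For example, \((\nu\ell)\big{(}\,\overline{\ell}\langle\mathsf{tt}\rangle\mid\overline{c}\langle\mathsf{ff}\rangle\,\big{)}\), where \(c\) is any other lock, leaks \(\ell\): the lock has been released, but since \(\ell\) does not occur in \(\overline{c}\langle\mathsf{ff}\rangle\), no process will ever acquire or deallocate it.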
**Lemma 22** (Progress).: _If \(\Gamma\vdash P\) and \(\Gamma\) is complete, then either \(P\to P^{\prime}\) for some \(P^{\prime}\), or \(P\equiv(\nu\widetilde{\ell})\big{(}\prod_{i}\overline{\ell_{i}}\langle v_{i}\rangle\big{)}\) where the \(\ell_{i}\)s are pairwise distinct._
For lack of space, the proof is presented in Appendix B. Again, it follows the lines of the proof of Lemma 5. To construct a graph containing necessarily a cycle, we associate to every acquire of the form \(\ell(x).Q\) an available release of \(\ell\), which might occur in a release of the form \(\overline{\ell^{\prime}}\langle\ell\rangle\), if \(\ell^{\prime}\) carries the release obligation. Similarly, to every wait \(\ell((x)).Q\), we associate an available release, or, if a release \(\overline{\ell}\langle v\rangle\) occurs at top-level, an acquire on \(\ell\), which necessarily exists since otherwise a reduction could be fired. Finally, using a similar reasoning, to every release of \(\ell\) at top-level, we associate a wait on \(\ell\), or an acquire on \(\ell\).
A consequence of Lemma 22 is that \(P\Rightarrow\mathbf{0}\) when \(\emptyset\vdash P\).
**Proposition 23** (Deadlock- and Leak-freedom).: _If \(\Gamma\vdash P\) and \(P\Rightarrow P^{\prime}\), then \(P^{\prime}\) neither is deadlocked, nor has a leak._
**Corollary 24**.: _Suppose \(\Gamma,\gamma,\ell:\langle\mathsf{bool}\rangle_{10}\vdash P\), and suppose that the usage of all names in \(S=\operatorname{dom}(\Gamma,\gamma)\) is \(11\). Then \((\nu S)P\Downarrow_{\overline{\ell}(\mathsf{b})}\) for some \(\mathsf{b}\)._
Proof.: Immediate by Lemma 22 and subject reduction.
This property is used to define barbed equivalence below. It does not hold for higher-order locks: simply discarding \(x\), the lock stored in \(\ell\), might break typability if the stored lock carries an obligation.
### Typed Behavioural Equivalence in \(\pi\ell\mathsf{w}\)
#### 3.2.1 Barbed Equivalence
In barbed equivalence in \(\pi\ell\) (Definition 13), we compare complete \(\pi\ell\) processes, intuitively to prevent blocked acquire operations from making certain observations impossible. Similarly, in \(\pi\ell\mathsf{w}\), we must also make sure that all wait operations in the processes being observed will eventually be fired. For this, we need to make the process complete (in the sense of Lemma 22), and to add restrictions so that wait transitions are fireable.
However, in order to be able to observe some barbs and discriminate processes, we rely on Corollary 24, and allow names to be unrestricted as long as their type is of the form \(\langle\mathsf{bool}\rangle_{10}\). This type means that the lock is first order, and that the context has the wait obligation. In such a situation, interactions at \(\ell\) will never be blocked, the whole process is deadlock-free, and eventually reduces to a parallel composition of releases typed with \(\langle\mathsf{bool}\rangle_{10}\). Accordingly, we say that a \(\pi\ell\mathsf{w}\) process \(P\) is _wait-closed_ if \(\Gamma\vdash P\) and for any \(\ell\in\operatorname{dom}(\Gamma)\), \(\Gamma(\ell)=\langle\mathsf{bool}\rangle_{10}\).
A typed relation in \(\pi\ell\mathsf{w}\) is a set of triples \((\Gamma,P,Q)\) such that \(\Gamma\vdash P\) and \(\Gamma\vdash Q\), and we write \(\Gamma\vdash P\mathcal{R}Q\) for \((\Gamma,P,Q)\in\mathcal{R}\). Barbed equivalence in \(\pi\ell\mathsf{w}\) is defined like \(\simeq\) (Definition 13), restricting observations to wait-closed processes.
**Definition 25** (Barbed equivalence in \(\pi\ell\mathsf{w}\), \(\simeq_{w}\)).: _A symmetric typed relation \(\mathcal{R}\) is a typed barbed bisimulation if \(\Gamma\vdash P\mathcal{R}Q\) implies the three following properties:_
1. _whenever_ \(P,Q\) _are wait-closed and_ \(P\to P^{\prime}\)_, there is_ \(Q^{\prime}\) _s.t._ \(Q\Rightarrow Q^{\prime}\) _and_ \(\Gamma\vdash P^{\prime}\mathcal{R}Q^{\prime}\)_;_
2. _if_ \(P,Q\) _are wait-closed and_ \(P\downarrow_{\eta}\) _then_ \(Q\Downarrow_{\eta}\)_;_
3. _for any_ \(E,\Gamma^{\prime}\) _s.t._ \(\Gamma^{\prime}\vdash E[P]\) _and_ \(\Gamma^{\prime}\vdash E[Q]\)_, and_ \(E[P],E[Q]\) _are wait-closed, we have_ \(\Gamma^{\prime}\vdash E[P]\,\mathcal{R}E[Q]\)_._
Typed barbed equivalence _in_ \(\pi\ell\mathsf{w}\)_, written_ \(\simeq_{w}\)_, is the greatest typed barbed bisimulation._
In the second clause above, \(\eta\) can only be of the form \(\overline{\ell}(\mathsf{b})\), for some boolean value \(\mathsf{b}\). Lemma 14 tells us that we could proceed in the same way when defining \(\simeq\).
#### 3.2.2 Typed Transitions for \(\pi\ell\mathsf{w}\), and Bisimilarity
We now define an LTS for \(\pi\ell\mathsf{w}\). Transitions for name deallocation are not standard in the \(\pi\)-calculus. To understand how we deal with these, consider \(\ell((\ell^{\prime})).\,P\mid Q\): this process can do \(\xrightarrow{\ell((v))}\) only if \(Q\) does not use \(\ell\). Similarly, in \(\ell((\ell^{\prime})).\,P\mid\ell(x).\,Q\mid\overline{\ell}\langle v\rangle\), the acquire can be fired, and the wait cannot.
Instead of selecting type-allowed transitions among the untyped transitions like in Section 2.3, we give an inductive definition of typed transitions, written \([\Gamma;P]\xrightarrow{\mu}[\Gamma^{\prime};P^{\prime}]\). This allows us to use the rules
for parallel composition in order to control the absence of a lock, when a lock deallocation is involved. Technically, this is done by refining the definition of the operator to compose typing contexts.
Actions of the LTS are defined as follows: \(\mu\ ::=\ \ell(v)\ \big|\ \overline{\ell}\langle v\rangle\ \big|\ \overline{\ell}(\ell^{\prime})\ \big|\ \tau\ \big|\ \ell((v))\ \big|\ \tau/\ell\). Name \(\ell\) plays a particular role in transitions along wait actions \(\ell((v))\) and _wait synchronisations_ \(\tau/\ell\): since \(\ell\) is deallocated, we must make sure that it is not used elsewhere in the process. We define \(\Gamma_{1}\bullet_{\mu}\Gamma_{2}\) as being equal to \(\Gamma_{1}\bullet\Gamma_{2}\), with the additional constraint that \(\ell\notin\operatorname{dom}(\Gamma_{1})\cup\operatorname{dom}(\Gamma_{2})\) when \(\mu=\ell((v))\) or \(\mu=\tau/\ell\), otherwise \(\Gamma_{1}\bullet_{\mu}\Gamma_{2}\) is not defined. The rules defining the LTS are given on Figure 3. We define \(\operatorname{fin}(\overline{\ell}(\ell^{\prime}))=\operatorname{fin}(\tau/\ell)=\{\ell\}\), and \(\operatorname{fin}(\ell(v))=\operatorname{fin}(\ell((v)))=\operatorname{fin}(\overline{\ell}\langle v\rangle)=\{\ell,v\}\) (with the convention that \(\{\ell,v\}=\{\ell\}\) if \(v\) is a boolean value).
We comment on the transition rules. Rules TR, TA and TW express the meaning of usages (respectively, 01, 0\(w\) and 10). In rule TT, \(\ell\) is deallocated, and the restriction on \(\ell\) is removed. In rules TPT, TPTB we rely on operation \(\Gamma_{1}\bullet_{\mu}\Gamma_{2}\) to make sure that \(\ell\) does not appear in both parallel components of the continuation process, and similarly for TPP in the case where \(\mu\) involves deallocation of \(\ell\).
Typability is preserved by typed transitions: if \(\Gamma\vdash P\) and \([\Gamma;P]\xrightarrow{\mu}[\Gamma^{\prime};P^{\prime}]\), then \(\Gamma^{\prime}\vdash P^{\prime}\).
Bisimilarity in \(\pi\ell\)w takes into account the additional transitions w.r.t. \(\pi\ell\), and is sound for \(\simeq_{w}\).
Figure 3: \(\pi\ell\)w, Typed LTS. We omit symmetric versions of rules involving parallel compositions

**Definition 26** (Typed Bisimilarity in \(\pi\ell\)w, \(\approx_{w}\)).: _A typed relation \(\mathcal{R}\) is a typed bisimulation if \(\Gamma\vdash P\,\mathcal{R}\,Q\) implies that whenever \([\Gamma;P]\xrightarrow{\mu}[\Gamma^{\prime};P^{\prime}]\), we have_
1. _either_ \(Q\xrightarrow{\hat{\mu}}Q^{\prime}\) _and_ \(\Gamma^{\prime}\vdash P^{\prime}\mathcal{R}Q^{\prime}\) _for some_ \(Q^{\prime}\)__
2. _or_ \(\mu\) _is an acquire_ \(\ell(v)\)_,_ \(Q\mid\overline{\ell}\langle v\rangle\Rightarrow Q^{\prime}\) _and_ \(\Gamma^{\prime}\vdash P^{\prime}\mathcal{R}Q^{\prime}\) _for some_ \(Q^{\prime}\)_,_
3. _or_ \(\mu\) _is a wait_ \(\ell((v))\)_,_ \((\nu\ell)(Q\mid\overline{\ell}\langle v\rangle)\Rightarrow Q^{\prime}\) _and_ \(\Gamma^{\prime}\vdash P^{\prime}\mathcal{R}Q^{\prime}\) _for some_ \(Q^{\prime}\)_,_
4. _or_ \(\mu=\tau/\ell\)_,_ \((\nu\ell)Q\Rightarrow Q^{\prime}\) _and_ \(\Gamma^{\prime}\vdash P^{\prime}\mathcal{R}Q^{\prime}\) _for some_ \(Q^{\prime}\)_,_
_and symmetrically for the typed transitions of \(Q\)._ Typed bisimilarity in \(\pi\ell\mathrm{w}\), _written \(\approx_{w}\), is the largest typed bisimulation._
**Proposition 27** (Soundness).: _For any \(\Gamma,P,Q\), if \(\Gamma\vdash P\approx_{w}Q\), then \(\Gamma\vdash P\simeq_{w}Q\)._
**Example 28**.: _The law \(\ell(x).\,\overline{\ell}\langle x\rangle=\mathbf{0}\) holds in \(\pi\ell\mathrm{w}\), at type \(\ell:\langle\mathsf{T}\rangle_{00}\), for any \(\mathsf{T}\)._
_Suppose \(\Gamma\vdash\ell(x).\,P\mid\ell((y)).\,Q\). Then we can prove \(\Gamma\vdash\ell(x).\,P\mid\ell((y)).\,Q\approx_{w}\ \ell(x).\,(P\mid\ell((y)).\,Q)\)._
_Using this equivalence and the law of asynchrony, we can deduce \(\ell((x)).\,P\ \simeq_{w}\ \ell(x).\,(\overline{\ell}\langle x\rangle\mid\ell((x)).\,P)\)._
An equivalence between \(\pi\ell\) processes is also valid in \(\pi\ell\mathrm{w}\). To state this property, given \(P\) in \(\pi\ell\), we introduce \([\![P]\!]_{w}\), its translation in \(\pi\ell\mathrm{w}\). The definition of \([\![P]\!]_{w}\) is simple, as we just need to add wait constructs under restrictions for \([\![P]\!]_{w}\) to be typable.
**Lemma 29**.: _Suppose \(\Gamma;\mathbb{R}\vdash P\approx Q\). Then \(\Gamma_{w}\vdash[\![P]\!]_{w}\approx_{w}[\![Q]\!]_{w}\) for some \(\pi\ell\mathrm{w}\) typing environment \(\Gamma_{w}\)._
This result shows that the addition of wait does not increase the discriminating power of contexts. We refer to Appendix B for the definition of \([\![P]\!]_{w}\) and a discussion of the proof of Lemma 29.
## 4 Related and Future Work
The basic type discipline for lock names that imposes a safe usage of locks by always releasing a lock after acquiring it is discussed in [14]. This is specified using _channel usages_ (not to be confused with the usages of Section 3.1). Channel usages in [14] are processes in a subset of CCS, and can be defined in sophisticated ways to control the behaviour of \(\pi\)-calculus processes. The encoding of references in the asynchronous \(\pi\)-calculus studied in [8] is also close to how locks are used in \(\pi\ell\mathrm{w}\). A reference is indeed a lock that must be released _immediately_ after the acquire. The typed equivalence to reason about reference names in [8] has important differences w.r.t. \(\simeq_{w}\), notably because the deadlock- and leak-freedom properties are not taken into consideration in that work.
The type system for \(\pi\ell\mathrm{w}\) has several ideas in common with [13]. That paper studies \(\lambda_{\mathrm{lock}}\), a functional language with higher-order locks and thread spawning. The type system for \(\lambda_{\mathrm{lock}}\) guarantees leak- and deadlock-freedom by relying on duality and linearity properties, which entail the absence of cycles. In turn, this approach originates in work on binary session types, and in particular on concurrent versions of the Curry-Howard correspondence [11, 7, 29, 2, 28, 30].
\(\pi\ell\mathrm{w}\) allows a less controlled form of interaction than functional languages or binary sessions. Important differences are: names do not have to be used linearly; there is no explicit notion of thread, nor a fork instruction, in \(\pi\ell\mathrm{w}\); reduction is not deterministic. The type system for \(\pi\ell\mathrm{w}\) controls parallel composition to rule out cyclic structures among interacting processes.
The simplicity of the typing rules, and of the proofs of deadlock- and leak-freedom, can be leveraged to develop a theory of typed behavioural equivalence for \(\pi\ell\) and \(\pi\ell\mathrm{w}\). Soundness of bisimilarity provides a useful tool to establish equivalence results. Proving completeness is not obvious, intuitively because
# ACEpotentials.jl: A Julia Implementation of the Atomic Cluster Expansion

William C. Witt, Cas van der Oord, Elena Gelžinytė, Teemu Järvinen, Andres Ross, James P. Darby, Cheuk Hin Ho, William J. Baldwin, Matthias Sachs, James Kermode, Noam Bernstein, Gábor Csányi, Christoph Ortner
###### Abstract
We introduce ACEpotentials.jl, a Julia-language software package that constructs interatomic potentials from quantum mechanical reference data using the Atomic Cluster Expansion _(Drautz, 2019)_. As the latter provides a complete description of atomic environments, including invariance to overall translation and rotation as well as permutation of like atoms, the resulting potentials are systematically improvable and data efficient. Furthermore, the descriptor's expressiveness enables use of a linear model, facilitating rapid evaluation and straightforward application of Bayesian techniques for active learning. We summarize the capabilities of ACEpotentials.jl and demonstrate its strengths (simplicity, interpretability, robustness, performance) on a selection of prototypical atomistic modelling workflows.
## I Introduction
Machine-learning interatomic potentials (MLIPs) continue to revolutionize the fields of molecular and materials simulation [1; 2]. MLIPs provide the means to simulate atomistic systems at or close to the accuracy of electronic structure methods, while being computationally cheaper by orders of magnitude. They make the simulation of large-scale systems and long time-scales at high model accuracy accessible and have therefore become an indispensable tool for atomic-scale simulation. Recent reviews of the field are provided in [3; 4; 5; 6]. Of particular relevance to the present work are the methods introduced in [7; 2; 8; 9].
To create an MLIP, one begins with a flexible functional form, constrained only to comply with the natural symmetries of the potential energy in three-dimensional space, then estimates its parameters using reference data, typically in the form of energies, forces, and virial stresses for a set of representative atomic configurations. Ordinarily, the data are generated with quantum mechanical techniques, such as density functional theory calculations, which may be performed only for relatively small structures. A well-trained MLIP is then expected to provide accurate predictions of processes on similar but also much larger spatial scales.
The _Atomic Cluster Expansion_ (ACE) introduced in [9] is a particular MLIP flavor that is flexible, theoretically well founded, interpretable, and for which it is straightforward to tune the cost-accuracy balance. It is establishing itself as a successful MLIP approach for a wide range of tasks, especially but not exclusively in materials simulation; see e.g. [10; 11; 12; 13; 14; 15; 16; 17]. Linear variants of the ACE model have been found remarkably data efficient and computationally efficient and as such have proven particularly useful for active learning (AL) workflows [14] as Sec. III and Sec. IV will demonstrate. Linearity in particular enables sensitivity analysis and a path towards reliable uncertainty quantification.
This article describes ACEpotentials.jl, which ties together a collection of Julia-language packages to expose a user-oriented interface facilitating the convenient construction of ACE MLIPs. To highlight the ease of use of our package, Listing 1 provides a complete Julia-language example that produces an ACE potential for a TiAl dataset.
At the time of writing, ACEpotentials.jl provides interfaces for _linear_ ACE models, which give good accuracy as well as performance both in parameter estimation and prediction. We have incorporated a range of geometric and analytical priors into the default model parameters that have proven robust in a range of tasks, including the challenging low data regime arising in active learning workflows. ACEpotentials.jl models can be used for molecular dynamics simulation in LAMMPS [18], ASE [19] and Molly.jl [20].
The Julia-language codes on which ACEpotentials.jl builds are written with ease-of-use, performance, and flexibility of model development in mind. Several variations and extensions of the ACE model implementations discussed in this article are under active development. The choice of Julia as the development language enables seamless transition from rapid prototyping to performance optimization. Moreover, Julia is establishing itself as leader in _scientific machine learning_ (see, e.g., [21]), facilitating highly customized model architectures with novel computational kernels.
Finally, we emphasize that the aim of this article is to illustrate the capabilities of ACEpotentials.jl but not to precisely document its use; for the latter see the reference material at [22], which will evolve along with the software. While the examples and code snippets provided throughout this article are compatible with the present version of ACEpotentials.jl, they should be taken primarily as illustrations of how the package may be used. The documentation will be kept up-to-date for the foreseeable future and will continually expand to describe additional options and features.
## II Methods
### Review of the linear ACE framework
#### II.1.1 Model Specification
An atomic structure is described by a collection of position-element pairs \((\mathbf{r}_{i},Z_{i})\), and the computational unit cell (with open or periodic boundary conditions). In the ACE model, the total potential energy of such a structure is decomposed into site energies,
\[E=\sum_{i}\varepsilon_{i}, \tag{1}\]
where the summation ranges over all atoms belonging to the computational cell and each \(\varepsilon_{i}\) depends on its atomic neighbourhood containing all atoms within a cutoff radius \(r_{\text{cut}}\) from \(\mathbf{r}_{i}\), taking into account the boundary conditions. The ACE framework provides a design space to construct systematic models for the site energy \(\varepsilon_{i}\) in terms of a complete linear basis of body-ordered symmetric polynomials.
For convenience we introduce the new variables \(\mathbf{x}_{i}:=(\mathbf{r}_{i},Z_{i})\) for the state of an atom and \(\mathbf{x}_{ij}:=(\mathbf{r}_{ij},Z_{i},Z_{j})\), where \(\mathbf{r}_{ij}=\mathbf{r}_{j}-\mathbf{r}_{i}\), for the state of a bond between atoms \(\mathbf{x}_{i},\mathbf{x}_{j}\). In terms of these variables the site energy is expanded in body-order, in two different formulations:
\[\varepsilon_{i} =V^{(0)}(Z_{i})+\sum_{j_{1}}V^{(1)}(\mathbf{x}_{ij_{1}})+\sum_{j_{1}<j_{2}}V^{(2)}(\mathbf{x}_{ij_{1}},\mathbf{x}_{ij_{2}})+\cdots+\sum_{j_{1}<\cdots<j_{\bar{\nu}}}V^{(\bar{\nu})}(\mathbf{x}_{ij_{1}},\ldots,\mathbf{x}_{ij_{\bar{\nu}}}) \tag{2a}\] \[=V^{(0)}(Z_{i})+\sum_{j_{1}}U^{(1)}(\mathbf{x}_{ij_{1}})+\frac{1}{2!}\sum_{j_{1},j_{2}}U^{(2)}(\mathbf{x}_{ij_{1}},\mathbf{x}_{ij_{2}})+\cdots+\frac{1}{\bar{\nu}!}\sum_{j_{1},\ldots,j_{\bar{\nu}}}U^{(\bar{\nu})}(\mathbf{x}_{ij_{1}},\ldots,\mathbf{x}_{ij_{\bar{\nu}}}). \tag{2b}\]
We call the first formulation (2a) the _canonical cluster expansion_. It can be transformed [9] into the second formulation (2b), where the sums run over all possible combinations of atoms, including all permutation-equivalent clusters and even "artificial clusters" with repeated particles. This transformation introduces unphysical self-interaction terms such as \(V^{(2)}(\mathbf{x}_{ij},\mathbf{x}_{ij})\), but this counter-intuitive choice leads to a tensor product structure that can be exploited in constructing a highly efficient evaluation scheme. Our code is unique in that it implements the transformation between the two descriptions and also allows the evaluation of the canonical formulation (2a). Indeed the default ACEpotentials.jl model specification uses a combination of the two formulations. We will briefly review the challenges involved in evaluating cluster expansion models in Appendix A.
Both series in (2) are truncated versions of an exact body-order expansion. An exact expansion would include terms up to the number of atoms in the system, while here the maximum body-order is \(\bar{\nu}+1\) (corresponding to a correlation order of \(\bar{\nu}\)), which constitutes the first approximation parameter. In practice, the truncation is performed at low to moderate \(\bar{\nu}\) (typically 5 or less) for several reasons, including control of model complexity and computational cost.
Each potential \(V^{(\nu)}\) (or, \(U^{(\nu)}\)) is parameterized by a linear model, the details of which we give in the following sections. This results in a parameterisation of the site energy that is also linear,
\[\varepsilon_{i}=\mathbf{c}\cdot\mathbf{B}_{i}, \tag{3}\]
where \(\mathbf{c}\) is a vector of parameters and \(\mathbf{B}_{i}\) a vector of basis functions (or, features) involved in the expansion of the many-body potentials \(V^{(\nu)}\) or \(U^{(\nu)}\). The basis functions are by construction invariant under rotations, reflections and permutations of like atoms. The representation is also _complete_ (or, universal) in the sense that when the approximation parameters (body-order, cutoff radius, and expansion resolution) are taken to infinity, the model can in principle represent an arbitrary smooth site-energy potential. Linearity of the model allows us to employ a vast range of established tools for parameter estimation and uncertainty quantification, and enables rapid model development by refitting to new training data or with adjusted hyperparameters.
The basis functions \(\mathbf{B}_{i}\) specify the model. In a typical example this can be done as demonstrated in Listing 2.
```
1  using ACEpotentials
2  model = acemodel(; elements = [:Ti, :Al],
3                     order = 3,
4                     totaldegree = 12,
5                     rcut = 5.5,
6                     Eref = [:Ti => -1586.0195, :Al => -105.5954])
```
* elements: list of chemical elements occurring in the system of interest
* order: maximum correlation order, \(\bar{\nu}\) in the article text; cf. Eq. (2)
* totaldegree: spatial resolution of the \(\nu\)-body potentials; cf. Eq. (9)
* rcut: (optional) cutoff radius; cf. Sec. II.2
* Eref: (optional) reference energies specifying \(V^{(0)}(Z_{i})\)
Listing 2: A typical construction of an ACE model and description of parameters.
The model object specifies the model site energy potential, from which derived properties such as potential energy, forces and virial stresses can be computed; these are the quantities used in molecular statics, molecular dynamics or sampling algorithms.
There are many additional parameters and options available to specify an ACE model, some of which we discuss throughout the remainder of this paper. For a complete list of options we refer to the documentation [22]. We only remark briefly on the Eref parameter: We recommend the explicit specification of the one-body term \(V^{(0)}\). We observed in many tests that constraining \(V^{(0)}(Z_{i})\) to be the energy of a single isolated atom with atomic number \(Z_{i}\) yields more chemically realistic potentials that are more robust in practical molecular dynamics and molecular statics simulations, especially those involving breaking and forming bonds. One provides this information to an ACE model as shown in Listing 2, line 6.
In the remainder of this section we maintain a focus on high level intuitive understanding of options and parameters and avoid details and technicalities of the ACE framework as much as possible. For those details we refer to Appendix A and to the many publications now available on the subject [6; 9; 11; 12].
#### Parameter Estimation
Having specified a physically reasonable model architecture, we must now estimate its parameters. To that end we require a training set, which typically consists of a list of atomic structures, \(\mathbf{R}=\{R\}\), for which the total potential
energy \(\mathscr{E}_{R}\in\mathbb{R}\), forces \(\mathscr{F}_{R}\in\mathbb{R}^{3\times N_{R}}\) (with \(N_{R}\) the number of atoms in the computational unit cell) and possibly also virial stresses \(\mathscr{V}_{R}\in\mathbb{R}^{6}\) (in Voigt notation) have been evaluated with an electronic structure model. We define \(E(\mathbf{c};R),F(\mathbf{c};R),V(\mathbf{c};R)\) to be the corresponding energies, forces and virials for the structure \(R\) in the ACE model, with parameters \(\mathbf{c}\). The simplest way to estimate those parameters is then to minimize the least squares loss function
\[L(\mathbf{c})=\sum_{R\in\mathbf{R}}\Big{(}w_{E,R}^{2}|E(\mathbf{c};R)-\mathscr{E}_{R}|^{2}+ w_{F,R}^{2}|F(\mathbf{c};R)-\mathscr{F}_{R}|^{2}+w_{V,R}^{2}|V(\mathbf{c};R)-\mathscr{V}_{ R}|^{2}\Big{)}. \tag{4}\]
The weights \(w_{E,R},w_{F,R},w_{V,R}\) can be used to give more or less relative "importance" to certain structures or observations. They are usually highly structured (e.g., \(w_{E,R},w_{V,R}\) are scaled with the number of atoms in a structure \(R\)), which will be discussed in more detail in Section II.5. Since the ACE model is linear in \(\mathbf{c}\) it follows that \(L(\mathbf{c})\) is quadratic, which means that minimizing \(L\) is a linear least squares problem. A wide range of efficient numerical techniques exist for its solution. In particular we will normally employ regularized or Bayesian variations of the naive least squares minimization, which are discussed in Sections II.5 and II.6.
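To make the structure of the loss (4) concrete, the following minimal sketch (plain Julia, independent of ACEpotentials.jl) assembles the weighted squared residuals for two toy structures; all reference values, model predictions and weights are placeholders.

```
# Sketch of the weighted least-squares loss in Eq. (4).
# Each named tuple plays the role of one training structure R, carrying
# reference data (E, F, V), model predictions (Emod, Fmod, Vmod) and weights.
structures = [
    (E = -10.2, F = [0.10 -0.20 0.00], V = zeros(6),
     Emod = -10.1, Fmod = [0.12 -0.18 0.01], Vmod = zeros(6),
     wE = 30.0, wF = 1.0, wV = 1.0),
    (E = -20.7, F = [0.00 0.30 -0.10; -0.20 0.10 0.00], V = 0.10 * ones(6),
     Emod = -20.5, Fmod = [0.05 0.25 -0.12; -0.15 0.08 0.02], Vmod = 0.09 * ones(6),
     wE = 5.0, wF = 0.5, wV = 0.25),
]

# L(c) evaluated for fixed model predictions: energy, force and virial residuals
loss(structs) = sum(  s.wE^2 * abs2(s.Emod - s.E)
                    + s.wF^2 * sum(abs2, s.Fmod - s.F)
                    + s.wV^2 * sum(abs2, s.Vmod - s.V)   for s in structs)

@show loss(structures)
```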
In Listing 3 we read in such a prepared training set provided in the extended XYZ format and then estimate the model parameters with a default solver (Bayesian Linear Regression; cf. Section II.6). Several steps are combined and hidden from the user in the acefit! convenience function, but all these steps can in principle also be performed manually, e.g., to explore different parameter estimation algorithms that are currently not interfaced by ACEpotentials.jl. In line 5 of the listing, the fitted model is exported to a format that can be used for molecular dynamics simulations in LAMMPS.
```
1  model = ...   # cf. Listing 2
2  P = smoothness_prior(model)
3  data, _, _ = ACEpotentials.example_dataset("TiAl_tutorial")
4  acefit!(model, data; prior = P, solver = ACEfit.BLR())
5  export2lammps("TiAl.yace", model)
```
* smoothness_prior: specifies a model prior / regularizer; cf. Section II.3
* example_dataset: provides a small training set used for testing
* data: collection of structures containing training data; cf. Section II.4
* acefit!: assembles and solves the least squares system; cf. Section II.5
* ACEfit.BLR(): default solver for parameter estimation; cf. Section II.6
* export2lammps: exports the model to a LAMMPS-readable format
Listing 3: A representative example loading a training dataset and estimating ACE model parameters.
In the remainder of Section II we will dive slightly deeper into some of the steps outlined above. Then, in Section III we will demonstrate how the framework can be used to fit potential energy models for realistic materials and molecular systems of scientific interest.
### Choice of basis functions & Geometric priors
The parameters in the model specification in Listing 2 specify a basis in which the \(V^{(\nu)}\) potentials are expanded. In the current section we will detail the _basis functions_ that are employed, while in Section II.3 we will then explain how to select a finite subset from the infinite complete basis set.
#### One-particle basis
To begin we must select a _one-particle basis_\(\phi_{k}\) in which all smooth functions \(f(\mathbf{x}_{ij})=f(\mathbf{r}_{ij},Z_{i},Z_{j})\) can be expanded. The most general form we consider is
\[\phi_{znlm}(\mathbf{r}_{ij},Z_{i},Z_{j})=R_{nl}(r_{ij},Z_{i},Z_{j})Y_{l}^{m}(\hat{ \mathbf{r}}_{ij})\delta_{zZ_{j}}, \tag{5}\]
where \(\delta\) denotes the Kronecker symbol and we have identified \(k=(z,n,l,m)\). The \(Y_{l}^{m}\) are the standard complex spherical harmonics, while \(R_{nl}\) is called the _radial basis_. The choice of \(Y_{l}^{m}\) to embed the angular component \(\hat{\mathbf{r}}_{ij}\) facilitates the exact symmetrization of the parameterisation with respect to rotations. Since \((r_{ij},Z_{i},Z_{j})\) is already invariant under rotations, the choice of \(R_{nl}\) is extremely general. Nevertheless we will below outline a heuristic that
leads to a narrow class of choices that have proven successful in many applications. However, we note that the optimal choice of \(R_{nl}\) remains an active area of research and will likely also evolve within ACEpotentials.jl.
Once \(\phi_{k}\) is selected, each potential \(V^{(\nu)}\) (or, \(U^{(\nu)}\)) is expanded in terms of a tensor product many-body basis,
\[\begin{split} V^{(1)}\big(\mathbf{x}_{ij_{1}}\big)&=\sum_{k_{1}}c_{k_{1}}^{(Z_{i})}\phi_{k_{1}}(\mathbf{x}_{ij_{1}})\\ V^{(2)}\big(\mathbf{x}_{ij_{1}},\mathbf{x}_{ij_{2}}\big)&=\sum_{k_{1},k_{2}}c_{k_{1}k_{2}}^{(Z_{i})}\phi_{k_{1}}\big(\mathbf{x}_{ij_{1}}\big)\phi_{k_{2}}\big(\mathbf{x}_{ij_{2}}\big)\\ &\;\;\vdots\\ V^{(\bar{\nu})}\big(\mathbf{x}_{ij_{1}},\ldots,\mathbf{x}_{ij_{\bar{\nu}}}\big)&=\sum_{k_{1},\ldots,k_{\bar{\nu}}}c_{k_{1}\cdots k_{\bar{\nu}}}^{(Z_{i})}\phi_{k_{1}}\big(\mathbf{x}_{ij_{1}}\big)\cdots\phi_{k_{\bar{\nu}}}\big(\mathbf{x}_{ij_{\bar{\nu}}}\big)\end{split} \tag{6}\]
The model parameters \(c_{k_{1}\cdots k_{\nu}}^{(Z_{i})}\) will be estimated from data. Note that we choose individual model parameters for each center-atom element \(Z_{i}\). During the parameter estimation, the parameters will be constrained to guarantee invariance of the resulting potentials under rotations and reflections of an atomic environment. Invariance under permutations is already ensured through the summation in (2). Appendix A reviews additional details of this invariant basis construction, resulting in the specification of \(\mathbf{B}_{i}\) in terms of which site energy is defined in (3).
To complete the model specification two steps remain: (i) the choice of radial basis \(R_{nl}\); and (ii) the selection of basis functions \((k_{1},\ldots,k_{\nu})\) that we employ in the expansions (6). In the remainder of this section we discuss (i) while (ii) will be discussed in Section II.3.
#### Radial basis
There is considerable freedom in the choice of the radial basis \(R_{nl}\), which can be thought of as a _geometric prior_. For example, it incorporates the interaction range (cutoff radius, \(r_{\text{cut}}\)) and can be tuned to capture rough qualitative information about interacting atoms. In the following we describe a class of radial bases, available through ACEpotentials.jl, that require no data-driven optimization and thus lead to genuinely linear models. At the time of writing this article, ACEpotentials.jl supports radial bases indexed by \(n\) only, i.e. \(R_{nl}=R_{n}\) for all \(l\). This class is described by
\[R_{n}(r_{ij},Z_{j},Z_{i})=f_{\text{env}}(r_{ij},Z_{j},Z_{i})P_{n}\big{(}y(r_{ ij},Z_{j},Z_{i})\big{)}, \tag{7}\]
with the following components:
* \(y\) is an element-dependent distance transform, which can be used to impose increased spatial resolution where needed, especially near the equilibrium bond-length. We typically employ \[y(r_{ij},Z_{i},Z_{j})=\bigg{(}1+a\frac{(r/r_{0})^{q}}{1+(r/r_{0})^{q-p}}\bigg{)}^{-1},\] where \(r_{0}\) is an estimate of the equilibrium bond-length in the system and \(a\) is chosen to maximize the gradient of \(y\) at \(r=r_{0}\), thereby maximizing resolution for the nearest-neighbour interaction. The idea behind this transform is that it behaves as \(r^{-q}\) for large \(r\) and as \(1-r^{p}/a\) for small \(r\), thereby decreasing resolution in those two limits at rates determined by the parameters \(p,q\). The reduction in resolution in the small \(r\) regime is desirable when no data is available to specify the model in that regime; see also Figure 1. (A minimal numerical sketch of this transform is given after this list.)
* \(P_{n}\) is an orthogonal basis in \(y\)-coordinates. Our default choice is the Legendre orthogonal polynomial basis, which implicitly assumes equidistribution of resolution in \(y\)-coordinates.
* Finally, \(f_{\text{env}}\) is an envelope that specifies the cutoff radius \(r_{\text{cut}}\).
* The default and canonical choice for the many-body basis is \[f_{\text{env}}(r_{ij},Z_{i},Z_{j})=y^{2}(y-y_{\text{cut}})^{2},\] where \(y_{\text{cut}}=y(r_{\text{cut}},Z_{i},Z_{j})\).
* The default choice of envelope for the pair potential \(U^{(1)}\) or \(V^{(1)}\) is Coulomb potential tilted to ensure a smooth cutoff, \[f_{\text{env}}(r_{ij},Z_{i},Z_{j})=\big{(}\tfrac{r_{ij}}{r_{0}})^{-1}-\big{(} \tfrac{r_{\text{cut}}}{r_{0}}\big{)}^{-1}+\big{(}\tfrac{r_{\text{cut}}}{r_{0}} \big{)}^{-2}\big{(}\tfrac{r_{ij}}{r_{0}}-\tfrac{r_{\text{cut}}}{r_{0}}\big{)},\] which is repulsive as \(r_{ij}^{-1}\) as \(r\to 0\) but continuously differentiable at the cutoff. While the envelope for the many-body potential is canonical, for the pair potential envelope there is significant scope for inserting prior modelling knowledge of the system of interest. For example, one could replace the \(r^{-1}\) type behaviour with \(r^{-p}+r^{-q}\) to obtain different behaviour as \(r\to 0\) and \(r\to r_{\text{cut}}\), or in fact one could incorporate the ZBL potential [23] to obtain asymptotically exact repulsion. The effect of the distance transform \(y=y(r)\) and of the envelope function are visualized in Figure 1.
* Repulsion restraint: The construction outlined above means that, in the canonical cluster expansion formulation, the pair potential is given by \[V^{(1)}(r_{ij},Z_{i},Z_{j})=f_{\text{env}}(r_{ij},Z_{i},Z_{j})p_{Z_{i}Z_{j}}( y_{ij}),\] where \(p_{Z_{i}Z_{j}}\) is a polynomial in transformed \(y\) coordinates. By imposing the constraint that \(p_{Z_{i}Z_{j}}(y_{0})=1\), where \(y_{0}=y(0,Z_{i},Z_{j})\), we ensure that \(E\sim f_{\text{env}}(r_{ij})\) as \(r_{ij}\to 0\). This _guarantees_ repulsive behaviour of the total energy, independently of whether or not this is provided through the training data. In practice we enforce this weakly through a mild restraint to give the potential more flexibility.
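The following short sketch (plain Julia, not the ACEpotentials.jl implementation) evaluates the distance transform and the many-body envelope described in the list above for illustrative parameter values; in particular, the choice of \(a\) here is a placeholder rather than the package's gradient-maximising choice.

```
# Sketch of the distance transform y(r) and many-body envelope f_env(r).
r0 = 2.9            # assumed equilibrium bond length in Å (placeholder)
p, q = 2, 4         # exponents controlling small-r and large-r behaviour
a = (p + q) / 2     # placeholder; ACEpotentials.jl chooses a to maximise dy/dr at r = r0

y(r)    = 1 / (1 + a * (r / r0)^q / (1 + (r / r0)^(q - p)))

rcut    = 2 * r0
ycut    = y(rcut)
fenv(r) = y(r)^2 * (y(r) - ycut)^2      # canonical many-body envelope

for r in (0.5, r0, 4.0, rcut)
    println("r = ", r, "  y = ", round(y(r), digits = 3),
            "  fenv = ", round(fenv(r), digits = 5))
end
```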
Figure 1: **Center:** a typical interaction potential \(V(r)\), plotted in \(r\)-coordinates. **Left:** a coordinate transform \(y=y(r)\) to a non-dimensional variable \(y\) that increases resolution near \(r=r_{0}\) where the potential minimum is located and decreases resolution below \(r_{\text{min}}\) (the smallest radial distance occurring in the training dataset), to zero near \(r=0\) where there is no data (and the envelope \(f_{\text{env}}\) becomes relevant) and near \(r=r_{\text{cut}}\) where the potential converges to a constant. The histograms show the distribution of a typical dataset in both \(r\)- and \(y\)-coordinates. **Right:** the interaction potential plotted (i) in transformed coordinates \(V(r(y))\), (ii) with the default pair envelope removed and (iii) with the theoretically optimal, typically unknown, envelope removed. The parameterisation and the smoothness priors are not applied to the original potential \(V(r)\) but to the transformed potential \(V(y)/f_{\text{env}}(y)\).
```
1   using ACEpotentials
2   elements = [:Ti, :Al]
3   totaldegree = 12
4   r0 = (rnn(:Ti) + rnn(:Al)) / 2
5   rcut = 2 * r0
6   trans = AgnesiTransform(; r0 = r0, p = 2)
7   fenv = PolyEnvelope(1, r0, rcut)
8   radbasis = transformed_jacobi_env(totaldegree, trans, fenv, rcut)
9   model = acemodel(elements = elements,
10                   order = 3,
11                   totaldegree = totaldegree,
12                   radbasis = radbasis)
```
Listing 4: An example demonstrating more fine-grained control over the choice of radial basis \(R_{nl}\). The function transformed_jacobi_env constructs the polynomial basis from which the radial basis is constructed, which can be within the general class of Jacobi polynomials, but is normally taken to be the Legendre basis in transformed \(y\) coordinates.
### A priori sparsification & Smoothness prior
We now turn towards the second aspect of basis construction: how to select which of the infinitely many tensor product basis functions
\[\phi_{k_{1}}\otimes\cdots\otimes\phi_{k_{\nu}}, \tag{8}\]
specified by the tuples \((k_{1},\ldots,k_{\nu})\), we wish to incorporate into the expansion of the \((\nu+1)\)-body potential \(V^{(\nu)}\).
#### Sparse basis selection
Recall that \(k_{t}=(z_{t},n_{t},l_{t},m_{t})\), and that the bound \(|m_{t}|\leq l_{t}\) on \(m_{t}\) automatically gives a selection of possible \(m_{t}\) values once \(l_{t}\) bounds are chosen. Roughly speaking, \(n_{t},l_{t}\) measure how oscillatory the corresponding basis functions are in, respectively, the radial \(r_{t}\) and angular \(\hat{\mathbf{r}}_{t}\) coordinates. Therefore one typically puts upper bounds \(n_{t}\leq n_{\rm max}\) and \(l_{t}\leq l_{\rm max}\) in the basis selection, i.e. one chooses all basis functions \((k_{1},\ldots,k_{\nu})\) in the expansion for which these bounds are satisfied. Smaller bounds lead to a smaller basis, but also to less flexibility and correspondingly lower accuracy on the training set.
This simple strategy is available in ACEpotentials.jl but the default usage takes the notion of regularity a step further and bounds the _mixed regularity_ of the basis functions we select. This is done by choosing a maximum _total_ degree totaldegree\((\nu)\) for each correlation order \(\nu\) and choosing all basis functions \((k_{1},\ldots,k_{\nu})\) such that
\[1\leq\nu\leq\bar{\nu}\quad\text{and}\quad\sum_{t=1}^{\nu}n_{t}+w_{\rm L}l_{t} \leq\text{totaldegree}(\nu). \tag{9}\]
The additional weight \(w_{\rm L}\) allows us to select whether we require lower or higher resolution of the angular versus radial components of the interaction. Note that a higher weight \(w_{\rm L}\) decreases the angular resolution. The resulting selected basis is much sparser and is appropriate for parameterising very smooth functions in high dimension.
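As an illustration of the effect of the selection rule (9), the following plain-Julia sketch counts the \((n_{1},l_{1},n_{2},l_{2})\) tuples admitted for \(\nu=2\) and compares with a simple rectangular selection; it ignores the element and \(m\) indices as well as the symmetry reduction performed in the actual basis construction.

```
# Count index tuples admitted by the sparse total-degree rule of Eq. (9) for ν = 2.
totaldegree = 12
wL = 1.5

sparse_sel = [(n1, l1, n2, l2)
              for n1 in 1:totaldegree, l1 in 0:totaldegree,
                  n2 in 1:totaldegree, l2 in 0:totaldegree
              if (n1 + wL * l1) + (n2 + wL * l2) <= totaldegree]
println("total-degree selection:  ", length(sparse_sel), " tuples")

# simple "rectangle" selection n ≤ nmax, l ≤ lmax for comparison
nmax, lmax = 6, 4
rect_sel = [(n1, l1, n2, l2) for n1 in 1:nmax, l1 in 0:lmax, n2 in 1:nmax, l2 in 0:lmax]
println("rectangular selection:   ", length(rect_sel), " tuples")
```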
The default usage is that totaldegree\((\nu)\) takes the same value for all \(\nu\) but one may also specify a separate total degree for each correlation order \(\nu\). For example, Listing 5 demonstrates how to select a stronger weight \(w_{\rm L}=2.0\) thus providing less angular resolution, as well as how to select total polynomial degrees \(25,23,20,10\) for, respectively, parameterising \(V^{(1)},V^{(2)},V^{(3)},V^{(4)}\).
```
1  using ACEpotentials
2  model = acemodel(elements = [:Ti, :Al],
3                   order = 4,
4                   wL = 2.0,
5                   totaldegree = [25, 23, 20, 10])
```
* wL: specifies the relative resolution in the angular and radial basis
* totaldegree: specifies separate degrees for each correlation order
Listing 5: Construct an ACE model with finer control on the sparse selection of basis functions.
Significant further fine-tuning of the basis specification is possible, e.g. choosing different total degrees and \(w_{\mathrm{L}}\) parameters for different interacting species. This is explained in the package documentation [22].
#### Smoothness Prior
The foregoing discussion concludes the model _architecture_ specification. An issue closely related to the sparse basis selection (9) is the definition of a smoothness prior that may be employed for ridge regression (regularized least squares) which we discuss in Section II.5 or in the Bayesian framework of Section II.6. As explained above, the value
\[\sum_{t=1}^{\nu}n_{t}+w_{\mathrm{L}}l_{t}\]
is a qualitative estimate for how oscillatory or smooth a basis function (8) is. We can extend this definition slightly by adding another parameter \(p\) and defining
\[\gamma_{\mathbf{znlm}}:=\sum_{t=1}^{\nu}n_{t}^{p}+w_{\mathrm{L}}l_{t}^{p}, \tag{10}\]
where \(\mathbf{z}=(z_{t})_{t=1}^{\nu},\mathbf{n}=(n_{t})_{t=1}^{\nu},\mathbf{l}=(l_{t})_{t=1}^{\nu}\) and \(\mathbf{m}=(m_{t})_{t=1}^{\nu}\). We then collect these parameters into a diagonal matrix \(\Gamma\) with \(\Gamma_{\mathbf{kk}}=\gamma_{\mathbf{k}}\). If \(\mathbf{c}\) are the model parameters then \(\|\mathbf{\Gamma}\mathbf{c}\|_{2}\) will be a rough estimate for how smooth the potential energy surface is.
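The following minimal sketch (plain Julia, not the internal ACEpotentials.jl representation) evaluates \(\gamma\) from (10) for a toy list of basis-function index tuples and assembles the corresponding diagonal operator \(\Gamma\).

```
using LinearAlgebra

# algebraic smoothness prior of Eq. (10) for a toy basis
p  = 4
wL = 1.0

# each basis function is represented here only by its n = (n_t) and l = (l_t) indices
basis = [ (n = [2],       l = [0]),          # a 2-body basis function
          (n = [1, 3],    l = [1, 1]),       # a 3-body basis function
          (n = [2, 2, 4], l = [0, 2, 2]) ]   # a 4-body basis function

γ(b) = sum(b.n .^ p) + wL * sum(b.l .^ p)
Γ = Diagonal([γ(b) for b in basis])

@show diag(Γ)
```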
The matrix \(\Gamma\) also serves as a smoothness prior within the Bayesian interpretation of ridge regression: the prior distribution for the model parameters \(\mathbf{c}\) is given by a multivariate normal distribution that is centered at the origin and has variance proportional to \(\Gamma^{-2}\); see Sections II.5 and II.6. In ACEpotentials.jl this operator can be constructed as shown in Listing 6, with \(p=4,w_{\mathrm{L}}=1\) the default.
```
1  model = ...   # cf. Listing 2
2  Γ = smoothness_prior(model; p = 4, wL = 1)
```
Listing 6: Construct an operator that estimates the smoothness of the MLIP model, to be used as a Tikhonov regulariser, or prior in a Bayesian framework.
The resulting operator \(\Gamma\) may now be used to specify the regularizer (or prior) of parameter estimation algorithms, e.g., in Listing 3, line 2 and explained in more detail in Sections II.5 and II.6. A key point is that \(\Gamma\) is a _rigorous_ smoothness prior for the canonical cluster expansion (2a) but only a heuristic for the self-interacting expansion (2b).
It is interesting in general, but in particular in the low-data regime, to explore different choices of priors. Two particular variants that are also available in ACEpotentials.jl are the exponential and Gaussian priors
\[\gamma_{\mathbf{znlm}}^{\exp}=\exp\Big{(}\alpha_{1}\sum_{t}l_{t}+\alpha_{n}\sum_{t }n_{t}\Big{)},\qquad\text{and}\qquad\gamma_{\mathbf{znlm}}^{\text{gauss}}=\exp \Big{(}\sigma_{1}\sum_{t}l_{t}^{2}+\sigma_{n}\sum_{t}n_{t}^{2}\Big{)},\]
which enforce even stronger smoothness requirements than the algebraic prior (10) and are currently still experimental features.
### Training data
In the foregoing sections we discussed in some depth how an ACE interatomic potential architecture can be conveniently specified. The next task is to estimate the parameters matching the model to training data.
A training dataset consists of a collection of reference structures, \(\mathbf{R}=\{R\}\), each with associated potential energy \(\mathscr{E}_{R}\in\mathbb{R}\), forces \(\mathscr{F}_{R}\in\mathbb{R}^{3\times N_{R}}\) and, when appropriate, virials \(\mathscr{V}_{R}\in\mathbb{R}^{6}\) (Voigt notation). The reference energies, forces and virials are typically obtained by evaluating a "high fidelity" reference potential energy surface for which we wish to obtain an ACE surrogate model. Density Functional Theory is a common choice, but higher levels of theory such as Coupled-Cluster methods are also used, especially for non-periodic systems. In addition each training structure should be given a label that specifies related sub-groups. For example, these subgroups could indicate different phases of a material, and the resulting labels might be "bcc", "fcc", "liquid". The label could also indicate the MD temperature from which the structures were generated, e.g. "fcc500K" or "liquid2500K". This allows convenient filtering of the training set, e.g., for assigning training weights (cf. Section II.5) or fitting to subsets.
Acquisition of training data need not be performed within the ACEpotentials.jl package, but can be undertaken in any simulation software that makes it convenient to generate and manipulate atomic structures, perform molecular dynamics or Monte Carlo simulations, and to evaluate structures using a high fidelity electronic structure model. Because of the general ease of use and in particular ease of interoperability with the Julia molecular simulation eco-system, we often use the Atomic Simulation Environment [24].
The standard format for storing and retrieving a training set in ACEpotentials.jl is the extended XYZ format and can be read as shown in Listing 7. This results in a list of atomic structures storing the structure information as well as the training data.
```
1  using ACEpotentials
2  pathtodata = "path/to/data.xyz"
3  data = read_extxyz(pathtodata)
```
Listing 7: Reading a training set from an extended XYZ file.
#### Overview of Training Set Acquisition
The acquisition of training data is often the most time-consuming aspect of MLIP development. An in-depth discussion goes beyond the scope of this software review article; important details can be found for example in [5; 12; 25; 26; 27]. In the remainder of this section we give an outline of general strategies to consider, while in Section III we go into practical aspects of how training sets can be constructed in a few prototypical applications and what kind of tools ACEpotentials.jl provides to support that task.
The overarching requirements are that training sets (1) must contain small enough atomic structures that they can be evaluated using high-fidelity electronic structure models; and (2) must contain snapshots of all possible local atomic configurations one expects to encounter during simulation and prediction tasks. Thus, generating a training set reduces to generating representative atomic structures which are then evaluated with the reference model to obtain target potential energies, forces and virials. While the latter is usually straightforward and varies little between projects, there is no standard way yet to generate the training structures. The choice will depend on the atomic system at hand, and the simulation tasks that the model must be able to perform reliably, e.g. which system properties (observables) are to be modelled.
As a first step, one should "sketch out" the parts of the potential energy landscape that are of interest, e.g. construct one representative structure for each distinct energy minimum of interest. This might include different phases or material defects that the final model should be able to describe. Next, one generates random samples from those sketches, for example by displacing the atom positions (randomly, along normal modes, via volume scans, and so forth), or by subsampling an _ab initio_ molecular dynamics trajectory. After collecting a seemingly adequate number of training structures (the total number of observations should normally exceed the number of parameters) one can fit a first model and test that model's accuracy with respect to some target property. If the accuracy is inadequate, or the model is not robust (e.g., an MD simulation is unstable), then a good strategy is to proceed with an iterative model refinement process. In each iteration additional training structures are selected to converge the model's accuracy with respect to the target properties of interest. One might add hand-crafted structures to fix a particular flaw (e.g. to improve the description of inter-molecular interaction in a molecular liquid, or to include supercells with vacant atomic sites), or use model-driven MD to explore relevant parts of the potential energy surface at lower computational expense (for example, low potential energy regions, to bring the model's Boltzmann ensemble closer to that of the reference, and a wider temperature/pressure range than intended for the application of interest, to make the model-driven simulations more stable).
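As a minimal illustration of the random-displacement step described above, the following plain-Julia sketch generates rattled copies of a toy seed structure; in practice one would manipulate real structures with ASE or a comparable package and evaluate the resulting configurations with the reference electronic structure code.

```
using Random
Random.seed!(0)

# toy seed structure: positions of 3 atoms stored as the columns of a 3×3 matrix (Å)
seed_positions = [0.0  1.4  0.0;
                  0.0  0.0  1.4;
                  0.0  0.0  0.0]
σ_rattle = 0.05          # displacement amplitude in Å (placeholder)

# add independent Gaussian displacements to every coordinate
rattle(X, σ) = X .+ σ .* randn(size(X)...)

training_structures = [rattle(seed_positions, σ_rattle) for _ in 1:10]
println("generated ", length(training_structures), " perturbed structures")
```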
Iterative model refinement is closely related to _active learning_. That strategy assumes that there is an accurate and efficient way available to estimate model uncertainty. During a simulation task, for example a molecular dynamics simulation, when a structure with high uncertainty is encountered, it is evaluated with a reference method and added to the training data. To accelerate this process, we developed Hyper-Active Learning [14], which biases molecular dynamics simulations towards high-uncertainty and high predicted error regions. This strategy is sometimes capable of generating many independent training samples more rapidly. Section III will go into some detail on how this strategy is used in practice.
### Parameter estimation: ridge regression
Recall from Section II.1 that the linear ACE models are parameterized linearly as shown in (3). As described in Section II.4 we estimate parameters by matching the model to observations of total energies, forces and virials evaluated via a high fidelity reference model on different training structures \(R\in\mathbf{R}\), where \(\mathbf{R}\) denotes the training set. To estimate the parameters we minimize the loss function (4). In the current section, we go into further details of the parameter estimation process once the model and training set have been specified.
First, we discuss the regression weights \(w_{E,R}\), \(w_{F,R}\) and \(w_{V,R}\), which allow users to specify the relative importance of different observations and structures. In principle one could specify individual weights for each structure \(R\) and observation type \(E,F,V\). In practice, it has proven convenient to label all structures \(R\) with a _configuration type_ as described in Section II.4 and to assign weights according to such groups. In addition the weights \(w_{E,R},w_{V,R}\) should scale like \(1/\sqrt{N_{R}}\) where \(N_{R}\) denotes the number of atoms in the structure \(R\)[2; 12]. Thus, the weights \(w_{E,R},w_{V,R}\) take the form
\[w_{E,R}=\frac{\tilde{w}_{E,\text{cftype}(R)}}{\sqrt{N_{R}}},\qquad w_{F,R}=\tilde{w}_{F,\text{cftype}(R)},\qquad w_{V,R}=\frac{\tilde{w}_{V,\text{cftype}(R)}}{\sqrt{N_{R}}},\]
with \(\tilde{w}_{*,\text{cftype}}\) defined by the user as follows: Suppose, for example, that a training set contains several solid phase structures as well as liquid structures, then we may wish to demand a higher fit accuracy on the solid structures. In addition we typically find that energies must be given higher weights in order to achieve the best possible balance of accuracy. This might result in weight specifications as shown in Listing 8, lines 4-5.
```
1   model = ...                     # specify model; see e.g. Listing 2
2   data = ...                      # load training data; see e.g. Listing 7
3   P = smoothness_prior(model)     # regularisation operator; see Sec. II.3
4   weights = Dict("default" => Dict("E" => 30.0, "F" => 1.0, "V" => 1.0),
5                  "liquid"  => Dict("E" => 5.0,  "F" => 0.5, "V" => 0.25))
6   solver = BLR(tol = 1e-3, P = P) # specify the solver; see Table I for options
7   acefit!(model, data; solver = solver, weights = weights)  # solve the lsq problem, update model parameters
8
9   # model accuracy on a test set
10  testdata = ...                  # load test data
11  errors(testdata, model)
12
13  # export the fitted potential
14  export2json("model.json", model)
15  export2lammps("model.yace", model)
```
Listing 8: Prototypical parameter estimation script, using some simple control over regression weights and solver parameters.
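The following plain-Julia sketch (not an ACEpotentials.jl API) illustrates how the per-structure weights defined above follow from configuration-type weights of the kind specified in Listing 8; the numerical values and the fallback rule are placeholders.

```
# configuration-type weights (w̃), analogous to Listing 8, lines 4-5
cfg_weights = Dict("default" => (E = 30.0, F = 1.0, V = 1.0),
                   "liquid"  => (E = 5.0,  F = 0.5, V = 0.25))

# per-structure weights: energy and virial weights scale with 1/sqrt(N_R)
function structure_weights(cfgtype, N)
    w = get(cfg_weights, cfgtype, cfg_weights["default"])
    return (wE = w.E / sqrt(N), wF = w.F, wV = w.V / sqrt(N))
end

@show structure_weights("liquid", 64)
@show structure_weights("bcc", 2)     # unknown type falls back to "default"
```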
Next we discuss the minimization of the loss. Since all observations we consider here are linear, the minimization of \(L(\mathbf{c})\) can be rewritten in the form
\[\operatorname*{arg\,min}_{\mathbf{c}}\ \big{\|}\mathbf{W}(\mathbf{y}-\mathbf{A}\mathbf{c}) \big{\|}^{2}, \tag{11}\]
where \(\mathbf{y}\) is a vector containing the observation values \(\mathcal{E}_{R},\mathcal{F}_{R},\mathcal{V}_{R}\), \(\mathbf{A}\) is the design matrix containing the ACE basis values corresponding to those observations and \(\mathbf{W}\) a diagonal matrix containing the weights \(w_{E,R},w_{F,R},w_{V,R}\). Solving
the linear least squares system (11) often results in overfitting, hence one almost always employs regularized methods, for example the ridge regression formulation,
\[\operatorname*{arg\,min}_{\mathbf{c}}\,\left\|\mathbf{W}(\mathbf{y}-\mathbf{A} \boldsymbol{c})\right\|^{2}+\lambda\big{\|}\boldsymbol{\Gamma}\boldsymbol{c} \big{\|}^{2}, \tag{12}\]
where \(\boldsymbol{\Gamma}\) specifies the form of the regularizer and \(\lambda\) a scaling parameter determining the relative weight of the regularisation. This formulation of the least squares problem is often also called regularized least squares, and the \(\lambda\|\boldsymbol{\Gamma}\boldsymbol{c}\|^{2}\) term is often called generalized Tikhonov regularisation. The default for \(\boldsymbol{\Gamma}\) is zero or the identity, depending on the choice of solver. Our recommendation is to use the smoothness prior introduced in (10) instead for most solvers. Automatic relevance determination (ARD) is unique amongst the ridge regression solvers available in ACEpotentials.jl in that it estimates a regularizer \(\boldsymbol{\Gamma}\) from the sensitivity of the parameters to the training data, at additional computational cost; see Section II.6 for more details.
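As a minimal numerical illustration of (12), the following plain-Julia sketch solves a small synthetic ridge regression problem by extending the linear system, the strategy used by the QR solver in Table I; the design matrix, prior and weights are synthetic placeholders rather than ACE basis values.

```
using LinearAlgebra, Random
Random.seed!(1)

nobs, nbasis = 200, 50
A = randn(nobs, nbasis)                      # design matrix (basis values)
c_true = randn(nbasis)
y = A * c_true + 0.05 * randn(nobs)          # noisy observations
W = Diagonal(ones(nobs))                     # regression weights
Γ = Diagonal(1.0 .+ (1:nbasis) .^ 2 ./ 100)  # toy smoothness prior
λ = 1e-3

# generalised Tikhonov regularisation via the extended system [W*A; sqrt(λ)*Γ] c ≈ [W*y; 0]
Aext = [W * A; sqrt(λ) * Matrix(Γ)]
yext = [W * y; zeros(nbasis)]
c = Aext \ yext                              # QR-based least squares solve

@show norm(A * c - y) / norm(y)              # relative residual of the fit
```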
To solve the ridge regression problem (12), ACEpotentials.jl employs the package ACEfit.jl1, which offers a range of such algorithms. In the simplest setting, it can be used as shown in Listing 8, lines 6-7. For a list of the most important solvers, see Table I. For large models and/or large datasets, the parameter estimation task can be computationally challenging and may have to be performed on a cluster.
Footnote 1: [https://github.com/ACEsuit/ACEfit.jl](https://github.com/ACEsuit/ACEfit.jl)
For small and moderate datasets we normally recommend the BLR method. For large datasets, where finely tuned regularisation is often less important, the random matrix sketching RRQR and iterative LSQR solvers may be more appropriate.
Once the model parameters are determined as shown above, we typically wish to perform two tasks: (1) confirm the model accuracy on a test set; and (2) export the model to a format that can be used in standard MD codes, e.g., LAMMPS and ASE. Suppose that we are provided with a test data set testdata, then we can determine the model errors on that test set as seen in Listing 8, lines 9-11. This will print tables of RMSE and MAE errors for individual configuration types. If we wish to store and/or export the fitted potential for later use, we typically save it in the .json format, which can be read by ACEpotentials.jl as well as its Python interface to ASE, and in the .yace format, which can be read by the pace extension to LAMMPS; cf. Listing 8, lines 13-15.
* QR, **QR decomposition:** Direct solution of the ridge regression problem (12). Tikhonov regularisation is imposed by extending the linear system. This method should rarely be used in practice and is included mostly for theoretical interest and the sake of completeness.
  solver = QR(lambda = 0.0)
* LSQR, **Krylov method:** the standard iterative Krylov algorithm to solve the ridge regression problem (12). Tikhonov regularisation is imposed implicitly in the algorithm, with damp corresponding to the parameter \(\lambda\). Early termination, by adjusting atol, provides an additional and different form of regularisation. This algorithm is suitable for very large-scale parameter estimation problems.
  solver = LSQR(damp = 1e-4, atol = 1e-6)
* RRQR, **Rank-revealing QR decomposition:** A random matrix sketching approach, which is computationally more efficient than the standard QR decomposition. In addition, the parameter rtol is closely related to \(\lambda\) in (12) but not identical. Instead of adding a Tikhonov term, RRQR regularisation is imposed by removing highly sensitive subspaces as determined by rtol. For large problems, this algorithm is more performant than the standard QR decomposition.
  solver = RRQR(rtol = 1e-5)
* BLR, **Bayesian Linear Regression** (or, Bayesian ridge regression): specifies a class of solvers that estimate regularisation hyperparameters; depending on the setting it estimates the scaling parameter \(\lambda\) or the entire Tikhonov matrix \(\boldsymbol{\Gamma}\). This solver also determines a posterior model distribution that can be used for uncertainty quantification. See Section II.6 for further details. This algorithm is more robust than QR, LSQR, RRQR, but computationally more intensive. It is highly recommended for relatively small datasets.
  solver = BLR()

Table I: Table of solvers for the ridge regression problem (12).
### Bayesian framework for parameter estimation
Uncertainty estimates of model predictions are highly sought after tools to judge the accuracy of a prediction during simulation with a fitted model, but can also be employed to great effect during the model development workflow, e.g., in an active learning context. Such uncertainty estimates can be derived in a principled way by recasting the ridge regression problem (12) in a Bayesian framework where inference is based on the Bayesian posterior distribution
\[\text{post}(\mathbf{c})=p(\mathbf{c}\,|\,\mathbf{A},\mathbf{y})\propto p(\mathbf{A}, \mathbf{y}\,|\,\mathbf{c})\,p(\mathbf{c}). \tag{13}\]
Here, \(p(\mathbf{A},\mathbf{y}\,|\,\mathbf{c})\) denotes the likelihood of the observed data, and \(p(\mathbf{c})\) the prior distribution on the model parameters. The Bayesian analogue of (12) is a Bayesian Linear Regression model with Gaussian observational noise and prior,
\[p(\mathbf{A},\mathbf{y}\,|\,\mathbf{c}) \propto\exp\left(-\frac{1}{2}(\mathbf{y}-\mathbf{A}\mathbf{c})^{T}( \beta\mathbf{W}^{2})(\mathbf{y}-\mathbf{A}\mathbf{c})\right),\qquad\text{and} \tag{14}\] \[p(\mathbf{c}) \propto\exp\left(-\frac{1}{2}\mathbf{c}^{T}\mathbf{\Sigma}_{0}^{-1}\mathbf{c }\right), \tag{15}\]
where the covariance \(\beta^{-1}\mathbf{W}^{-2}\) of the observation noise depends on the regression weight matrix \(\mathbf{W}\) and a hyper-parameter \(\beta>0\). This choice of prior and noise model yields a Gaussian posterior distribution, \(p(\mathbf{c}\,|\,\mathbf{A},\mathbf{y})=\mathcal{N}(\mathbf{c};\,\mathbf{\mu},\mathbf{\Sigma})\), with mean and covariance given, respectively, by \(\mathbf{\mu}=\beta\mathbf{\Sigma}\mathbf{A}^{T}\mathbf{W}^{2}\mathbf{y}\) and \(\mathbf{\Sigma}=\left(\beta\mathbf{A}^{T}\mathbf{W}^{2}\mathbf{A}+\mathbf{\Sigma}_{0 }^{-1}\right)^{-1}.\) We assume that the prior covariance \(\mathbf{\Sigma}_{0}\) is of the form of a diagonal matrix. The above Bayesian model can be connected to the ridge regression formulation of equation (12) by noticing that maximising the posterior density (13) is equivalent to minimizing the regularized loss in (12) when \(\mathbf{\Sigma}_{0}^{-1}=\zeta\mathbf{\Gamma}^{2}\) for some \(\zeta>0\) and \(\lambda=\zeta/\beta\).
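For a concrete picture of these formulas, the following plain-Julia sketch evaluates the posterior mean and covariance on synthetic data; here the hyperparameters \(\beta\) and \(\mathbf{\Sigma}_{0}\) are fixed placeholder values, whereas the BLR solver estimates them by evidence maximisation as described below.

```
using LinearAlgebra, Random
Random.seed!(2)

nobs, nbasis = 100, 20
A  = randn(nobs, nbasis)
y  = A * randn(nbasis) + 0.1 * randn(nobs)
W2 = Diagonal(ones(nobs))             # squared regression weights W²
β  = 100.0                            # noise precision (placeholder hyperparameter)
Σ0 = Diagonal(fill(10.0, nbasis))     # diagonal prior covariance (placeholder)

# posterior covariance Σ = (β Aᵀ W² A + Σ0⁻¹)⁻¹ and mean μ = β Σ Aᵀ W² y
Σ = inv(Symmetric(β * A' * W2 * A + inv(Σ0)))
μ = β * Σ * A' * W2 * y

@show norm(A * μ - y) / norm(y)       # relative residual of the posterior mean
@show maximum(diag(Σ))                # largest marginal posterior variance
```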
#### Solvers and model selection via evidence maximisation
The reliability of uncertainty estimates critically depends on the values of the model hyper-parameters, the noise and prior covariance matrices \(\beta^{-1}\mathbf{W}^{-2}\) and \(\mathbf{\Sigma}_{0}\). In ACE, it is sometimes difficult to make informed guesses of explicit values of these hyper-parameters that lead to good fits. We therefore commonly employ empirical Bayes approaches that infer appropriate values of these parameters directly from the training data by virtue of maximising the model evidence
\[p(\mathbf{A},\mathbf{y}\,|\,\mathbf{\Sigma}_{0},\beta) =\int p(\mathbf{A},\mathbf{y}\,|\,\mathbf{c},\beta)p(\mathbf{c}\,|\,\mathbf{ \Sigma}_{0})d\mathbf{c} \tag{16}\] \[=\sqrt{\frac{\beta(2\pi)^{-N_{\text{noise}}}|\mathbf{\Sigma}|}{|\mathbf{ \Sigma}_{0}||\mathbf{W}^{-2}|}}\exp\left(-\frac{1}{2}(\mathbf{y}-\mathbf{A}\bm {\mu})^{T}(\beta\mathbf{W}^{2})(\mathbf{y}-\mathbf{A}\mathbf{\mu})-\frac{1}{2}\bm {\mu}^{T}\mathbf{\Sigma}_{0}^{-1}\mathbf{\mu}\right)\]
as a function of \(\mathbf{\Sigma}_{0},\beta\). Intuitively, maximising the model evidence results in a model where the regularising effect of the covariance matrix \(\mathbf{\Sigma}_{0}\) and the degree of penalisation of model misfit--modelled by the noise covariance matrix \(\beta^{-1}\mathbf{W}^{-2}\)--are balanced against the degree to which the regression coefficients are determined by the data.
Within ACEpotentials.jl this is implemented in the BLR solver (cf. Table 1). Different solver options result in different constraints on the form of the prior covariance \(\mathbf{\Sigma}_{0}\), and we refer to the documentation [22] for further details.
#### Uncertainty estimates via committees
Formally, the Bayesian ridge solver provides not an optimal parameter vector \(\mathbf{c}\) but a posterior parameter distribution \(p(\mathbf{c})\). In practice, one then selects the mean parameter vector \(\mathbf{\mu}\) to specify the model. However, the posterior distribution remains important to estimate the uncertainty of predictions. Evaluating such uncertainties from the exact posterior distribution is computationally expensive; instead, ACEpotentials draws \(K\) samples \(\{\mathbf{c}_{k}\}_{k=1}^{K}\) from \(\text{post}(\mathbf{c})\) resulting in a committee of ACE models which can be used to obtain computationally efficient uncertainty estimates for predictions. For example, the standard deviation \(\sigma\) of a total energy prediction can be approximated by a committee via
\[\tilde{\sigma}^{2}=\frac{1}{\beta w_{E,R}^{2}}+\frac{1}{K}\sum_{k=1}^{K}(E^{k}- E^{\mathbf{\mu}})^{2}, \tag{17}\]
where \(E^{\mathbf{\mu}}\) is the prediction made by the mean model with parameters \(\mathbf{\mu}\), while \(E^{k}\) are the committee predictions from models with parameters \(\mathbf{c}_{k}\). Similarly, uncertainty estimates can be made for any partial derivative of the potential energy surface such as for committee forces \(F^{k}=\mathbf{c}_{k}\cdot\nabla\mathbf{B}_{i}\), or the mean force \(F^{\mathbf{\mu}}=\mathbf{\mu}\cdot\nabla\mathbf{B}_{i}\).
The first term in (17) represents the aleatoric, or irreducible, uncertainty; in this setting it is governed by the linear ACE convergence parameters such as correlation order, polynomial degree and cutoff. The second term is the epistemic, or reducible, uncertainty arising from a lack of data or, rather, of information. An example of how a variance estimate of the epistemic uncertainty can be obtained in the linear ACE framework is shown in Listing 9.
```
1  E, E_co = co_energy(model.potential, atoms)
2  sigma = sqrt(mean((E_co .- E).^2))
```
Listing 9: Example how to use a committee to estimate the uncertainty of a prediction. (Note that model.potential gives access to the calculator object.) Analogously, one can obtain committees of forces and virials.
## III Workflow examples
In this section, we present several practical examples of ACE usage, ranging from simple benchmarks and practical potentials for materials and liquids to examples illustrating the hyperactive learning workflow. The scripts we used to generate the reported results are made available in a separate git repository2 that will be regularly updated as the ACEpotentials.jl package evolves.
Footnote 2: [https://github.com/ACEsuit/ACEworkflows](https://github.com/ACEsuit/ACEworkflows)
### Tests with pre-existing data sets
#### iii.1.1 Benchmarks with limited-diversity datasets
We test ACEpotentials.jl with default parameters on an early single-element benchmark dataset taken from [28]. This dataset was originally used to assess the relative strengths and weaknesses of four important MLIPs, the high-dimensional neural network potential (NNP)[29], the Gaussian approximation potential (GAP) [30], the Spectral Neighbor Analysis Potential (SNAP)[7], and moment tensor potentials (MTP)[8]. The benchmark contains six separate datasets corresponding to the six elements Li, Mo, Ni, Cu, Si and Ge, spanning a variety of chemistries (main group metal, transition metal and semiconductor), crystal structures (bcc, fcc, and diamond) and bonding types (metallic and covalent). For each element, the dataset contains the ground-state crystal structure, strained structures with strains of -10% to 10%, slab structures up to a maximum Miller index of three, and NVT ab initio molecular dynamics simulations of the bulk supercells with and without a single vacancy. These datasets contain a relatively large number of training structures, but only limited diversity.
In Table II we see the comparison of the MAEs in energies and forces for the best performing potentials in the benchmark (GAP and MTP) with two linear ACE models trained with the default parameters and total degrees chosen to reach basis sizes of, respectively, 300 basis functions for **ACE(s)** and approximately 1000 basis functions for **ACE(I)**. We did not optimize any of the hyperparameters and used RRQR to estimate the parameters. We chose RRQR since the datasets are very large, hence a highly tuned regularisation is less important. This results in competitive accuracy across the entire benchmark. The only small exception is the slightly larger energy error for Mo-ACE(I), which suggests some fine-tuning of the model parameters could be beneficial in this particular case. Our aim with this experiment was to demonstrate that, with only minimal effort, linear ACE models can perform with (near-) best accuracy on a dataset geared towards testing statistical generalization.
#### iii.1.2 Silicon
We used ACEpotentials.jl to fit a linear ACE potential to the silicon dataset introduced by Bartok et al [26] for fitting a Gaussian approximation potential (GAP). This extensive database contains a wide range of configurations
ranging from several bulk crystal structures (diamond, hcp, fcc, etc.) and amorphous structures to liquid MD snapshots, aiming to cover as much of the silicon energy landscape as possible. The corresponding GAP model was shown to outperform a wide range of other (classical) interatomic potentials on a large selection of accuracy and property/generalisation tests, ranging from surface formation energies to liquid radial distribution functions. The current work benchmarks an ACEpotentials.jl model, with default model parameters, containing basis functions up to order \(\bar{\nu}=4\), polynomial total degree \(D^{\rm max}=20\) and a 6 Å cutoff, against this silicon GAP potential.
The model was fitted using the generalised Tikhonov regularisation of (12), where \(\Gamma\) was constructed using an algebraic smoothness prior (10) with \(p=5\), whilst the BLR solver was used to estimate the scaling parameter \(\lambda\). This benchmark is formed of a series of property tests including bulk diamond elastic constants, vacancy formation energies, surface formation energies for the (100), (110) and (111) surfaces, and hexagonal, dumbbell and tetragonal point defect energies for bulk diamond. The results of these property tests for the CASTEP [31] DFT reference, GAP and ACE are shown in Figure 2 and indicate good accuracy across the range of property tests. Percentage errors relative to the DFT reference are also included, confirming similarly accurate performance between the GAP and the ACEpotentials.jl frameworks.
We also used this silicon ACE potential to carry out a more challenging test, namely to simulate fracture in the \((111)[1\bar{1}0]\) cleavage system. We used the matscipy package to set up a \(12\times 11\times 1\) supercell containing 1586 atoms and to carry out structural optimisations with a Mode I crack anisotropic continuum linear elastic displacement field [32] applied with stress intensity factors ranging from \(0.6K_{G}\) to \(1.5K_{G}\) (where \(K_{G}\) is the Griffith load at which fracture becomes thermodynamically favourable). We observed spontaneous formation of the Pandey \(2\times 1\) reconstructed \((111)\) surface behind the crack tip, in good agreement with previous studies using DFT [33] and GAP [26]. The critical stress intensity factor was determined to be \(K_{I}=1.0\pm 0.02K_{G}\), which is very close to the expected Griffith value, indicating minimal lattice trapping. Overestimating the extent of lattice trapping is a common failure mode of previous interatomic potentials when applied to model fracture [34]. The total simulation time was around 30 minutes on a 28-core workstation.
To successfully carry out the fracture test it was crucial to produce a highly regular (smooth) ACE potential.
Figure 3: **Top Left**: The predicted energy of the Si-Si dimer is shown for a sequence of ACE potentials trained with varying strengths of smoothness prior but equal accuracy (Force RMSE \(\approx 0.075\) eV/Å). \(\Gamma=1\) corresponds to an equal prior for all basis functions whilst \(p\) indicates the strength of the algebraic smoothness prior defined in (10). The black curve shows the corresponding result using GAP. All curves are shifted for clarity. **Bottom**: The evolution of stress (\(S\)) as a function of separation (\(z\)) during rigid decohesion of bulk silicon into the unrelaxed (110) and (100) surfaces is shown for the same sequence of potentials. **Top Right**: Snapshot from a Si\((111)[1\bar{1}0]\) quasi-static fracture simulation at a stress intensity factor of \(1.8K_{G}\) using our ACE potential. The lower fracture surface shows a \(2\times 1\) Pandey reconstruction (alternating pentagons and heptagons), consistent with previous studies using DFT and GAP models, but at much reduced cost. The critical fracture toughness is very close to \(K_{G}\), showing minimal lattice trapping.
To illustrate the effect of changing the smoothness prior, a sequence of ACE potentials (order \(\bar{\nu}=4\), total degree \(D^{\text{max}}=21\) and 6 Å cutoff) was fitted using no smoothness prior (\(\Gamma=1\)) and increasing strengths of the algebraic smoothness prior (10), \(p=1,2,5\) and 10. In all cases the model parameters were estimated using generalized Tikhonov regularisation (12) with the scale factor \(\lambda\) tuned such that all potentials achieved a force RMSE of approximately 0.075 eV/Å, which is approximately 5% larger than without any regularisation. The effect of the prior on predicted Si-Si dimer curves and rigid bulk Si decohesion curves, which respectively probe the smoothness of the 2-body and many-body terms, is shown in Figure 3. Applying a moderate smoothness prior aids extrapolation into the close-approach region and reduces the amplitude of spurious oscillations seen in the stress (\(S\)) during decohesion.
#### iii.1.3 Water
We investigated the ability of ACEpotentials.jl to capture the interactions in complex molecular liquids and to perform robust molecular dynamics simulations in such systems, fitting a linear ACE potential to a dataset containing 1593 liquid water configurations [35]. We chose only default model parameters, with basis functions up to correlation order \(\bar{\nu}=3\), polynomial total degree \(D^{\text{max}}=15\) and a cutoff of \(r_{\text{cut}}=5.5\) Å. Parameter estimation was performed using ARD with the relevance threshold set by minimising the Bayesian Information Criterion (BIC) [36]. The training RMSEs were 1.732 meV/atom for energies and 0.099 eV/Å for forces. To investigate the performance and robustness of the fitted ACE model, a series of mean squared displacement (MSD) simulations were performed under 1 bar NPT conditions at 300 K. The simulations were performed using 5184-atom simulation boxes, shown in Fig. 4 below, with PACE-LAMMPS [12]. The total simulation time for each of these simulations was 20 minutes utilising 1280 cores on ARCHER2, illustrating the efficiency of ACE potentials. The diffusion constant predicted by these simulations was 1.20 \(\pm\) 0.03 m/s\({}^{2}\). It should be noted that diffusion constants are notoriously difficult to determine accurately, especially considering the absence of long-range interactions in these ACE models. This example is therefore mostly an illustration of robustness and performance.
### The Hyperactive Learning (HAL) Workflow
While fitting ACE potentials to pre-existing or "manually" assembled datasets, as discussed in Section III.1, is useful, the real benefit of the linear ACE framework lies in the construction of robust and computationally inexpensive ACE potentials from the ground up with automated dataset assembly. This is achieved through an iterative loop employing an active learning (AL) type approach [37; 38], where relevant training configurations are sampled to form a training database. To accelerate this AL process, we introduced hyperactive learning (HAL) [14], which adds to a molecular dynamics simulation a biasing term towards predicted high uncertainty \(\sigma\), as shown in (18). A tunable parameter
Figure 4: Mean squared displacement (MSD) for three liquid water simulations at 1 bar and 300 K (NPT). The simulation cell contained 5184 atoms.
\(\tau\) controls the strength of the biasing and thus the balance between physical exploration (molecular dynamics) and discovery of new structures (biasing).
\[E^{\text{HAL}}=E^{\text{ACE}}-\tau\sigma. \tag{18}\]
The HAL framework shares similarities with Bayesian Optimization (BO) as the biasing term is formally equivalent to a Lower Confidence Bound (LCB) acquisition function [39]. Similarly to BO, the parameter \(\tau\) adjusts the tradeoff between exploration and exploitation during the generation of training configurations using HAL. HAL-generated configurations are both energetically reasonable, guided by \(E^{\text{ACE}}\) (exploitation), and informative, predicted by a relatively large value of \(\sigma\) (exploration). The bias towards uncertainty, mediated by an emerging biasing force during HAL dynamics, can be viewed as a strategy to acquire information (gain) by seeking out unseen (local) environments. The HAL approach can also be viewed as an adversarial attack, aimed to destabilize a fitted ACE potential such that, after iteratively adding sufficiently many new configurations, the linear ACE model is robust to such attacks which all but guarantees stable dynamics over long timescales.
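A minimal sketch of the biased energy (18), assuming the uncertainty \(\sigma\) is estimated as the committee standard deviation of the predicted total energy (cf. Listing 9); the committee energies and the value of \(\tau\) below are placeholders.

```
using Statistics

E_committee = [-340.12, -340.05, -340.21, -339.98, -340.15]  # committee energies in eV (toy values)
E_mean = mean(E_committee)      # stands in for the E^ACE prediction
σ_E    = std(E_committee)       # committee estimate of the uncertainty σ
τ      = 0.2                    # biasing strength; tuned adaptively in HAL

E_hal = E_mean - τ * σ_E        # Eq. (18)
@show E_mean σ_E E_hal
```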
The biasing parameter \(\tau\) in HAL necessitates careful tuning, which HAL achieves through an adaptive scheme [14] that tunes \(\tau\) on the fly by balancing the magnitude of the biasing force relative to the forces obtained from \(E^{\text{ACE}}\). The _relative biasing parameter_ \(\tau_{r}\) used in this scheme is typically set to 0.1 to 0.2 and ensures that the biasing strength is reduced or increased depending on the degree of predicted uncertainty explored during the dynamics.
To initiate HAL, an initial database is typically constructed consisting of 1-10 configurations that sketch out some aspects of the energy landscape that are of interest to the application at hand. An ACE potential is fitted using a variant of the BLR solver, after which committee parameterisations \(\{\mathbf{c}_{k}\}_{k=1}^{K}\), typically with \(K=8\), are sampled from the posterior as discussed in Section II.6. Biased MD/MC dynamics are then performed on \(E^{\text{HAL}}\), using the dynamically tuned \(\tau\) parameter. During the dynamics the relative force uncertainty \(f_{i}\) is recorded, and once it exceeds a predefined tolerance \(f^{\text{tol}}\) a DFT calculation is triggered and the training database is extended. This relative force uncertainty \(f_{i}\) is defined as
\[f_{i}=\frac{\frac{1}{K}\sum_{k=1}^{K}\|F_{i}^{k}-F_{i}^{\mathbf{\mu}}\|}{\|F_{i}^{ \mathbf{\mu}}\|+\varepsilon}, \tag{19}\]
where \(F_{i}^{k}\) are the forces obtained from the committee and \(F_{i}^{\mathbf{\mu}}\) the forces predicted by the mean \(\mathbf{\mu}\) of the posterior over the coefficients, as outlined in Sec. II.6. \(\varepsilon\) is a regularising constant for the fraction, typically set to 0.2-0.4 eV/Å. Careful tuning of \(f^{\text{tol}}\) is required as it controls the degree of extrapolation when adding new (unseen) configurations to the training database. Too large an \(f^{\text{tol}}\) may lead to the sampling of energetically unreasonable configurations, whereas too small an \(f^{\text{tol}}\) leads to suboptimal information gain during the HAL scheme, resulting in sampling unnecessarily many configurations. The HAL scheme is outlined in Figure 5, illustrating how, from a small initial training database containing a handful of configurations of interest, a stable ACE potential is generated by performing biased MD and MC steps and iteratively triggering DFT calculations. For future reference, we define a _HAL iteration_ to consist of (i) a biased MD simulation run until a new unseen structure is flagged, (ii) evaluating energies, forces and virials on the new structure, and (iii) updating the ACE potential model.
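The following plain-Julia sketch evaluates the relative force uncertainty (19) for a single atom from placeholder committee forces and shows the tolerance check that triggers a DFT calculation.

```
using LinearAlgebra, Statistics

# committee force predictions for atom i (toy values, eV/Å)
F_committee = [[0.90, -0.10, 0.00],
               [1.10,  0.00, 0.10],
               [0.95, -0.05, 0.02]]
F_mean = [1.00, -0.05, 0.05]     # force predicted by the posterior mean model
ε      = 0.3                     # regularising constant (eV/Å)

f_i = mean(norm(Fk - F_mean) for Fk in F_committee) / (norm(F_mean) + ε)   # Eq. (19)
@show f_i

f_tol = 0.2
f_i > f_tol && println("uncertainty above tolerance: trigger a DFT calculation")
```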
Figure 5: Hyperactive Learning (HAL) protocol. Linear ACE potentials are fitted using BRR or ARD, after which biased MD/MC steps are performed, controlled by the biasing parameter \(\tau\). Once the uncertainty metric \(f_{i}\) exceeds \(f^{\text{tol}}\), a DFT calculation is triggered, a HAL iteration is completed, and the training database is extended.
#### iii.2.1 AlSi10 melting temperature
The HAL framework was used to create an ACE potential for determining the melting temperature of the AlSi10 alloy. An initial dataset consisted of 32-atom random fcc lattice configurations, each containing 98 aluminium and 10 silicon atoms. This initial dataset was composed of 5 fcc random alloy configurations with atomic volumes ranging from 14.3 to 16.6 Å\({}^{3}\)/atom. The ACE basis set included interactions up to correlation order \(\bar{\nu}=2\) (3-body) and employed a cutoff of 5.5 Å. The model was fitted using Automatic Relevance Determination (ARD), with its sparsity set by minimising the BIC, which resulted in increasingly complex ACE models as more configurations (or information) were added. The chosen maximum polynomial degree \(D^{\max}\) increased from 4 to 12 during the HAL procedure. The HAL relative biasing strength was set to \(\tau_{r}=0.2\), and the relative uncertainty threshold to \(f^{\text{tol}}=0.2\).
HAL dynamics were used to melt the random-alloy crystal structure by ramping the temperature from 0 K to 1500 K at 1 GPa using a 1 fs timestep. Atom-swapping and volume-adjusting HAL-MC steps were taken to facilitate exploration of the (biased) energy landscape. After 18 HAL iterations, the ACE potential was already able to consistently perform 5000 HAL MD/MC timesteps without encountering new structures with high uncertainty. This final ACE potential contained 79 basis functions as selected using ARD pruning.
During these 18 HAL iterations the dimer curves are typically examined to ensure the potentials exhibit attraction at typical interatomic distances and short range repulsion as illustrated in Fig. 6.
The ACE potential obtained after HAL iteration 18 (fitted to 22 structures in total) was subsequently used to perform nested sampling (NS) simulations to model the liquid-solid phase transition. NS simulations were performed with 384 NS walkers and a total decorrelation length of 512, formed by volume/shear/stretch/swap MC steps at a ratio of 4:4:4:4. The resulting heat capacity curves obtained by NS are presented in Figure 7 and are in close agreement with the melting temperature of 867 K given by Thermo-Calc using the TCAL4 database [40].
#### iv.2.2 Polyethylene glycol
The HAL framework [41] was used to create a polyethylene glycol (PEG) model. To initialize HAL, 18 structures of PEG(\(n\)=32), formed of 32 monomer units in vacuum, were evaluated using the ORCA code [42] with the \(\omega\)B97X DFT exchange correlation functional [43] and the 6-31G(d) basis set. ACE models were fitted to the initial and subsequent datasets with correlation order \(\bar{\nu}=3\), total degree \(D^{\max}=12\) and a cutoff radius of 5.5 A, using the ARD algorithm. The HAL protocol used a relative biasing parameter \(\tau_{r}=0.1\) and an uncertainty tolerance \(f^{\text{tol}}=0.3\), with the dynamics performed at 500 K. Unlike the previous AlSi10 example, no cell-adjusting or atom-swapping HAL-MC steps were performed, as the configurations are isolated molecules in vacuum. The ACE basis was also kept fixed throughout the HAL procedure (at \(D^{\max}=12\)), as the initial database was relatively diverse. After 50 HAL iterations an ACE potential was generated that was deemed stable, as it completed \(10^{4}\) HAL biased MD steps without triggering a DFT calculation. It was then used to determine the density of a PEG polymer formed of \(n=200\) monomer units in LAMMPS under periodic boundary conditions using the PACE evaluator [12]. The PEG(\(n\)=200)
Figure 6: ACE dimer curves for pair interactions for several HAL iterations. Stronger colours indicate later HAL iterations. The key observation to be drawn from this figure is that even in the early stages of the HAL process, with very little available data, our priors ensure that the dimer curves are physically sensible, in particular smooth and repulsive.
density was determined at 300 K, 350 K and 400 K at 1 bar pressure over a timescale of 0.5 ns, as shown in Figure 8. The density at 300 K is in good agreement with the experimental density of 1.2 g/cm\({}^{3}\)[44] at 293 K. This illustrates the remarkable extrapolative performance of the linear ACE framework: the DFT reference (ORCA) does not itself support periodic boundary conditions, making it impossible to determine the PEG density purely from first principles.
#### iv.2.3 Perovskite CsPbBr\({}_{3}\)
We used the HAL framework [41] to create a training dataset for the lead-halide perovskite CsPbBr\({}_{3}\), which shows three relevant phases: orthorhombic at low temperatures, tetragonal at intermediate temperatures, and cubic at high temperatures, with experimental transition temperatures of 361 K and 403 K [45]. The HAL process was designed to sample all of these phases so that the resulting potential accurately represents energy and entropy of each phase and is hence capable of predicting the transition temperatures. To ensure consistent DFT energies and effective vibrational mode sampling, approximately cubic 40 atom supercells were created for all three phases.
Figure 8: PEG(\(n\)=200) density for the HAL-generated ACE potential under periodic boundary conditions using LAMMPS.
Figure 7: NS AlSi10 heat capacity curves for several runs indicating the liquid-solid transition as predicted by the HAL generated ACE potential.
This problem required some refinement of the standard HAL procedure, and careful testing of fitted ACE potentials for several basis sizes. We therefore give more detail about the process than in the previous cases.
The initial fit starting the HAL process used a set of 15 randomly perturbed (unit cell and atomic positions) 40-atom configurations, five from each of the three high-symmetry phases. The default ACE basis was used, with a cutoff of 8 A, a smoothness prior with \(p=3\), and the sklearn BayesianRidge linear solver. Automated basis selection was applied every 10 HAL iterations, with a maximum basis size of 2000, \(\bar{\nu}=3\), a maximum total polynomial degree of 16, and the model score as the selection criterion. To encourage exploration of a wide range of temperatures and configurations, over a maximum of \(10^{4}\) 1 fs HAL MD steps the temperature was ramped from 200 K to 600 K, and \(\tau_{r}\) from 0.1 to 0.5. New fitting configurations were selected when the relative force uncertainty exceeded \(f^{\rm tol}=0.4\). After 20 iterations starting from the three unperturbed high-symmetry 40-atom cells at fixed unit cell shape and size, the process was restarted from nine 80-atom high-symmetry cells, obtained by doubling each of the three 40-atom cells along each cell vector, for 20 additional iterations. Then 20 additional iterations were carried out with a variable unit cell and an applied pressure of 0.
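The temperature and bias ramp used in these runs amounts to a simple linear schedule over the MD steps. A minimal sketch follows (only the end points are taken from the text; everything else is illustrative).

```python
import numpy as np

n_steps = 10_000                                    # maximum number of 1 fs HAL MD steps
steps = np.arange(n_steps)
T_target = np.interp(steps, [0, n_steps - 1], [200.0, 600.0])  # thermostat target (K)
tau_r = np.interp(steps, [0, n_steps - 1], [0.1, 0.5])         # relative biasing strength

# inside the HAL loop the thermostat and bias would be updated from these schedules
# at every step, stopping early once the relative force uncertainty exceeds 0.4 and
# sending the flagged structure to DFT
```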
At this point the model appeared to be stable enough for \(10^{5}\) steps without a HAL bias, so we switched to an unbiased sampling process to gather more data and improve the model accuracy. Starting the fit from the complete set of configurations from the HAL process, we generated fitting configurations from 2000 step runs with a maximum basis size of 4000. These used the same 80 atom starting configurations, but at fixed temperatures of 200 K to 500 K at 100 K intervals, and fixed shape but variable unit cell volume. To further refine the performance of the low energy parts of the PES around each high symmetry structure, we sampled 36 more configurations, each with 160 atoms (the three 40 atom supercells doubled along each of the three pairs of lattice vectors) at a range of lower temperatures, 150 K to 300 K at 50 K intervals.
The original set of 15 randomly perturbed configurations, another similar set of 15, and the 168 HAL configurations were used as the reference database for a set of fits to explore the performance of the model for a wide range of basis sizes. At this stage we filtered out physically unreasonable fitting data, as defined by a criterion that excluded any force larger than 10 eV/A, as well as the energies and virials from such configurations. To fit the model and evaluate its predictive accuracy we split the set of configurations into 75% fitting and 25% testing, stratifying the split by the HAL iteration (or initial random perturbation set) that produced the configuration. The same fitting procedure and basis as in the HAL run were used, with \(\bar{\nu}=2\) and \(\bar{\nu}=3\) and maximum polynomial degree 4 to 16, up to a maximum basis size of \(2\times 10^{4}\). We also compared three choices for the smoothness prior: none, \(p=2\), and \(p=4\).
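The force filter and the stratified fit/test split described here are straightforward to reproduce. The sketch below uses a toy stand-in for the HAL database; the field names are illustrative, and for brevity the whole configuration is dropped rather than only its energy and virial.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# toy stand-in for the fitting database: each record stores the DFT forces and
# the HAL iteration (or initial perturbation set) that produced the configuration
configs = [
    {"forces": rng.normal(scale=3.0, size=(40, 3)), "hal_iteration": i % 10}
    for i in range(200)
]

F_MAX = 10.0  # eV/A

def physically_reasonable(config):
    return bool(np.all(np.linalg.norm(config["forces"], axis=1) <= F_MAX))

filtered = [c for c in configs if physically_reasonable(c)]
strata = [c["hal_iteration"] for c in filtered]

# 75 % fitting / 25 % testing, stratified by the originating HAL iteration
fit_set, test_set = train_test_split(
    filtered, test_size=0.25, stratify=strata, random_state=0
)
print(len(fit_set), "fitting /", len(test_set), "testing configurations")
```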
The training set residuals, test set residuals, and BayesianRidge score (log marginal likelihood) are plotted as a function of basis size in Fig. 9. For each value of \(\bar{\nu}\) the fitting error improves monotonically as the basis size (and polynomial degree) increases, but at equal basis size the \(\bar{\nu}=2\) residuals are lower by as much as 25% (especially for moderately sized bases), indicating that for this system increasing the polynomial degree provides the basis with more useful flexibility than increasing \(\bar{\nu}\). For the basis size range where the error is minimized, the testing set residuals are larger than the fitting set residuals by roughly a factor of 2 or more, indicating that some amount of overfitting is occurring. The smoothness prior is successful at limiting the extent of this overfitting.
The generally lower training and test errors for the \(\bar{\nu}=2\) models relative to the correlation order three models are reflected in their Bayesian ridge scores (log marginal likelihoods). However, within each correlation order the optimal choice of polynomial degree and corresponding basis size indicated by the minimum test error are not consistent with the score. Indeed, the results displayed in Figure 9 lead us to conclude that the Bayesian ridge score is not always a reliable tool for optimal basis selection and other options should be explored in the future.
We used the model with lowest test set error, generated by the fit with \(\bar{\nu}=2\), maximum polynomial degree 12, and smoothness prior \(p=4\), to simulate larger unit cells of CsPbBr\({}_{3}\) at a range of temperatures spanning its expected range of phase transition temperatures. We simulated 32 independent constant temperature, constant pressure,
Figure 9: Fitting set residual (left), testing set residual (center), and log marginal likelihood (right) as a function of basis size for CsPbBr\({}_{3}\) ACE model fit to a database generated with HAL. Symbol indicates correlation order \(\bar{\nu}\), and color indicates smoothness prior exponent \(p\).
MD trajectories at temperatures from 200 K to 355 K and zero pressure for \(10^{4}\) 10 fs time steps. Each trajectory started from an \(8\times 8\times 6\) supercell (7680 atoms) of the orthorhombic structure. To analyze the resulting structure we reconstructed the effective cubic lattice vectors and averaged their magnitudes over the last 8000 steps of each trajectory. A plot of these effective cubic cell lattice vector magnitudes as a function of temperature is shown in Fig. 10. We see the three expected phases as indicated by the degeneracy of the lattice constants: cubic at high temperature, tetragonal at intermediate temperatures, and orthorhombic at low temperatures. The transition temperatures are 240 K and 255 K, which are substantially shifted relative to the experimental results of 361 K and 403 K [45]. We expect that this deviation from experiment is primarily due to our choice of exchange correlation functional, the Perdew-Burke-Ernzerhof generalized-gradient approximation [46], as has been seen in similar simulations [47]. A direct comparison to DFT would be useful, but it would require an accurate calculation of the predicted phase transition temperatures directly from the DFT PES, which is too computationally demanding to be practical without additional approximations.
## IV Computational performance
The linearity of ACE potentials renders them not only interpretable but also efficient in terms of computational performance. To demonstrate this, a performance test was conducted on various linear ACE potentials referenced in this paper. The evaluation times, as well as some of the ACE hyperparameters used, are shown in Table 3 for the AlSi10, CsPbBr\({}_{3}\), PEG and Si potentials discussed in this work. The number of basis functions for each model is also given and may be fewer than that of a complete ACE basis parameterized by \(\bar{\nu}\) and \(D^{\text{max}}\), owing to ARD pruning of basis functions with low relevance. The timings were obtained using the LAMMPS-PACE implementation [12] on a 128-core ARCHER2 node, equivalent to two separate AMD EPYC 7742 64-core processors at 2.25 GHz. The \(10^{6}\) steps/day figures are equivalent to ns/day (assuming a 1 fs timestep) and were obtained for varying cell sizes to illustrate scaling. A standardized performance figure in the form of core-\(\mu\)s/atom is also provided. The fitted silicon database originates from the silicon GAP potential, whereas the AlSi10, PEG and CsPbBr\({}_{3}\) potentials were fitted using HAL-generated databases containing 22, 68 and 198 configurations, respectively, as discussed in the previous subsections.
## V Conclusion and outlook
We introduced ACEpotentials.jl, a front-end for several Julia-language packages that implement Atomic Cluster Expansion (ACE) MLIPs and related functionality. This front-end provides a user-oriented interface, while the backend packages combine excellent performance with the flexibility for rapid model development and experimentation that is typical for the Julia language. The front-end ACEpotentials.jl exposes a relatively simple subset of ACE type
Figure 10: Effective cubic lattice constants at fixed temperature simulated using the ACE model with \(\bar{\nu}=2\), maximum polynomial degree 12, and \(p=4\). All three values are identical (to within the estimated error) at \(T>255\) K indicating a cubic structure. At lower temperatures these split into a single value and a group of two, consistent with a tetragonal structure, and at \(T<240\) K they split further into three distinct values, consistent with an orthorhombic structure.
models, linear models with robust priors, that we consider reliable in every-day use, especially in the context of an active learning type workflow.
However, we emphasize that the ACE framework allows for a much richer MLIPs design space [9; 12; 48; 49; 50] as well as parameterisation of many other types of particle systems [51; 52; 53; 54]. We therefore conclude by mentioning some of those extensions, as well as current short-comings, that require further development.
* Robust parameter estimation, in particular hyperparameter tuning, remains under-investigated in the MLIPs context. We regularly experience that hand-tuned hyperparameters can give superior results, basis sparsification remains poorly understood, and uncertainties are often only indicative of actual errors. Further research is required to resolve these closely related issues.
* The design space of the ACEpotentials.jl ACE models can be expanded to admit trainable radial embeddings, composition of ACE features with nonlinearities, or even multi-layer architectures such as [48; 49]. This comes at the cost of highly nonlinear and less efficient models, but some of those extensions, such as trainable radial embeddings, can be undertaken while keeping the spirit of our current ACE models: small models for rapid iterative development and low evaluation cost.
* The extension to highly nonlinear models would likely require that the computational kernels on which ACEpotentials.jl is built also be made GPU-capable. Towards that end a deep learning framework such as MACE [49] (see also the mace3 code) may be better suited. Footnote 3: [https://github.com/ACEsuit/mace](https://github.com/ACEsuit/mace)
* Finally, we note that there are already several related ACE software packages within ACEsuit4 that implement a variety of models for other particle systems at different stages of development: Hamiltonians ([51], ACEhamiltonians.jl); wave functions ([53; 54], ACEpsi.jl); jet tagging models ([52], BIPs.jl). These build on an experimental and significantly expanded Julia-language ACE package ACE.jl.
Footnote 4: [https://github.com/ACEsuit](https://github.com/ACEsuit)
###### Acknowledgements.
GC acknowledges support from EPSRC grant EP/X035956/1. CO, AR and TJ were supported by NSERC Discovery Grant GR019381 and NFRF Exploration Grant GR022937. WB was supported by US AFRL grant FA8655-21-1-7010. C vd O and GC acknowledge ARCHER2 for which access was obtained via the UKCP consortium and funded by EPSRC grant EP/P022065/1. NB was supported by the U. S. Office of Naval Research through the U. S. Naval Research Laboratory's fundamental research base program. EG acknowledges support from the EPSRC Centre for Doctoral Training in Automated Chemical Synthesis Enabled by Digital Molecular Technologies with grant reference EP/S024220/1. WCW was supported by the Schmidt Science Fellows in partnership with the Rhodes Trust, and additionally acknowledges support from EPSRC (Grant EP/V062654/1). JRK and CO acknowledge funding from the Leverhulme Trust under grant RPG-2017-191 and the EPSRC under grant EP/R043612/1. JRK, JPD and GC acknowledge support from the NOMAD Centre of Excellence funded by the European Commission under grant agreement 951786. JRK acknowledges support from the EPSRC under grants EP/P002188 and EP/R012474/1. This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the EPSRC (capital grant EP/T022159/1), and DiRAC funding from the STFC (www.dirac.ac.uk). Further computing facilities were provided by the Scientific Computing Research Technology Platform of the University of Warwick.
\begin{table}
\begin{tabular}{l|c c c c|c c c}
 & \multicolumn{4}{c|}{ACE parameters} & \multicolumn{3}{c}{Performance} \\ \hline
 & \(\bar{\nu}\) & \(D^{\text{max}}\) & \(r_{\text{cut}}\) (A) & \(\#\) basis func. & \(10^{6}\) steps/day & atoms & core-\(\mu\)s/atom \\ \hline
AlSi10 & 2 & 7 & 5.5 & 79 & 636 & 32 & 23 \\
CsPbBr\({}_{3}\) & 2 & 12 & 5.5 & 544 & 334 & 20 & 93 \\
PEG & 3 & 12 & 5.5 & 4897 & 10 & 1400 & 227 \\
Si & 4 & 20 & 6 & 5434 & 7 & 250 & 744 \\ \hline
\end{tabular}
\end{table}
Table 3: Performance of linear ACE potentials for various systems using an ARCHER2 node utilising 128 cores for the \(10^{6}\) steps/day figures (equivalent ns/day using a 1 fs timestep). Core-\(\mu\)s/atom figures were obtained by performing simulations in serial.
_For the purpose of open access, the corresponding author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission._
## Appendix A Linear Scaling Cost and Computational Kernels
In Sections II.1 and II.2 we outlined some basic ideas behind the ACE model, in particular expressing the potential energy model in terms of the many-body expansion (2). A naive implementation of the many-body expansion results in prohibitive computational cost due to the exponential scaling of the sums over clusters \((j_{1},\ldots,j_{\nu})\). However, after discretizing the \(\nu\)-body potentials \(U^{(\nu)}\) of the _self-interacting many-body expansion_ (2b), the sum can be rewritten to give linear scaling cost. This is presented in detail, for example, in [9; 11; 12], hence we shall not review this process in full here. In order to outline what is involved in an implementation of an ACE potential, we only recall the form that the ACE model takes after this re-organisation of the many-body summation. The evaluation of the _self-interacting_ ACE basis then proceeds in the following stages:
1. Evaluation of the embeddings, \(R_{nl}(r_{ij},Z_{i},Z_{j})\) and \(Y_{l}^{m}(\hat{\mathbf{r}}_{ij})\).
2. A pooling operation, also called the atomic basis [9] or the density projection [2], \[A_{znlm}^{i}=\sum_{j\in\mathcal{N}(i)}\phi_{znlm}(\mathbf{r}_{ij},Z_{j},Z_{i}),\] (10) where \(\mathcal{N}(i)\) denotes the set of indices of all atoms within the cutoff radius from atom \(i\).
3. Product basis: for lexicographically ordered tuples \((\mathbf{z},\mathbf{n},\mathbf{l},\mathbf{m})=(z_{t},n_{t},l_{t},m_{t})_{t=1}^{\nu}\) we define \[\mathbf{A}_{znlm}^{i}=\prod_{t=1}^{\nu}A_{z_{t}n_{t}l_{t}m_{t}}^{i}.\] (11) This operation can be thought of as a sparse symmetric tensor product, or as taking \(\nu\)-correlations.
4. Symmetrization: To ensure invariance one averages \(\mathbf{A}^{i}\) over all rotations, resulting in the \(O(3)\)-invariant basis \[\mathbf{B}^{i}=\mathcal{C}\mathbf{A}^{i},\] (12) employed in the definition of the linear ACE model (3). Here, \(\mathbf{A}^{i}\) is the vector of \((\mathbf{A}_{znlm}^{i})\) basis functions while \(\mathcal{C}\) is a sparse matrix.
For each of these stages efficient computational kernels are implemented, designed in a modular way so that they can be independently optimized or composed into new model architectures.
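To make the four stages concrete, the following toy sketch evaluates them with NumPy for a single atomic environment, using simple radial polynomials in place of the real \(R_{nl}Y_{l}^{m}\) embeddings and a random sparse matrix in place of the coupling operator \(\mathcal{C}\). It is meant only to show the data flow, not the actual ACEpotentials.jl kernels.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(4)

# toy neighbourhood of atom i: 12 neighbour distances, a single species, no angles
r_ij = rng.uniform(2.0, 5.0, size=12)

# stage 1 -- one-particle embeddings (simple radial polynomials as stand-ins)
n_1p = 6
phi = np.array([(r_ij / 5.0) ** k for k in range(1, n_1p + 1)])   # (n_1p, n_neigh)

# stage 2 -- pooling over neighbours: the "atomic basis" / density projection
A = phi.sum(axis=1)                                               # (n_1p,)

# stage 3 -- product basis: nu-correlations over lexicographically ordered tuples
nu = 3
tuples = list(combinations_with_replacement(range(n_1p), nu))
AA = np.array([np.prod(A[list(t)]) for t in tuples])              # (n_tuples,)

# stage 4 -- symmetrisation B^i = C AA^i; a random sparse matrix stands in for
# the generalised coupling coefficients
C = rng.normal(size=(10, len(tuples))) * (rng.random((10, len(tuples))) < 0.2)
B = C @ AA

# a linear ACE site energy is then a dot product with fitted coefficients
c = rng.normal(size=B.size)
print("toy site energy:", c @ B)
```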
#### a.0.1 Canonical Many-Body Expansion
Under the condition that the radial basis and envelope function are pure polynomials, it is possible to transform the self-interacting ACE basis \(\mathbf{B}^{i}\) defined in (12) into a basis for the canonical many-body expansion (2a). The idea behind this procedure is sketched out in [11]. The precise details of the implementation and a detailed study are beyond the scope of this review. Here, we only mention that, upon slightly extending the \(R_{nl},A^{i}\) and \(\mathbf{A}^{i}\) bases, one can obtain a "purification operator" \(\mathcal{P}\) such that the linearly transformed \(\mathcal{P}\mathbf{A}^{i}\) becomes a basis for the canonical many-body expansion (2a). The symmetrisation \(\mathcal{C}\) can then be applied to obtain an \(O(3)\)-invariant basis \(\mathcal{B}^{i}:=\mathcal{C}\mathcal{P}\mathbf{A}^{i}\).
An important variation of the purification operator \(\mathcal{P}\) is to purify only the 2-body interaction. This entails replacing the fully self-interacting basis functions
\[\mathbf{A}_{\mathbf{k}}^{i}=\sum_{j_{1},\ldots,j_{\nu}}\prod_{t=1}^{\nu}\phi_{k_{t}}(x _{ij_{t}})\qquad\text{with}\qquad\sum_{\begin{subarray}{c}j_{1},\ldots,j_{ \nu}\\ j_{2}\neq j_{\nu}\end{subarray}}\prod_{t=1}^{\nu}\phi_{k_{t}}(x_{ij_{t}})\]
All three options, (i) fully self-interacting, (ii) purified pair interaction, and (iii) canonical cluster expansion, are available in ACEpotentials.jl. The package documentation describes how to select between these basis sets.
|
2309.15065 | Language-EXtended Indoor SLAM (LEXIS): A Versatile System for Real-time
Visual Scene Understanding | Versatile and adaptive semantic understanding would enable autonomous systems
to comprehend and interact with their surroundings. Existing fixed-class models
limit the adaptability of indoor mobile and assistive autonomous systems. In
this work, we introduce LEXIS, a real-time indoor Simultaneous Localization and
Mapping (SLAM) system that harnesses the open-vocabulary nature of Large
Language Models (LLMs) to create a unified approach to scene understanding and
place recognition. The approach first builds a topological SLAM graph of the
environment (using visual-inertial odometry) and embeds Contrastive
Language-Image Pretraining (CLIP) features in the graph nodes. We use this
representation for flexible room classification and segmentation, serving as a
basis for room-centric place recognition. This allows loop closure searches to
be directed towards semantically relevant places. Our proposed system is
evaluated using both public, simulated data and real-world data, covering
office and home environments. It successfully categorizes rooms with varying
layouts and dimensions and outperforms the state-of-the-art (SOTA). For place
recognition and trajectory estimation tasks we achieve equivalent performance
to the SOTA, all also utilizing the same pre-trained model. Lastly, we
demonstrate the system's potential for planning. | Christina Kassab, Matias Mattamala, Lintong Zhang, Maurice Fallon | 2023-09-26T16:50:20Z | http://arxiv.org/abs/2309.15065v2 | # Language-EXtended Indoor SLAM (LEXIS):
###### Abstract
Versatile and adaptive semantic understanding would enable autonomous systems to comprehend and interact with their surroundings. Existing fixed-class models limit the adaptability of indoor mobile and assistive autonomous systems. In this work, we introduce LEXIS, a real-time indoor Simultaneous Localization and Mapping (SLAM) system that harnesses the open-vocabulary nature of Large Language Models (LLMs) to create a unified approach to scene understanding and place recognition. The approach first builds a topological SLAM graph of the environment (using visual-inertial odometry) and embeds Contrastive Language-Image Pretraining (CLIP) features in the graph nodes. We use this representation for flexible room classification and segmentation, serving as a basis for room-centric place recognition. This allows loop closure searches to be directed towards semantically relevant places. Our proposed system is evaluated using both public, simulated data and real-world data, covering office and home environments. It successfully categorizes rooms with varying layouts and dimensions and outperforms the state-of-the-art (SOTA). For place recognition and trajectory estimation tasks we achieve equivalent performance to the SOTA, all also utilizing the same pre-trained model. Lastly, we demonstrate the system's potential for planning.
## I Introduction
Scene understanding is a long-standing problem in robot perception. Over the last decade, SLAM systems have shifted from building purely geometric representations for localization, to semantic and interpretable representations for interaction [1, 2]. Semantic SLAM and object-based perception have made significant advances -- powered by progress in the machine learning and computer vision communities. _3D scene graphs_[3, 4] have more recently emerged as a unifying representation to integrate structure and semantics [5, 6]. Nonetheless, the usage of fixed-class semantic models in these applications limits the versatility of these systems.
The progress of LLM research offers a solution to this challenge, as they can bridge the gap between visual and textual information with their open vocabularies. Methods such as CLIP [7] and ViLD [8] have been used to enrich 3D reconstructions with semantics, as demonstrated by methods such as OpenScene [9], ConceptFusion [10], and NLMap [11]. These methods can identify objects and scene properties; and can even carry out navigation using human instructions [12]. However, open questions remain about integrating this capability into the modules of a robotic system. In particular, can embedded semantic understanding be harnessed for tasks such as place recognition and localization?
In this work, we combine the open-vocabulary capabilities of LLMs with classical localization and mapping methods to develop LEXIS (Language-EXtended Indoor SLAM). Unlike conventional approaches which employ separate models for room classification, place recognition and semantic understanding, our approach uses a single pre-trained model to efficiently execute all of these functions. The output is a semantically segmented pose graph as shown in Fig. 1. Our specific contributions are:
* A lightweight topological pose graph representation embedded with CLIP features.
* A method to leverage semantic features to achieve online room segmentation, capable of accommodating different room sizes, layouts and open-floor plans.
* A place recognition approach building on these room segmentations to propose hierarchical, room-aware loop closures.
* Extensive evaluation of the system for indoor real-time room segmentation and classification, place recognition, and as a unified visual SLAM system using standard and custom multi-floor datasets, with a demonstration for planning tasks.
Fig. 1: LEXIS enables pose graph segmentation from natural language. By exploiting the open-vocabulary capabilities of CLIP, we can segment room instances such as office, kitchen, and corridor directly from the pose graph without fine-tuning. The above dataset is from a two-floor office environment and contains 7 rooms as well as 2 corridors and stairs.
## II Related Work
### _Semantic Scene Representations_
The use of semantic information is motivated by the limitations of purely geometric representations to encode interpretable information and to support higher-level tasks.
Early research used boosting and Hidden Markov Models to label indoor locations based on vision and laser range data [13]. Later works shifted towards Convolutional Neural Network (CNN)-based methods for room classification and scene understanding. Goeddel et al. [14] used CNNs to classify LiDAR maps into rooms, corridors and doorways. Sunderhauf et al. [15] overcame the closed-set limitations of CNNs using a series of one-vs-all classifiers to allow recognition of new semantic classes, such as allowing generalization of door recognition to diverse settings.
These techniques emphasize the extraction of semantic information but do not capture contextual and higher-level understanding, such as relationships between objects or rooms. More recent studies are directed towards the incorporation of these semantic attributes directly into hierarchical map models, such as 3D scene graphs [3, 4]. The multi-layered graph represents entities such as objects, rooms, or buildings as graph nodes, while semantic relationships are established through graph edges. Hydra [5] presented a five-layered scene graph with a metric-semantic 3D mesh layer, object and agents layers, as well as, obstacle-free locations, rooms, and buildings. Semantics are obtained through a pretrained HR-Net [16] and Graph Neural Networks (GNNs) models to encode object relationships. S-Graphs+ [6] used a similar four-layered graph and employed geometry-based room segmentation using free-space clusters and wall planes, without using an explicit semantic segmentation method.
These methods rely on fixed-class models for tasks like room classification, and face challenges in generalizing to new environments [14, 17], leading to reduced performance and difficulties with unfamiliar room types. Approaches like Hydra need to segment the representation prior to classification and depend on geometric data (e.g., walls and doorways). This limits segmentation in open-floor plans or multi-functional spaces. Moreover, current 3D scene graph representations require multiple models for semantic segmentation, room classification, and place recognition. This requires extensive training data and further diminishes adaptability to varied environments.
With LEXIS we aim to address these limitations by exploiting the information encoded in LLMs. Their open-vocabulary features enable an arbitrary number of classes, and allow LEXIS to adapt to diverse indoor environments without the need for pre-training or fine-tuning. Our system does not require geometric information to perform room segmentation, enabling us to accommodate varying room sizes and layouts, and allowing us to segment open plan spaces effectively. Additionally, we leverage the same model for place recognition, thereby fully capitalizing on the capabilities of LLMs throughout the entire SLAM pipeline.
### _LLM-powered Representations_
Our system is inspired by other recent works exploiting LLMs for scene representation.
OpenScene [9] is an offline system that enhances 3D metric representations using CLIP visual-language features. Other approaches, such as LERF [18] and CLIP-Fields [19], embed visual-language features into neural fields to achieve 3D semantic segmentations from open-vocabulary queries. ConceptFusion [10] further advances these approaches by building a representation featuring multi-modal features from vision, audio, and language on top of a differentiable SLAM pipeline [20].
These representations have proven useful when interacting with 3D scenes, especially for planning and navigation tasks. Natural language commands have been employed to guide navigation tasks in indoor environments, combining natural language plan specifications with classical state estimation and local planning systems for navigation [19, 11, 12]. Other 3D scene understanding tasks, such as completing partially observed objects and localizing hidden objects have been explored [21].
LEXIS differs from the previous methods as they heavily rely on a metric representation of the environment, necessitating the embedding and fusion of LLMs features into a 2D or 3D map. This fusion process often needs to be performed offline or is limited to single-room environments.
In contrast, our system utilizes a topological representation--a pose graph--which streamlines feature embedding while preserving the ability to use natural language queries for segmentation in an online manner. Moreover, it allows us to apply well-established loop closing and pose graph optimization techniques to handle trajectory drift effectively.
## III Method
A system overview of LEXIS is presented in Figure 2. The main inputs are high-frequency 6 DoF odometry (for which we use our previous work Multi-Camera VILENS [22]), a stream of wide field-of-view (FoV) RGB images, and a list of potential room classes (for example: office, kitchen,
Fig. 2: LEXIS system overview: The only inputs are RGB images and an odometry estimate from a visual-inertial state estimator, as well as a prompt list of potential room classes. The output is a semantic pose graph that encodes room information.
corridor). The output of the system is a CLIP-enhanced semantically-segmented topological map of the environment. The main modules of LEXIS are explained in the following sections.
### _Front-end_
Using the high-frequency odometry estimate and RGB image stream, LEXIS builds an incremental pose graph of equally-spaced keyframes, based on a pre-set distance threshold. The state of the system at time \(t_{i}\) is defined as \(\mathbf{T}_{\texttt{WB}}^{i}\in SE(3)\), where W is the fixed world frame, and B is the moving base frame.
As well as the pose, each node also contains CLIP image encodings for semantic understanding which we define as \(\mathbf{f}_{\text{CLIP}}\). We also extract AKAZE [23] local features, \(\mathbf{f}_{\text{AKAZE}}\), for loop closure registration. We extract text encodings, denoted as \(\mathbf{f}_{\text{TEXT}}\), from the prior list of potential room classes by utilizing the same CLIP model. It is important to emphasize that the room labels are not limited to predefined categories associated with any particular dataset.
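A sketch of the per-keyframe feature extraction, written against the public OpenAI CLIP package and OpenCV's AKAZE implementation, is given below. The model variant, prompt wording, and dictionary layout are illustrative rather than the LEXIS implementation itself; Sec. IV-A lists the CLIP variants actually evaluated.

```python
import clip
import cv2
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

room_classes = ["office", "kitchen", "corridor", "stairs", "bathroom"]
with torch.no_grad():
    f_text = model.encode_text(clip.tokenize(room_classes).to(device))
    f_text = f_text / f_text.norm(dim=-1, keepdim=True)

def make_node(image_path, pose):
    """Bundle the pose, CLIP image encoding and AKAZE features of one keyframe."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        f_clip = model.encode_image(preprocess(image).unsqueeze(0).to(device))
        f_clip = f_clip / f_clip.norm(dim=-1, keepdim=True)
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = cv2.AKAZE_create().detectAndCompute(gray, None)
    return {"T_WB": pose, "f_clip": f_clip, "f_akaze": (keypoints, descriptors)}
```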
### _Room Estimation and Refinement_
As we build the graph, we compare the image encodings, \(\mathbf{f}_{\text{CLIP}}\), to the room text encodings, \(\mathbf{f}_{\text{TEXT}}\), using the cosine similarity defined as:
\[S_{c}(\mathbf{f}_{\text{CLIP}},\mathbf{f}_{\text{TEXT}})=\frac{\mathbf{f}_{ \text{CLIP}}\cdot\mathbf{f}_{\text{TEXT}}}{\|\mathbf{f}_{\text{CLIP}}\| \cdot\|\mathbf{f}_{\text{TEXT}}\|} \tag{1}\]
This provides an initial room segmentation for the pose graph, as shown in Fig. 3 (a). As this module is executed on a per-image basis without contextual information, it can make incorrect classifications, particularly in areas with room transitions or when images lack distinct semantic content. To mitigate this, we employ a nearest neighbour refinement.
The refinement approach is inspired by the Label Propagation algorithm [24], a well-known technique for finding communities in network structures. Considering the \(C\) closest neighbors of each node, Label Propagation identifies the most common label among them. If this label differs from the node's present label, the algorithm updates the node's label. However, in contrast to Label Propagation, which updates all the labels until convergence, we only run one forward pass every \(K\) new keyframes. This module ensures overall segmentation smoothness and promotes consistency in the final result.
An intuitive guideline to follow when setting \(C\) and \(K\) is that when dealing with larger rooms in an environment, a higher value for \(C\) should be considered. When the environment exhibits significant semantic variations, it is beneficial to also set a higher value for \(K\) to maximize the acquisition of information before refinement. Values of \(C\) used in our experiments typically varied between 3 and 7, and \(K\) between 7 and 12.
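The refinement pass itself is a single majority vote over each node's \(C\) nearest neighbours. A minimal sketch follows (NumPy only, with neighbours taken here by Euclidean distance between keyframe positions; the toy data are illustrative).

```python
import numpy as np
from collections import Counter

def refine_labels(positions, labels, C=5):
    """One forward pass of the majority-vote refinement over keyframe labels."""
    positions = np.asarray(positions, dtype=float)
    refined = list(labels)
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        neighbours = np.argsort(dists)[1 : C + 1]              # C closest nodes
        majority, _ = Counter(labels[j] for j in neighbours).most_common(1)[0]
        refined[i] = majority
    return refined

# toy graph: an isolated "kitchen" node in the middle of an office is smoothed out
pts = [[0, 0, 0], [0.5, 0, 0], [1.0, 0, 0], [1.5, 0, 0], [6.0, 0, 0], [6.5, 0, 0]]
labs = ["office", "office", "kitchen", "office", "kitchen", "kitchen"]
print(refine_labels(pts, labs, C=3))
# in LEXIS this pass is triggered once every K new keyframes (K ~ 7-12)
```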
We also use height change (in the z-axis) to detect and segment staircases, even when they are not visibly present in the immediate surroundings. The outcome of the full refinement module, consisting of a room label for each pose in the pose graph, is shown in Fig. 3 (b).
### _Clustering_
Once we have allocated a room label to each pose graph node, the next step is to group the nodes into clusters representing individual rooms such as office 1 and office 2. For each new node with a room label not encountered previously, a new cluster is formed. Nodes are then added to the cluster if they possess the same room label and are within a certain distance threshold of the cluster's mean position. This approach enables continuous updates to the clusters during the refinement module, as allocated room labels evolve over time.
When dealing with rooms of significantly varying sizes, it is possible for multiple clusters to emerge within a single room. We merge clusters by assuming that transitioning directly from room to room is unlikely. Instead there typically exists an intermediate space like a corridor that must be traversed in between. The clustering outcome is presented
Fig. 3: Room segmentation and refinement on a pose graph with data from the uHumans2 Apartment scene (_uH2-Apt_). (a) Initial room labels are given by CLIP. (b) The room labels post refinement. (c) Clustering into room instances. (d) Segmentation into floors.
in Fig. 3 (c). Furthermore, by identifying clusters labelled as stairs we can further segment the pose graph into distinct floors (Fig. 3 (d)).
By employing this strategy, our system can organize the pose graph into a structured representation of meaningful room instances which can also enhance subsequent localization and loop closure modules. This adaptive clustering approach ensures robust segmentation and can accommodate various room layouts and sizes commonly encountered in real-world indoor environments.
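An incremental sketch of this instance clustering is shown below: a newly labelled keyframe joins an existing cluster with the same label whose mean position is within a distance threshold, and otherwise starts a new instance. The threshold and toy data are illustrative, and the cluster-merging step is omitted for brevity.

```python
import numpy as np

class RoomCluster:
    def __init__(self, label, position):
        self.label = label
        self.positions = [np.asarray(position, dtype=float)]

    @property
    def mean(self):
        return np.mean(self.positions, axis=0)

def add_node(clusters, label, position, dist_thresh=4.0):
    """Assign a labelled keyframe to a room instance, e.g. 'office 1' vs 'office 2'."""
    position = np.asarray(position, dtype=float)
    candidates = [c for c in clusters
                  if c.label == label and np.linalg.norm(c.mean - position) < dist_thresh]
    if candidates:
        best = min(candidates, key=lambda c: np.linalg.norm(c.mean - position))
        best.positions.append(position)
    else:
        clusters.append(RoomCluster(label, position))

clusters = []
for label, pos in [("office", [0, 0, 0]), ("office", [1, 0, 0]),
                   ("corridor", [3, 0, 0]), ("office", [9, 0, 0])]:
    add_node(clusters, label, pos)
print([(c.label, len(c.positions)) for c in clusters])
# -> two separate 'office' instances plus one 'corridor' instance
```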
### _Semantic Loop Closure Detection_
The semantic information encoded in the LEXIS graph allows for efficient place retrieval without using a dedicated place recognition model. Because of this we can reuse this information for loop closure candidate detection.
For each new keyframe added to the graph, we first determine its corresponding candidate room label using the image encoding, \(\mathbf{f}_{\text{CLIP}}\). We then search for candidate rooms by querying all the room clusters sharing the same label.
We use the current localization estimate provided by the odometry to choose the closest room cluster, and attempt geometric verification against all the keyframes within the room using PnP [25]. For efficiency, the query node's image encoding, \(\mathbf{f}_{\text{CLIP}}\), can also be compared to nodes within the cluster using cosine similarity, further refining the candidate set. All successful localization attempts are then added as loop closure edges in the pose graph, which is later optimized. The optimised poses are defined as \(\mathcal{X}:=\{\mathbf{T}_{\texttt{WB}}^{1},...,\mathbf{T}_{\texttt{WB}}^{n}\}\) with the optimization formulated as a least squares minimization with a robust DCS loss \(\rho(\cdot)\)[26]:
\[\mathcal{X}=\operatorname*{argmin}_{\mathcal{X}}\sum_{i}\lVert\mathbf{r}_{\text{odom}}^{i}\rVert^{2}+\sum_{i,j}\rho\left(\mathbf{r}_{\text{loop}}^{ij}\right) \tag{2}\]
where \(\mathbf{r}_{\text{odom}}^{i}\) refers to odometry edges and \(\mathbf{r}_{\text{loop}}^{ij}\) refers to loop closure edges.
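A schematic sketch of the room-aware candidate retrieval that precedes the geometric check is given below: the query's CLIP label restricts the search to clusters with the same label, the odometry estimate selects the closest such cluster, and its keyframes are ranked by cosine similarity of the CLIP encodings. The dictionary layout is illustrative, and cv2.solvePnPRansac would be one way to implement the subsequent PnP verification.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def loop_closure_candidates(query, clusters, top_k=5):
    """query: {'label', 'position', 'f_clip'};
    cluster: {'label', 'mean', 'keyframes': [{'id', 'f_clip'}, ...]}."""
    same_label = [c for c in clusters if c["label"] == query["label"]]
    if not same_label:
        return []
    closest = min(same_label,
                  key=lambda c: np.linalg.norm(c["mean"] - query["position"]))
    ranked = sorted(closest["keyframes"],
                    key=lambda kf: cosine(query["f_clip"], kf["f_clip"]),
                    reverse=True)
    # the top candidates are passed on to PnP verification; successful matches
    # become loop closure edges in the pose graph
    return ranked[:top_k]
```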
## IV Experiments and Results
In this section, we demonstrate the capabilities of the system as applied to room classification, place recognition and as a unified SLAM system using indoor real-world and simulated datasets. We conclude with a demonstration of a mission planning application.
### _Experimental Setup_
LEXIS runs in real-time on a mid-range laptop with an Intel i7 11850H @ 2.50GHz x 16 with an Nvidia RTX A3000 GPU. The only module that requires GPU compute is the CLIP feature extractor. All other modules run on the CPU.
There are several pre-trained CLIP models which use different variants of a ResNet (RN) or a Vision Transformer (ViT) as a base. Two of the ResNet variants follow an EfficientNet-style model scaling and use approximately 4x and 16x the compute of ResNet-50. A full list of the models evaluated is available in Tab. I.
We evaluated LEXIS on three datasets:
* _uHumans2_ is a Unity-based simulated dataset provided by the authors of Kimera [27]. It has two indoor scenes: a small apartment (_uH2-Apt_ [49m, 4 rooms, 3 floors]) and an office (_uH2-Off_ [264m, 4 rooms, 1 floor]). The dataset provides visual-inertial data, ground truth trajectories and ground-truth bounding boxes for each room.
* _ORI_ [253m, 7 rooms, 2 floors] is a real-world dataset collected at the Oxford Robotics Institute and it includes offices, staircases and a kitchen. It was collected using a multi-sensor unit consisting of the Sevensense Alphasense Multi-Camera kit (Fig. 2) integrated with a Hesai Pandar LiDAR.
* _Home_ [118m, 7 rooms, 2 floors] is a dataset collected from a home environment, including kitchen, bedrooms, bathroom, living and dining areas, and a garden. This dataset was recorded and labeled using the same approach as the _ORI_.
For both _ORI_ and _Home_, the LiDAR sensor was used to generate ground truth. It was not used in LEXIS. Ground truth trajectories were determined via LiDAR ICP registration against prior maps built with a Leica BLK360, room labels were hand-labeled using the LiDAR map.
### _Results_
#### Iv-B1 Room Segmentation and Classification
We define classification accuracy as the ratio of accurately classified nodes relative to the total number of nodes in a dataset. A node is considered accurately classified if the bounding box that it falls into has the same room label as the node itself. The reported accuracy is an average of five runs.
An evaluation of room classification accuracy on a real dataset (_Home_) and a simulated one (_uH2-Apt_) using the available CLIP models is shown in Table I. RN50x64 and ViT-L/14@336px were excluded due to their larger size and longer inference times. Refinement parameters \(C\) and \(K\) were tuned for each model, with the best performing models being RN50x16 and ViT-L/14. For further evaluation, we selected RN50x16 for the uHumans2 datasets, as it required less refinement and more effectively preserved small-scale
\begin{table}
\begin{tabular}{l c c c} \hline \hline & _Home_ (\%) & _uH2-Apt_ (\%) & Inference time (ms) \\ \hline RN50 & \(73.40\pm 6.42\) & \(53.77\pm 1.56\) & \(21.62\pm 0.04\) \\ \hline RN101 & \(70.66\pm 2.65\) & \(52.49\pm 3.98\) & \(27.28\pm 0.75\) \\ \hline RN50x4 & \(74.58\pm 5.40\) & \(55.12\pm 5.75\) & \(28.37\pm 1.53\) \\ \hline RN50x16 & \(75.85\pm 2.97\) & \(\mathbf{57.36\pm 1.27}\) & \(37.14\pm 0.37\) \\ \hline RN50x64 & - & - & \(63.57\pm 0.25\) \\ \hline ViT-B/32 & \(74.96\pm 4.92\) & \(55.51\pm 2.05\) & \(20.74\pm 0.17\) \\ \hline ViT-B/16 & \(77.55\pm 3.66\) & \(56.90\pm 3.51\) & \(22.36\pm 0.09\) \\ \hline ViT-L/14 & \(\mathbf{78.92\pm 3.01}\) & \(\mathbf{57.47\pm 1.81}\) & \(35.61\pm 0.78\) \\ \hline ViT-L/14@336px & - & - & \(46.41\pm 0.25\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Classification accuracy (averaged over 5 runs) and inference time for extracting both image and text encodings for the available CLIP models. Models with inference time of over 40 ms were disregarded from further analysis.
changes in open-floor plans compared to ViT-L/14. We used ViT-L/14 for all evaluations on the _Home_ and _ORI_ datasets. Table II presents a comparison of LEXIS using both the initial segmentation from CLIP (LEXIS - Baseline) and the refined outcome (LEXIS - Refined) with Hydra [17]. We compare against Hydra as it is the only similar open-source visual system that reports room classification accuracy as far as we know. We do not provide results for Hydra on the _Home_ and _ORI_ datasets, as Hydra's room classification implementation is not yet open-sourced. Hydra employs two different 2D semantic segmentation models, using the ADE20k dataset label space [28]. The two segmentation models are HRNet [16] and OneFormer [29].
Our results achieve better performance than both of these Hydra variants. Across the datasets, the refinement procedure improves classification accuracy by an average of 10%. The key advantage of our open-vocabulary approach is its ability to avoid the constraints of fixed class sets, facilitating effective generalization to diverse environments and accurate segmentation of open-floor plans using semantics rather than geometry. For example, in _uH2-Apt_, the algorithm successfully segmented the living room and dining room despite the open floor-plan (Fig. 3). Similarly, within the _ORI_ dataset, LEXIS divided the kitchen area into kitchen and office spaces as it contains typical kitchen equipment as well as whiteboards and tables (Fig. 1).
The variation in the performance of LEXIS on _uH2-Apt_ and _uH2-Off_ can be attributed to the ground-truth bounding boxes provided with the dataset, where stairs are not considered a separate class but instead included as part of the dining room. Moreover, we hypothesize that the performance difference between Hydra and LEXIS on the _uH2-Off_ dataset can be attributed to Hydra considering only objects and edges between rooms in the classification. For instance, within this dataset, there are instances of chairs and water dispensers within certain corridors. Failing to take into account the broader contextual factors, such as the corridor's length and narrower dimensions, could lead to misclassification of these areas as offices.
Our results on the _uH2-Off_ dataset are visualized in Fig. 4, with orange regions \(\blacksquare\) indicating misclassifications. Incorrect classifications are typically clustered around room edges, e.g., when the camera faces into a room but is actually located within a corridor (Example A).
#### Iv-A2 Semantic Place Recognition
We compared LEXIS' place recognition method to DBoW [30], and NetVLAD [31]. DBoW provides a framework for feature quantization and indexing of large-scale visual vocabularies. We fed DBoW with ORB features [32], as used in the place recognition systems of ORB-SLAM [33] and Hydra [5]. NetVLAD is a neural network architecture pre-trained on Pitts30k [34].
We evaluated performance by counting true positives and false positives (Fig. 5). For each query, if \(N\) matches were situated within a distance/angle threshold, we counted it as a true positive; otherwise, a false positive count was registered. We conducted evaluations at \(N\) = 1, 3, and 5. We used the _Home_ and _ORI_ datasets, with the true positive distance threshold set at 1 m and angular threshold of 0.5 rad.
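Under one natural reading of this criterion (a query counts as a true positive if any of its top-\(N\) retrievals lies within both thresholds), the bookkeeping is only a few lines. The geodesic rotation distance used below is an assumption about how the angular threshold is measured.

```python
import numpy as np

def rotation_angle(R_a, R_b):
    """Geodesic angle (rad) between two 3x3 rotation matrices."""
    cos_theta = (np.trace(R_a.T @ R_b) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def count_tp_fp(queries, N=5, d_max=1.0, ang_max=0.5):
    """queries: list of dicts with ground-truth pose ('t', 'R') and a ranked list of
    retrieved poses under 'matches' (each a (t, R) tuple, best match first)."""
    tp = fp = 0
    for q in queries:
        hit = any(
            np.linalg.norm(q["t"] - t) < d_max and rotation_angle(q["R"], R) < ang_max
            for t, R in q["matches"][:N]
        )
        tp += hit
        fp += not hit
    return tp, fp
```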
As illustrated in Fig. 5, our approach achieved more true positives and fewer false positives than DBoW across both
Fig. 4: Segmentations produced by LEXIS for the uHumans2 office (_uH2-Off_) dataset. Also shown are the ground-truth bounding boxes. Misclassifications occur during room transitions (example A and B); or areas with fewer features (C).
Fig. 5: Number of true positives and false positives (red \(\blacksquare\)) using three different VPR methods: DBoW, NetVLAD and LEXIS on the _Home_ (left) and _ORI_ (right) dataset averaged over 5 runs.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
\% & _uH2-Apt_ & _uH2-Off_ & _ORI_ & _Home_ \\ \hline
Hydra - HRNet & \(38.0\pm 21.7\) & \(28.4\pm 6.9\) & - & - \\
Hydra - OneFormer & & & - & - \\
LEXIS - Baseline & \(51.31\pm 3.24\) & \(68.99\pm 1.07\) & \(68.09\pm 1.64\) & \(61.21\pm 1.12\) \\
LEXIS - Refined & \(\mathbf{57.36\pm 1.27}\) & \(\mathbf{76.03\pm 1.69}\) & \(\mathbf{79.22\pm 4.23}\) & \(\mathbf{78.92\pm 3.01}\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Room classification accuracy averaged over 5 runs.
datasets. The increased number of true positives can be attributed to the refinement of our search for loop closures to a relevant room or corridor.
Interestingly, our method retrieved a similar number of true positives to NetVLAD despite relying on CLIP, a pre-trained model, with no specific training for place recognition. We also found that LEXIS produced a slightly higher number of false positives than NetVLAD. This is primarily due to the viewpoint variations in the suggested matches produced by our method, as demonstrated in Fig. 6. Notably, due to CLIP relying solely on semantic information, opposing viewpoints are presented as potential matches.
The high number of false positives across all methods in the _ORI_ dataset could be attributed to there being many visually similar offices in the dataset. However, it is worth noting that, particularly with robust graph optimization techniques and PnP verification, identifying enough valid loop closures (true positives) is more important than avoiding incorrect loop closures (false positives) [35].
#### Iv-B3 Full System Evaluation
We conducted a comparison using LEXIS as a complete SLAM system, benchmarked against two state-of-the-art alternatives: ORB-SLAM3 [36] and VINS-Fusion [37]. In our experiments, we used the stereo-inertial configurations with loop closures enabled and assessed performance using the Absolute Trajectory Error (ATE). The results are summarized in Table III.
Despite the streamlined and minimal design of LEXIS, which combines the Multi-Camera VILENS VIO system [22] with classical pose graph optimization and our CLIP-based semantic place recognition module, it still achieves performance comparable to that of ORB-SLAM3 and VINS-Fusion. The incorporation of the Multi-Camera system, which analyzes images from two front-facing and two lateral-facing cameras, is beneficial as the system can avoid tracking issues in confined indoor environments.
#### Iv-B4 Planning Application
Finally, we demonstrated that the representation produced by LEXIS can be used for mission planning in a real-world environment encompassing multiple floors and rooms. From the pose graph, we constructed an adjacency matrix that establishes connections between consecutive nodes and nodes within the same cluster. We then computed the shortest path between initial and goal room labels using Dijkstra's algorithm [38]. An example path on the _Home_ dataset is illustrated in Fig. 7.
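The planning step reduces to a shortest-path query on the pose-graph connectivity. A toy sketch using SciPy's Dijkstra implementation follows; the graph, labels, and edge weights are placeholders rather than the _Home_ dataset.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

# toy pose graph: consecutive keyframes are connected, plus one intra-cluster edge
labels = ["bathroom", "corridor", "stairs", "corridor", "kitchen", "garden"]
n = len(labels)
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0          # consecutive nodes
A[1, 3] = A[3, 1] = 1.0                      # two nodes in the same corridor cluster

start, goal = labels.index("bathroom"), labels.index("garden")
_, pred = dijkstra(csr_matrix(A), indices=start, return_predecessors=True)

path, node = [], goal
while node != -9999:                         # -9999 marks "no predecessor" in SciPy
    path.append(node)
    node = pred[node]
print("route:", [labels[i] for i in reversed(path)])
```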
## V Conclusion
This work presents LEXIS, a real-time semantic visual SLAM system enhanced by open-vocabulary language models. Our system constructs a topological model of indoor environments that is enriched with embedded semantic understanding. This allows us to properly segment rooms and spaces across diverse contexts. Leveraging this representation, we demonstrated room-aware place recognition which achieves performance equivalent with established place recognition methods such as NetVLAD and DBoW. We evaluated our SLAM system in home and office environments and achieved comparable ATE to established systems (ORB-SLAM3 and VINS-Fusion). Finally, we demonstrated an example of how our representation can be used for other robotics tasks such as room-to-room planning. This work showcases how open-vocabulary models can enable autonomous systems to interact naturally with their environment. Future work will focus on enhancing room classification by integrating LEXIS with dense reconstruction techniques and considering uncertainty in the estimation for long term use of the system. We also intend to investigate per-pixel adaptations of the CLIP model.
## Acknowledgments
This work is supported in part by a Royal Society University Research Fellowship (Fallon, Kassab), and the ANID/BECAS CHILE/2019-72200291 (Mattamala). We thank Nathan Hughes for providing the ground truth for the uHumans2 dataset. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
Fig. 6: Examples of loop closures provided by CLIP in the _Home_ dataset. CLIP is able to provide matches from opposing viewpoints (left) and with significant viewpoint variations (right) as it relies on semantic information.
\begin{table}
\begin{tabular}{l l l} \hline \hline ATE (m) & _ORI_ & _Home_ \\ \hline ORB-SLAM3 & 0.22 & 0.10 \\ VINS-Fusion & 0.10 & 0.08 \\ LEXIS & 0.16 & 0.10 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Comparison of ATE in the _ORI_ and _Home_ datasets.
Fig. 7: Segmentation of the _Home_ dataset with a topological plan, shown in black, from the bathroom to the garden. |
2309.08752 | Millimeter emission in photoevaporating disks is determined by early
substructures | [abridged]Photoevaporation and dust-trapping are individually considered to
be important mechanisms in the evolution and morphology of protoplanetary
disks. We studied how the presence of early substructures affects the evolution
of the dust distribution and flux in the millimeter continuum of disks that are
undergoing photoevaporative dispersal. We also tested if the predicted
properties resemble those observed in the population of transition disks. We
used the numerical code Dustpy to simulate disk evolution considering gas
accretion, dust growth, dust-trapping at substructures, and mass loss due to
X-ray and EUV (XEUV) photoevaporation and dust entrainment. Then, we compared
how the dust mass and millimeter flux evolve for different disk models. We find
that, during photoevaporative dispersal, disks with primordial substructures
retain more dust and are brighter in the millimeter continuum than disks
without early substructures, regardless of the photoevaporative cavity size.
Once the photoevaporative cavity opens, the estimated fluxes for the disk
models that are initially structured are comparable to those found in the
bright transition disk population ($F_\textrm{mm} > 30\, \textrm{mJy}$), while
the disk models that are initially smooth have fluxes comparable to the
transition disks from the faint population ($F_\textrm{mm} < 30\,
\textrm{mJy}$), suggesting a link between each model and population. Our models
indicate that the efficiency of the dust trapping determines the millimeter
flux of the disk, while the gas loss due to photoevaporation controls the
formation and expansion of a cavity, decoupling the mechanisms responsible for
each feature. In consequence, even a planet with a mass comparable to Saturn
could trap enough dust to reproduce the millimeter emission of a bright
transition disk, while its cavity size is independently driven by
photoevaporative dispersal. | Matías Gárate, Til Birnstiel, Paola Pinilla, Sean M. Andrews, Raphael Franz, Sebastian Markus Stammler, Giovanni Picogna, Barbara Ercolano, Anna Miotello, Nicolás T. Kurtovic | 2023-09-15T20:37:47Z | http://arxiv.org/abs/2309.08752v1 | # Millimeter emission in photoevaporating disks is determined by early substructures
###### Abstract
Context: Photoevaporation and dust-trapping are individually considered to be important mechanisms in the evolution and morphology of protoplanetary disks. However, it is not yet clear what kind of observational features are expected when both processes operate simultaneously.
Aims:We studied how the presence (or absence) of early substructures, such as the gaps caused by planets, affects the evolution of the dust distribution and flux in the millimeter continuum of disks that are undergoing photoevaporative dispersal. We also tested if the predicted properties resemble those observed in the population of transition disks.
Methods:We used the numerical code Dustpy to simulate disk evolution considering gas accretion, dust growth, dust-trapping at substructures, and mass loss due to X-ray and EUV (XEUV) photoevaporation and dust entrainment. Then, we compared how the dust mass and millimeter flux evolve for different disk models.
Results:We find that, during photoevaporative dispersal, disks with primordial substructures retain more dust and are brighter in the millimeter continuum than disks without early substructures, regardless of the photoevaporative cavity size. Once the photoevaporative cavity opens, the estimated fluxes for the disk models that are initially structured are comparable to those found in the bright transition disk population (\(F_{\rm{nm}}>30\) mJy), while the disk models that are initially smooth have fluxes comparable to the transition disks from the faint population (\(F_{\rm{nm}}<30\) mJy), suggesting a link between each model and population.
Conclusions:Our models indicate that the efficiency of the dust trapping determines the millimeter flux of the disk, while the gas loss due to photoevaporation controls the formation and expansion of a cavity, decoupling the mechanisms responsible for each feature. In consequence, even a planet with a mass comparable to Saturn could trap enough dust to reproduce the millimeter emission of a bright transition disk, while its cavity size is independently driven by photoevaporative dispersal.
## 1 Introduction
Observations of nearby star-forming regions reveal that the fraction of protoplanetary disks around young stellar objects decreases rapidly with age, indicating that the process of disk dispersal is relatively fast compared to the disk lifetime (Koepferl et al., 2013; Ribas et al., 2015). Theoretical models of disk evolution suggest that photoevaporation could explain this fast dispersal, which occurs when high energy photons in the far ultraviolet (FUV), extreme ultra-violet (EUV), and X-ray wavelength ranges of the spectra hit the gas particles on the disk surface and unbind them from the stellar gravitational potential. When the mass loss rate due to photoevaporation exceeds the local accretion rate, a cavity opens and the disk enters into the photoevaporative dispersal regime, which can clear the remaining material on timescales of \(10^{5}\) yrs from the inside out (e.g., Clarke et al., 2001; Alexander et al., 2006; Gorti and Hollenbach, 2009; Owen et al., 2010; Ercolano and Pascucci, 2017).
Photoevaporation has also been proposed as an explanation for transition disks, since the deficit of near-and mid-infrared (NIR and MIR) emission observed in the spectral energy distribution (SED) of these objects is linked to a lack of small grains in the inner regions (Strom et al., 1989; Skrutskie et al., 1990; Espaillat et al., 2014; van der Marel et al., 2016), which is consistent with a cavity in the inner disk such as those carved by photoevaporative dispersal (Alexander et al., 2006; Owen et al., 2011; Ercolano et al., 2018), even though not all SED-selected disks might show a cavity in the millimeter continuum, or they might be related to highly inclined disks (van der Marel et al., 2022). Mixed models of photoevaporation and dead zones (Morishima, 2012; Garate et al., 2021) have also succeeded in explaining the high accretion rates found in transition disks (e.g., Cieza et al., 2012; Alcala et al., 2014; Manara et al., 2014, 2016, 2016), and the presence of a compact inner disk inside their cavities (e.g., Kluska et al., 2018; Pinilla et al., 2019, 2021), which was one of the main limitations of models of photoevaporation acting alone.
However, in addition to the infrared deficits, wide cavities, and high accretion rates, transition disks also seem to be distributed across two populations in terms of their millimeter flux, with a break at approximately \(F_{\rm mm}=30\,\mathrm{mJy}\,(d/140\,\mathrm{pc})^{-2}\) (Owen et al., 2012; Owen, 2016). The fluxes of the population of
faint transition disks can be easily reproduced by standard photoevaporation models with dust evolution (Owen and Kollmeier, 2019; Garate et al., 2021), but it is not yet clear if disks undergoing photoevaporation could also reproduce the fluxes of the millimeter bright population.
Observations of rings and gaps in protoplanetary disks (e.g., ALMA Partnership et al., 2015; Andrews et al., 2018; Long et al., 2018; Cieza et al., 2021) indicate that disk density profiles are rich in substructures that can act as dust traps (Whipple, 1972; Weidenschilling, 1977), and they greatly affect the resulting dust distribution and flux in the millimeter continuum (e.g., Pinilla et al., 2012, 2020). However, models that include consistent dust evolution (i.e., growth, fragmentation, and multiple species, e.g., Birnstiel et al., 2010, 2012; Drazkowska et al., 2019), early substructures1, along with photoevaporative dispersal have not been widely studied. To our knowledge, only the work of Booth and Owen (2020) has simultaneously considered all three of the mentioned ingredients, in the specific context of the Solar System formation, where the authors show that large amounts of dust can be trapped by Jupiter, therefore decreasing the amount of refractory material delivered to the Sun by the time that photoevaporative dispersal starts to clear out the disk. Thus, it is necessary to further determine the emission in the millimeter continuum of photoevaporating disks where early dust traps are included, and compare them with the fluxes and morphology found in the bright transition disk population.
Footnote 1: In this article, we refer to substructures that are present in the disk before the onset of photoevaporative dispersal, as “early” or “primordial”.
We note that a common issue of photoevaporative disk models is that these tend to overpredict the fraction of non-accreting transition disks, colloquially dubbed as "relic disks" (Owen et al., 2011), which have not yet been detected by observations. In fact, based on the current observational thresholds, the fraction of relic disks should only be around 3% of the total transition disk population (Hardy et al., 2015), though this fraction should be revised, since several of the observed systems have been identified as non-cluster members in recent years (see Michel et al., 2021). Mechanisms that remove dust grains from the disk, such as radiation pressure (Owen and Kollmeier, 2019) or wind entrainment (Franz et al., 2020), could in principle reduce the infrared signal of these relics below detection limits, though it is an open question whether or not these processes would be able to remove enough solid material in a disk where early dust traps were present. Alternatively, it could also be that the fraction of relic disks is simply lower than previously predicted (see Ercolano et al., 2018; Garate et al., 2021).
In this paper, we studied the evolution of a photoevaporating disk from the point of view of the dust dynamics, using a 1D model. In particular, we focused on how the presence of early substructures (such as the ones caused by planets) affects the resulting dust density and size distribution during photoevaporative dispersal, along with the predicted flux in the millimeter continuum (\(\lambda=1.3\,\mathrm{mm}\)), and SED in the infrared. In our model we included the growth and fragmentation of multiple dust species (Birnstiel et al., 2010), state-of-the-art models of X-ray and EUV photoevaporation (Picogna et al., 2019), and the loss of dust particles with the photoevaporative winds (Franz et al., 2020).
We further discuss whether the predicted observational signatures of photoevaporating disks can be linked to those observed in transition disks, in terms of millimeter flux and morphology; how the presence or absence of substructures and their properties (location and amplitude) affects the observable features of dispersing disks; and finally, whether the dust loss by wind entrainment during the dispersal process can explain why relic disks have not yet been detected.
In Section 2 we introduce our disk evolution model and its implementation. In Section 3 we present our simulation setup, and the explored parameter space. Our results are shown in Section 4, and in Section 5 we discuss them in the context of observations. We summarize our results in Section 6.
## 2 Disk model
In this section we present our disk evolution model in 1D, which includes gas and dust advection, dust diffusion, X-ray photoevaporation, a prescription for a gap-like substructure, and the evolution of the dust size distribution through coagulation and fragmentation; all assuming that the disk is axisymmetric.
### Gas evolution
The gas evolution is governed by the viscous diffusion and the mass loss due to photoevaporation. Then, the evolution of the gas surface density \(\Sigma_{\mathrm{g}}\) can be described through the following diffusion equation (Lust, 1952; Pringle, 1981)
\[\frac{\partial}{\partial t}\Sigma_{\mathrm{g}}=\frac{3}{r}\frac{\partial}{ \partial r}\left(r^{1/2}\frac{\partial}{\partial r}\left(\nu\Sigma_{\mathrm{g} }r^{1/2}\right)\right)-\dot{\Sigma}_{w}, \tag{1}\]
where \(r\) is the radial distance to the central star and \(\dot{\Sigma}_{w}\) is the mass loss rate, which depends on the X-ray luminosity \(L_{x}\) from the central star (see Section 2.3). The gas viscous evolution is characterized by the kinematic viscosity \(\nu\), which is defined in Pringle (1981) as
\[\nu=\alpha c_{\mathrm{s}}^{2}\Omega_{\mathrm{k}}^{-1}, \tag{2}\]
where \(\alpha\) is a dimensionless parameter that represents the magnitude of the gas turbulence (Shakura and Sunyaev, 1973), \(c_{s}=\sqrt{k_{B}T/(\mu m_{p})}\) is the isothermal sound speed, with \(m_{p}\) the proton mass, \(\mu=2.3\) the mean molecular weight, \(k_{B}\) the Boltzmann constant, and \(T\) the gas temperature. \(\Omega_{\mathrm{k}}=\sqrt{GM_{*}/r^{3}}\) is the Keplerian orbital frequency, with \(G\) the gravitational constant and \(M_{*}\) the mass of the central star.
To induce a gap-like substructure in the gas surface density, such as the one that would be created by a planet, we implemented the following radial turbulence profile that includes a Gaussian bump
\[\alpha(r)=\alpha_{0}\times\left(1+A_{\mathrm{gap}}\exp\left(-\frac{\left(r-r_ {\mathrm{gap}}\right)^{2}}{2w_{\mathrm{gap}}^{2}}\right)\right), \tag{3}\]
where \(\alpha_{0}\) is the base value for the turbulence, \(r_{\mathrm{gap}}\) is the location of the gap structure, \(w_{\mathrm{gap}}\) is the Gaussian standard deviation that controls the gap width, and \(A_{\mathrm{gap}}\) is the amplitude of the bump, which in turn controls the depth of the gap in the gas surface density profile. This Gaussian factor was also used by Pinilla et al. (2020), though in their study it was used to create bumps in the surface density, instead of gaps.
Physically, a local increment in the kinematic viscosity creates a region where the gas diffuses faster, which translates into a gap in the surface density profile. For disks that are in steady state, the gas accretion rate is radially constant, and given by \(\dot{M}_{\mathrm{g}}=3\pi\Sigma_{\mathrm{g}}\nu\)(Pringle, 1981). The relation between the gas surface density and viscosity can also be applied to disks that are
in quasi-steady state (such as the self-similar solution described by Lynden-Bell & Pringle 1974), where variations in the \(\alpha\) turbulence profile yield inversely proportional variations in the gas surface density, approximately following \(\Sigma_{\rm g}\propto\alpha^{-1}\)(see examples in Dullemond et al. 2018; Stammler et al. 2019; Pinilla et al. 2020, among others).
For the gas temperature we assumed that the disk is heated passively by the central star, and therefore its temperature profile is related to the stellar temperature \(T_{*}\) and radius \(R_{*}\) through
\[T=\theta_{irr}^{1/4}\left(\frac{r}{R_{*}}\right)^{-1/2}T_{*}, \tag{4}\]
where \(\theta_{irr}=0.05\) is the disk irradiation angle.
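As a quick numerical check of Equations 2 and 4, the short Python sketch below evaluates the temperature, sound speed, Keplerian frequency, and viscosity profiles for the stellar parameters used later in Section 3.1 (\(M_{*}=0.7\) M\({}_{\odot}\), \(R_{*}=1.7\) R\({}_{\odot}\), \(T_{*}=4500\) K) and the base turbulence \(\alpha_{0}=10^{-3}\); it is only an illustrative stand-alone script, not part of the actual DustPy setup.

```python
import numpy as np

# Physical constants (cgs)
G, kB, mp = 6.674e-8, 1.381e-16, 1.673e-24
Msun, Rsun, au = 1.989e33, 6.957e10, 1.496e13

# Stellar and disk parameters (see Section 3.1)
M_star, R_star, T_star = 0.7 * Msun, 1.7 * Rsun, 4500.0
theta_irr, mu, alpha0 = 0.05, 2.3, 1e-3

r = np.logspace(np.log10(4.0), np.log10(300.0), 200) * au   # radial grid [cm]

T = theta_irr**0.25 * (r / R_star)**-0.5 * T_star    # Eq. 4: passive irradiation
cs = np.sqrt(kB * T / (mu * mp))                     # isothermal sound speed
omega_k = np.sqrt(G * M_star / r**3)                 # Keplerian frequency
nu_visc = alpha0 * cs**2 / omega_k                   # Eq. 2: kinematic viscosity

# Consistency check against the value quoted in Sect. 3.1 (~190 K at 1 au)
T_1au = theta_irr**0.25 * (au / R_star)**-0.5 * T_star
print(f"T(1 au) = {T_1au:.0f} K")
print(f"nu(40 au) = {np.interp(40 * au, r, nu_visc):.2e} cm^2/s")
```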
### Dust evolution
For the dust evolution model, we followed the work of Birnstiel et al. (2010), which describes the advection, diffusion, coagulation, and fragmentation of multiple dust species, in response to the interaction with the gas component through the aerodynamic drag force.
The corresponding advection-diffusion equation for the dust surface density \(\Sigma_{\rm d}\) is
\[\frac{\partial}{\partial t}\left(r\,\Sigma_{\rm d}\right)+\frac{\partial}{\partial r}(r\,\Sigma_{\rm d}\,v_{\rm d})-\frac{\partial}{\partial r}\left(rD_{\rm d}\Sigma_{\rm g}\frac{\partial}{\partial r}\epsilon\right)=-r\,\dot{\Sigma}_{\rm w,d}, \tag{5}\]
where \(v_{\rm d}\) corresponds to the dust radial velocity, \(D_{\rm d}\) is the dust diffusivity, \(\epsilon=\Sigma_{\rm d}/\Sigma_{\rm g}\), is the local dust-to-gas ratio, and \(\dot{\Sigma}_{\rm w,d}\) is the dust loss rate due to wind entrainment (Hutchison et al. 2016; Franz et al. 2020; Hutchison & Clarke 2021; Booth & Clarke 2021; Franz et al. 2022, 2022, 20). Here we note that Equation 5 acts on every individual dust species, and that all dust related quantities are defined as functions of the particle size \(a\).
All components of the dust dynamics for a given particle size are determined by their dimensionless stopping time, the Stokes number
\[{\rm St}=\frac{\pi}{2}\frac{a\,\rho_{s}}{\Sigma_{g}}\cdot\begin{cases}1&\lambda_{\rm mfp}/a\geq 4/9\\ \frac{4}{9}\frac{a}{\lambda_{\rm mfp}}&\lambda_{\rm mfp}/a<4/9.\end{cases} \tag{6}\]
This definition distinguishes between the Epstein drag regime, that dominates when the grain sizes are smaller than the mean free path \(\lambda_{\rm mfp}\), and the Stokes regime, that occurs when the particles are large or the gas is very dense (for example in the inner disk).
The radial velocity of a dust particle is then
\[v_{\rm d}=\frac{1}{1+{\rm St}^{2}}v_{*}-\frac{2{\rm St}}{1+{\rm St}^{2}}\eta v_{\rm k}, \tag{7}\]
following Nakagawa et al. (1986) and Takeuchi & Lin (2002), where \(v_{*}\) is the gas advection velocity due to viscous diffusion and \(\eta=-\left(1/2\right)\left(h_{\rm g}/r\right)^{2}\,{\rm d}\ln P/{\rm d}\ln r\) is the relative difference between the Keplerian velocity \(v_{\rm k}\) and the gas orbital velocity, due to its own pressure support. The isothermal pressure is defined as \(P=\rho_{\rm g,0}\,c_{s}^{2}\), with \(\rho_{\rm g,0}\) the gas volume density at the midplane, and \(h_{\rm g}=c_{s}\,\Omega_{\rm k}^{-1}\) is the gas scale height.
Finally, the dust diffuses with a diffusivity \(D_{\rm d}=\nu/(1+{\rm St}^{2})\)(Youdin & Lithwick 2007), where we note that \(D_{\rm d}\approx\nu\) for particles with \({\rm St}\ll 1\).
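To make the formulas above concrete, the following sketch evaluates the Epstein branch of Equation 6, the dust radial velocity of Equation 7, and the diffusivity for a single grain. The local quantities (\(\Sigma_{\rm g}\), \(\eta\), \(v_{\rm k}\), and the gas inflow speed) are illustrative placeholder values, not taken from the simulations.

```python
import numpy as np

def stokes_epstein(a, sigma_g, rho_s=1.6):
    """Stokes number in the Epstein regime (first branch of Eq. 6); cgs units."""
    return 0.5 * np.pi * a * rho_s / sigma_g

def dust_radial_velocity(St, v_gas, eta, v_k):
    """Dust radial velocity, Eq. 7."""
    return v_gas / (1.0 + St**2) - 2.0 * St / (1.0 + St**2) * eta * v_k

def dust_diffusivity(nu_visc, St):
    """Dust diffusivity D_d = nu / (1 + St^2)."""
    return nu_visc / (1.0 + St**2)

# Placeholder local conditions: a 1 mm grain where Sigma_g = 10 g cm^-2,
# eta = 2e-3, v_k = 4 km/s, and a slow viscous gas inflow of 1 cm/s.
St = stokes_epstein(a=0.1, sigma_g=10.0)
v_d = dust_radial_velocity(St, v_gas=-1.0, eta=2e-3, v_k=4e5)
print(f"St = {St:.3f}, v_d = {v_d:.1f} cm/s (negative means inward drift)")
```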
In addition to advection and diffusion, dust species also grow and/or fragment depending on their relative velocities and their collision rate, where the evolution of the grain size distribution is governed by the Smoluchowski equation (Birnstiel et al. 2010). In a typical protoplanetary disk there are two regimes of dust growth: fragmentation limited, or drift limited (Birnstiel et al. 2010, 2012).
The fragmentation limit occurs when the collision velocities of larger dust grains surpass the fragmentation velocity threshold \(v_{\rm frag}\) (which depends on the material properties), resulting in destructive collisions, and replenishing the population of small grains (e.g., Ormel & Cuzzi 2007; Brauer et al. 2008; Birnstiel et al. 2009). This regime can be typically found in the inner regions of protoplanetary disks, regions with high turbulence, and in pressure maxima, where the drift velocity is zero, and is given by
\[{\rm St}_{\rm frag}=\frac{1}{3}\frac{v_{\rm frag}^{2}}{\alpha c_{s}^{2}}. \tag{8}\]
The drift limit, on the other hand, occurs when the dust grains grow to the point where the drift timescale is shorter than the growth timescale (Birnstiel et al. 2010). This regime appears in regions with steep pressure gradients, such as the outer disk and regions with low dust-to-gas ratios, where the maximum size that a grain can grow to is approximately
\[{\rm St}_{\rm drift}=\left|\frac{{\rm d}{\rm ln}\,P}{{\rm d}{\rm ln}\,r} \right|^{-1}\frac{v_{k}^{2}}{c_{s}^{2}}\epsilon_{\rm tot}, \tag{9}\]
where \(\epsilon_{\rm tot}\) refers to the _local_ dust-to-gas ratio of all dust species combined.
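The two barriers can be compared directly for a given set of local conditions. The sketch below evaluates Equations 8 and 9 and converts the smaller of the two limiting Stokes numbers back into a grain size using the Epstein branch of Equation 6; the local disk quantities are again placeholder values chosen only for illustration.

```python
import numpy as np

def st_frag(v_frag, alpha, cs):
    """Fragmentation-limited Stokes number, Eq. 8."""
    return v_frag**2 / (3.0 * alpha * cs**2)

def st_drift(dlnP_dlnr, v_k, cs, eps_tot):
    """Drift-limited Stokes number, Eq. 9."""
    return eps_tot * (v_k / cs)**2 / abs(dlnP_dlnr)

def size_from_stokes(St, sigma_g, rho_s=1.6):
    """Invert the Epstein branch of Eq. 6 to recover a grain size [cm]."""
    return 2.0 * St * sigma_g / (np.pi * rho_s)

# Placeholder local conditions: cs = 400 m/s, v_k = 4 km/s, Sigma_g = 10 g cm^-2,
# total dust-to-gas ratio 0.01, pressure slope d ln P / d ln r = -2.75.
cs, v_k, sigma_g = 4e4, 4e5, 10.0
St_f = st_frag(v_frag=1e3, alpha=1e-3, cs=cs)      # v_frag = 10 m/s = 1e3 cm/s
St_d = st_drift(dlnP_dlnr=-2.75, v_k=v_k, cs=cs, eps_tot=0.01)
St_max = min(St_f, St_d)
print(f"St_frag = {St_f:.2f}, St_drift = {St_d:.2f}, "
      f"a_max ~ {size_from_stokes(St_max, sigma_g):.2f} cm")
```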
### Photoevaporation model
When high energy radiation from the central star hits the disk surface layer, the material is heated and unbound from the stellar gravitational potential in a process called photoevaporation, which ultimately leads to disk dispersal (Clarke et al. 2001; Alexander et al. 2006a,b; Alexander & Armitage 2007).
In our model, we implemented the mass loss rate profile from Picogna et al. (2019, see their Equations 2-5) into the sink term \(\dot{\Sigma}_{\rm w}\) of the gas diffusion equation (Equation 1) to simulate the
Figure 1: Example of the gas loss rate profile, following the photoevaporation model from Picogna et al. (2019, their Equations 2 and 4), for \(L_{x}=10^{30}\,{\rm erg\,s}^{-1}\). The figure shows the mass loss rate before the photoevaporative cavity opens (solid line) and after the photoevaporative cavity opens (dashed line), with the cavity edge located at \(r_{\rm cavity}\approx 30\,{\rm AU}\) after \(1.9\,{\rm Myr}\) of evolution.
effect of X-ray and EUV photoevaporation, where the total mass loss rate increases with the stellar X-ray luminosity \(L_{x}\).
Figure 1 shows an example of the \(\dot{\Sigma}_{w}(r)\) profile, that distinguishes between the case when the disk is still young and without a cavity, and the case after the photoevaporative cavity opens, where the cavity edge is directly irradiated by the central star. This model is valid for a 0.7 M\({}_{\odot}\) star irradiating a disk with the X-ray spectrum as given by Ercolano et al. (2009). Ercolano et al. (2021) and Picogna et al. (2021) expand this model to a range of stellar masses and apply observationally determined X-ray spectra. Given that the new models are qualitatively similar to those of Picogna et al. (2019) used here, we do not expect that their implementation in our work would lead to significant changes in our conclusions.
From Equation 1 we see that the gas evolution can be dominated by either viscous diffusion or by photoevaporation. In the early stages, when the disks are more massive, the viscous accretion is generally thought to be the dominant evolution mechanism, while photoevaporation will dominate the disk dispersal in later stages, removing the remaining material by opening a cavity from the inside out (Clarke et al., 2001).
Along with the gas removal through photoevaporative winds, we can also expect for a fraction of the dust grains to be entrained in the photoevaporative flow (Hutchison et al., 2016; Franz et al., 2020). To quantify the dust loss rate we define a sink term in Equation 5
\[\dot{\Sigma}_{w,d}=\epsilon_{w}\dot{\Sigma}_{w}, \tag{10}\]
where \(\epsilon_{w}\) represents the dust-to-gas _loss_ ratio, and is defined in our model as the mass fraction of particles that are small enough to couple to the gas motion with \(a\leq a_{w}\), and that lie above the wind launching surface with \(z\geq h_{w}\), where \(h_{w}\) and \(a_{w}\) are free parameters in our model.
The final ingredient to find \(\epsilon_{w}\) is to define a vertical structure for the gas and dust volume densities \(\rho_{\rm g}\) and \(\rho_{\rm d}\). Following Fromang and Nelson (2009), and assuming that the gas is in vertical hydrostatic equilibrium (with constant temperature in the vertical direction), we model the vertical density distribution as
\[\rho_{\rm g,d}(z)=\frac{\Sigma_{g,d}}{\sqrt{2\pi}h_{g,d}}\exp\left(-\frac{z^{ 2}}{2h_{g,d}^{2}}\right), \tag{11}\]
where the dust scale height \(h_{\rm d}\) is
\[h_{\rm d}=h_{\rm g}\cdot\min\left(1,\,\sqrt{\frac{\alpha}{\min({\rm St},\,1/ 2)(1+{\rm St}^{2})}}\right), \tag{12}\]
following Youdin and Lithwick (2007) and Birnstiel et al. (2010), where we note that the dust scale height is smaller than the gas scale height, since large grains (\({\rm St}\gtrsim\alpha\)) tend to settle toward the midplane.
The formal expression for the dust-to-gas loss ratio can then be written as
\[\epsilon_{w}(a)=\frac{\int_{h_{w}}^{\infty}\rho_{\rm d}(z,a){\rm d}z}{\int_{ h_{w}}^{\infty}\rho_{\rm g}(z){\rm d}z}, \tag{13}\]
which due to settling and growth is always smaller than the local dust-to-gas mass ratio (\(\epsilon_{w}<\epsilon\)).
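A minimal numerical version of Equations 11–13 is sketched below: for a single grain size, the dust and gas columns above the wind launching surface are compared, using the fiducial launching height \(h_{w}=3h_{\rm g}\) adopted later in Section 3.3. The surface densities and Stokes numbers are placeholder values; the example only illustrates how settling suppresses the entrainment of larger grains.

```python
import numpy as np

def vertical_density(sigma, h, z):
    """Gaussian vertical density profile, Eq. 11."""
    return sigma / (np.sqrt(2.0 * np.pi) * h) * np.exp(-z**2 / (2.0 * h**2))

def dust_scale_height(h_g, alpha, St):
    """Dust scale height, Eq. 12."""
    return h_g * min(1.0, np.sqrt(alpha / (min(St, 0.5) * (1.0 + St**2))))

# Placeholder columns: Sigma_g = 10 g cm^-2 with 1% of it in a single grain size;
# h_g = 1 in arbitrary length units, alpha = 1e-3, launching surface at 3 h_g.
sigma_g, sigma_d, alpha, h_g = 10.0, 0.1, 1e-3, 1.0
z = np.linspace(0.0, 10.0 * h_g, 1000)     # uniform vertical grid, midplane to 10 h_g
above = z >= 3.0 * h_g                     # cells above the wind launching surface
rho_g = vertical_density(sigma_g, h_g, z)

for St in (1e-3, 1e-2):                    # a well-coupled grain and a mildly settled one
    h_d = dust_scale_height(h_g, alpha, St)
    rho_d = vertical_density(sigma_d, h_d, z)
    # Eq. 13: the uniform grid spacing cancels in the ratio of the two columns
    eps_w = rho_d[above].sum() / rho_g[above].sum()
    print(f"St = {St:.0e}: h_d/h_g = {h_d/h_g:.2f}, eps_w = {eps_w:.1e}")
```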
Our photoevaporation model assumes that all the small grains above the wind launching surface are fully entrained and that none of the removed material (gas or dust) falls back onto the disk (Clarke and Alexander, 2016; Picogna et al., 2019; Franz et al., 2020; Sellek et al., 2021).
## 3 Simulation setup
We performed our simulations using the code DustPy2,3(Stammler and Birnstiel, 2022), which solves the gas and dust surface density evolution (Equations 1 and 5) following the model from Birnstiel et al. (2010).
Footnote 2: A legacy version of the code was used. The latest version of DustPy is available on github.com/stammler/DustPy.
Footnote 3: DustPy is built on the Simframe simulation framework (github.com/stammler/simframe, Stammler and Birnstiel, 2022).
We implemented the photoevaporation model described in Section 2.3 and induced gap-like substructures using the \(\alpha\) turbulence profile from Equation 3. For our study, we focused on the impact that the X-ray luminosity \(L_{x}\), the gap amplitude \(A_{\rm gap}\), and the gap location \(r_{\rm gap}\) have on the disk evolution.
For each simulation we tracked the evolution of the disk mass \(M_{g,\rm d}\) (in gas and dust), photoevaporative cavity size \(r_{\rm cavity}\), flux in the millimeter continuum \(F_{\rm mm}\) at 1.3 mm, and SED. Our goal is then to compare these properties to the values observed in protoplanetary disk populations. In particular, we want to determine if these photoevaporating disks could be bright enough to explain the transition disk millimeter fluxes, or if they are too faint to be detected at all, and therefore related to the relic disk problem (Owen et al., 2011, 2012; Owen and Kollmeier, 2019).
In this section we describe the initial conditions and numerical grid setup, the radiative transfer model that we used to post-process the simulations and obtain the millimeter fluxes and SEDs, and finally our parameter space exploration, which we used to study the impact of the X-ray luminosity and gap properties.
### Initial conditions and numerical grid
In our simulations, the central star has a mass of \(M_{*}=0.7\) M\({}_{\odot}\), a radius of 1.7 R\({}_{\odot}\), and a temperature of 4500 K, selected to match the stellar parameters from the photoevaporative models of Picogna et al. (2019) and Owen and Kollmeier (2019). For these stellar properties, the disk has a temperature of \(\approx 190\) K at 1 AU (see Equation 4).
For the initial gas surface density profile we used a modified version of the Lynden-Bell and Pringle (1974) self-similar solution
\[\Sigma_{\rm g}(r)=\frac{M_{\rm disk}}{2\pi r_{c}^{2}}\left(\frac{r}{r_{c}} \right)^{-1}\exp(-r/r_{c})\frac{\alpha_{0}}{\alpha(r)}, \tag{14}\]
where \(M_{\rm disk}=0.05\) M\({}_{\odot}\) is the total disk mass, and \(r_{c}=60\) AU is the disk characteristic radius.
To introduce a gap in the surface density profile from the beginning of the simulation, we added the factor \(\alpha_{0}/\alpha(r)\) to the self-similar solution. Then, the resulting gap structure is consistent and sustained by the turbulence profile defined in Equation 3(Stammler et al., 2019; Pinilla et al., 2020; Stadler et al., 2022), where the gap center is located at \(r_{\rm gap}\) and the amplitude (i.e., the depth of the perturbation in the surface density profile) is determined by \(A_{\rm gap}\). In the case where the gap amplitude is \(A_{\rm gap}=0\) (i.e., no gap), we recover the traditional self-similar solution. We used a value of \(\alpha_{0}=10^{-3}\) for the disk base turbulence.
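The initial gas profile can be reproduced with a few lines of Python. The sketch below builds the \(\alpha(r)\) profile of Equation 3 for the fiducial gap (\(A_{\mathrm{gap}}=4\) at \(r_{\mathrm{gap}}=40\) AU, with \(w_{\mathrm{gap}}=5\) AU from Equation 16) and inserts it into Equation 14; it is a stand-alone illustration rather than the actual simulation setup.

```python
import numpy as np

au, Msun = 1.496e13, 1.989e33   # cgs

def alpha_profile(r, alpha0=1e-3, A_gap=4.0, r_gap=40 * au, w_gap=5 * au):
    """Turbulence profile with a Gaussian bump, Eq. 3."""
    return alpha0 * (1.0 + A_gap * np.exp(-(r - r_gap)**2 / (2.0 * w_gap**2)))

def sigma_gas_initial(r, M_disk=0.05 * Msun, r_c=60 * au, alpha0=1e-3, **gap):
    """Modified self-similar profile with an imprinted gap, Eq. 14."""
    lbp = M_disk / (2.0 * np.pi * r_c**2) * (r / r_c)**-1 * np.exp(-r / r_c)
    return lbp * alpha0 / alpha_profile(r, alpha0=alpha0, **gap)

r = np.logspace(np.log10(4.0), np.log10(300.0), 200) * au
sigma_smooth = sigma_gas_initial(r, A_gap=0.0)   # initially smooth disk
sigma_gap = sigma_gas_initial(r, A_gap=4.0)      # fiducial structured disk

i = np.argmin(np.abs(r - 40 * au))
print(f"Sigma_g(40 au): smooth = {sigma_smooth[i]:.1f} g/cm^2, "
      f"structured = {sigma_gap[i]:.1f} g/cm^2 "
      f"(ratio {sigma_gap[i] / sigma_smooth[i]:.2f})")
```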
The initial dust-to-gas ratio is \(\epsilon_{0}=1.5\times 10^{-2}\), and the initial dust size distribution follows the MRN distribution (Mathis et al., 1977), with an initial maximum grain size of \(a_{0}=1\,\mu\)m. Our initial dust-to-gas ratio is higher than the canonical 1%, motivated by a recent study by Lebreuilly et al. (2020), which indicates that
protoplanetary disks may inherit a higher dust-to-gas ratio than the ISM from the protostellar collapse.
For the dust grains we assumed that these are compact and covered by ice, with a material density of \(\rho_{s}=1.6\,\mathrm{g\,cm^{-3}}\) and a fragmentation velocity of \(v_{\mathrm{frag}}=10\,\mathrm{m\,s^{-1}}\) (Wada et al., 2011; Gundlach et al., 2011; Gundlach and Blum, 2015), though we note that recent results suggest that the fragmentation velocity of ice grains could be lower than previously thought (Gundlach et al., 2018; Musiolik and Wurm, 2019; Steinpilz et al., 2019).
We used a logarithmically spaced radial grid going from 4 AU to 300 AU with \(n_{r}=200\) radial cells, and a logarithmically spaced mass grid from \(10^{-12}\,\mathrm{g}\) to \(10^{5}\,\mathrm{g}\) (approx. \(0.5\,\mu\)m to \(20\,\mathrm{cm}\) in grain sizes) with \(n_{m}=120\) cells. Finally, in order to determine the fraction of dust that is entrained in the photoevaporative wind \(\epsilon_{w}\), we employed a 1+1D approach in which we constructed a vertical grid locally at every radial grid cell to solve the integrals in Equation 13. This grid is defined as a function of the gas scale height, going from the midplane to \(10\,h_{\mathrm{g}}\) with \(n_{z}=100\) cells.
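For reference, the sketch below constructs the radial and mass grids described above and converts the mass bins into grain sizes assuming compact spheres with the material density quoted in the previous paragraph; the exact size range depends slightly on the density used for this conversion.

```python
import numpy as np

au = 1.496e13        # cm
rho_s = 1.6          # g cm^-3, material density of the compact icy grains

# Radial grid: 4-300 au with 200 logarithmically spaced cells
r = np.logspace(np.log10(4.0), np.log10(300.0), 200) * au

# Mass grid: 1e-12 g to 1e5 g with 120 logarithmically spaced cells
m = np.logspace(-12, 5, 120)

# Grain size corresponding to each mass bin, assuming compact spheres
a = (3.0 * m / (4.0 * np.pi * rho_s))**(1.0 / 3.0)
print(f"grain sizes: {a[0] * 1e4:.2f} micron to {a[-1]:.0f} cm")
```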
We saved the simulation outputs every 0.1 Myr, and terminated the simulations when the photoevaporative cavity exceeded 120 AU in size, since other photoevaporation regimes are more likely to become dominant over the X-ray driven dispersal for larger cavity sizes (e.g. FUV, Gorti and Hollenbach, 2009).
### Radiative transfer and optically thin approximation
To obtain the millimeter fluxes \(F_{\mathrm{mm}}\) at \(\lambda=1.3\,\mathrm{mm}\) we can take two approaches: the vertical slab approximation, or the complete radiative transfer calculation. For the vertical slab approximation we used the vertically integrated surface density to calculate the optical depth \(\tau_{\nu}=\sum_{a}\kappa_{\nu}(a)\Sigma_{\mathrm{d}}(a)\), where \(\kappa_{\nu}(a)\) is the absorption opacity and \(\nu\) is the frequency, and obtain the total flux at \(\lambda=1.3\,\mathrm{mm}\) (\(\nu=230\,\mathrm{GHz}\)) with
\[F_{\mathrm{mm}}=\int B_{\nu}(T)\,(1-\exp(-\tau_{\nu}))\,\mathrm{d}\Omega, \tag{15}\]
where \(B_{\nu}\) is the Planck function, \(\mathrm{d}\Omega\) is the solid angle differential, and \(T\) is the vertically isothermal dust temperature (Equation 4). This approach is ideal to quickly compute the fluxes for all snapshots directly from the dust distribution, but has the drawbacks that it is only reliable for low optical depths, that it neglects the effect of self-scattering, and that the temperature profile may not be consistent with that of an irradiated disk, especially at the edge of the photoevaporative cavity.
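For a face-on, axisymmetric disk, the slab estimate of Equation 15 reduces to a sum of blackbody emission weighted by \((1-e^{-\tau_{\nu}})\) over annuli. The sketch below applies it to a hypothetical dust ring with a single, illustrative opacity value; it is not the size-resolved calculation used for the actual models.

```python
import numpy as np

# Constants (cgs); 1 mJy = 1e-26 erg s^-1 cm^-2 Hz^-1
h_pl, c_light, kB = 6.626e-27, 2.998e10, 1.381e-16
au, pc, mJy, Mearth = 1.496e13, 3.086e18, 1e-26, 5.97e27

def planck(nu, T):
    """Planck function B_nu(T) in cgs units per steradian."""
    return 2.0 * h_pl * nu**3 / c_light**2 / np.expm1(h_pl * nu / (kB * T))

def flux_slab(r, sigma_d, T, kappa_nu, nu=230e9, d=140 * pc):
    """Vertical-slab flux of Eq. 15 for a face-on, axisymmetric disk."""
    tau = kappa_nu * sigma_d                            # optical depth of each annulus
    dOmega = 2.0 * np.pi * r * np.gradient(r) / d**2    # solid angle of each annulus
    return np.sum(planck(nu, T) * (1.0 - np.exp(-tau)) * dOmega)

# Hypothetical example: 30 Earth masses of dust in a Gaussian ring around 50 au,
# with an illustrative absorption opacity of 2 cm^2 per gram of dust at 1.3 mm.
r = np.logspace(np.log10(4.0), np.log10(300.0), 200) * au
sigma_d = np.exp(-(r - 50 * au)**2 / (2.0 * (5 * au)**2))
sigma_d *= 30 * Mearth / np.sum(2.0 * np.pi * r * sigma_d * np.gradient(r))
T = 190.0 * (r / au)**-0.5                              # Eq. 4 for the fiducial star

print(f"F_mm ~ {flux_slab(r, sigma_d, T, kappa_nu=2.0) / mJy:.0f} mJy at 140 pc")
```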
To obtain more accurate fluxes for key snapshots, we used the radiative transfer code RADMC-3D4(Dullemond et al., 2012), to recalculate the dust temperature, the millimeter fluxes at 1.3 mm, and the SED between 0.1 \(\mu\)m and 1 cm. For our calculations we considered the complete treatment of scattering, which includes polarization and anisotropy (Kataoka et al., 2015). The radiative transfer is performed on an azimuthally symmetric spherical grid, where the radial coordinate matches the logarithmically spaced grid from Dustpy (see Section 3.1), and the colatitude coordinate covers the entire domain with \(n_{\theta}=180\) grid cells. We used \(10^{7}\) photon packages to calculate the thermal structure, \(2.5\times 10^{5}\) photon packages to calculate the emission in the millimeter continuum, and \(10^{4}\) photon packages to calculate the SEDs. To account for the full scattering treatment we subdivided the azimuthal coordinate into \(n_{\phi}=64\) grid cells. For this work we also assumed that our disks are face-on (the inclination is \(i=0\)), and that they are located at a distance of \(d=140\,\mathrm{pc}\), which is the typical distance of the nearby star-forming regions (Dzib et al., 2018; Roccatagliata et al., 2020).
Footnote 4: www.ita.uni-heidelberg.de/ dullemond/software/radmc-3d/
For both the optically thin and the radiative transfer setup we used the opacity model from the DSHARP survey (Birnstiel et al., 2018), which assumes compact grains composed of water ice, troilite, refractory organics, and astronomical silicates (Henning and Stognienko, 1996; Draine, 2003; Warren and Brandt, 2008). Then we used the code OpTool5 (Dominik et al., 2021) to obtain the opacities, following the Mie theory for compact grains, for all 120 grain sizes tracked by the Dustpy simulations.
Footnote 5: github.com/cdominik/optool
While the DSHARP opacities provide a convenient framework that is common to several recent studies, it is not clear whether these represent the true absorption of dust grains accurately. For example, the model from Ricci et al. (2010), based on the optical constants of Zubko et al. (1996), Draine (2003), and Warren and Brandt (2008), leads to absorption opacities that are approximately one order of magnitude higher than the DSHARP opacities in the millimeter continuum, which leads to higher optical depths and fluxes (Zormpas et al., 2022; Stadler et al., 2022). Since the fluxes obtained from radiative transfer calculations are dependent on the selected opacity model, we included in our results a comparison between the fluxes obtained with the Birnstiel et al. (2018) and the Ricci et al. (2010) opacity models.
### Parameter space: X-ray luminosity and gap properties
In this study we want to understand what effect the presence or absence of substructure has on the disk observable quantities during photoevaporative dispersal. To explore the parameter space we selected two fiducial simulations: one without a gap (\(A_{\mathrm{gap}}=0\)), and one with a gap (\(A_{\mathrm{gap}}=4\), i.e., a reduction of the local \(\Sigma_{\mathrm{g}}\) to 0.2 of its unperturbed value) located at \(r_{\mathrm{gap}}=40\) AU, where both simulations have the X-ray luminosity \(L_{x}=10^{30}\,\mathrm{erg\,s^{-1}}\). For reference, a gap located at 40 AU and with an amplitude of 4 is what we would expect from a planet of \(225\,\mathrm{M_{\oplus}}\) (approx. twice the mass of Saturn), or a planet-to-star mass ratio of \(q\approx 9.5\times 10^{-4}\), following the Kanagawa et al. (2017) gap model.
Afterward, we repeated our study for different X-ray luminosities, while keeping the fiducial gap properties. Finally we studied the effect of the different gap locations and amplitudes, this time keeping the fiducial X-ray luminosity. Table 1 shows the X-ray luminosity and gap properties of our parameter space, with the fiducial values in boldface.
The X-ray luminosities were selected from within the range of the Taurus luminosity distribution (Preibisch et al., 2005). Each value of \(L_{x}\) can also be understood in terms of the resulting total mass loss rate \(\dot{M}_{w}\), which is respectively \(4.6\times 10^{-9}\), \(1.6\times 10^{-8}\), and \(3.2\times 10^{-8}\,\mathrm{M_{\odot}\,yr^{-1}}\) for the values listed in Table 1 (see Eq. 5 from Picogna et al., 2019).
The gap locations were selected to be within (or at) the disk characteristic size \(r_{c}=60\) AU, and the maximum gap amplitude was selected to ensure that dust trapping is effective at the different gap locations. For the gap widths, we chose to apply a simple
\begin{table}
\begin{tabular}{l c} \hline \hline Variable & Value \\ \hline \(L_{x}\) [\(10^{30}\) erg s\({}^{-1}\)] & 0.3, **1.0**, 3.0 \\ \(r_{\mathrm{gap}}\) [AU] & 20, **40**, 60 \\ \(A_{\mathrm{gap}}\) & **0**, 1, 2, **4** \\ \hline \end{tabular}
\end{table}
Table 1: Parameter space.
prescription of
\[w_{\rm gap}=5\left(\frac{r_{\rm gap}}{40\,{\rm AU}}\right){\rm AU}, \tag{16}\]
which roughly matches the widths of the dust traps from Pinilla et al. (2020), and is always larger than the local scale height for our parameter space.
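The statement that the gap widths of Equation 16 always exceed the local gas scale height can be checked directly for the three gap locations of Table 1, using the temperature profile of Equation 4 and the stellar parameters of Section 3.1. A minimal sketch:

```python
import numpy as np

# Constants (cgs) and stellar parameters from Sect. 3.1
G, kB, mp = 6.674e-8, 1.381e-16, 1.673e-24
au, Msun, Rsun = 1.496e13, 1.989e33, 6.957e10
M_star, R_star, T_star = 0.7 * Msun, 1.7 * Rsun, 4500.0
mu, theta_irr = 2.3, 0.05

for r_gap_au in (20.0, 40.0, 60.0):
    r = r_gap_au * au
    T = theta_irr**0.25 * (r / R_star)**-0.5 * T_star    # Eq. 4
    cs = np.sqrt(kB * T / (mu * mp))                     # isothermal sound speed
    h_g = cs / np.sqrt(G * M_star / r**3)                # gas scale height
    w_gap = 5.0 * (r_gap_au / 40.0) * au                 # Eq. 16
    print(f"r_gap = {r_gap_au:.0f} au: w_gap = {w_gap / au:.1f} au, "
          f"h_g = {h_g / au:.1f} au")
```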
Finally, the amount of dust removed by photoevaporation in our model depends both on the maximum entrainment size and on the scale height of the wind launching region. As fiducial values for our simulations, we assumed that only particles with \(a\leq a_{\rm w}=10\,\mu\)m (Hutchison et al., 2016; Franz et al., 2020; Hutchison & Clarke, 2021; Booth & Clarke, 2021) can be carried by the photoevaporative winds, and that these particles must lie above \(z\geq h_{\rm w}=3h_{\rm g}\), though the photoevaporative surface can be located at higher altitudes.
In Appendix A we further explored the parameter space for \(a_{\rm w}\) and \(h_{\rm w}\), though we do not expect the resulting dust distribution to be greatly affected by the exact parameter values, since, due to grain growth and settling, the dust loss should be small in comparison to the gas loss (i.e., \(\epsilon_{\rm w}<\epsilon\), Franz et al., 2022). Motivated by the possibility of more efficient dust removal due to additional mechanisms such as radiation pressure (Owen & Kollmeier, 2019), an FUV component in the wind (e.g., Gorti & Hollenbach, 2009), or grains lifted by magneto-hydrodynamical (MHD) winds (Miyake et al., 2016), we also included a model in which all the dust is fully entrained in the wind (\(\epsilon_{\rm w}=\epsilon\)). We also included a comparison of the dust mass evolution of our fiducial model against the entrainment prescription of Booth & Clarke (2021).
## 4 Results
### Fiducial models
In this section we present our results for the evolution of photoevaporating disks, focusing on the effect that early substructures (represented through a primordial gap in the gas component) have on the dust component, in terms of the distribution of solids and the corresponding observable quantities (millimeter fluxes and SEDs).
We often refer to the disk without the primordial gap as a "smooth disk", and the disk with the primordial gap as a "structured disk". We also distinguish between the "gap" structure that is created through the variation in the \(\alpha\) viscosity profile (Equation 3), and the "cavity" that is carved by photoevaporative dispersal.
#### 4.1.1 Evolution of the dust distribution
A disk with a gap-like substructure can efficiently trap dust grains at the local pressure maximum, so long as the substructure forms before any significant radial drift occurs (Stadler et al., 2022). In contrast, in a disk without substructure the dust drifts very efficiently toward the star (Birnstiel et al., 2010; Pinilla et al., 2012, 2020). Our fiducial models show that, by the time that photoevaporation starts clearing the gas component from the inside out, after \(\sim 1\) Myr of disk evolution (for \(L_{x}=10^{30}\) erg s\({}^{-1}\)), the structured disk retains a higher mass of solids than the smooth disk, \(77\,{\rm M}_{\oplus}\) and \(4\,{\rm M}_{\oplus}\) respectively, out of the initial \(\sim 200\,{\rm M}_{\oplus}\) (see Figure 2, top panel).
Once photoevaporation opens up a cavity in the inner regions, further dust drift toward the star is completely halted, since the edge of the cavity is also a pressure maximum that can trap solids (see the evolution of the surface densities, Figure 3). From this point onward, the dust loss is driven exclusively by the entrainment in the photoevaporative winds, though additional loss terms such as planetesimal formation and removal by radiation pressure are neglected in our model (Stammler et al., 2019; Owen & Kollmeier, 2019, see discussion in Section 5.3).
Figure 2 (bottom panel) shows that the dust loss rate by photoevaporative entrainment increases by one to two orders of magnitude after the photoevaporative cavity opens. One reason is that the gas loss rate locally increases at the edge of the photoevaporative cavity, where the material is directly irradiated by the central star. In Figure 1 we see how the gas loss profile changes in this "open cavity" scenario, with a sharp spike at the location of the cavity edge (see also Picogna et al., 2019, their Eq. 4). The other reason, and perhaps more important, is that dust growth becomes limited by fragmentation around the pressure maxima, which replenishes the population of small grains that are more easily entrained with the wind, and the gas surface density is also reduced at the photoevaporative cavity edge, sharply reducing the maximum grain size. In contrast, the growth in the gas-rich regions with steeper pressure gradients is limited only by drift, which results in a dust distribution dominated by large particles (this can be seen from the grain size distributions in Figures 4 and 5) that are not easily entrained with the photoevaporative wind.
Figure 2: _Top:_ Evolution of the disk mass in gas (red lines) and dust (blue lines). _Bottom:_ Evolution of the mass loss rate of gas and dust by photoevaporative winds. The markers indicate the moment when photoevaporation opens a cavity in the inner disk (“+” for the initially smooth disk, “x” for the initially structured disk).
After the photoevaporative cavity opens, the remaining dust mass decreases from \(77\,\mathrm{M_{\oplus}}\) to \(55\,\mathrm{M_{\oplus}}\) for the structured disk model, and from \(4\,\mathrm{M_{\oplus}}\) to \(3\,\mathrm{M_{\oplus}}\) for the smooth disk model. These values imply that the total dust loss across the disk lifetime (or at least until the cavity size reaches \(r_{\mathrm{cavity}}=120\,\mathrm{AU}\) in our model) is mostly dominated by drift during the early stages of disk evolution, rather than entrainment in the photoevaporative winds (see also Ercolano et al. 2017).
From the surface density profiles (Figure 3) and the grain size distributions (Figures 4 and 5), we find that once a photoevaporative cavity opens, the remaining solid material is dragged along with the cavity outer edge, following the moving pressure maximum. For the smooth disk, this leads to the formation of a single dust trap at \(1.4\,\mathrm{Myr}\) that moves outward as time passes. For the structured disk, on the other hand, we find that between \(1.2\) and \(1.4\,\mathrm{Myr}\) there are two traps present, one that follows the photoevaporative cavity, and the other at the outer edge of the primordial gap, which should lead to a distinct disk morphology featuring two rings. Eventually, both dust traps merge into one when the cavity catches up with the gap location, which then continues to move outward. We infer that the two-ring morphology is more likely to be observed if the primordial dust trap is located at a larger radius than the photoevaporative cavity opening radius. The latter would delay the merging of the two rings and increase the window of observation, though to get an accurate estimate of the likelihood of observing this evolutionary stage, a population synthesis model would be required. We also note that if a dead zone is present in the inner disk, the photoevaporative cavity opening radius can be located beyond \(10\) to \(20\) AU (Garate et al. 2021), meaning that disks with primordial dust traps located inside the dead zone radius (such as Jupiter's current orbit) would not lead to the described two-ring morphology.
This particular behavior in the evolution of structured disks leads to a degeneracy between the properties of an observed dust ring and its potential origin, which is of particular interest for the study of transition disks, and we discuss more about it in
Figure 4: Dust size distribution for a smooth disk at \(1.2\), \(1.6\), and \(2.1\,\mathrm{Myr}\). The drift (_cyan_) and fragmentation (_pink_) growth limits are also indicated.
Figure 3: Evolution of the surface density profiles of both disks from \(t=1\,\mathrm{Myr}\), plotted every \(0.1\,\mathrm{Myr}\) (solid lines, with line opacity increasing with time). The dust surface density accounts for all the grain sizes. The initial condition is shown with dashed lines.
Section 5.2.
#### 4.1.2 Millimeter emission and SED
Because the dust masses of the structured and smooth disks differ by over an order of magnitude during the photoevaporative dispersal, their corresponding luminosities in the millimeter continuum differ by a similar factor (Figure 6). In our models, the flux of the structured disk is \(F_{\rm mm}\approx 65\) mJy (obtained from Equation 15) by the time the cavity opens, and remains approximately constant until the cavity reaches the location of the primordial dust trap at \(r\approx 50\) AU; afterward the flux continues to decrease, reaching 44 mJy by the time the cavity has grown to 100 AU. The smooth disk flux simply decreases from approximately 4 mJy to 2 mJy.
We also note that the radiative transfer calculations with RADMC-3D differ only by a small factor from the values obtained with the vertical slab approximation when using the DSHARP opacity model. The difference in both fluxes is likely due to the direct heating of the photoevaporative cavity edge by the stellar irradiation, and the proper treatment of the scattering and optical depth.
Finally, the SEDs (Figure 7) show a similar behavior for both the smooth and structured disk, where the deficit in the NIR to MIR wavelengths becomes more prominent as the cavity size
Figure 5: Dust size distribution for a structured disk at 1.1, 1.3, and 1.6 Myr. The drift (_cyan_) and fragmentation (_pink_) growth limits are also indicated. The last two snapshots were selected to match the ones of Figure 4 in terms of the photoevaporative cavity size.
Figure 6: Millimeter fluxes \(F_{\rm mm}\), as a function of the size of the photoevaporative cavity for the smooth (dotted) and structured (solid) disk models. The solid and dotted lines represent the flux from the optically thin approximation (Equation 15). The markers are the fluxes obtained with RADMC-3D (“+” for the smooth disk, “x” for the structured disk). The disks are assumed to be at 140 pc and viewed face-on (no inclination). The cavity size measurement is based on the dust distribution of millimeter sized grains.
Figure 7: SEDs for the smooth (dotted) and structured (solid) disk models, when the photoevaporative cavity size is \(r_{\rm cavity}\approx 15\) AU (gray), and \(r_{\rm cavity}\approx 100\) AU (black), assuming a distance of 140 pc and a face-on orientation. The data points show the SED of SzCha from the van der Marel et al. (2016) survey, re-scaled to a distance of 140 pc.
grows, and small grains are removed from the inner regions, meaning that our models would be classified as transition disks by their SED (Espaillat et al., 2014). Additionally, the disks in our model display a high emission in the far infrared (FIR, around \(100\,\mu\)m), comparable to that of the transition disk SzCha (van der Marel et al., 2016; Gaia Collaboration, 2020, with the distance rescaled to 140 pc), though we remark that this is not intended to be a representative comparison with the transition disk population. In Section 5.3 we discuss the implications of the FIR excess in the context of the relic disk problem.
### Effect of the X-ray luminosity
In this section we test the impact that different X-ray luminosities have on the evolution of the disk mass in the dust component, and on the corresponding flux in the millimeter continuum (see Figure 8). For the structured disk, we use again the fiducial gap amplitude of \(A_{\rm gap}=4\) located at \(r_{\rm gap}=40\) AU.
We notice that for higher X-ray luminosities the photoevaporative cavity opens earlier (due to the higher mass loss rates), and that both the dust mass and millimeter flux are also higher when the photoevaporative cavity opens. This occurs because the dust drift, dust diffusion, and wind entrainment processes had less time to remove solid material from the disk.
The difference can be clearly seen in the mass and flux evolution of the smooth disk, where dust drift can only be stopped by the pressure maximum corresponding to the photoevaporative cavity. For the highest X-ray luminosity (\(L_{x}=3\times 10^{30}\,{\rm erg\,s^{-1}}\)) the millimeter flux of the smooth disk is \(F_{\rm mm}=12\) mJy (at \(0.8\) Myr, when the cavity opens), while for the lowest X-ray luminosity (\(L_{x}=3\times 10^{29}\,{\rm erg\,s^{-1}}\)) the flux is only \(F_{\rm mm}=0.4\) mJy (at \(4.3\) Myr).
On the other hand, while the structured disk can trap dust particles at the local pressure maximum, some of them will still diffuse through the gap and be lost to the star. We find that dust entrainment only accounts for a minor fraction of the dust removal in structured disks before the opening of the photoevaporative cavity, with rates between \(10^{-6}\) M\({}_{\oplus}\) yr\({}^{-1}\) and \(10^{-5}\) M\({}_{\oplus}\) yr\({}^{-1}\) (for \(L_{x}=3\times 10^{29}\) erg s\({}^{-1}\) and \(3\times 10^{30}\) erg s\({}^{-1}\), respectively). The millimeter flux of the structured disk is \(F_{\rm mm}=70\) mJy when the cavity opens (at \(0.7\) Myr) for the highest X-ray luminosity, and \(F_{\rm mm}=49\) mJy for the lowest X-ray luminosity (at \(3.6\) Myr). We expect that disks with multiple dust traps would be able to retain more material and prevent further dust loss due to diffusion (Pinilla et al., 2012, 2020).
Another feature that we observe for each pair of simulations with the same X-ray luminosity is that the inner cavity opens earlier in the structured disk than in the smooth one by \(0.2\) Myr, a behavior that is also seen in the simulations of Rosotti et al. (2013) when the effects of photoevaporation and planet-disk interactions are considered. This occurs because the presence of the gap structure seems to speed up the viscous evolution of the disk by a small factor, reducing the gas accretion rate in the inner regions faster, and allowing for photoevaporative dispersal to start earlier. This is also the reason why the gas mass decreases slightly faster in the initially structured disk than in the initially smooth disk shown in Figure 2 (top panel).
### Effect of the trap location and amplitude
In this section we test the impact that the gap amplitude and location have on the evolution of the mass in the dust component
Figure 8: Evolution of the dust mass (_top_) and disk flux at \(\lambda=1.3\) mm (_bottom_, assuming a distance of 140 pc), for different X-ray luminosities \(L_{x}\), with the black line corresponding to the fiducial value. The markers indicate the moment when photoevaporation opens a cavity in the inner disk (“+” for the smooth disk, “x” for the structured disk).
Figure 9: Same as Figure 8, but for structured disks with different gap amplitudes, with a fixed location at \(r_{\rm gap}=40\) AU and X-ray luminosity of \(L_{x}=10^{30}\) erg s\({}^{-1}\). Notice that the axis scales are different from Figure 8.
and the respective flux in the millimeter continuum (Figure 9 and 10). For the X-ray luminosity we use the fiducial value of \(L_{x}=10^{30}\,\mathrm{erg\,s^{-1}}\).
We observe that a minimum amplitude of \(A_{\mathrm{gap}}=2\), which roughly corresponds to a Saturn mass in our models, seems to be required for a gap at \(r_{\mathrm{gap}}=40\,\mathrm{AU}\) to effectively trap the dust particles that drift from the outer region (Figure 9, top panel). The dust mass for this disk, when the inner cavity opens, is about \(60\,\mathrm{M_{\oplus}}\), in comparison with the \(77\,\mathrm{M_{\oplus}}\) measured for the disk with the deeper gap (\(A_{\mathrm{gap}}=4\)). In contrast, the disk with \(A_{\mathrm{gap}}=1\) is unable to stop the dust drift and is almost indistinguishable from the completely smooth disk, retaining a dust mass of only \(7\,\mathrm{M_{\oplus}}\) at the cavity opening.
The location of the gap (relative to the disk characteristic radius \(r_{c}\)), on the other hand, determines the size of the dust reservoir that can be retained at a given dust trap. Because dust grains tend to drift inward, all the dust that is initially inside the gap location is rapidly lost into the star, as can be seen during the first \(0.1\,\mathrm{Myr}\) of disk evolution (Figure 10, top panel). Only the dust grains that start farther out than the local pressure maximum can potentially be trapped (if we ignore diffusion and wind entrainment), and therefore gaps that are closer to the star result in a higher dust content during the disk evolution and dispersal. For comparison, the dust mass of the disk with the innermost gap at \(20\,\mathrm{AU}\) is approximately double the mass of the disk with the outermost gap at \(60\,\mathrm{AU}\), at the moment of the cavity opening.
In terms of the millimeter flux we notice that the disk with \(r_{\mathrm{gap}}=20\,\mathrm{AU}\) is slightly fainter than the fiducial model with \(r_{\mathrm{gap}}=40\,\mathrm{AU}\) upon the opening of the inner cavity. This might sound counter-intuitive, since the simulation with \(r_{\mathrm{gap}}=20\,\mathrm{AU}\) has a larger dust mass (Figure 10) at all times. However, we also notice that a dust trap located farther in the disk will have both a higher optical depth (\(\tau_{\nu}\gtrsim 1\)) and a smaller surface area, and both of these effects contribute to reduce the total flux observed (see Equation 15). In other words, at the moment of the cavity opening the disk with the innermost dust trap has more solid material, but this material is "hidden" from the observer. As the cavity starts growing and the dust trap moves out, the disk with the initially innermost gap becomes the brightest, reaching \(F_{\mathrm{mm}}=78\,\mathrm{mJy}\) at its peak (after the inner cavity opens).
We note that a disk with multiple traps located in the inner and outer regions should be able to retain a higher fraction of the initial dust mass, and display higher fluxes in the millimeter continuum. We also want to point out that the ratio between the gap location and the disk initial size (\(r_{\mathrm{gap}}/r_{c}\)) should be more important for predicting the amount of dust trapped than the absolute gap location, since the same values of \(r_{\mathrm{gap}}\) presented in this section may trap more (or less) solid material for disks that are initially more extended (or compact).
### Comparison between opacities
Due to the uncertainty in the opacities of dust grains in protoplanetary disks, particularly in the amount of carbonaceous material, we chose to perform an additional set of radiative transfer calculations, this time with the opacity model of Ricci et al. (2010), and compare the corresponding millimeter fluxes against those obtained with the Birnstiel et al. (2018) (DSHARP) model in Section 4.1.2.
From Table 2 we observe that the flux in the millimeter continuum is always higher when the Ricci et al. (2010) opacity model is considered, which is to be expected since the absorption opacities are also higher. The trend between smooth and structured disks is maintained no matter the opacity model used, with the smooth disks being fainter (\(F_{\mathrm{mm}}<20\,\mathrm{mJy}\)) and the structured disks being brighter (\(F_{\mathrm{mm}}>110\,\mathrm{mJy}\)).
Finally, we notice that the relative increase in flux when using the Ricci et al. (2010) opacities is higher in the smooth disks (a factor between \(4\) to \(5\)) than in the structured disks (a factor between \(2\) to \(4\)). This occurs because the structured disks have regions that are optically thick (at the dust traps, for example), while on the other hand most of the smooth disks are optically thin, and therefore more sensitive to changes in the opacity model.
## 5 Discussion
### Explaining the observed transition disks with photoevaporation models
Since first discovered, transition disk properties have remained a challenge to theoretical models. These objects display deep
\begin{table}
\begin{tabular}{l c|c c} \hline \hline Disk Model & \(t\) [Myr] & \(F_{\mathrm{Birnstiel}}\) [mJy] & \(F_{\mathrm{Ricci}}\) [mJy] \\ \hline & 1.6 & 4.6 & 18.2 \\ Smooth & 2.1 & 3.5 & 14.7 \\ & 2.3 & 2.7 & 11.9 \\ \hline & 1.3 & 56.1 & 114.0 \\ Structured & 1.6 & 82.3 & 166.1 \\ & 1.9 & 55.0 & 169.6 \\ \hline \end{tabular}
\end{table}
Table 2: Flux comparison between Birnstiel et al. (2018) and Ricci et al. (2010) opacity models, at \(\lambda=1.3\,\mathrm{mm}\).
Figure 10: Same as Figure 8, but for structured disks with different gap locations, with a fixed amplitude of \(A_{\mathrm{gap}}=4\) and X-ray luminosity of \(L_{x}=10^{30}\,\mathrm{erg\,s^{-1}}\). Notice that the y-axis scale is different from Figure 9.
cavities in the dust component, as probed by the deficit in NIR and MIR emission (Strom et al., 1989; Skrutskie et al., 1990; van der Marel et al., 2016) and resolved continuum observations (Andrews et al., 2011; Pinilla et al., 2018; Francis and van der Marel, 2020), while displaying relatively high accretion rates (\(\dot{M}_{\rm acc}\sim 10^{-10}\) to \(10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\)) that indicate the presence of long lived inner disks (Cieza et al., 2012; Alcala et al., 2014; Manara et al., 2017). Additionally, transition disks seem to be distributed across a mm-faint (\(F_{\rm mm}<30\) mJy) and a mm-bright (\(F_{\rm mm}>30\) mJy) population, according to their fluxes in the millimeter continuum (see the review by Owen, 2016). However, it should be noted that the faint transition disks, which are identified based on their SEDs, could be instead highly inclined disks rather than actual transition disks with cavities (van der Marel et al., 2022).
Giant planets have often been proposed as an explanation for transition disk properties, since these can carve deep gaps in the millimeter continuum by trapping millimeter grains, while still allowing for the flow of gas toward the inner disk (Dong et al., 2012; Pinilla et al., 2012; de Juan Ovelar et al., 2013; Owen, 2014). On that line, a recent study of van der Marel and Mulders (2021) proposes that the population of observed transition disks can be linked to the population of detected exoplanets, under the assumption that planets more massive than Jupiter are responsible for the transition disk substructures, and that these planets then migrate inward.
Yet, despite the high masses predicted for the planetary companions (Jupiter-sized or larger, Muley et al., 2019; van der Marel et al., 2021), to this day the only disk with a confirmed planet detection is PDS-70 (Keppler et al., 2018). While some of these hypothetical planets would still be beyond the current detection limits (Asensio-Torres et al., 2021), we cannot discard the possibility that other mechanisms are actually responsible for transition disks.
The simulations presented in this work (and in combination with Garate et al., 2021), offer an additional pathway to explain the properties of transition disks without the presence of very massive planets, which would also explain why we have not detected these giants in the first place. In Section 4 we showed that when substructures are present, the dust grains are retained in the local pressure maxima through the lifetime of the protoplanetary disk. Then, once the inside-out dispersal due to photoevaporation begins, all the trapped material is dragged along with the edge of the photoevaporative cavity, which results in a bright disk with a wide expanding cavity (Figure 6), consistent with the mm-bright population of transition disks.
Now, notice that the only requirements imposed on these substructures to explain transition disk properties of the mm-bright population would be a minimum amplitude (i.e., gap depth) to trap solid material, with dust traps located closer to the star favoring brighter disks (see Figures 9 and 10). This means that even planets with masses between those of Saturn and Jupiter, without imposing strong constraints on their location, could create the necessary substructures to reproduce a population of millimeter bright transition disks, under the assumption that internal photoevaporation is the primary dispersal process responsible for opening the cavity.
On the other hand, if substructures are absent, then dust drift depletes most of the solid material in the protoplanetary disk, which leads to a disk with a faint emission in the continuum and a cavity driven exclusively by photoevaporation, similar to the transition disks found in the mm-faint population. We note that these analogies between smooth disks and mm-faint population, and between structured disks and mm-bright population, hold independently of the dust opacity (see Section 4.4).
In summary, our model suggests a synergy between photoevaporation and substructures, where photoevaporation is responsible for creating deep cavities in the gas and dust (both micron and millimeter sized components), while the presence or absence of early substructures determines the dust trapping and the disk millimeter flux, but not necessarily the observed cavity size. Without photoevaporation a moderate sized planet would not be able to create a deep cavity, and without early substructures photoevaporation would not be able to trap the amount of dust observed in bright transition disks.
Regarding the absolute dust mass and millimeter flux values found in our simulations for the smooth and structured disk models, we highlight that these depend on the initial dust mass used in the simulation setup, and it is more meaningful to look at the fraction of dust that was retained or lost relative to the initial value. For our fiducial setup that means that \(\approx 25\%\) of the initial dust mass was still present for the structured disk, and only \(\approx 2\%\) for the smooth disk. Transition disks with higher millimeter fluxes than those found in our current paper, such as LkCa 15 or GM Aur (Facchini et al., 2020; Huang et al., 2020, respectively), could be explained if we consider, for example, a disk that is initially more massive.
Another characteristic of transition disks is their relatively high accretion rates. Previously these seemed to be incompatible with photoevaporation models, which typically overpredicted the fraction of non-accreting disks with large cavities (Owen et al., 2011; Picogna et al., 2019). However, the works of Morishima (2012) and Garate et al. (2021) already showed that a dead zone in the inner regions leads to long-lived inner disks, capable of sustaining high accretion rates while photoevaporation opens a cavity in the outer disk, and the works of Ercolano et al. (2018) and Wolfer et al. (2019) suggest that the fraction of accreting disks with photoevaporative cavities could be higher if the disks are relatively depleted in carbon and oxygen.
Based on the measurements of X-ray luminosities (Preibisch et al., 2005), on the models of dead zones that suggest that these should be both common and stable (Delage et al., 2022, 2022b, in prep.), and on the abundance of observed substructures in disks (e.g., Williams and Cieza, 2011; Andrews et al., 2018), which additionally are expected to form early in the disk evolution (Stadler et al., 2022), we can expect all of the above-mentioned ingredients to influence the evolution of protoplanetary disks. Then, in a combined disk model we would have that photoevaporation takes care of opening wide cavities, dead zone properties set the lifetime of the inner disk and the accretion rate onto the star, and the dust trapping in early substructures (by a Saturn-to-Jupiter mass planet, for example) determines the disk flux.
For example, our work shows that bright transition disks such as DM Tau could still be explained through photoevaporation (see Section 6.3 of Francis et al., 2022, though a higher initial disk mass would most likely be required to explain this particular object), provided that there are additional dust trapping mechanisms acting during the early stages of disk evolution to explain the millimeter fluxes. With this new information taken into account, we can relax the constraints on massive planets as the sole cause of transition disks with wide gaps, and also relax the requirements on planet migration to link the transition disk and exoplanet populations (van der Marel and Mulders, 2021).
### The degeneracy of planet properties when photoevaporation is considered
In this section we want to emphasize an (unfortunate) consequence suggested by our model in terms of the characterization
of planet candidates within transition disk cavities. Our model shows a disk in which photoevaporative dispersal controls the size of the cavity that expands from the inside out, and primordial substructures (which in this subsection we assume to be caused by planets) control the dust trapping and millimeter flux. Here, we can distinguish two stages in the photoevaporative dispersal process: an early stage when the photoevaporative cavity is smaller than the planet orbit (which leads to the formation of two dust rings), and a later stage where the photoevaporative cavity size is larger than the planet orbit. In the latter scenario we find that the size of the cavity is completely independent of the planet orbital location, and the disk flux in the millimeter continuum is only mildly sensitive to the gap amplitude and location (Figures 9 and 10).
Therefore, our model suggests that there is a strong degeneracy in which we cannot draw reliable constraints on the mass or location of a planet candidate within a transition disk based on the properties of the cavity. A wide cavity with a bright ring in the millimeter continuum might very well be caused by a super-Jupiter mass planet near the cavity edge, or by photoevaporative dispersal and a Saturn mass planet hidden in the inner regions. In particular, a planet inside a photoevaporative cavity would be effectively disconnected from the rest of the protoplanetary disk, and would not display a detectable circumplanetary disk during the dispersal process, which could in principle explain why we have not detected more planet companions or circumplanetary disks other than PDS70 (Keppler et al., 2018; Benisty et al., 2021).
To further illustrate this situation, we show a synthetic observation of our structured disk model in Figure 11, which was generated using the package of Kurtovic et al. (in prep.) to post-process our radiative transfer models as if they were observed by ALMA with the template configuration of Elias24 (Huang et al., 2018). The image sequence shows that once the photoevaporative cavity opens (1.3 Myr, left panel) two rings can initially be identified, where the inner one is caused by the dust trapped at the photoevaporative cavity edge, while the outer one is caused by the dust trapped outside the planet gap. In our simulations, the inner disk is fainter than the outer one (at this early stage) because the photoevaporative cavity was only able to trap the remaining material from the inner regions, while the primordial gap trapped most of the material available from the outer regions. However, this stage is short-lived and lasts only for \(\sim 0.2\) Myr in our fiducial setup. As time passes, the photoevaporative cavity expands, which causes the inner dust ring to merge with the outer one (1.6 Myr, middle panel), and then to continue to expand well beyond the planet orbital location (1.9 Myr, right panel). Figure 11 suggests that by the last snapshot it would be impossible to infer anything about the planet location (that was responsible for the dust trapping) from the ring morphology and location in the millimeter continuum alone.
In order to distinguish whether a cavity could be carved by photoevaporation or by a massive planetary companion, other types of signatures should be considered. Recent theoretical studies have focused on modeling the observational signatures from photoevaporative winds, including the expected dust content entrained with the gas (Franz et al., 2022, 2022), which could be used to point toward the photoevaporative origin of some cavities in transition disks where a planet companion has not yet been found, though these models are strongly dependent on the dust reservoir at the cavity edge. As shown in Figure 3, the peak of \(\Sigma_{\rm d}\) is always at larger radii than that of \(\Sigma_{\rm g}\), which means the dust signature in the wind may be even fainter than predicted by Franz et al. (2022); further studies investigate the correlation between the gas and dust distributions at the cavity edge in more detail (Picogna et al., 2023).
Local perturbations in the gas kinematics, for example, can be linked to a planetary companion embedded in the gap (e.g., Perez et al., 2015; Pinte et al., 2019; Izquierdo et al., 2022). Asymmetries such as spirals would not be caused by photoevaporation, but other processes in addition to planetary companions can also cause them, such as self-gravitating instabilities (e.g., Lodato and Rice, 2005; Meru et al., 2017) and shadows cast by an inclined inner disk (Montesinos and Cuello, 2018; Cuello et al., 2019).
Characterizing the gas content inside the millimeter continuum cavities can also help to differentiate between the different scenarios (van der Marel et al., 2016), since planets tend to carve deeper cavities in the dust than in the gas, while photoevaporation carves deep cavities in both of the components (see reviews by Owen, 2016; Ercolano and Pascucci, 2017). However,
Figure 11: Synthetic ALMA observations at 1.3 mm of the structured disk model at 1.3 Myr, 1.6 Myr, and 1.8 Myr, generated using the package of Kurtovic et al. (in prep.) to post-process our radiative transfer model. The image shows how our disk would look if it was observed with the same ALMA configuration of Elias24 (Huang et al., 2018, from the DSHARP sample), assuming a distance of 139 pc and an inclination of 29\({}^{\circ}\). The beam size is plotted in the lower-left corner. The orbit of the primordial gap (\(r_{\rm gap}=40\) AU) is marked with a dashed line. We note that this is not intended to be a comparison “with” Elias24.
we note that the presence of a dead zone may also result in a long-lived inner disk inside the photoevaporative cavity (Morishima, 2012; Garate et al., 2021). This inner disk would be rich in gas and poor in dust, and would act as an accretion reservoir for the star while photoevaporation opens a cavity in the outer regions, effectively mimicking some of the features produced by a planet-carved gap.
With this section we have highlighted how different mechanisms can interact together to produce similar features to those of a very massive planet, and that this degeneracy should be taken into account in the cases where no further evidence of the expected planet candidates is found.
### On the relic disk problem
Several explanations have been proposed to solve the overprediction of relic disks by photoevaporative models. Some studies focus on reducing the predicted fraction through faster dispersal processes, such as thermal sweeping (Owen et al., 2013), and low carbon and oxygen abundances (Ercolano et al., 2018; Wolfer et al., 2019), or through long lived inner disks with dead zones that can sustain the high accretion rates for longer times (Garate et al., 2021). Another possibility is that the radiation pressure is very efficient at removing dust during photoevaporation dispersal, since the stellar photons can transfer their momentum directly to the solid particles, which can lead to very faint disks that would be hard to detect in the IR (Owen and Kollmeier, 2019).
In this work we find that the SEDs from Figure 7 still display a high amount of FIR emission, which is above the median of class II disks from nearby star-forming regions (Ribas et al., 2017), and in line with the high FIR luminosities found in transition disks (Espaillat et al., 2014). However, we note that these emissions are only possible in our model because the dust removal through entrainment with photoevaporative winds is very inefficient, leading to low dust-loss rates, even after considering different entrainment parameters (see Appendix A and Hutchison et al., 2016; Hutchison and Clarke, 2021; Booth and Clarke, 2021). If we considered the additional effect of radiation pressure and magneto-thermal wind models to remove solid material (Owen and Kollmeier, 2019; Rodenkirch and Dullemond, 2022, respectively), we would likely also find a deficit of FIR emission that is more in line with the predictions for relic disks.
Another mechanism that could reduce the disk flux is the conversion of dust particles into planetesimals by the streaming instability (Youdin and Goodman, 2005), which tends to limit the optical thickness at the dust ring to values of \(\tau_{\nu}\approx 0.5\), as shown by Stammler et al. (2019); however, this would occur only in the cases where the local dust-to-gas ratio is high, with \(\epsilon\gtrsim 1\), and would stop as soon as the dust content drops below this threshold. Because of the self-regulating nature of planetesimal formation, we do not expect it to significantly reduce the millimeter flux. We further note that a recent study by Carrera and Simon (2022) questions the efficiency of the streaming instability as a planetesimal formation scenario in pressure bumps, which would also reinforce our claim.
With all these different mechanisms at play it becomes unclear whether we should expect photoevaporating disks to be bright or faint in the FIR. We propose that the answer depends on the presence or absence of an inner gas disk during the dispersal process, such as the ones sustained by dead zones (Morishima, 2012; Garate et al., 2021). In the case that an inner disk is absent, the edge of the photoevaporative cavity (where most of the dust is trapped) should be directly irradiated by the central star, leading to higher photoevaporation rates, and efficient dust clearing by radiation pressure (Picogna et al., 2019; Owen and Kollmeier, 2019). Instead, if an inner disk is present, it could cast a shadow on the outer disk (see Ueda et al., 2019) and shield the edge of the photoevaporative cavity from the direct irradiation, slowing its dispersal and reducing the dust loss rates found in Owen and Kollmeier (2019).
Testing this idea, however, goes beyond the scope of this work, and requires a proper consideration of the dust distribution of the inner disk in the presence of a dead zone and photoevaporation, of the effect of accretion heating, which may increase the scale height of the inner disk, and of resolving the inner edge of the dead zone where large amounts of dust can be trapped (Ueda et al., 2019). If the presence of an inner disk during dispersal is correlated with the disk mass (which is likely if the inner disk is sustained by a dead zone, see Turner et al., 2007; Delage et al., 2022), then we would expect that more massive disks are more likely to become accreting transition disks that retain a bright dust component, while less massive disks become relic disks that both disperse quickly and lose their dust content due to radiation pressure. We expect that only vertically thicker inner disks will be able to block enough stellar irradiation to slow down the dust removal from the outer disk, and it remains to be tested whether or not such inner disk scale heights are achievable.
Our preliminary results in Appendix A on this aspect are, however, inconclusive, as we find that even a perfect entrainment scenario is not enough to completely remove the FIR excess. While dust removal by radiation pressure was shown to reduce the FIR emission by Owen and Kollmeier (2019, see their Figure 11), we find that it leads to dust loss rates that are similar to those from our current work for our fiducial parameters. Thus, our model so far seems unable to produce the undetectable relic disks.
Other avenues to proceed could be to reconsider the relic disk problem in the light of recent results, such as the non-membership of some of the systems considered in the Hardy et al. (2015) study (see Galli et al., 2020; Luhman, 2020; Michel et al., 2021), and that some of the faint transition disks mentioned in Owen et al. (2012) might be actually inclined disks (van der Marel et al., 2022), before drawing further conclusions regarding the nature of relic disks.
Finally, we note that two recent observations of the disks J16090141 and J16070384 by van der Marel et al. (2022) could be cataloged as relic disk candidates, since they show most of the expected features, that is, large cavities in the millimeter continuum, low millimeter fluxes, low accretion rates (Alcala et al., 2014, 2017), and low emission in the FIR (Ansdell et al., 2018), though it is still necessary to better characterize the gas depletion of the inner disk in these objects before determining if they are compatible with the photoevaporative dispersal scenario, and also to consider that these disks are orbiting around low mass stars.
## 6 Summary
In this work we performed numerical simulations to study the effect that primordial substructures have on the dust evolution and millimeter flux of protoplanetary disks undergoing photoevaporative dispersal. Our simulations show that the presence of a primordial substructure, specifically in terms of the dust trapping efficiency, determines the flux that the disk displays during its dispersal. Once photoevaporation opens a cavity in the inner regions, further dust loss due to drift is prevented, with dust removal due to wind entrainment having little impact on the evolution of solids. Therefore, disks that developed early substructures are bright when observed in the millimeter continuum,
while disks that were smooth during their early evolution are faint, with both types of disks maintaining a relatively constant flux as the photoevaporative cavity expands (Figure 6).
From our parameter space exploration, we learn that in order to have a bright disk, while undergoing photoevaporative dispersal, the main requirement is that an early substructure with an amplitude of \(A_{\rm gap}\gtrsim 2\) exists (approximately what a Saturn mass planet would cause). Substructures located closer to the star could trap more dust and lead to brighter disks, while higher X-ray luminosities that lead to an earlier dispersal also reduce the amount of dust lost due to diffusion and drift prior to the cavity opening.
The millimeter fluxes calculated for the smooth and structured disk models are respectively comparable to those measured in the mm-faint (\(F_{\rm mm}<30\) mJy) and mm-bright (\(F_{\rm mm}>30\) mJy) populations of transition disks, a result that holds for two different opacity prescriptions. It is possible then, that at least some observations of transition disks correspond to dispersing disks where early substructures were present (bright disks) or absent (faint disks).
While it is not new that the presence of substructures determines the flux of a disk or that photoevaporation leads to the inside-out expansion of a cavity, the combination of both results has a much more distinct consequence: Bright transition disks with a large cavity are not necessarily caused by massive super-Jupiter mass planets. Instead, these disks can also be created by smaller planets, with masses within the range of Saturn and Jupiter that trap the dusty material at earlier times (without strong constraints on their location), and a photoevaporative dispersal process, where the edge of the expanding cavity drags all the remaining solids in the protoplanetary disk to larger radii. This scenario could also explain why we have not detected more planets in transition disk observations: they might be both less massive, and located further inside their cavities than previously expected.
From the predicted SEDs, we find that our photoevaporating disk models display a bright FIR signal, which could be problematic from the theoretical point of view, since there are no detections of non-accreting transition disks (i.e., relic disks) with such high FIR fluxes. Studies considering dust removal by radiation pressure (Owen and Kollmeier, 2019), and long-lived inner disks (Garate et al., 2021) that block a fraction of the stellar flux received by the outer disk, can alleviate this discrepancy between the current model and observations. Alternatively, surveys that focus on the NIR or MIR emission to detect protoplanetary disks could have missed these transition disks that would display only an FIR component.
The simulations presented in this paper, along with the previous work of Garate et al. (2021), suggest that a comprehensive disk evolution model could explain the observed properties of transition disks, where photoevaporative dispersal is responsible for opening wide cavities, substructures created by planets of moderate mass can trap the dust required to produce the fluxes measured in the millimeter continuum, and dead zones in the inner regions lead to long-lived inner disks capable of sustaining the measured accretion rates. The versatility of a combined model opens several pathways to explain the observations of transition disks, and relaxes the constraints imposed on models that rely exclusively on giant planets to explain all the features.
To conclude, we highlight the relevance of the synergy between multiple ingredients in disk evolution, in this case photoevaporation and gap-like substructures, which when considered together can explain a wider range of the observed features in protoplanetary disks, with fewer constraints.
###### Acknowledgements.
We would like to thank the anonymous referees for their reports, insights, and suggestions that greatly improved the extent of this paper. We also thank Jochel Stafler for the help with the RADMC3D setup, Nienke van der Marel and James Owen for the insightful discussions, and for sharing a selection of the SEDs and mass loss rates profiles, respectively, that were used for comparison in this paper. The authors acknowledge funding from the Alexander von Humboldt Foundation in the framework of the Sofja Kovalevskaja Award endowed by the Federal Ministry of Education and Research, from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 714769, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Ref no. FOR 2634/2 (ER 685/7-1, ER 685/8-2), and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311.
|
2309.07461 | Detecting Unknown Attacks in IoT Environments: An Open Set Classifier
for Enhanced Network Intrusion Detection | The widespread integration of Internet of Things (IoT) devices across all
facets of life has ushered in an era of interconnectedness, creating new
avenues for cybersecurity challenges and underscoring the need for robust
intrusion detection systems. However, traditional security systems are designed
with a closed-world perspective and often face challenges in dealing with the
ever-evolving threat landscape, where new and unfamiliar attacks are constantly
emerging. In this paper, we introduce a framework aimed at mitigating the open
set recognition (OSR) problem in the realm of Network Intrusion Detection
Systems (NIDS) tailored for IoT environments. Our framework capitalizes on
image-based representations of packet-level data, extracting spatial and
temporal patterns from network traffic. Additionally, we integrate stacking and
sub-clustering techniques, enabling the identification of unknown attacks by
effectively modeling the complex and diverse nature of benign behavior. The
empirical results prominently underscore the framework's efficacy, boasting an
impressive 88\% detection rate for previously unseen attacks when compared
against existing approaches and recent advancements. Future work will perform
extensive experimentation across various openness levels and attack scenarios,
further strengthening the adaptability and performance of our proposed solution
in safeguarding IoT environments. | Yasir Ali Farrukh, Syed Wali, Irfan Khan, Nathaniel D. Bastian | 2023-09-14T06:41:45Z | http://arxiv.org/abs/2309.07461v2 | Detecting Unknown Attacks in IoT Environments: An Open Set Classifier for Enhanced Network Intrusion Detection
###### Abstract
The widespread integration of Internet of Things (IoT) devices across all facets of life has ushered in an era of interconnectedness, creating new avenues for cybersecurity challenges and underscoring the need for robust intrusion detection systems. However, traditional security systems are designed with a closed-world perspective and often face challenges in dealing with the ever-evolving threat landscape, where new and unfamiliar attacks are constantly emerging. In this paper, we introduce a framework aimed at mitigating the open set recognition (OSR) problem in the realm of Network Intrusion Detection Systems (NIDS) tailored for IoT environments. Our framework capitalizes on image-based representations of packet-level data, extracting spatial and temporal patterns from network traffic. Additionally, we integrate stacking and sub-clustering techniques, enabling the identification of unknown attacks by effectively modeling the complex and diverse nature of benign behavior. The empirical results prominently underscore the framework's efficacy, boasting an impressive 88% detection rate for previously unseen attacks when compared against existing approaches and recent advancements. Future work will perform extensive experimentation across various openness levels and attack scenarios, further strengthening the adaptability and performance of our proposed solution in safeguarding IoT environments.
Network Intrusion Detection, Open Set Classification, Machine Learning, Zero-Day Attack, Meta Learning.
## I Introduction
The rapid proliferation of Internet of Things (IoT) devices has ushered in a new era of interconnectedness, revolutionizing various sectors like healthcare, transportation, agriculture, other industries [1], and the military. These IoT ecosystems consist of interconnected sensors, actuators, and network-enabled devices, facilitating data exchange through the internet [2]. However, the exponential growth of IoT systems, projected to reach 75.3 billion devices by 2025 [3], has also introduced new avenues for cyberattacks, posing significant challenges to the security and privacy of interconnected devices and their data. As adversaries become more sophisticated, traditional security measures like Network Intrusion Detection Systems (NIDS) relying on closed-world settings [4] face unprecedented challenges in safeguarding IoT environments. Such NIDS are tested only against known attack classes, rendering them ineffective against previously unseen attacks. In contrast, effective security solutions must address open-world network intrusion detection settings, where classifiers must detect unknown attack classes. These types of classifiers are known as open-set classifiers, while those relying on closed-world settings are termed closed-set classifiers [5].
As the boundaries between benign and malicious behaviors blur [6], there is an urgent need for a more robust and proactive security approach that can accurately identify unknown/novel attacks in real-time, effectively mitigating their impact on IoT systems. In response to this challenge, our paper introduces an innovative framework for an open-set classifier tailored to IoT devices in adversarial environments. The framework utilizes the stacking concept [7] and diverse prototypical features of benign traffic to spot deviations from normal behavior. It classifies incoming network traffic as either benign or unknown attacks. By adopting an open-set problem formulation, our approach confidently distinguishes between benign traffic and entirely new threats, even without prior training data.
Our contributions encompass not only the proposal of an open-set classifier tailored for IoT environments but also a different approach to utilizing the network traffic of IoT devices as serialized RGB images. Unlike traditional closed-set classifiers that rely on flow-based data, our open-set classifier operates at the packet level of IoT network traffic. This granular approach allows us to easily distinguish novel attacks, as flow-based data lacks the actual message content of each flow. In addition to our contributions, we have conducted a thorough evaluation of our approach against diverse attack scenarios,
demonstrating its efficacy in detecting and accurately classifying unseen threats. The experimental validation showcases the superiority of our approach over traditional closed-set NIDS and state-of-the-art open-set classifiers, highlighting its potential to enhance IoT security significantly.
## II Related Works For Open-set Classification
Efforts in anomaly detection within NIDS have been extensive, aiming to differentiate normal network traffic from malicious patterns. However, a substantial portion of this work predominantly addresses the closed-world problem, where models are designed to recognize only the classes encountered during training. This presents a challenge when models need to identify classes not seen during training, constituting the open-set recognition (OSR) problem [8].
Pioneers in the pursuit of OSR, Scheirer et al. formally defined the problem [9]. They introduced a 1-vs-set machine solution, followed by the Compact Abating Probability (CAP) model. A notable instantiation within CAP is the W-SVM, utilizing Statistical Extreme Value Theory (EVT) to calibrate SVM decision scores. The efficacy of the W-SVM was demonstrated by Cruz for fine-grained open-set intrusion detection [10]. Further, Chen et al. proposed an Auto-encoder Ensemble [11] approach exploiting the variable connectivity architecture of auto-encoders for improved performance, while Bradley et al. leveraged survival analysis [12].
In recent advancements, Ruff et al. introduced DeepSAD [13], a semi-supervised approach grounded in the idea that the entropy of the latent distribution for normal data should exhibit lower values than that of anomalous samples. Pang et al. presented PreNet [14], a novel deep weakly-supervised approach focused on learning pairwise relation features and anomaly scores through predicting relationships between randomly sampled instances. Li et al. proposed the ECOD [15] algorithm, inspired by outliers often being "rare events" in the tails of a distribution. Despite these developments, the NIDS domain has seen few works directly addressing the OSR challenge. Baye et al. recently conducted an empirical study, exploring notable OSR algorithms using NIDS data to uncover correlations between deep learning-based OSR algorithms' performance and hyperparameter values they use [8].
In sum, the efficacy of NIDS in an open-world context is limited as most machine learning-based NIDS operate within a closed-world setting [4]. This underscores the pressing need for further progress and innovation in this field.
## III Methodology
This section offers a comprehensive overview of our methodology, commencing with the dataset employed in our experiments. Subsequently, we delve into the preprocessing steps undertaken to ready the data for training and testing. Additionally, we elaborate on the clustering procedure applied to the serialized RGB network traffic images to determine the optimal number of clusters, denoted as \(N\). Lastly, we provide an in-depth description of our proposed framework.
### _Dataset and Preprocessing_
The dataset used to evaluate our proposed framework is CIC-IDS2017 [16], created by the University of New Brunswick in 2017. It consists of simulated network traffic in both packet-based and bidirectional flow-based formats, encompassing the most up-to-date attacks and benign traffic. The dataset is available in two formats: the original packet capture (PCAP) files (packet-based data) and CSV files (flow-based data) obtained by extracting 80 features from the PCAPs using CICFlowMeter.
For our work, we specifically used the packet-based data from CIC-IDS2017, as flow-based data cannot detect attacks that rely on the packet's payload [17]. Additionally, the packet-based data of CIC-IDS2017 is not labeled; therefore, we first labeled the data utilizing our developed tool (Payload-Byte) [18], which extracts and labels packet capture files of network traffic using metadata from NIDS datasets. The tool leverages five-tuple features, including Source IP, Destination IP, Source Port, Destination Port, and Protocol, to match packets with labeled flow-based data instances. The resulting labeled features consist of the payload content. Since payload size varies for each packet, Payload-Byte uses a maximum payload length of 1500 bytes. The extracted payload forms one large feature, which is then divided into 1500 byte-wise features. Each byte's hexadecimal representation is transformed into an integer ranging from 0 to 255, resulting in one feature. For packets with fewer than 1500 payload bytes, zero padding is employed to maintain a standardized feature vector structure. After labeling the data, we removed duplicated instances and instances with no payload data. Furthermore, we performed under-sampling to reduce the dataset size by decreasing the number of benign instances. For a comprehensive understanding of the preprocessing steps and the functioning of Payload-Byte, we refer readers to our previous work [18].
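As a rough illustration of this byte-to-feature mapping (a minimal sketch, not the Payload-Byte tool itself), the following Python snippet converts a raw payload into the fixed 1500-dimensional integer vector; the function name, the truncation of over-long payloads, and the use of NumPy are our own assumptions.

```python
import numpy as np

MAX_PAYLOAD_BYTES = 1500  # maximum payload length used for the feature vector

def payload_to_features(payload: bytes) -> np.ndarray:
    """Map a raw packet payload to 1500 integer features in [0, 255], zero-padded."""
    values = np.frombuffer(payload[:MAX_PAYLOAD_BYTES], dtype=np.uint8)
    features = np.zeros(MAX_PAYLOAD_BYTES, dtype=np.uint8)
    features[: len(values)] = values
    return features

# Example: a 5-byte payload is zero-padded up to 1500 features
print(payload_to_features(b"\x47\x45\x54\x20\x2f")[:8])  # -> [71 69 84 32 47  0  0  0]
```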
After labeling and preprocessing the data, we converted it into serialized RGB images, following the methodology used in our previous work [19]. After the transformation, the data was divided into three sub-datasets:
* **Base Learner Dataset (\(D_{1}\)):** This dataset exclusively contains benign data and serves as the training set for
Fig. 1: Illustration of dataset division into three sub-datasets. The sub-dataset (\(D_{3}\)) includes five randomly separated attack classes, while the rest of the attack classes are included in \(D_{2}\). On the other hand, \(D_{1}\) exclusively consists of benign data.
the base learner models.
* **Meta Learner Dataset (\(D_{2}\)):** Comprising nine known attack classes and benign data samples. This dataset is utilized for generating meta features and training the meta classifier.
* **Evaluation Dataset (\(D_{3}\))**: This dataset forms the testing dataset, consisting of five unknown attacks and benign samples.
Our primary aim of evaluating the OSR problem for NIDS involves detecting unknown classes without prior knowledge. To achieve this, we separated five random attack classes _(DoS Hulk, DoS slowloris, DoS Slowhttptest, Web Attack-Sql Injection and Bot)_ from the dataset to generate unknown attack scenarios. Furthermore, we partitioned the benign data samples in a ratio of 50:30:20 for the base learner, meta learner, and evaluation datasets, respectively. The remaining nine attack classes were treated as known attacks and included in the meta learner dataset along with 30% of the benign data. The complete distribution of the dataset is illustrated in Fig. 1.
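A minimal sketch of this dataset division is shown below, assuming a labeled packet-level DataFrame with a `label` column; the column name, the benign label string, and the shuffling seed are illustrative assumptions rather than details of the original pipeline.

```python
import pandas as pd

UNKNOWN_ATTACKS = ["DoS Hulk", "DoS slowloris", "DoS Slowhttptest",
                   "Web Attack-Sql Injection", "Bot"]

def split_datasets(df: pd.DataFrame, seed: int = 0):
    """Build the base-learner (D1), meta-learner (D2), and evaluation (D3) sets."""
    benign = df[df["label"] == "Benign"].sample(frac=1.0, random_state=seed)
    known = df[(df["label"] != "Benign") & ~df["label"].isin(UNKNOWN_ATTACKS)]
    unknown = df[df["label"].isin(UNKNOWN_ATTACKS)]

    n = len(benign)
    d1 = benign.iloc[: int(0.5 * n)]                                   # 50% benign only
    d2 = pd.concat([benign.iloc[int(0.5 * n): int(0.8 * n)], known])   # 30% benign + known attacks
    d3 = pd.concat([benign.iloc[int(0.8 * n):], unknown])              # 20% benign + unknown attacks
    return d1, d2, d3
```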
### _Clustering of Benign Network Traffic_
Incorporating the clustering of benign traffic within our framework offers a crucial enhancement to its performance. As benign data is usually more spread out in the feature space than attack data, it is difficult to distinguish between normal instances and unknown attacks [20]. By dividing the benign data into sub-clusters, we aim to capture inherent variations and nuances in benign behavior patterns, thereby facilitating a more nuanced and accurate classification. This stratification enables our framework to differentiate between benign and unknown attacks with higher precision, contributing to a reduced rate of false positives.
Initially, we converted the transformed (Serialized RGB Images) dataset into a two-dimensional space using the t-distributed Stochastic Neighbor Embedding (t-SNE) method [21]. The two-dimensional space representation of our transformed data can be seen in Fig. 2. Notably, the benign data displays a dispersed distribution across the space, while instances of attacks overlap with it, posing a challenge in distinguishing between benign and attack instances. Subsequently, this two-dimensional representation of the data formed the basis for both visualization and the subsequent clustering process of benign traffic using K-means clustering.
To determine the optimal number of clusters (\(N\)), we employed two widely used methods in the literature: the Elbow method and the Silhouette method [22]. These techniques aid in identifying the most suitable value of \(N\), which is fundamental for effective clustering. The Elbow method focuses on the point where the reduction in within-cluster variance starts
Fig. 3: Graph of the sum of squared distances and the silhouette score for different numbers of clusters. The green line represents the values obtained through the elbow method, and the other line represents the silhouette score. The optimal number of clusters for benign data is found to be seven, shown by the red circles.
Fig. 2: Representation of Serialized RGB images of network traffic into two dimensions using t-SNE method. (a) illustrate the distribution of benign data and attacks, highlighting the diverse nature. (b) provides an insightful depiction of the effective clustering of the benign data into seven distinct clusters.
to slow down, indicating an appropriate number of clusters. The Silhouette method, on the other hand, assesses the quality of the clustering based on cohesion and separation of clusters.
Fig. 3 showcases the outcomes yielded by the Elbow and Silhouette methods. These graphs show that the optimal number of clusters, denoted as \(N\), is seven. This determination carries through our framework, resulting in the adoption of seven base-learner models that are discussed in the subsequent section. The two-dimensional projection of the benign data, as well as how the benign data is clustered into seven sub-benign clusters, is visually represented in Fig. 2. After successful clustering, we leverage the resultant cluster labels to annotate the sub-clusters of benign data in the serialized RGB data format, which is utilized for training the base-learner models.
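A minimal scikit-learn sketch of this clustering step is given below, assuming `benign_images` holds the flattened serialized RGB images of the benign traffic; the candidate cluster range and random seeds are illustrative assumptions.

```python
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_benign(benign_images, candidate_n=range(2, 12), final_n=7, seed=0):
    """Project benign images to 2D with t-SNE, scan cluster counts, return cluster labels."""
    benign_2d = TSNE(n_components=2, random_state=seed).fit_transform(benign_images)

    inertia, silhouette = {}, {}
    for n in candidate_n:
        km = KMeans(n_clusters=n, n_init=10, random_state=seed).fit(benign_2d)
        inertia[n] = km.inertia_                                  # elbow criterion
        silhouette[n] = silhouette_score(benign_2d, km.labels_)   # silhouette criterion

    # In the paper both criteria point to final_n = 7 sub-clusters
    labels = KMeans(n_clusters=final_n, n_init=10, random_state=seed).fit_predict(benign_2d)
    return labels, inertia, silhouette
```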
### _Framework_
Our proposed framework builds upon our previous work [19], which initially focused on the closed-set classifier approach. While the earlier work provided preliminary insights, this paper extends the methodology to encompass open-set scenarios. A visual depiction of our proposed framework is presented in Fig. 4. In the context of detecting unknown attacks within IoT environments, our framework draws inspiration from the concept of Stacking [23, 24], which is a Meta Learning based modeling technique consisting of two types of learners: Base Learners and Meta Learners.
For the base learners, we build upon the architecture used in our previous work [19], which involves a deep concatenated Convolutional Neural Network (CNN). Notably, the base learners are solely trained on benign data utilizing \(D_{1}\) (sub-dataset). Given the diverse nature of benign behavior patterns, distinguishing benign data as a whole from novel attacks becomes challenging. To address this, we adopt an unsupervised clustering method, K-means, to divide the benign data into \(N\) sub-classes. We then train \(N\) base learner models, each based on binary classification, to discern whether a data sample belongs to its particular benign cluster or not.
In other words, we train each model to distinguish samples from its specific cluster versus the rest of the benign clusters. Consequently, after training the base learners, we obtain \(N\) probabilities indicating the likelihood that a given sample belongs to each respective cluster. This approach allows us to gain insights into the association of a sample with each sub-class of benign behavior, aiding in the accurate detection and classification of novel attacks.
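A rough sketch of this per-cluster training loop follows; `make_model` is a placeholder for the concatenated-CNN builder of the framework (any binary classifier exposing `fit`/`predict_proba` serves the illustration), and all names are assumptions.

```python
import numpy as np

def train_base_learners(images, cluster_labels, n_clusters, make_model):
    """Train one binary base learner per benign sub-cluster (this cluster vs. the rest)."""
    learners = []
    for k in range(n_clusters):
        y = (cluster_labels == k).astype(int)  # 1 for cluster k, 0 for the other benign clusters
        model = make_model()
        model.fit(images, y)
        learners.append(model)
    return learners

def base_probabilities(learners, images):
    """Stack the N cluster-membership probabilities that later form the meta features."""
    return np.column_stack([m.predict_proba(images)[:, 1] for m in learners])
```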
Next, we utilize the \(D_{2}\) subset of the dataset and feed it through the base learners, producing meta features based on the \(N\) probabilities from each model. These meta features are then used to train the meta-classifiers, which include Random Forest, Logistic Regression, XGBoost, and LightGBM.
Once the meta-classifiers are trained, the training process of our framework is completed, and we can evaluate its performance using \(D_{3}\) (sub-dataset). Since there are four meta classifiers, we obtain four outputs indicating whether a sample is benign or an unknown attack. To mitigate potential conflicts in the outputs, we incorporate a voting ensemble mechanism.
Let \(M\) be the set of meta-classifiers, where \(|M|\) represents the total number of meta-classifiers. Each meta-classifier \(m_{i}\in M\) produces an output \(O_{i}\) for a given input sample. The outputs can be binary, where \(O_{i}=1\) indicates a predicted attack, and \(O_{i}=0\) indicates a predicted benign sample. The voting mechanism is implemented as follows:
\[V=\frac{1}{|M|}\sum_{i=1}^{|M|}O_{i} \tag{1}\]
where \(V\) is the final voting result. If \(V\geq 0.5\), the sample is classified as an attack; otherwise, it is considered benign.
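A minimal sketch of the meta level, reusing `base_probabilities` from the sketch above: the meta features derived from \(D_{2}\) train the four meta-classifiers, and the majority vote of Eq. (1) decides between benign and unknown attack. Hyperparameters are left at library defaults and are not those of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

def train_meta_classifiers(meta_X2, y2):
    """Fit the four meta-classifiers on the D2 meta features (0 = benign, 1 = attack)."""
    metas = [RandomForestClassifier(random_state=0),
             LogisticRegression(max_iter=1000),
             XGBClassifier(),
             LGBMClassifier()]
    for clf in metas:
        clf.fit(meta_X2, y2)
    return metas

def predict_open_set(metas, meta_X):
    """Majority vote of Eq. (1): V >= 0.5 -> unknown attack, otherwise benign."""
    votes = np.column_stack([clf.predict(meta_X) for clf in metas])
    return np.where(votes.mean(axis=1) >= 0.5, "unknown attack", "benign")
```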
Fig. 4: Pictorial representation of the proposed framework for detecting unknown attacks in IoT environments. The framework consists of two levels: Base Learner Models and Meta Learner Models. Each level is trained using a different subset of the dataset.
The overall training and testing process of our framework is detailed in Algorithm 1. This algorithm outlines the sequential steps, from training the base learners to combining meta features and utilizing meta classifiers for making predictions.
```
1:Benign data, Known and Unknown attack data
2:Prediction \(\rightarrow\) Benign or Unknown Attack
3:Step 1: Dataset Division
4:\(D_{1}\) : Base Learner Dataset \(\leftarrow\) Benign data
5:\(D_{2}\) : Meta Learner Dataset \(\leftarrow\) Benign + Known attacks data
6:\(D_{3}\) : Evaluation Dataset \(\leftarrow\) Benign + Unknown attacks data
7:Step 2: Sub-Clustering of Benign Data
8:Determine optimal number \(N\) of benign clusters using Elbow and Silhouette methods
9:Apply unsupervised K-means clustering to divide the benign data of \(D_{1}\) into \(N\) sub-classes,
10:for\(i=1\) to \(N\)do
11:Step 3: Base Learner Model Training
12:Train base learner models on the data of \(i\)-th benign cluster against the rest of the benign data
13:Step 4: Meta Learner Model Training
14:Pass \(D_{2}\) through the trained Base Learner Models to obtain probabilities \(P_{i}\)
15:Aggregate the probabilities \(P_{i}\) from each sample across all Base Learner Models to generate meta features.
16:Train meta-classifiers (Random Forest, Logistic Regression, XGBoost, and LightGBM) on the combined meta features
17:Step 5: Testing
18:Feed \(D_{3}\) through trained Base and Meta Learner Models to obtain predictions
19:Implement majority voting among the Meta Learner Models' outputs to finalize the prediction
20:\(V=\frac{1}{|M|}\sum_{i=1}^{|M|}O_{i}\)\(\triangleright\)\(M\) = Set of meta-classifiers; \(O_{i}\) = Output of \(i\)-th meta-classifier
21:if\(V\geq 0.5\)then
22:Output \(\rightarrow\) Unknown Attack
23:else
24:Output \(\rightarrow\) Benign
```
**Algorithm 1** Proposed Framework for Detecting Unknown Attacks in IoT Environments
## IV Results and Discussion
To evaluate our proposed framework, we compare its performance with several novelty and out-of-distribution detection approaches, as well as state-of-the-art methods in detecting unknown attacks, to comprehensively assess its effectiveness. To ensure an equitable comparison, each approach is evaluated under analogous experimental settings, utilizing the same packet-level dataset and a similar data division.
Our experimental evaluation aimed to assess the effectiveness of our proposed framework by considering the detection rate of unknown attacks (sensitivity/recall) and the detection rate of benign samples (specificity) as our evaluation metrics. The summarized results can be observed in Fig. 5, which provides an overview of the performance metrics for each approach.
From the figure, it is evident that our approach closely matches the sensitivity performance of the ECOD approach. However, noteworthy differences arise in terms of specificity, where our approach outperforms ECOD by 37%. This significant discrepancy indicates our approach's superiority in this aspect. Similarly, our framework achieves comparable specificity to the PReNet approach while simultaneously leading by 17% in sensitivity. Overall, our framework exhibits balanced performance in detecting both unknown attacks and benign behaviors. A crucial aspect of an open-set classifier is balancing specificity and sensitivity, a challenge our proposed framework adeptly manages.
The superiority of our proposed framework over other approaches can be attributed to three main factors. Firstly, the utilization of packet-based data and its image-based representation enables the extraction of both spatial and temporal information from network traffic. This empowers our framework to identify subtle patterns and anomalies within the data, significantly enhancing its ability to distinguish between unknown attacks and normal traffic. Secondly, the subdivision of benign data contributes to a clearer depiction of the inherent data distributions. This division aids in capturing the intricate and varied patterns intrinsic to benign behavior. As a result, our framework excels in detecting previously unseen attacks by comprehensively understanding the complex behaviors present
Fig. 5: Comparison of the detection rates of unknown attacks and benign samples with other approaches. The proposed framework outperforms other available approaches in terms of detecting unknown attacks.
within the benign class.
## V Conclusion and Future Work
In this paper, we present a novel framework designed specifically for open-set classification within the domain of NIDS in adversarial IoT environments. The key innovation of our framework resides in its utilization of packet-level data, which is transformed into serialized RGB images. This distinctive approach enables us to harness both the spatial and temporal information inherent in the network traffic data, providing a richer and more comprehensive understanding of the underlying patterns.
By combining the principles of stacking and sub-clustering within our framework, we effectively address the intricate challenge of identifying unknown attacks amidst the ever-evolving cybersecurity landscape. Our experimental findings underline the remarkable efficacy of our framework, boasting an impressive 88% detection rate for previously unseen attacks that were not encountered during the training phase. It is important to note that this paper lays the foundation for our proposed framework, with comprehensive experimentation and evaluation across varying degrees of openness and attack scenarios forming a significant part of our future work. Through these continued efforts, we aim to further validate and fine-tune the capabilities of our framework to provide enhanced capability for NIDS in adversarial IoT environments.
## Acknowledgment
This work was supported in part by the U.S. Military Academy (USMA) under Cooperative Agreement No. W911NF-22-2-0081, the U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory under Support Agreement No. USMA 21050, and the U.S. Army DEVCOM C5ISR Center under Support Agreement No. USMA21056. The views and conclusions expressed in this paper are those of the authors and do not reflect the official policy or position of the U.S. Military Academy, U.S. Army, U.S. Department of Defense, or U.S. Government.
Research reported in this paper was also supported by an Early-Career Research Fellowship from the Gulf Research Program of the National Academies of Sciences, Engineering, and Medicine. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Gulf Research Program of the National Academies of Sciences, Engineering, and Medicine.
|
2308.00027 | Returning CP-Observables to The Frames They Belong | Optimal kinematic observables are often defined in specific frames and then
approximated at the reconstruction level. We show how multi-dimensional
unfolding methods allow us to reconstruct these observables in their proper
rest frame and in a probabilistically faithful way. We illustrate our approach
with a measurement of a CP-phase in the top Yukawa coupling. Our method makes
use of key advantages of generative unfolding, but as a constructed observable
it fits into standard LHC analysis frameworks. | Jona Ackerschott, Rahool Kumar Barman, Dorival Gonçalves, Theo Heimel, Tilman Plehn | 2023-07-31T18:00:01Z | http://arxiv.org/abs/2308.00027v1 | **Returning CP-Observables to The Frames They Belong**
## Abstract
**Optimal kinematic observables are often defined in specific frames and then approximated at the reconstruction level. We show how multi-dimensional unfolding methods allow us to reconstruct these observables in their proper rest frame and in a probabilistically faithful way. We illustrate our approach with a measurement of a CP-phase in the top Yukawa coupling. Our method makes use of key advantages of generative unfolding, but as a constructed observable it fits into standard LHC analysis frameworks.**
###### Contents
* 1 Introduction
* 2 Reconstructing observables by unfolding
* 2.1 Generative unfolding
* 2.2 Periodic splines
* 2.3 Phase space parametrization
* 3 CP-phase from Higgs-top production
* 3.1 CP-observables
* 3.2 Unfolding-based analysis
* 3.3 Results
* 4 Outlook
Introduction
With the LHC continuing its success story of precision hadron collider physics, the size and complexity of the datasets of the upcoming Run 3 and HL-LHC are challenging the existing analysis methodology [1, 2, 3]. At the same time, the goal of LHC physics has moved from model-based searches for physics beyond the Standard Model (SM) to a comprehensive analysis of all its data, based on consistent analysis frameworks like the Standard Model effective theory [4, 5, 6].
The first step in any global analysis based on the fundamental principles of QFT is to determine the underlying symmetries, which are required to construct the effective Lagrangian. The, arguably, most interesting symmetry in the SM is \(CP\), linked to cosmology through the Sakharov conditions for baryogenesis [7], and potentially realized in an extended Higgs sector [8]. In the language of effective theory, \(CP\)-violation in the Higgs coupling to vector bosons is loop-suppressed and arises at dimension six [9, 10, 11, 12, 13]. In contrast, \(CP\)-violation in Higgs couplings to fermions can appear at dimension four [14], making a \(CP\)-phase in the top Yukawa coupling the most sensitive link between baryogenesis and LHC physics [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32].
Obviously, we do not want to leave the test of fundamental Lagrangian symmetries to a global analysis [33, 34] with limited control over experimental and theoretical uncertainties [35, 36, 37, 38, 39], including systematics from parton densities [40]. Instead, we should use dedicated (optimal) observables to target one fundamental symmetry at a time [41, 42, 43, 12, 29]. In the Higgs-gauge sector, the optimal observable is the azimuthal angle between the two tagging jets in weak boson fusion. For associated top-Higgs production, the azimuthal angle between a charged lepton from one top decay and the down quark from the other plays a similar role. Accurately extracting it faces the challenge of identifying the corresponding decay jet. Another powerful observable probing the Higgs-top interaction is the Collins-Soper angle [23, 28, 29, 44]. Again, the challenge is to map it onto the observed final state after particle decays, parton shower, and detector effects.
Both of these observables illustrate the common problem that an optimal or ideal kinematic correlation is usually not defined on the reconstructed final state. So while an optimal observable provides full sensitivity without the need to consider additional phase space correlations, we pay a prize in its reconstruction.
The standard inference approach for such kinematic correlations is to approximate them at the reconstruction level. For this approximation, we can use a directly observable correlation at the reconstruction level or rely on some kind of algorithm. The approximation is unlikely to be optimal. An improved approach would be to encode the observable in a learned mapping, for instance, through neural networks. Fundamentally different and, in principle, optimal alternatives are simulation-based inference [45, 46] or the matrix element method [47, 48, 49, 50], but they come at a significant numerical cost and are hard to re-interpret for other measurements.
For cases where an optimal observable is defined in some kinematic frame, we propose a simplified unfolding approach, where we unfold the reconstruction-level events to the appropriate reference frame, and then construct the optimal observable for the down-stream task. Unfolding or reconstructing events beyond the immediately available detector output is a long-standing problem [51, 52, 53, 54], undergoing transformative progress through modern machine learning (ML) [55, 56, 57, 58, 59, 60, 61, 62, 63]. One key observation is that forward and backward simulations are completely symmetric when we interpret them as sampling from conditional probabilities [64, 65]. This motivates ML-unfolding through probabilistic inverse simulations [60, 62, 59], which allows us to reconstruct observables or parameters defined at any level of our forward simulations, for instance, unfolding detector effects, parton shower, particle decays [61],
all the way to measuring fundamental parameters [66].
This generative unfolding technique allows us to just reconstruct key observables, which have the advantage that they can be used in the standard analysis frameworks of ATLAS and CMS, but with a performance increase from the full set of kinematic correlations learned through the unfolding. To guarantee stable network predictions and to be able to quantitatively extract the training-induced network uncertainties, we use the Bayesian version [67] of the conditional normalizing flows [68, 69], for which the likelihood losses should lead to well-calibrated results. Eventually, this kind of analysis can serve as a simple starting point for ML-unfolding, as it can be expanded through additional observables step by step.
In this paper, we use \(CP\)-violation through a complex top Yukawa coupling to show how ML-unfolding techniques can construct and numerically encode observables in the reference frame where they are defined. In Sec. 2, we first describe our neural network architecture, the physics task, and the treatment of phase space. In Sec. 3, we introduce our reference process and discuss our results and potential generalization errors. Finally, Sec. 4 is reserved for summary and outlook.
## 2 Reconstructing observables by unfolding
In this study, we propose to use statistical unfolding through inverse simulation [59, 60] to construct kinematic observables in a specific partonic reference frame. While we are making use of unfolding techniques in constructing a given observable, the precision, control, and model dependence of the unfolding is not a limiting factor for our analysis. Instead, we treat the so-defined observable like any other kinematics construction.
### Generative unfolding
Generative unfolding is based on the observation that a forward simulation from a parton-level event \(x_{\text{part}}\) to a reco-level event \(x_{\text{reco}}\) just samples from an encoded conditional probability,
\[r\thicksim N(r)\xrightarrow{x_{\text{part}}}x_{\text{reco}}\thicksim p(x_{ \text{reco}}|x_{\text{part}})\hskip 28.452756pt(\text{forward})\;. \tag{1}\]
This simulation can be trivially inverted on the same training data, so we can unfold detector effects, initial-state jet radiation, or particle decays, by sampling from the inverse conditional probability,
\[r\thicksim N(r)\xrightarrow{x_{\text{reco}}}x_{\text{part}}\thicksim p(x_{ \text{part}}|x_{\text{reco}})\hskip 28.452756pt(\text{inverse})\;. \tag{2}\]
In both cases, the standard training relies on paired events \(\{x_{\text{part}},x_{\text{reco}}\}\). Obviously, this training dataset leads to model dependence, which can be reduced by using iterative methods [62]. The target phase space of the inverse simulation or unfolding can be chosen flexibly, just unfolding detector effects [60, 62], but also jet radiation [60], particle decays [61], or sampling right into model parameter space using setups like BayesFlow [66]. Inference through conditional normalizing flows is standard in many fields of physics [70, 71].
Our generative network encoding the conditional probability defined in Eq. (2) is a conditional normalizing flow, specifically a conditional invertible neural network (cINN) [70], trained with a likelihood loss to guarantee a statistically correct and calibrated output. To link a batch of \(B\) phase space points \(x_{i}\) to a Gaussian latent space \(r_{i}\) with the condition \(c_{i}\), the
likelihood loss reads
\[\mathcal{L}_{\text{cINN}}=\sum_{i=1}^{B}\left(\frac{r_{i}(x_{i};c_{i})^{2}}{2}- \log\left|\frac{\partial r_{i}(x_{i};c_{i})}{\partial x_{i}}\right|\right)\,. \tag{3}\]
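A minimal PyTorch sketch of this loss (averaged rather than summed over the batch), assuming the flow's forward pass returns the latent vectors and the per-event log-Jacobian determinant:

```python
import torch

def cinn_nll_loss(r: torch.Tensor, log_jac_det: torch.Tensor) -> torch.Tensor:
    """Likelihood loss of Eq. (3) for a conditional flow with a Gaussian latent space.

    r           : latent vectors r(x; c), shape (batch, dim)
    log_jac_det : log |det(dr/dx)| per event, shape (batch,)
    """
    return torch.mean(0.5 * (r ** 2).sum(dim=1) - log_jac_det)
```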
### Periodic splines
The main part of our cINN is built from coupling layers, specifically rational quadratic spline blocks [72], each followed by a random permutation. As we will discuss in Sec. 2.3, some phase space directions are periodic and lead to undesired boundary effects when we use these spline transformations. To understand this problem in detail, let us consider a spline transformation
\[g_{\theta}:\quad[-L,L]\rightarrow[-L,L]\,, \tag{4}\]
with parameters \(\theta\). This transformation is given by \(K\) different monotonic rational quadratics, parameterized by \(K+1\) knot points \((x_{k},y_{k})\) and \(K+1\) derivatives. The boundaries are \((x_{0},y_{0})=(-L,-L)\) and \((x_{K},y_{K})=(L,L)\).
For periodic inputs the conditions \(g(x_{0})=y_{0}=-L\) and \(g(x_{K})=y_{K}=L\) are unnecessarily restrictive and do not allow the network to map a distribution onto or past the boundaries, to represent points on a circle. In addition, we want \(g^{\prime}(x_{0})=g^{\prime}(x_{K})\) for periodic inputs, which is not necessarily true [73]. The first issue can be fixed by replacing \(g_{\theta}\) with
\[\tilde{g}_{\theta}(x)=g_{\theta}(x)+g_{0}+2Lk\,, \tag{5}\]
with an integer \(k\) chosen such that \(\tilde{g}(x)\) always lies within \([-L,L]\), and a new parameter \(g_{0}\) added to \(\theta\). To solve the second issue, we simply remove one of the derivative parameters from \(\theta\) and set \(\tilde{g}^{\prime}_{\theta}(x_{K})=\tilde{g}^{\prime}_{\theta}(x_{0})\). The resulting transformation \(\tilde{g}_{\theta}\) is visualized in Fig. 1.
Figure 1: Visualization of a modified coupling transformation for periodic inputs. This transformation maps \(K\) points \(x_{i}\) to \(K\) points \(y_{i}\) on a circle, while we use rational quadratics to interpolate between two points. The modifications ensure that the \(x_{i}\) and \(y_{i}\) can be arbitrarily skewed in relation to one another, and that the derivative at \(x_{1}\) is consistent with both adjacent rational quadratics.
With these modifications, the number of parameters encoding the transformation does not change. This means that we can use the same sub-network to determine \(\theta\) for, both, periodic and non-periodic inputs. In practice, we split the input vector to the transformation into periodic and non-periodic inputs and apply \(g_{\theta}\) and \(\tilde{g}_{\theta}\) separately to each part. This also implies that we have to keep track of the permutations between coupling blocks, to be able to determine the type (periodic or non-periodic) of each input throughout the network. As a last detail, we use a uniform latent space distribution instead of a Gaussian for periodic dimensions.
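As an illustration, the shift of Eq. (5) can be implemented by wrapping the spline output back into the fundamental interval, which fixes the integer \(k\) implicitly. This is a minimal sketch, assuming `g_x` is the output of the underlying rational-quadratic spline for the periodic directions.

```python
import torch

def periodic_shift(g_x, g0, L):
    """Eq. (5): g~(x) = g(x) + g0 + 2*L*k, with k chosen such that the
    result lies in the fundamental interval [-L, L)."""
    return torch.remainder(g_x + g0 + L, 2.0 * L) - L
```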
Bayesian neural networks allow us to efficiently control the training stability and estimate training-related uncertainties. They extend standard architectures for regression, classification, or generative networks to distributions for each network weight, providing a key tool for explainable AI for instance in fundamental physics applications. The uncertainty on the output can then be extracted through sampling [67, 74, 75, 76, 77]. For generative networks, the uncertainty can be defined for the underlying density estimation [68, 69, 3, 50], for which the network learns an uncertainty map over the target phase space. The critical aspect of Bayesian networks is how to make them numerically viable and still retain all their promising features. We use a variational approximation for the training, combined with independent Gaussians for each network weight. Such Bayesian networks include an optimal regularization, so they can outperform their deterministic counterparts with limited extra numerical effort. As always, we emphasize that the underlying approximations do not have to limit the expressivity of the networks when it comes to the sampled uncertainties. Moreover, we can treat the formal bias in the Gaussian widths as a hyperparameter, which needs to be adjusted and should be checked for stability.
We use the standard Bayesian version of the cINN, as introduced in Ref. [69], but with periodic splines. The network is implemented in PyTorch [78]. In addition, we use the Adam [79] optimizer with a constant learning rate. The hyper-parameters employed in our study are provided in Tab. 1.
\begin{table}
\begin{tabular}{l c} \hline \hline Parameter & Value \\ \hline Block type & periodic rational quadratic spline blocks \\ Number of bins & 10 \\ Block Period & \(2\pi\) \\ Block Domain (non-Periodic) & \([-5.0,5.0]\rightarrow[-5.0,5.0]\) \\ \hline Number of Blocks & 16 \\ Layers per Block & 5 \\ Units per Layer & 256 \\ Weight Prior Type & Gaussian \\ Weight Prior log(\(\sigma^{2}\)) & 1.0 \\ \hline Number of Epochs (Bayesian) & 100 (200) \\ Batch Size & 1024 \\ Optimizer & Adam \\ Learning Rate & \(2.0\times 10^{-4}\) \\ \hline Total number of training events & \(\sim\)1.2M \\ Training/Testing split & 80\%/20\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Setup and hyper-parameters of the Unfolding-cINN.
### Phase space parametrization
The unfolding method introduced above is identical to full, high-dimensional unfolding to the parton-level. However, in this application, we will only target a small number of kinematic distributions. Moreover, the unfolding network will then be used to define these distributions as part of the standard LHC analysis chain. This application allows us to improve the description of relevant phase space directions at the potential expense of correlations which are not useful for the measurement. In our case, we will guarantee the correct descriptions of the intermediate top-mass peaks through the network architecture.
The simplest way of encoding LHC events at the parton-level is through the components of the final-state 4-momenta. However, the corresponding redundant degrees of freedom are not adapted to the production of intermediate on-shell particles and its reduced phase space. One way to improve the performance of generative networks is to add a maximum mean discrepancy (MMD) between a given set of generated and truth distributions [80] in the loss function. Its main advantage is that it only affects the target distribution and avoids an unnecessarily large model dependence. The disadvantage is that the additional loss term complicates the training and consequently limits the precision of the network. For our INN architecture, the computation of an MMD loss requires samples generated from the latent distribution, while the usual INN loss works on latent-space samples.
In our case, where the dominant signal and background processes share intermediate mass peaks, we can learn these features directly, through an appropriate phase space parametrization. For top decays with 9 degrees of freedom in the final state, a natural parametrization starts with the corresponding top 4-momentum, and then adds the invariant \(W\)-mass and a set of less sensitive angular observables,
\[\left\{\,m_{t},p_{T,t},\eta_{t},\phi_{t},m_{W},\eta_{W}^{t},\phi_{W}^{t},\eta_{\ell,u}^{W},\phi_{\ell,u}^{W}\,\right\}\,. \tag{6}\]
Here \(m_{t(W)}\) indicates the reconstructed invariant mass of the corresponding resonance. The superscripts \(t\) and \(W\) indicate the rest frame where the observable is defined, otherwise we use the laboratory frame. The indices \(\ell\) and \(u\) indicate the charged lepton and the up-type quark for leptonic or hadronic \(W\)-decays.
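The lab-frame pieces of Eq. (6) follow directly from the 4-momenta; a small numpy sketch is given below. The rest-frame angles carrying the superscripts \(t\) and \(W\) additionally require a Lorentz boost, which is not shown here.

```python
import numpy as np

def lab_kinematics(p):
    """p = (E, px, py, pz) in the lab frame; returns (m, pT, eta, phi)."""
    E, px, py, pz = p
    pT = np.hypot(px, py)
    pabs = np.sqrt(px**2 + py**2 + pz**2)
    m = np.sqrt(max(E**2 - pabs**2, 0.0))            # invariant mass
    eta = 0.5 * np.log((pabs + pz) / (pabs - pz))    # pseudorapidity
    phi = np.arctan2(py, px)                         # azimuthal angle
    return m, pT, eta, phi
```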
A network trained on this parametrization will reproduce the invariant top and \(W\)-mass distributions, but with drawbacks in the correlations of the hadronic \(W\)-decay. To extract \(CP\)-information, we also want to give the network access to the most important \(CP\)-observables, which we will discuss in detail in Sec. 3.1. This means we will include the Collins-Soper angle \(\theta_{\text{CS}}\)[23, 28, 29, 44] and the angle between the charged lepton and the down-quark \(\Delta\phi_{\ell d}\). One such parametrization for the entire \(t\bar{t}\) system with 18 degrees of freedom is
\[\left\{\,\bar{p}_{t\bar{t}},m_{t_{t}},|\bar{p}_{t_{t}}^{\text{CS} }|,\theta_{t_{t}}^{\text{CS}},\phi_{t_{t}}^{\text{CS}},m_{t_{t}},\right.\] \[\left.\text{sign}(\Delta\phi_{t\nu}^{t\bar{t}})m_{W_{t}},|\bar{p }_{t}^{t\bar{t}}|,\theta_{t}^{t\bar{t}},\phi_{t}^{t\bar{t}},|\bar{p}_{\nu}^{ t\bar{t}}|,\right.\] \[\left.\text{sign}(\Delta\phi_{du}^{t\bar{t}})m_{W_{u}},|\bar{p}_ {d}^{t\bar{t}}|,\theta_{d}^{t\bar{t}},\Delta\phi_{d\ell}^{t\bar{t}},|\bar{p}_{u }^{t\bar{t}}|\right\}\,. \tag{7}\]
The superscripts CS and \(t\bar{t}\) indicate the Collins-Soper frame of the \(t\bar{t}\)-system and the \(t\bar{t}\) rest frame; the latter rotated such that \(\bar{p}_{t_{t}}^{t\bar{t}}\) points in the direction of the positive \(z\)-axis. Also, \(t_{t}\) and \(t_{h}\) denote the leptonically and hadronically decaying tops, while \(u\) and \(d\) denote the up- and down-quarks from the \(W\)-decay. Using \(\text{sign}(\Delta\phi_{AB}^{t\bar{t}})m_{W}\) as a phase space direction makes it harder for the network to generate the \(W\)-peaks, but solves the problem of quadratic phase space constraints.
We emphasize that the combination of generative unfolding with the phase space parametrization of Eq. (7) is expected to introduce a bias in the unfolding. However, for our application, we can ignore this bias given our choice of signal channel and our choice of target observable. Moreover, a potential bias will render the network-defined observable sub-optimal, but does not affect its evaluation in a standard analysis.
## 3 CP-phase from Higgs-top production
The example we choose to illustrate unfolding as a way to define dedicated observables is associated Higgs and top quark pair production
\[pp\to t\bar{t}h+\text{jets}\to(bu\bar{d})\left(\bar{b}\ell^{-}\bar{\nu}\right) \left(\gamma\gamma\right)+\text{jets}\, \tag{8}\]
plus the charge-conjugated process. \(CP\)-violating BSM effects modifying the top Yukawa coupling can be parametrized through the Lagrangian [81]
\[\mathcal{L}\supset-\frac{m_{t}}{\nu}\kappa_{t}\bar{t}(\cos\alpha+i\gamma_{5} \sin\alpha)th\, \tag{9}\]
where \(\alpha\) is the \(CP\)-violating phase, \(\kappa_{t}\) the absolute value of the top Yukawa coupling, and \(\nu=246\) GeV the Higgs VEV. The SM-limit is \(\kappa_{t}=1\) and \(\alpha=0\). Deviations from the SM will affect Higgs production and the decay. While changes in the scalar Higgs decay will only impact the total rate, we focus on kinematic effects in the production.
The Lagrangian in Eq. (9) can be linked to the standard SMEFT framework used for general LHC analyses at mass dimension six. In this case, we introduce two Wilson coefficients to modify the top Yukawa [82, 83]
\[\mathcal{L} \supset\frac{f_{t}}{\Lambda^{2}}\mathcal{O}_{t}+\frac{\tilde{f}_{ t}}{\Lambda^{2}}\tilde{\mathcal{O}}_{t}\] \[\equiv\left(\phi^{\dagger}\phi-\frac{\nu^{2}}{2}\right)\left( \frac{f_{t}}{\Lambda^{2}}\left(\tilde{q}_{L}t_{R}\tilde{\phi}+\tilde{\phi}^{ \dagger}\tilde{t}_{R}q_{L}\right)+i\frac{\tilde{f}_{t}}{\Lambda^{2}}\left( \tilde{q}_{L}t_{R}\tilde{\phi}-\tilde{\phi}^{\dagger}\tilde{t}_{R}q_{L}\right) \right)\, \tag{10}\]
where \(\phi\) is the Higgs doublet, \(\tilde{\phi}=i\sigma_{2}\phi^{*}\), and \(q_{L}\) the heavy quark doublet (\(t_{L},b_{L}\)). The parameters \(\kappa_{t}\) and \(\alpha\) in Eq. (9) can be computed as
\[\kappa_{t}^{2}=\left(-1+\frac{\nu^{3}f_{t}}{\sqrt{2}m_{t}\Lambda^{2}}\right)^ {2}+\left(\frac{\nu^{3}\tilde{f}_{t}}{\sqrt{2}m_{t}\Lambda^{2}}\right)^{2} \qquad\text{and}\qquad\tan\alpha=\frac{\tilde{f}_{t}}{f_{t}-\frac{\sqrt{2}m_ {t}\Lambda^{2}}{\nu^{3}}}. \tag{11}\]
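Eq. (11) is straightforward to evaluate; a minimal sketch follows, where the numerical top mass is an assumed input and all dimensionful quantities are taken in GeV.

```python
import numpy as np

def kappa_alpha(f_t, f_t_tilde, Lambda, m_t=172.5, vev=246.0):
    """Translate the Wilson coefficients of Eq. (10) into (kappa_t, alpha)
    using Eq. (11)."""
    r = vev**3 / (np.sqrt(2.0) * m_t * Lambda**2)
    kappa_t = np.sqrt((-1.0 + r * f_t) ** 2 + (r * f_t_tilde) ** 2)
    alpha = np.arctan(f_t_tilde / (f_t - 1.0 / r))
    return kappa_t, alpha
```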
We emphasize that the SMEFT description implicitly assumes that new physics enters through higher-dimensional operators. In contrast, a \(CP\)-phase of the top Yukawa can already arise as a dimension-4 modification of the SM-Lagrangian, reflected by the scale combination \(\nu^{3}/(m_{t}\Lambda^{2})\) appearing above.
### CP-observables
There are, fundamentally, two ways of testing the \(CP\)-structure of the Higgs Yukawa coupling introduced in Eq. (9): we can measure the angle \(\alpha\) and conclude from a significant deviation \(\alpha\neq 0\) that \(CP\) is violated in the top Yukawa coupling, ideally using simulation-based inference [29, 30, 84, 46] or the matrix element method [50]. Alternatively, we can define an optimal observable for \(CP\)-violation and test the actual symmetry [12, 42, 43].
#### Classical reconstruction
To search for \(CP\)-violation, spin correlations between the top and anti-top quarks in \(t\bar{t}h\) production are ideal, because the short top-lifetime allows for a transfer of the top-polarization to the decay products prior to hadronization or spin decorrelation [85]. The angular correlation between the top-spin and the momenta of the top decay products is given by
\[\frac{1}{\Gamma_{t}}\frac{d\Gamma}{d\cos\xi_{i}}=\frac{1}{2}\left(1+\beta_{i}P _{t}\cos\xi_{i}\right)\,, \tag{12}\]
where \(\xi_{i}\) is the angle between the top spin and the \(i\)-th particle in the top quark rest frame, \(P_{t}\in[0,1]\) is the polarization of the top quark, and \(\beta_{i}\) is the spin analyzing power of the \(i\)-th decay product. Due to the left-handed nature of the weak interaction, the charged lepton and \(d\)-quark display the largest spin analyzing power,
\[\beta_{\ell^{+}}=\beta_{\bar{d}}=1\qquad\text{(to leading order)}. \tag{13}\]
While one cannot tag a \(d\)-jet, it is possible to find efficient proxies. A practical solution is to select the softer of the two light-flavor jets in the top rest frame. This choice gives this jet a spin analyzing power of roughly 50% of that of the charged lepton [86, 87, 29]. Assuming that the softer \(W\)-decay jet in the top rest frame comes from the \(d\)-quark, we can now construct appropriate angular correlations to measure.
#### Linear CP-observables
The basis of optimal observables testing a symmetry is \(U\)-even or \(U\)-odd observables, defined through their transformation properties on the incoming and outgoing states,
\[\mathcal{O}\left(U\left|i\right\rangle\to U\left|f\right\rangle\right)=\pm \mathcal{O}\left(\left|i\right\rangle\to\left|f\right\rangle\right)\,, \tag{14}\]
where in our case \(U=CP\). Furthermore, a genuine \(U\)-odd observable is defined as an observable which vanishes in a \(U\)-symmetric theory
\[\left\langle\mathcal{O}\right\rangle_{\mathcal{A}=U\cdot\mathcal{A}U^{-1}}=0\,\,. \tag{15}\]
The two definitions are related in that any \(U\)-odd observable is also a genuine \(U\)-odd observable under the condition that the initial state and the phase space are \(U\)-symmetric [88, 12], so the genuine \(U\)-odd property is weaker.
Unfortunately, we cannot infer a \(CP\)-invariant theory from \(\left\langle\mathcal{O}\right\rangle\) of a \(CP\)-odd observable alone. While \(\left\langle\mathcal{O}\right\rangle\neq 0\) always points to a \(CP\)-violating theory, the result \(\left\langle\mathcal{O}\right\rangle=0\) can appear in \(CP\)-symmetric and in \(CP\)-violating theories. To further analyze this case, we can construct a \(CP\)-odd observable that is also odd under the so-called naive time reversal \(\hat{T}\). Now, the expectation value of this observable is completely tied to the \(CP\)-symmetry of the underlying theory [12].
\(CP\)-odd observables can be constructed either as \(\hat{T}\)-even scalar products of two 4-momenta or as a \(\hat{T}\)-odd contraction of four independent 4-momenta through the Levi-Civita tensor. For the \(t\bar{t}\)-system of \(t\bar{t}h\) production we can use two top momenta and decay momenta
\[\left\{p_{b_{\ell}},p_{\ell},p_{\nu},p_{b_{h}},p_{u},p_{d}\right\}\,. \tag{16}\]
It is straightforward to construct the \(C\)-even, \(P\)-odd, and \(\hat{T}\)-odd observable
\[\mathcal{O}=\varepsilon_{\mu\nu\sigma\rho}p_{t_{h}}^{\mu}p_{t_{l}}^{\nu}p_{A}^{\rho}p_{B}^{\sigma}\,, \tag{17}\]
with suitable top decay momenta \(p_{A,B}\). We can use the \(CP\)-invariance of the initial state and the phase space for \(t\bar{t}h\) production to show that its expectation value in Eq. (17) vanishes in the SM.
In the \(t\bar{t}\) center of mass frame, we can turn Eq. (17) into a triple product, a standard form for \(CP\)-odd observables,
\[\mathcal{O}=2\,E_{t_{h}}\,\bar{p}_{t_{t}}\cdot(\bar{p}_{A}\times \bar{p}_{B})\,. \tag{18}\]
However, it depends on the top 4-momenta, which are hard to determine accurately. It can be modified by introducing the azimuthal angle difference \(\Delta\phi_{AB}^{t\bar{t}}=\phi_{A}^{t\bar{t}}-\phi_{B}^{t\bar{t}}\) in the \(t\bar{t}\) frame [23, 29],
\[\Delta\phi_{AB}^{t\bar{t}}=\text{sgn}[\bar{p}_{t_{t}}\cdot(\bar{ p}_{A}\times\bar{p}_{B})]\arccos\left[\frac{\bar{p}_{t_{t}}\times\bar{p}_{A}}{| \bar{p}_{t_{t}}\times\bar{p}_{A}|}\cdot\frac{\bar{p}_{t_{t}}\times\bar{p}_{B}} {|\bar{p}_{t_{t}}\times\bar{p}_{B}|}\right]\,, \tag{19}\]
to give us
\[\mathcal{O}=2p_{t}^{z}\,E_{t}\,p_{T,A}p_{T,B}\,\sin\Delta\phi_{AB}^{t\bar{t}}\,, \tag{20}\]
where we choose \(p_{t_{t}}=\{E_{t},0,0,p_{t}^{z}\}\) and \(p_{t_{h}}=\{E_{t},0,0,-p_{t}^{z}\}\). By construction, \(\mathcal{O}\) and \(\Delta\phi_{AB}^{t\bar{t}}\) are sensitive to the linear interference terms in the scattering cross section, and therefore sensitive to the sign of the \(CP\)-phase.
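For reference, Eq. (19) translates into a short function on 3-momenta; in this sketch all momenta are assumed to be given already in the \(t\bar{t}\) rest frame.

```python
import numpy as np

def delta_phi_AB(p_tlep, p_A, p_B):
    """Signed azimuthal separation of Eq. (19) from the 3-momenta of the
    leptonic top and the decay products A and B in the t-tbar frame."""
    nA = np.cross(p_tlep, p_A)
    nB = np.cross(p_tlep, p_B)
    nA = nA / np.linalg.norm(nA)
    nB = nB / np.linalg.norm(nB)
    sign = np.sign(np.dot(p_tlep, np.cross(p_A, p_B)))
    return sign * np.arccos(np.clip(np.dot(nA, nB), -1.0, 1.0))
```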
These linear \(CP\)-observables can be constructed for various \(\{A,B\}\) pairs, and their \(CP\)-sensitivity depends on the spin-analyzing power of the particles \(A\) and \(B\). We compute the Fisher information metric \(I\) to rank their \(CP\)-sensitivity, using MadMiner [29, 46]. The \(\alpha\)-dependent component of \(I\) is defined as
\[I=\mathbb{E}\left[\frac{\partial\log p(x|\kappa_{t},\alpha)}{ \partial\alpha}\frac{\partial\log p(x|\kappa_{t},\alpha)}{\partial\alpha} \right], \tag{21}\]
where \(p(x|\kappa_{t},\alpha)\) represents the likelihood of a phase space configuration \(x\) given the theory parameters \(\kappa_{t}\) and \(\alpha\). \(\mathbb{E}\) denotes the expectation value at the SM point, \((\kappa_{t},\alpha)_{\text{SM}}=(1,0)\). In Fig. 2, we show the Fisher information at parton-level associated with the linear \(CP\)-observables \(\mathcal{O}_{AB}\) in red and the Fisher information for \(\sin\Delta\phi_{AB}^{t\bar{t}}\) in blue.
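Given per-event scores \(t(x)=\partial\log p(x|\kappa_{t},\alpha)/\partial\alpha\) at the SM point, for instance as provided by MadMiner, Eq. (21) reduces to a Monte-Carlo average. The following sketch only illustrates that averaging step and ignores any rate contribution.

```python
import numpy as np

def fisher_information(scores, weights=None):
    """Estimate Eq. (21) as the (weighted) mean of the squared per-event
    score d log p / d alpha, evaluated at the SM point."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        return np.mean(scores**2)
    return np.average(scores**2, weights=np.asarray(weights, dtype=float))
```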
Figure 2: Fisher information \(I\) for the linear CP-observables \(\sin\Delta\phi_{AB}^{t\bar{t}}\) (blue) and \(\mathcal{O}_{AB}\) (red), probing the sensitivity to \(CP\)-violating phase \(\alpha\) in \(t\bar{t}h\) production.
First, we see that for all combinations \((A,B)\) the Fisher information in \(\mathcal{O}_{AB}\) is slightly larger than the Fisher information in \(\sin\Delta\phi_{AB}^{\,t\bar{t}}\), an effect of the momentum-dependent prefactor in \(\mathcal{O}_{AB}\). Among the various combinations \((A,B)\), the combination of the lepton and the down-type quark is the most sensitive. This corresponds to the maximal spin analyzing power for this pair. Next comes the combination where either the charged lepton or the down quark is replaced by the \(b\)-quark or the \(W\)-boson. In this case, the Fisher information is suppressed by two powers of
\[\beta_{b}=\beta_{W}\sim 0.4. \tag{22}\]
The correlation between a pair of \(b\)-quarks or \(W\)-bosons is further suppressed by another factor \(\beta_{b,W}^{2}\).
#### Non-linear observables and Collins-Soper angle
For a given realization of \(CP\)-violation in an SM-like interaction vertex, the \(CP\)-observable defined in the previous section is not guaranteed to be the most powerful observable [10]. This is obvious for dimension-6 operators, where the symmetry structure is often combined with a momentum dependence of the interaction [12], and the two aspects can, in principle, be tested independently. Comparing the two handles, \(CP\)-odd observables are only sensitive to the interference between the SM-contribution and the \(CP\)-violating matrix element, while observables testing the momentum structure of the interaction vertex can be dominated by the new-physics-squared contribution. For large \(CP\)-phases \(\alpha\), the more promising analysis strategy will use a general test of the structure of the top-Higgs coupling. This motivates using a combination of dedicated \(CP\)-observables with general interaction probes as an optimal search strategy.
Several observables have been evaluated as probes of the \(CP\)-phase \(\alpha\) in Eq. (9) using \(t\bar{t}h\) production [23, 28, 29]. They include the pseudorapidity difference between the two tops and the azimuthal angle between the two tops, the Higgs transverse momentum [81, 89], or the invariant mass of the top and anti-top pair,
\[\big{\{}\,\Delta\eta_{t\bar{t}},\Delta\phi_{t\bar{t}},p_{T,h},m_{t\bar{t}}\, \big{\}}. \tag{23}\]
These standard observables can be supplemented with the projection angle [81, 89, 90]
\[b_{4}=\frac{p_{z,t}}{|\vec{p}_{t}|}\cdot\frac{p_{z,\bar{t}}}{|\vec{p}_{\bar{t} }|}. \tag{24}\]
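Eq. (24) is a one-line function of the lab-frame top and anti-top 3-momenta, for instance:

```python
import numpy as np

def b4(p_t, p_tbar):
    """Projection observable of Eq. (24); p_t and p_tbar are 3-momenta."""
    return (p_t[2] / np.linalg.norm(p_t)) * (p_tbar[2] / np.linalg.norm(p_tbar))
```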
Finally, we can use the Collins-Soper angle \(\theta_{\rm CS}\)[44], the angle between the top quark and the bisector of the incoming hadrons in the \(t\bar{t}\) center of mass frame. The original motivation for the Collins-Soper angle was to define an observable for the Drell-Yan process \(pp\to\ell^{+}\ell^{-}\) that corresponds to the scattering angle. Factorization arguments suggest the di-lepton rest frame, to minimize ISR-effects and then study the angular correlations between the incoming quarks and the outgoing leptons. In this frame the 3-momenta of the quarks and leptons each define a plane, and in turn an azimuthal angle and a polar angle between the two planes.
Without ISR the \(z\)-axis of the so-defined CS-frame is trivially given by the parton and hadron directions. Including ISR, we instead define this \(z\)-axis as halving the angle between one of the hadrons and the reverse direction of the other hadron. The Collins-Soper angle can be used to measure the polarization of the intermediate gauge boson, the weak mixing angle [91], or the (Lorentz) structure of the interaction vertices. The Collins-Soper angle can also be used to probe the structure of the Higgs-photon coupling [92, 93] and to boost new
physics searches in \(h^{*}\to ZZ\), \(Zh\), and \(t\bar{t}Z\) channels [94, 95, 96, 97]. Finally, it can be generalized to \(t\bar{t}\) production, where it is constructed for the top momentum in the \(t\bar{t}\) rest frame [98, 23, 99]. While the Collins-Soper angle has no specific sensitivity to \(CP\)-violation, we view it as the Swiss Army knife of coupling tests.
All above-mentioned kinematic observables are sensitive to the new-physics-squared terms, proportional to \(\sin^{2}\alpha\) or \(\cos^{2}\alpha\), in the \(t\bar{t}h\) rate, with no sensitivity to the sign of the CP-phase. From Ref. [29], we know the relative sensitivity of these observables to probe the Higgs-top \(CP\)-structure through a modified Fisher information metric, accounting for non-linear effects. The top-five observables with the highest Fisher information for \(\alpha\) are (symbolically written)
\[\Delta\eta_{t\bar{t}}>\theta_{\rm CS}>b_{4}>\Delta\phi_{t\bar{t}}>p_{T,h}. \tag{25}\]
We show the parton-level distributions for the four most sensitive observables in the semileptonic \(t\bar{t}h\) channel for the SM value \(\alpha=0\) and \(\alpha=\pi/4,\pi/2\) at the LHC with \(\sqrt{s}=14\) TeV in Fig. 3. Different values of \(\alpha\) lead to distinctly different profiles in the distributions.
As alluded to above, the technical challenge and a limitation to the optimality of a given analysis is to construct the different observables in their respective kinematic frames. Considering their strong sensitivity to \(\alpha\), we include the leading observables in the phase space parametrization given in Eq. (7) to target this problem directly.
### Unfolding-based analysis
The standard challenge for every LHC analysis is to extract a small signal from a large (continuum) background. For our simple study, we show how we can avoid modeling this step. The generative unfolding trained on \(t\bar{t}h\) events gives us the probability \(p(x_{\text{part}}|x_{\text{reco}},S)\) that a parton-level signal event \(x_{\text{part}}\) corresponds to an assumed signal event \(x_{\text{reco}}\) at reconstruction level. What we are ultimately interested in, however, is a model parameter \(\alpha\), which could be a mass, a \(CP\)-phase, or any other continuous theory parameter, which affects our signal distribution. Since we do not know if a particular reco-level event \(x_{\text{reco}}\) is signal or background, we only care about the full probability \(p(\alpha|x_{\text{reco}})\) of our model parameter, given some reco-level event \(x_{\text{reco}}\) which is either signal or background. Since \(\alpha\) does not change the background, this probability can be split into the distribution \(p(\alpha|x_{\text{part}})\), where \(x_{\text{part}}\) is a parton-level signal event, and the probability \(p(x_{\text{part}}|x_{\text{reco}})\) of \(x_{\text{part}}\) given \(x_{\text{reco}}\):
\[p(\alpha|x_{\text{reco}})=\int p(\alpha|x_{\text{part}})p(x_{\text{part}}|x_{\text{reco}})\,\mathrm{d}x_{\text{part}}. \tag{26}\]
The challenge is to compute \(p(x_{\text{part}}|x_{\text{reco}})\) from our unfolding result \(p(x_{\text{part}}|x_{\text{reco}},S)\). Using the definition of conditional probabilities we can write
\[p(x_{\text{part}}|x_{\text{reco}}) =\sum_{T\in\{S,B\}}p(x_{\text{part}}|x_{\text{reco}},T)p(T|x_{ \text{reco}})\] \[=p(x_{\text{part}}|x_{\text{reco}},S)p(S|x_{\text{reco}})+p(x_{ \text{part}}|x_{\text{reco}},B)(1-p(S|x_{\text{reco}}))\, \tag{27}\]
where the probabilities of \(x_{\text{reco}}\) being a signal or background event, \(p(T|x_{\text{reco}})\), can be encoded in a trained classifier. Let us consider for a moment what the probability \(p(x_{\text{part}}|x_{\text{reco}},B)\) tells us. We are interested in signal events \(x_{\text{part}}\), i.e. events that are affected by \(\alpha\). By definition, background events \(x_{\text{reco}}\) cannot give us any information, beyond prior knowledge, about \(x_{\text{part}}\). For this reason, we can drop \(x_{\text{reco}}\) and write \(p(x_{\text{part}}|x_{\text{reco}},B)=p(x_{\text{part}})\), where \(p(x_{\text{part}})\) is only constrained through prior knowledge. This includes our model assumptions as well as phase-space constraints due to a finite center-of-mass energy. We can now write
\[p(x_{\text{part}}|x_{\text{reco}})=p(x_{\text{part}}|x_{\text{reco}},S)p(S|x_{ \text{reco}})+p(x_{\text{part}})(1-p(S|x_{\text{reco}})). \tag{28}\]
What Eq. (28) shows is that we can limit our unfolding model to extracting \(p(x_{\text{part}}|x_{\text{reco}},S)\) and still include background events into our analysis later, without changing our model.
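In practice, Eq. (28) can be sampled event by event: with probability \(p(S|x_{\text{reco}})\) we draw from the unfolding network, otherwise from the prior. The sketch below assumes three placeholder callables for the cINN sampler, the prior sampler, and the trained signal-background classifier; none of them correspond to a specific existing API.

```python
import numpy as np

def sample_parton(x_reco, unfold_sample, prior_sample, p_signal, n=1000):
    """Draw n parton-level samples for one reco-level event following
    the mixture of Eq. (28)."""
    p_s = float(p_signal(x_reco))              # p(S | x_reco)
    n_sig = np.random.binomial(n, p_s)         # events treated as signal
    sig = unfold_sample(x_reco, n_sig)         # from p(x_part | x_reco, S)
    bkg = prior_sample(n - n_sig)              # from the prior p(x_part)
    return np.concatenate([sig, bkg], axis=0)
```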
As given in Eq. (8), we study \(pp\to t_{h}\bar{t}_{\ell}h\) production with \(h\to\gamma\gamma\) at the HL-LHC. The dominant background is continuum \(t\bar{t}\gamma\gamma\) production, subdominant contributions arise from the process \(Wb\,\bar{b}(h\to\gamma\gamma)\). We use MadGraph5_aMC@NLO [100] with NNPDF2.3QED [101] to generate signal events at leading order with \(\sqrt{s}=14\) TeV. Signal events are simulated without kinematic cuts using the HC_NLO_X0 UFO model [102, 103]. Parton showering and hadronization effects are simulated using Pythia 8 [104]. The detector response is simulated with Delphes 3 [105], using the default ATLAS HL-LHC card [106, 107].
Next, we select events containing exactly two photons, two \(b\)-tagged jets, one lepton, and at least two light-flavored jets. The individual particles in the final state are required to satisfy the acceptance cuts
\[p_{T,b}>25\text{ GeV}\,,\quad p_{T,j}>25\text{ GeV}\,,\quad p_{T,\ell}>15\text{ GeV}\,,\quad p_{T,\gamma}>15\text{ GeV}\,,\] \[|\eta_{b}|<4\,,\qquad\quad|\eta_{j}|<5\,,\qquad\quad|\eta_{\ell}|<4\,,\qquad\quad|\eta_{\gamma}|<4\,. \tag{29}\]
At the parton-level, the signal phase space involves eight final state particles; following Sec. 2.3 it requires 22 parameters if we assume that the Higgs is fully and uniquely reconstructed.
The training dataset involves an event-wise pairing of parton and detector level events with up to six light-flavored jets, satisfying the selection cuts in Eq. (29). While the event at the reconstruction level requires additional degrees of freedom for jet radiation, the number of degrees of freedom is reduced by the neutrino. An additional challenge is the combinatorics of the \(b\)-jets and light-flavor jets.
### Results
#### Jet combinatorics
The first results from unfolding \(t\bar{t}h\) SM-events are presented in Fig. 4. We train the unfolding network on SM-events and also apply it to SM-events. First, we examine the robustness of the network to unfold a variable number of jets to the parton-level. For our lepton-hadron reference process in Eq. (8) two light-flavor jets come from the hadronic top decay, while additional jets arise from QCD jet radiation. The unfolding network has to reconstruct the two hard jets at the parton level from a variable number of jets at the detector level [60].
To evaluate the unfolding performance, we examine four invariant masses: \(m_{t_{t}}\), \(m_{t_{b}}\), \(m_{h_{t_{b}}}\), and \(m_{t_{t}t_{b}}\). We train one network on SM events without ISR and one network on events with up to six light-flavored jets. The corresponding cINN-generated distributions are shown as
Figure 4: Jet combinatorics — cINN-generated distributions for \(m_{t_{t}}\), \(m_{t_{b}}\), \(m_{h_{t_{b}}}\) and \(m_{t_{t}t_{b}}\) in the SM. Unfolded distributions are shown as solid lines, parton-level truth as dashed lines. The training data set either does not include ISR (green) or up to six ISR jets (red).
solid lines in Fig. 4. The parton-level truth is displayed as dashed lines. We find that unfolded distributions generated by both networks are in good agreement with the parton-level truth in the bulk of the phase space. Despite the added combinatorial ambiguity, the performance of both networks is largely comparable. We also show the uncertainties from the Bayesian setup, represented as \(1\sigma\) error bands. They test the stability of the unfolding network similarly to an ensemble of networks. It is important to observe that the truth distributions remain within these error bands.
#### Reconstructing dedicated observables
For Fig. 5 we train the unfolding network on SM events with up to six light-flavor jets. We compare cINN-generated events at the parton-level and in the appropriate rest frame with events from a classical reconstruction for four particularly interesting observables from Sec. 3.1: \(\theta_{\text{CS}}\), \(\Delta\eta_{t\bar{t}}\), \(b_{4}\), and \(\Delta\phi_{t\bar{t}}\). For comparison, we display the parton-level truth as dashed lines. In the ratio we observe that the generated distributions agree with the truth within a few percent. Slightly larger deviations in the tails are due to limited training statistics.
The conventional approach to complex kinematic correlations in the semileptonic \(t\bar{t}\) system relies on a complex reconstruction algorithm, with a significant loss of information due to missing correlations [29]. We show the reconstructed distributions from the classical
Figure 5: Reconstructing dedicated observables — cINN-generated distributions and distributions based on classical reconstruction [29] for \(\theta_{\text{CS}}\), \(\Delta\eta_{t\bar{t}}\), \(b_{4}\) and \(\Delta\phi_{t\bar{t}}\) for SM events. The secondary panels show the bin-wise agreement between the cINN-generated distributions and the parton-level truth.
Figure 6: Model dependence — cINN-generated distributions for \(\theta_{\text{CS}}\), \(\Delta\eta_{t_{t}t_{h}}\), \(b_{4}\), and \(\Delta\phi_{t_{t}t_{h}}\). Upper two rows: unfolding of SM events using three different networks, trained on data with \(\alpha=-\pi/4,0,\pi/4\). Lower two rows: unfolding of events with \(\alpha=-\pi/4,0,\pi/4\), with a network trained on SM events.
reconstruction strategy developed in Ref. [29] as dotted lines. Comparing these distributions to the cINN-unfolded version, we see that at least for a network trained and tested on SM events, the improvement from generative unfolding is striking.
#### Model dependence
After observing the significant improvement through our new method for SM events, we need to test how model-dependent the network training is. In the upper panels of Fig. 6, we reconstruct the usual set of key observables for SM events, but with three different networks, trained on events generated with the \(CP\)-angles \(\alpha=-\pi/4,0,\pi/4\). We adopt the BSM values \(\alpha=\pm\pi/4\) here, as these choices closely align with the current experimental limits [108, 109]. From Fig. 3 we expect, for instance, the network trained on events with \(\alpha=\pi/4\) to be biased towards a broader \(\theta_{\text{CS}}\) distribution, a wider rapidity difference \(\Delta\eta_{t_{t},t_{h}}\), and a flatter \(b_{4}\) distribution. In the different panels we see a slight bias, especially in the secondary panels. But the bias remains at the order of 10%, at most 20%, much below the change in the corresponding distributions from varying \(\alpha\). On the other hand, this bias is significantly larger than the uncertainty band, which indicates that this model dependence can be reduced through the proposed iterative method of Ref. [62]. The corresponding study is beyond the scope of this paper, because it balances a reduced bias of the unfolding with less statistics, an aspect which we do not include in this proof-of-principle study.
In the lower panels of Fig. 6 we test the model dependence the other way around, by unfolding data for different \(\alpha=-\pi/4,0,\pi/4\) using a network trained on SM events. The figures of merit are the ratios of cINN-unfolded and respective truth distributions, shown in the secondary panels. This situation is closer to the reality of a measurement, where we infer \(\alpha\) by comparing the distribution extracted from data to different simulated hypotheses. As before, we see a slight bias, now towards the SM structure of a more narrow \(\theta_{\text{CS}}\) distribution, a narrow rapidity difference \(\Delta\eta_{t_{t},t_{h}}\), and a steeper \(b_{4}\) distribution. Also, as before, the effect
Figure 7: Sensitivity — Left: cINN-generated distributions for \(\theta_{\text{CS}}\) from unfolding events with two \(\alpha\) values. These generated distributions are compared to the distributions obtained from classical reconstruction methods, as described in Ref. [29], and the respective truth. Right: To quantify the sensitivity of the cINN, as shown here for \(\theta_{\text{CS}}\), we compute the reduced \(\chi^{2}\)-value between the distributions (\(\sim\)120k events and 64 bins) for both \(\alpha\) values, using the Poisson errors of the bin counts. We do this for the cINN-generated (red), classically reconstructed (green), and truth distributions (blue) of \(\theta_{\text{CS}}\), \(\Delta\eta_{t_{t},t_{h}}\), \(b_{4}\), and \(\Delta\phi_{t_{t},t_{h}}\). In the bottom panel, we also show the ratio of the reduced \(\chi^{2}\)-values of the cINN and the classical reconstruction to the truth. Uncertainties on cINN-generated values are obtained from the Bayesian setup.
of the bias is much smaller than the effect of \(\alpha\) on the data, leaving us optimistic that we can use the cINN-unfolded distribution to measure \(\alpha\).
#### Sensitivity
Finally, in Fig. 7, we apply the generative unfolding to SM and \(\alpha=\pi/4\) events. The unfolding network is trained on SM events. As a baseline comparison, we also show the same two curves for classical reconstruction of \(\theta_{\text{CS}}\), following Ref. [29] as dotted lines in the left panel of Fig. 7. As mentioned earlier, generative unfolding leads to a major improvement over classical reconstruction. The difference in the two unfolded kinematic distributions, shown in solid lines, illustrates the reach of an analysis based on the kinematic distribution. To showcase the improvement in new physics sensitivity, we calculate the reduced \(\chi^{2}\) values for \(\theta_{\text{CS}}\), \(\Delta\eta_{t_{t}t_{h}}\), \(b_{4}\), and \(\Delta\phi_{t_{t}t_{h}}\) between the SM and \(\alpha=\pi/4\) hypotheses, using the Poisson errors of the bin counts. The reduced \(\chi^{2}\) values are computed with \(\sim\)120k events and 64 bins, for three scenarios: parton-level truth (blue), classical reconstruction from Ref. [29] (green), and the cINN-based generative model trained on SM events (red). A higher \(\chi^{2}\) value indicates a greater sensitivity to new physics.
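One simple way to evaluate this sensitivity measure is sketched below: a reduced \(\chi^{2}\) between two binned distributions with Poisson bin errors. The exact binning and error treatment used for Fig. 7 may differ; this is an illustrative convention.

```python
import numpy as np

def reduced_chi2(counts_a, counts_b):
    """Reduced chi^2 between two histograms with Poisson bin errors,
    sigma_i^2 = n_a,i + n_b,i, normalized to the number of filled bins."""
    a = np.asarray(counts_a, dtype=float)
    b = np.asarray(counts_b, dtype=float)
    mask = (a + b) > 0                     # skip empty bins
    chi2 = np.sum((a[mask] - b[mask]) ** 2 / (a[mask] + b[mask]))
    return chi2 / mask.sum()
```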
The results show that the unfolding setup leads to an enhancement in sensitivity compared to the classical reconstruction strategy. This indicates that the generative unfolding approach is effective in extracting more information from the kinematic distributions, thereby improving the analysis' capability to detect and explore new physics phenomena. We further observe that the network is slightly more consistent in reproducing the sensitivity relations of the true observable distributions than the classical reconstruction. The latter performs well on some observables, but quite badly for others. Especially surprising is the classical sensitivity for \(\theta_{\text{CS}}\), given that the reconstruction here is far from the actual CS-angle.
## 4 Outlook
Unfolding is one of the most exciting developments in analysis preservation and publication at the LHC. Modern machine learning makes it possible to unfold high-dimensional distributions, covering all correlations without binning. Generative unfolding defines this unfolding in a statistically consistent manner. However, using unfolded data is a challenge for the ATLAS and CMS analysis chains, especially in controlling and estimating uncertainties.
We investigated a simpler application of the unfolding technique, the extraction of a kinematic observable in a specific partonic reference frame. It solves the dilemma that on the one hand an optimal observable requires no complex correlations, but on the other hand such an observable is, typically, hard to reconstruct. In this case the generated kinematic distribution can be used like any other observable; the unfolding network is nothing but a kinematic reconstruction algorithm.
The perfect examples for a challenging kinematic correlation are the Collins-Soper angle or the optimal \(CP\)-observables in \(t\bar{t}h\) production. They allow us to measure a \(CP\)-phase in the top Yukawa coupling, a cosmologically relevant parameter entering an LHC signature at dimension four and at leading order. We have shown that unfolding allows us to extract the leading observables for such a \(CP\)-phase \(\alpha\), with the help of an appropriate phase space parametrization. While such a parametrization might shape the unfolded kinematic distribution, this effect can be controlled through calibration.
First, we have shown that the cINN-unfolding can solve the combinatorics of \(W\)-decay jets vs QCD jet radiation. Second, the unfolded distributions of SM events, with a network trained
on SM events, show excellent agreement with the parton-level truth. Potential differences are covered by the uncertainty estimate from the Bayesian network. Third, we have tested the model dependence in two different ways -- unfolding SM events using networks trained on events with different amounts of \(CP\)-violation and unfolding events with \(CP\)-violation using a network trained on SM events. For the former, we have found that there exists a small, but significant model dependence, which can be removed through Bayesian iterative improvements. For the latter, the unfolded distributions do not perfectly reproduce the respective truth, but the bias is much smaller than the kinematic effect of the \(CP\)-angle.
All these tests have motivated a comparison of the reach of the HL-LHC for the \(CP\)-angle \(\alpha\), based on classical reconstruction methods and on cINN-unfolded distributions. The generative unfolding approach effectively extracts more information from kinematic distributions, enhancing sensitivity to new physics phenomena. This highlights the importance of advanced machine learning techniques, such as cINNs, for the HL-LHC.
While this study is clearly not the last word on this analysis technique, we consider the outcome promising enough for an experimental study, with a proper treatment of statistical limitations, continuum backgrounds, calibration, and iterative improvements of the unfolding network.
## Acknowledgements
RKB and DG thank the U.S. Department of Energy for financial support, under grant number DE-SC0016013. Some computing for this project was performed at the High Performance Computing Center at Oklahoma State University, supported in part through the National Science Foundation grant OAC-1531128. TH is funded by the Carl-Zeiss-Stiftung through the project _Model-Based AI: Physical Models and Deep Learning for Imaging and Cancer Treatment_. This research is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257: Particle Physics Phenomenology after the Higgs Discovery and through Germany's Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster).
|
2308.16645 | Detecting Delamination via Nonlinear Wave Scattering in a Bonded Elastic
Bar | In this paper we examine the effect of delamination on wave scattering, with
the aim of creating a control measure for layered waveguides of various bonding
types. Previous works have considered specific widths of solitary waves for the
simulations, without analysing the effect of changing the soliton parameters.
We consider two multi-layered structures: one containing delamination
"sandwiched" by perfect bonding and one containing delamination but
"sandwiched" by soft bonding. These structures are modelled by coupled
Boussinesq-type equations. Matched asymptotic multiple-scale expansions lead to
coupled Ostrovsky equations in soft bonded regions and Korteweg-De Vries
equations in the perfectly bonded and delaminated region. We use the Inverse
Scattering Transform to predict the behaviour in the delaminated regions. In
both cases, numerical analysis shows that we can predict the delamination
length by changes in the wave structure, and that these changes depend upon the
Full Width at Half Magnitude (FWHM) of the incident soliton. In the case of
perfect bonding, we derive a theoretical prediction for the change and confirm
this numerically. For the soft bonding case, we numerically identify a similar
relationship using the change in amplitude. Therefore we only need to compute
one curve to determine the behaviour for any incident solitary wave, creating a
framework for designing measurement campaigns for rigorously testing the
integrity of layered structures. | J. S. Tamber, D. J. Chappell, J. C. Poore, M. R. Tranter | 2023-08-31T11:33:12Z | http://arxiv.org/abs/2308.16645v1 | # Detecting Delamination via Nonlinear Wave Scattering in a Bonded Elastic Bar
###### Abstract
In this paper we examine the effect of delamination on wave scattering, with the aim of creating a control measure for layered waveguides of various bonding types. Previous works have considered specific widths of solitary waves for the simulations, without analysing the effect of changing the soliton parameters. We consider two multi-layered structures: one containing delamination 'sandwiched' by perfect bonding and one containing delamination but 'sandwiched' by soft bonding. These structures are modelled by coupled Boussinesq-type equations. Matched asymptotic multiple-scale expansions lead to coupled Ostrovsky equations in soft bonded regions and Korteweg-De Vries equations in the perfectly bonded and delaminated region. We use the Inverse Scattering Transform to predict the behaviour in the delaminated regions.
In both cases, numerical analysis shows that we can predict the delamination length by changes in the wave structure, and that these changes depend upon the Full Width at Half Magnitude (FWHM) of the incident soliton. In the case of perfect bonding, we derive a theoretical prediction for the change and confirm this numerically. For the soft bonding case, we numerically identify a similar relationship using the change in amplitude. Therefore we only need to compute one curve to determine the behaviour for any incident solitary wave, creating a framework for designing measurement campaigns for rigorously testing the integrity of layered structures.
**Keywords:** nonlinear waves, wave scattering, solitons, Inverse Scattering Transform
## 1 Introduction
Solitary waves are of significant interest from both a mathematical perspective as well as in physical and engineering applications. They often arise as solutions to nonlinear equations such as the KdV equation (and its extensions) in shallow water [1, 2, 3, 4], the Benjamin-Ono equation for internal waves of stratified fluids [5, 6], the nonlinear Schrodinger equation in optics [7, 8], and flexural waves in the beam equation [9], to name a few. From a purely mathematical perspective, there are many studies into the existence and behaviour of solitons, for example as solutions to the Boussinesq equation [10, 11]. Boussinesq-type equations are of interest in this study, in the context of solid mechanics. It is well-known that they can describe long longitudinal bulk strain waves in elastic waveguides, such as rods and metal plates (see e.g. [12, 13, 14, 15, 16, 17]). Practical experiments have confirmed that longitudinal bulk strain solitons exist in these waveguides, which validated theoretical findings [18, 19, 20, 21].
Indeed, layered waveguides with bonding between the layers can be modelled by the so-called "doubly dispersive equation" (DDE), which can be derived using nonlinear elasticity theory for long longitudinal waves in a bar of rectangular cross-section [12, 13]. The DDE for a bar of rectangular cross-section \(\sigma=2a\times 2b\) has the form
\[f_{tt}-c^{2}f_{xx}=\frac{\beta}{2\rho}(f^{2})_{xx}+\frac{Jv^{2}}{\sigma}(f_{tt }-c_{1}^{2}f_{xx})_{xx}, \tag{1}\]
where
\[\beta =3E+2l(1-2v)^{3}+4m(1+v)^{2}(1-2v)+6nv^{2},\] \[c =\sqrt{\frac{E}{\rho}},\quad c_{1}=\frac{c}{\sqrt{2(1+v)}},\quad J =\frac{4ab(a^{2}+b^{2})}{3}, \tag{2}\]
\(\rho\) is the density, \(E\) is the Young's modulus, \(v\) is the Poisson's ratio, while \(l\), \(m\), \(n\) are the Murnaghan's moduli, and \(a\) and \(b\) are geometric and physical parameters.
The case when the interlayer bonding is missing over part of the structure, known as delamination, is important for a wide range of applications in non-destructive testing for structural damage in the multi-layer beam-like structures found throughout civil and mechanical engineering. The governing mathematical model then takes the form of a scattering problem and, for a perfectly bonded waveguide (represented in experiments by cyanoacrylate), we find a series of Boussinesq equations, with continuity conditions on the interface between sections [22]. Incident solitons fission into multiple solitons with dispersive radiation, agreeing with theoretical predictions [22], numerical simulations [23, 24] and experimental observations [20, 25, 26].
In the case of an imperfect "soft" bonding (represented in experiments by polychloroprene), a model based upon a series of anharmonic coupled dipoles can be used to derive coupled regularised Boussinesq (cRB) equations to model long nonlinear longitudinal bulk strain waves in a bi-layer, assuming sufficiently soft bonding [27]. In this case, when the materials in the layers are sufficiently close, an incident solitary wave evolves in the bonded region into a solitary wave with a one-sided, co-propagating oscillatory tail, known as a radiating solitary wave. In the delaminated regions, the solitary wave detaches from its tail and this can be used as a measure of delamination [28]. More recently, in the limiting case of a semi-infinite delamination, we find the emergence of Ostrovsky wave packets in bonded regions [29]. The Ostrovsky equation originally arose in the context of shallow-water waves, where the rotation of the Earth is considered [30], and the evolution of wave packets generated from an initial pulse has been extensively studied [31].
In this paper we aim to use theoretical predictions and numerical simulations to establish a prediction for the delamination length based upon changes in the wave during its propagation. We will consider a range of initial conditions by varying the Full Width at Half Magnitude (FWHM) of the incident wave, whereas previous studies have only considered a single fixed incident soliton [23, 28]. Our aim is to find a relationship between the generated delamination curves for different values of FWHM, so that only one curve needs to be computed, significantly reducing the computation time. This wider range of predictions allows for the design of measurement campaigns for detecting and measuring delamination in layered waveguides. We will consider a multi-layered symmetric structure with perfect bonding, as well as a two-layered structure with soft bonding. In both cases, we will consider delamination 'sandwiched' by bonding. These structures are illustrated in figures 1 and 2, and are inspired by an existing experimental set-up [20].
The paper is structured as follows. In Section 2 we introduce the equations describing longitudinal wave propagation in both the perfectly bonded and soft bonded cases. We also introduce the weakly-nonlinear solution for the perfectly bonded case so that we can create a measure of the delamination length using theoretical predictions. In Section 3, we begin by illustrating the evolution of incident solitary waves in the cases of both a perfectly bonded waveguide and a soft bonded waveguide. Next, for the perfectly bonded case, we use theoretical predictions to determine the length of the delamination region for a variety of incident solitary waves, tested via numerical simulations and measuring the amplitude of the transmitted wave with reference to the incident wave. This gives rise to a relationship between FWHM of the incident soliton and the delamination length, allowing for efficient computation for
other incident waves. A similar result is also found for the soft bonded case, analysing the decrease in amplitude after the wave propagates through a delaminated region. The theoretical prediction is difficult in this case, so we rely on numerical observations and instruction from the previous case. In Section 4 we conclude our discussions.
## 2 Problem set-up
### Perfectly bonded case
We consider the scattering of a long longitudinal strain solitary wave in a perfectly bonded layered bar with delamination in the centre, as illustrated in Figure 1. Note that we only illustrate two layers in this figure, but the setup can accommodate any number of symmetric layers, as we assume that the materials of the layers are identical and that the bonding is the same between all layers. This problem is described by the regularized non-dimensional Boussinesq equations [22]
\[u_{tt}^{(i)}-c_{i}^{2}u_{xx}^{(i)}=\varepsilon\left[-12\alpha_{i}u_{x}^{(i)}u_ {xx}^{(i)}+2\beta_{i}u_{ttxx}^{(i)}\right], \tag{3}\]
where \(i=1,3\) represent the perfect bonded regions and \(i=2\) represents the delaminated region. We have the coefficients \(\alpha_{i}\), \(\beta_{i}\) and \(c_{i}\), which can theoretically vary between sections (representing a waveguide with different materials in each section), but for our purposes we will assume that the sections are of one and the same material. The parameter \(\varepsilon\) is the small wave parameter. The Boussinesq equations are complemented with continuity conditions, namely continuity of longitudinal displacement
\[u^{(i)}\Big{|}_{x=x_{i}}=\left.u^{(i+1)}\right|_{x=x_{i}}, \tag{4}\]
and continuity of normal stress
\[\sigma^{(i)}|_{x=x_{i}}=\sigma^{(i+1)}|_{x=x_{i}}, \tag{5}\]
where \(\sigma^{(i)}\) is defined by our original equation (3) when written in the form
\[u_{tt}^{(i)}=\frac{\mathrm{d}\sigma^{(i)}}{\mathrm{d}x}.\]
We consider \(\alpha_{i}=1\) for all \(i\), \(\beta_{1,3}=1\) and
\[\beta_{2}(n,k)=\frac{n^{2}+k^{2}}{n^{2}(1+k^{2})}, \tag{6}\]
where \(n\) represents the number of layers in the structure and \(k\) is defined by the geometry of the waveguide. Referring to Figure 1, the cross section has width \(2a\) and the height of each layer is \(2b/n\). In terms of these values, as there are two layers, \(n=2\) and \(k=b/a\). In our numerical simulations we will consider various \(n\) and \(k\) values.
#### 2.1.1 Weakly-nonlinear solution
In order to find theoretical predictions for the evolution of the solitary waves, we construct a weakly-nonlinear solution and use theoretical results for the derived equations. For brevity, we only provide a summary of the results below, more details can be found in Refs. [23, 24, 28, 29]. We seek a weakly-nonlinear solution for the strains \(f^{(i)}=u_{x}^{(i)}\) of the form
\[f^{(i)} =T^{(i)}(\xi,X)+R^{(i)}(\eta,X)+\varepsilon P^{(i)}(\xi,\eta,X)\] \[\quad+\mathcal{O}\left(\varepsilon^{2}\right), \tag{7}\]
where \(\xi=x-c_{i}t\), \(\eta=x+c_{i}t\) and \(X=\varepsilon x\). Substituting the respective weakly-nonlinear solutions into the differentiated form of (3), then applying space-averaging (see [23, 28, 29]) yields the leading order solutions
\[T_{X}^{(i)}-6\frac{\alpha_{i}}{c_{i}^{2}}T^{(i)}T_{\xi}^{(i)}+\beta_{i}T_{ \xi\xi\xi}^{(i)}=0, \tag{8}\]
Figure 1: Bi-layer structure with an initial perfect bonded region for \(x_{0}<x<x_{1}\), a delaminated region for \(x_{1}<x<x_{2}\) and a perfect bonded region for \(x_{2}<x<x_{3}\). We assume that the materials in both layers are identical.
\[R_{X}^{(i)}-6\frac{\alpha_{i}}{c_{i}^{2}}R^{(i)}R^{(i)}_{\eta}+\beta_{i}R^{(i)}_{ \eta\eta\eta}=0. \tag{9}\]
To determine "initial conditions" for the equations derived in each section, we substitute (7) into the continuity conditions (4) - (5) to find values for \(T\) and \(R\) at the interface, in terms of the previous transmitted wave. This gives rise to transmission and reflection coefficients in terms of \(c_{i}\). As we have assumed that the waveguide is one and the same material (so \(c_{i}=1\) for all \(i\)) we have full transmission and no reflection.
#### 2.1.2 Theoretical predictions
We can rewrite the transmitted wave equation (8) in the form
\[U_{\tau}-6UU_{\chi}+U_{\chi\chi\chi}=0,\quad U|_{\tau=0}=U_{0}(\chi). \tag{10}\]
For a sufficiently rapidly decaying initial condition \(U_{0}(\chi)\) on the infinite line, the solution to (10) is related to the spectral problem for the Schrodinger equation
\[\Psi_{\chi\chi}+[\lambda-U_{0}(\chi)]\Psi=0, \tag{11}\]
where \(\lambda\) is the spectral parameter. Finding the evolution of the scattering data for the discrete and continuous spectra and using these to reconstruct the solution to the KdV equation is known as the Inverse Scattering Transform (IST) [32]. We can use the results from the IST to create theoretical predictions for the solitons in the delaminated region, as well as in the second bonded region.
We assume that there is an incident soliton in the first region, which is a travelling wave solution and thus will move in time but retains its shape. To illustrate the theoretical predictions we consider the second region, where we have \(\beta_{2}\) defined as in (6) and the initial condition for (10) in this region then takes the form
\[U_{0}(\chi)=-A\text{sech}^{2}\left(\frac{\chi}{l}\right),\quad A=\frac{v}{2 \beta_{2}},\quad l=\frac{2}{\sqrt{v}}. \tag{12}\]
In this case the solution will consist of either one soliton, or a series of solitons, characterised by eigenvalues in the discrete spectrum, and accompanying dispersive radiation determined by the continuous spectrum. In some cases we may see the fission of the initial soliton, which is when more than one soliton is generated, in particular when \(\beta_{2}\neq 1\).
The discrete eigenvalues of (11) take the form \(\lambda=-k_{n}^{2}\), where
\[k_{n}=\frac{1}{2l}\left[\sqrt{1+4Al^{2}}-(2n-1)\right], \tag{13}\]
for \(n=1,2,\ldots,N\). Recalling (6), the number of solitons, \(N\), generated in the delaminated region is given by the largest integer satisfying the inequality
\[N<\frac{1}{2}\left(\sqrt{1+\frac{8}{\beta_{2}}}+1\right). \tag{14}\]
We can see from (14) that, for \(\beta_{2}=1\) we will have one soliton, while for \(\beta_{2}<1\), we will have more than one soliton and as \(\beta_{2}\) becomes smaller, more solitons will be generated. This corresponds to either an increase in layers in the waveguide, or a change in geometry. As \(\tau\rightarrow\infty\), the solution will evolve into a train of solitary waves, ordered by their heights, propagating to the right and some dispersive radiation (a dispersive wave train) propagating to the left (in the moving reference frame), i.e.
\[U(\chi,\tau) \sim-\sum_{n=1}^{N}2k_{n}^{2}\text{sech}^{2}(k_{n}(\chi-4k_{n}^{2} \tau-\chi_{n}))\] \[\quad+\text{radiation}, \tag{15}\]
where \(\chi_{n}\) is the phase shift. In the context of our problem, if there is an infinite delamination then the solitons will separate and rank order, while for finite delamination the solitons will only separate for a large delamination. This allows us to create a measure of the delamination length, by comparing the measured signal at the end of the bar to the theoretical prediction.
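The IST predictions of Eqs. (12)-(15) are easy to evaluate numerically; a short sketch follows, where the incident speed and the layer geometry in the usage example are illustrative choices rather than values taken from our simulations.

```python
import numpy as np

def beta_2(n, k):
    """Dispersion coefficient of Eq. (6) for n layers and k = b/a."""
    return (n**2 + k**2) / (n**2 * (1.0 + k**2))

def ist_spectrum(v, b2):
    """Discrete spectrum for the initial condition (12): returns the k_n
    of Eq. (13) for the N solitons allowed by Eq. (14) and their
    asymptotic amplitudes 2*k_n^2 from Eq. (15)."""
    A, l = v / (2.0 * b2), 2.0 / np.sqrt(v)
    bound = 0.5 * (np.sqrt(1.0 + 4.0 * A * l**2) + 1.0)
    N = int(np.ceil(bound)) - 1          # largest integer strictly below the bound
    n = np.arange(1, N + 1)
    k = (np.sqrt(1.0 + 4.0 * A * l**2) - (2 * n - 1)) / (2.0 * l)
    return k, 2.0 * k**2

# example: two square layers (n = 2, k = 1) and an incident speed v = 0.25
k_n, amplitudes = ist_spectrum(0.25, beta_2(2, 1.0))
```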
We introduce the incident solitary wave for \(T^{(1)}\), the exact travelling wave solution of (8), as
\[T^{(1)}(\xi,X)=-\frac{v}{2}\text{sech}^{2}\left(\frac{\sqrt{v}}{2}\left(\xi- vX\right)\right), \tag{16}\]
where \(v\) is the phase speed. Solitary waves are often measured in experiments in terms of their Full Width at Half Magnitude (FWHM), so we
rewrite this as
\[-\frac{v}{2}\mathrm{sech}^{2}\left(\frac{\sqrt{v}}{4}\mathrm{FWHM}\right)=-\frac{ v}{4}, \tag{17}\]
and hence we obtain
\[v=\left(\frac{4}{\mathrm{FWHM}}\cosh^{-1}(\sqrt{2})\right)^{2}. \tag{18}\]
This allows us to generalise the FWHM based measure to any size of incident solitary wave.
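Eq. (18) in code form, for converting a measured FWHM into the phase speed of the incident soliton (16):

```python
import numpy as np

def speed_from_fwhm(fwhm):
    """Phase speed v of the incident soliton from its FWHM, Eq. (18)."""
    return (4.0 * np.arccosh(np.sqrt(2.0)) / fwhm) ** 2
```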
### Imperfect bonding case
The second case we consider is when we have a two layered waveguide with "soft" bonding between the layers.
This is illustrated in Figure 2. The longitudinal displacement in the bonded regions is described by the regularized non-dimensional equations
\[u_{tt}^{(i)}-u_{xx}^{(i)} =\varepsilon\left[-12u_{x}^{(i)}u_{xx}^{(i)}+2u_{ttxx}^{(i)}\right]\] \[\quad-\varepsilon\delta(u^{(i)}-w^{(i)}), \tag{19}\] \[w_{tt}^{(i)}-c^{2}w_{xx}^{(i)} =\varepsilon\left[-12\alpha w_{x}^{(i)}w_{xx}^{(i)}+2\beta w_{ ttxx}^{(i)}\right]\] \[\quad+\varepsilon\gamma(u^{(i)}-w^{(i)}), \tag{20}\]
for \(x_{i-1}<x<x_{i}\), while in the delaminated regions we have Boussinesq equations
\[u_{tt}^{(i)}-u_{xx}^{(i)}=\varepsilon\left[-12u_{x}^{(i)}u_{xx}^{(i)}+2u_{ ttxx}^{(i)}\right], \tag{21}\]
\[w_{tt}^{(i)}-c^{2}w_{xx}^{(i)}=\varepsilon\left[-12\alpha w_{x}^{(i)}w_{xx}^{( i)}+2\beta w_{ttxx}^{(i)}\right]. \tag{22}\]
As with the perfectly bonded case, these equations are complemented with continuity conditions at the interfaces between the sections. We have continuity of longitudinal displacement
\[u^{(i)}|_{x=x_{i}} =u^{(i+1)}|_{x=x_{i}},\] \[w^{(i)}|_{x=x_{i}} =w^{(i+1)}|_{x=x_{i}}, \tag{23}\]
and continuity of normal stress
\[\sigma_{u}^{(i)}|_{x=x_{i}} =\sigma_{u}^{(i+1)}|_{x=x_{i}},\] \[\sigma_{w}^{(i)}|_{x=x_{i}} =\sigma_{w}^{(i+1)}|_{x=x_{i}}, \tag{24}\]
for \(i=1,2\), where \(\sigma_{u}\) and \(\sigma_{w}\) are defined by (19) and (20) as
\[u_{tt}^{(i)} =\frac{\mathrm{d}\sigma_{u}^{(i)}}{\mathrm{d}x}-\delta(u^{(i)}-w ^{(i)}),\] \[w_{tt}^{(i)} =\frac{\mathrm{d}\sigma_{w}^{(i)}}{\mathrm{d}x}+\gamma(u^{(i)}-w ^{(i)}),\]
respectively. We consider here the case where the materials in the layers are similar, namely \(c-1=\mathcal{O}(\varepsilon)\). We can construct a weakly-nonlinear solution to this system of equations, as was done in [28]; however, we cannot obtain any direct theoretical predictions from this approach, as the derived coupled Ostrovsky equations are not solvable via the Inverse Scattering Transform. Therefore, we will explore this case numerically to determine a measure of delamination.
## 3 Numerical results
We now aim to use the derived weakly-nonlinear solution and the theoretical predictions of Section 2 to introduce a measure of the delamination length in terms of the change in wave structure. In this section, we first demonstrate the effect of delamination on the transmitted soliton for both the perfectly bonded and soft bonded waveguides in Section 3.1. We then introduce a measure of the delamination length for the perfectly bonded case in Section 3.2, and for the soft bonded case in Section 3.3. In both cases we consider how these measures scale with respect to the incident soliton in order to rapidly recompute results for a
Figure 2: Bi-layer structure with an initial soft bonded region for \(x_{0}<x<x_{1}\), a delaminated region for \(x_{1}<x<x_{2}\) and a soft bonded region for \(x_{2}<x<x_{3}\). We assume that the materials in both layers are similar, that is, their material properties differ by \(\mathcal{O}(\varepsilon)\).
wide range of initial conditions. We will use the finite difference scheme from Ref. [24] to solve the original Boussinesq system and a semi-analytical method using a pseudospectral scheme for the derived KdV equations for the perfectly bonded case, similar to the one used for coupled Ostrovsky equations in [28]. In all cases our simulations will use a grid spacing of \(\Delta x=0.01\) and a time step of \(\Delta t=0.01\) for the finite-difference scheme. For the semi-analytical method we take \(\Delta X=5\times 10^{-4}\) and \(\Delta\xi=0.1\), corresponding to \(N=131,072\).
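To make the semi-analytical route concrete, the sketch below shows a minimal Fourier split-step integrator for the KdV equation on a periodic domain (our own demonstration, not the production scheme of [28]). The exact form of (10) is fixed earlier in the paper; here we assume the standard form \(U_{\tau}-6UU_{\chi}+U_{\chi\chi\chi}=0\), which is consistent with the soliton solutions quoted in (15), and all numerical parameters below are ours:

```python
import numpy as np

def kdv_split_step(u, k, dt):
    """One split step of U_tau - 6 U U_chi + U_chichichi = 0.

    The linear dispersion is advanced exactly in Fourier space; the
    nonlinear term uses a forward-Euler sub-step and no dealiasing,
    so this is adequate only as a short demonstration.
    """
    u = np.real(np.fft.ifft(np.fft.fft(u) * np.exp(0.5j * k**3 * dt)))  # half linear step
    u_chi = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    u = u + dt * 6.0 * u * u_chi                                        # nonlinear step
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(0.5j * k**3 * dt)))

# initial condition (12) with beta2 = 0.25 and v = 1 on a periodic domain
npts, length = 2048, 200.0
chi = np.linspace(-length / 2, length / 2, npts, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(npts, d=length / npts)
beta2, v = 0.25, 1.0
u = -(v / (2.0 * beta2)) / np.cosh(0.5 * np.sqrt(v) * chi)**2
for _ in range(10000):      # integrate to tau = 1; the pulse begins to fission, cf. (14)
    u = kdv_split_step(u, k, 1e-4)
```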
### Examples of scattering
Firstly, we demonstrate the effect of delamination on the propagation of an incident solitary wave, in both scenarios described in Section 2. For the perfectly bonded case, let us assume a spatial domain \(x\in[-100,1000]\) with a delamination starting at \(x=0\) of length \(D\). We present three cases: no delamination, a delamination of length \(D=50\) and a delamination of length \(D=300\). These results, as well as a comparison between the weakly-nonlinear solution and the directly computed solution, are shown in Figure 3. We can see from Figure 3(a) that, in the case of no delamination, the soliton propagates without change in shape or structure. When delamination is introduced in Figures 3(b) and 3(c), the soliton fissions into two solitons with accompanying dispersive radiation. Comparing the case of \(D=50\) to \(D=300\), we can see that the second soliton has become more distinct from the radiation and the first soliton has tended towards its theoretically predicted amplitude. Figure 3(d) shows that there is a reasonable agreement between the direct numerical simulation and the semi-analytical method when \(D=300\), with a slight phase shift and amplitude change. This discrepancy will be reduced for smaller values of \(\varepsilon\) (as we have constructed a series in increasing powers of \(\varepsilon\)); however, we obtain qualitatively the same structure.
Similarly, we consider the case of soft bonding and an incident soliton. We assume a spatial domain \(x\in[-500,1000]\), with a homogeneous (delaminated) section for \(x\in[-500,-400]\) to generate the same wave in both layers, bonded sections \(x\in[-400,0]\) and \(x\in[D,1000]\), with a delaminated region for \(x\in[0,D]\). We again present three cases including when there is no delamination, a shorter delamination length of
Figure 3: The solution at \(t=900\) in the final section of the perfectly bonded waveguide, for various delamination lengths, and comparison of the direct numerical (blue, solid line) and semi-analytical (red, dotted line) simulations. Parameters are \(\varepsilon=0.1\), \(\mathrm{FWHM}=5.0\), \(n=2\) and \(k=2\). The finite-difference method uses a computational domain of \([-100,1000]\) and for the semi-analytical pseudospectral method we have \(N=131,072\).
\(D=100\) and a larger delamination with \(D=300\). The results are shown in Figure 4 for the upper layer, where the lower layer is qualitatively similar. In the case of no delamination, shown in Figure 4(a), we have a solitary wave with a one-sided oscillatory tail, known as a _radiating solitary wave_. As the delamination length increases, the solitary wave begins to lose amplitude and expel energy into its tail through an exchange of energy between the layers. These are clear signs of the presence of delamination in Figures 4(b) and 4(c): the structural changes can be detected, and the decay in amplitude can be quantified to give a measure of the length of delamination. Note that in this case there is no comparison between the simulation schemes, since the semi-analytical scheme is not applicable for the soft bonded case.
### Measure of delamination length for perfect bonding
We now expand upon the observations from the previous subsection by introducing a measure of delamination based upon the theoretical predictions from Section 2. We then generalise this measure to apply for different incident soliton widths. If we assume that the solitons are well-separated, representing the case of infinite delamination, the amplitude of the soliton can be found from the IST. We have
\[A_{3} =A_{1}k_{2}^{2}k_{3}^{2},\quad k_{2}=\frac{1}{2}\left(\sqrt{1+ \frac{8}{\beta_{2}}}-1\right),\] \[k_{3} =\frac{1}{2}\left(\sqrt{1+8\beta_{2}}-1\right), \tag{25}\]
where \(A_{1}\) is the amplitude of the incident soliton, \(A_{3}\) is the amplitude of the lead soliton in the second bonded region, and \(k_{2}\), \(k_{3}\) are the eigenvalues corresponding to the lead soliton amplitude in the second and third regions, as determined by the IST. As the delamination length is reduced, the amplitude in the third region will tend towards the initial amplitude, \(A_{1}\).
Denoting the calculated numerical solution as \(A_{\text{num}}\) from the simulation, we introduce a measure of the amplitude of the lead soliton in the third section of the bar in comparison to the incident soliton as
\[\sigma=\frac{A_{\text{num}}-A_{1}}{A_{3}-A_{1}}\times 100. \tag{26}\]
This corresponds with the measure used in [24], where it was assumed that \(\text{FWHM}=5\). We now compute the solution using the semi-analytical pseudospectral scheme for a wide range of values of FWHM with the aim of determining a general rule for the delamination length. The delamination length is chosen to be \(D\in[0,300]\) and we measure the delamination in multiples of FWHM.
The results are plotted in Figure 5(a) for \(n=3\), \(k=3\), and in Figure 5(b) for \(n=4\) and \(k=3\). We can see that, as the delamination length increases,
Figure 4: The solution at \(t=900\) in the final section of the upper layer of the soft bonded waveguide, for various delamination lengths. Parameters are \(\varepsilon=0.05\), \(\text{FWHM}=5.0\), \(c=1.025\), \(\alpha=\beta=1.05\), \(\delta=\gamma=1\). The finite-difference method uses a computational domain of \([-500,1000]\).
the measure \(\sigma\) increases until it saturates at its limiting value, corresponding to the lead soliton attaining its theoretically predicted amplitude, and this behaviour is replicated for different values of FWHM. For larger FWHM this limit may not be reached within the chosen range of delamination lengths. We can also see a similar behaviour for different values of \(n\) and \(k\).
To generalise this approach, we consider the equation for the phase speed \(v\) in terms of FWHM (18). We can see that \(v\) is inversely proportional to the square of the FWHM. Thus, fixing our reference as \(\mathrm{FWHM}=5\), we let \(\sigma\) be a function of delamination length, parametrised by FWHM, and we introduce the scaling
\[\tilde{\sigma}(D;\mathrm{FWHM})=\frac{\mathrm{FWHM}^{2}}{25}\sigma(D; \mathrm{FWHM}). \tag{27}\]
The resulting plots are shown in Figure 6(a) for \(n=3\), \(k=3\), and in Figure 6(b) for \(n=4\), \(k=3\). We see that the scaled versions are very closely aligned, with the only disagreement stemming from the restriction on delamination length for larger values of FWHM. Therefore, after computing one curve for the smallest value of FWHM, we can reproduce all subsequent curves for any value of FWHM. This allows for the highly efficient computation of the delamination curves and for a wide range of experiments with incident waves of different amplitude.
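As a sketch of how these predictions are used in practice (our illustration; the function names are ours), the theoretical amplitude (25), the measure (26) and the scaling (27) can be combined so that a single reference curve reproduces the curve for any other FWHM:

```python
import numpy as np

def predicted_lead_amplitude(A1, beta2):
    """Theoretical lead-soliton amplitude A3 for infinite delamination, eq. (25)."""
    k2 = 0.5 * (np.sqrt(1.0 + 8.0 / beta2) - 1.0)
    k3 = 0.5 * (np.sqrt(1.0 + 8.0 * beta2) - 1.0)
    return A1 * k2**2 * k3**2

def sigma_measure(A_num, A1, A3):
    """Amplitude-based delamination measure, eq. (26), in percent."""
    return (A_num - A1) / (A3 - A1) * 100.0

def rescale_curve(sigma_ref, fwhm_ref, fwhm_target):
    """Reproduce sigma(D; fwhm_target) from a single reference curve.

    By (27) the scaled measure (FWHM^2 / 25) * sigma is approximately
    FWHM-independent, so two curves differ only by the ratio below.
    """
    return (fwhm_ref / fwhm_target)**2 * np.asarray(sigma_ref)
```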
Figure 5: Plots of the change in amplitude of the lead transmitted soliton, in comparison to the incident soliton and theoretical predictions, as measured by \(\sigma\), for various values of FWHM. Here we take \(\varepsilon=0.1\).
Figure 6: Plots of the scaled delamination curves for the change in amplitude of the lead transmitted soliton, in comparison to the incident soliton and theoretical predictions, as measured by \(\sigma\). Here we take \(\varepsilon=0.1\).
### Finite delamination with soft bonding
We now consider the soft bonded case outlined in Section 2.2. This case was also studied in Ref. [28], but for only one value of FWHM. In this work we compute the solution for a wide range of FWHM, using the finite-difference scheme in [24]. The constructed weakly-nonlinear solution consists of coupled Ostrovsky equations, in contrast to the KdV equations in the previous case [28]. Therefore we cannot use the IST to predict the amplitude of the waves in the bonded regions as coupled Ostrovsky equations are not integrable via the IST. The incident soliton in this case evolves into a radiating solitary wave, that is a solitary wave with a one-sided oscillatory tail.
To determine the change in amplitude, as a measure of the delamination length, we denote the amplitude of the soliton or wave packet in each region as \(A_{L}\), where \(L\) is the region index, and we introduce the measure
\[\zeta=\frac{|A_{1}-A_{3}|}{A_{1}}\times 100. \tag{28}\]
Figure 7(a) presents the results for \(\zeta\), computed over a wide range of FWHM values, for the upper layer. A similar agreement is seen for the lower layer, but the results are omitted for brevity. Following the idea from the perfectly bonded case, we introduce a quadratic scaling. We choose a reference FWHM value (in this case we choose our lowest value of FWHM, namely \(\text{FWHM}=5\)) and then calculate a scaling of the form
\[\tilde{\zeta}=\frac{\zeta}{a+b\ \overline{\text{FWHM}}+c\ \overline{\text{FWHM}}^{2}}, \tag{29}\]
where we have introduced
\[\overline{\text{FWHM}}=\frac{\text{FWHM}}{5},\]
and \(a\), \(b\), \(c\) are constants to be found that satisfy the relationship \(a+b+c=1\). The results for \(a=0.49\), \(b=0.28\), \(c=0.23\) are plotted in Figure 7(b), and we can see a good agreement across a range of values of FWHM. However, this fitting is done by a careful choice of parameters rather than by a theoretical prediction, in contrast to the perfectly bonded case. The agreement begins to worsen slightly beyond a delamination of 40 units of FWHM, which corresponds to a length of at least 200 in nondimensional units; overall, however, the agreement is still good.
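The constants \(a\), \(b\), \(c\) are chosen carefully here rather than predicted; one way such a constrained fit could be obtained (our sketch, enforcing \(a=1-b-c\) directly and using a least-squares collapse onto the reference curve) is:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_quadratic_scaling(curves, fwhm_ref=5.0):
    """Fit b, c (with a = 1 - b - c) in eq. (29).

    `curves` maps FWHM -> zeta values sampled on a common grid of
    delamination lengths; the scaled curves are collapsed onto the
    reference curve for FWHM = fwhm_ref.
    """
    ref = np.asarray(curves[fwhm_ref])

    def residuals(p):
        b, c = p
        a = 1.0 - b - c
        res = []
        for fwhm, zeta in curves.items():
            f = fwhm / fwhm_ref
            res.append(np.asarray(zeta) / (a + b * f + c * f**2) - ref)
        return np.concatenate(res)

    b, c = least_squares(residuals, x0=[0.3, 0.3]).x
    return 1.0 - b - c, b, c
```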
We now summarise the scaling used to convert our nondimensional variables to dimensional material parameters that can be compared to experimental data, in order to confirm whether a delamination length of 200 is reasonable. Referring to the dimensional form of the DDE in (1), with parameters (2), we introduce the scaling to nondimensional form via
\[\tilde{x}=\frac{x}{X},\quad\tilde{f}=\frac{f}{F},\quad\tilde{t}=\frac{t}{T}, \tag{30}\]
Figure 7: Change in amplitude of the radiating solitary wave in the soft bonded structure, for various delamination lengths and values of FWHM. Parameters are \(\varepsilon=0.05\), \(\delta=\gamma=1\).
where
\[X =\sqrt{\frac{J\nu^{2}}{2\varepsilon\sigma c^{2}}\left(c^{2}-c_{1}^{2 }\right)},\quad T=\frac{X}{c},\] \[F =-\frac{12\varepsilon c^{2}\rho}{\beta}X. \tag{31}\]
We can therefore find the corresponding material length given the nondimensional length. Let us assume a PMMA bar of 10mm \(\times\) 10mm cross-section, then using the parameters for PMMA from [20] we find that, for \(\varepsilon=0.1\), a delamination length of \(x=200\) in nondimensional units corresponds to a length of approximately \(x=520\)mm, which is significant given the experimental materials are usually around 600mm long in total. Therefore, restricting our considerations to values of delamination less than 200 nondimensional units is reasonable in the context of practical applications.
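For convenience (our illustration), (30)–(31) translate directly into a helper that maps a nondimensional delamination length to a physical one. The argument names mirror the parameters of (1)–(2); the actual PMMA values are taken from [20] in the text and are not reproduced here, so any numbers supplied to these functions are placeholders:

```python
import numpy as np

def dimensional_scales(J, nu, sigma, c, c1, rho, beta, eps):
    """Length, time and strain scales X, T, F from eq. (31)."""
    X = np.sqrt(J * nu**2 * (c**2 - c1**2) / (2.0 * eps * sigma * c**2))
    T = X / c
    F = -12.0 * eps * c**2 * rho / beta * X
    return X, T, F

def physical_length(x_nondim, X):
    """Convert a nondimensional length (e.g. a delamination of 200) to physical units."""
    return x_nondim * X
```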
## 4 Conclusion
In this paper we have considered the scattering of a bulk strain solitary wave in a delaminated bilayer with either perfect or soft bonding between the layers. The longitudinal displacements are modelled by either Boussinesq equations (perfectly bonded or delaminated sections) or coupled Boussinesq equations (soft bonded sections), with continuity conditions on the interface. Incident solitary waves undergo fission in delaminated regions in the perfectly bonded structure, providing a clear indicator of delamination.
We construct theoretical estimates for the amplitude of the solitons after a delaminated region, using the Inverse Scattering Transform. A measure is introduced using the theoretical and observed values to predict the delamination length based upon amplitude changes. This is then extended for incident waves of different Full Width at Half Magnitude, and a quadratic scaling is introduced and verified by numerical results. The significance of this result is that we now only need to compute a single curve in order to perform a wide range of experiments, which significantly reduces computation times and allows for further experiments (with different incident solitons) to be performed rapidly. This was confirmed for various configurations of the waveguide.
In the case of a soft bonded waveguide with delamination, theoretical estimates cannot be derived using the Inverse Scattering Transform. In this case we resort to direct computation of the solution and a comparison between the amplitude after delamination and the corresponding amplitude for the non-delaminated case. A similar quadratic scaling can be found, which has a good agreement up to a delamination length of 200 in nondimensional units or 520mm in physical units. This is consistent for both layers of the waveguide.
This work facilitates experimentation with a wide range of initial condition parameters, and provides a framework for detecting delamination in perfectly bonded and soft bonded waveguides with similar materials in the layers. The case with distinctly different materials in the layers is more complex, and some preliminary studies have been conducted into quantifying delamination [29].
## Declarations
* Funding: Jagdeep S. Tamber would like to thank Nottingham Trent University for funding through their PhD studentship scheme.
* Competing interests: The authors have no relevant financial or non-financial interests to disclose.
* Ethics approval: Not applicable.
* Consent to participate: Not applicable.
* Consent for publication: Not applicable.
* Availability of data, code and materials: The datasets generated during this study can be reproduced using equations throughout the paper and the cited numerical methods. The codes used to generate the datasets are available from the corresponding author on reasonable request.
|
2309.10109 | AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation | Test-time adaptation is a promising research direction that allows the source
model to adapt itself to changes in data distribution without any supervision.
Yet, current methods are usually evaluated on benchmarks that are only a
simplification of real-world scenarios. Hence, we propose to validate test-time
adaptation methods using the recently introduced datasets for autonomous
driving, namely CLAD-C and SHIFT. We observe that current test-time adaptation
methods struggle to effectively handle varying degrees of domain shift, often
resulting in degraded performance that falls below that of the source model. We
noticed that the root of the problem lies in the inability to preserve the
knowledge of the source model and adapt to dynamically changing, temporally
correlated data streams. Therefore, we enhance well-established self-training
framework by incorporating a small memory buffer to increase model stability
and at the same time perform dynamic adaptation based on the intensity of
domain shift. The proposed method, named AR-TTA, outperforms existing
approaches on both synthetic and more real-world benchmarks and shows
robustness across a variety of TTA scenarios. | Damian Sójka, Sebastian Cygert, Bartłomiej Twardowski, Tomasz Trzciński | 2023-09-18T19:34:23Z | http://arxiv.org/abs/2309.10109v1 | # AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation
###### Abstract
Test-time adaptation is a promising research direction that allows the source model to adapt itself to changes in data distribution without any supervision. Yet, current methods are usually evaluated on benchmarks that are only a simplification of real-world scenarios. Hence, we propose to validate test-time adaptation methods using the recently introduced datasets for autonomous driving, namely CLAD-C and SHIFT. We observe that current test-time adaptation methods struggle to effectively handle varying degrees of domain shift, often resulting in degraded performance that falls below that of the source model. We noticed that the root of the problem lies in the inability to preserve the knowledge of the source model and adapt to dynamically changing, temporally correlated data streams. Therefore, we enhance well-established self-training framework by incorporating a small memory buffer to increase model stability and at the same time perform dynamic adaptation based on the intensity of domain shift. The proposed method, named AR-TTA, outperforms existing approaches on both synthetic and more real-world benchmarks and shows robustness across a variety of TTA scenarios.
## 1 Introduction
Deep neural networks have been shown to achieve remarkable performance in various tasks. However, current machine learning models perform very well only when the test-time distribution is close to the training-time distribution. This poses a significant challenge, since in a real-world application a domain shift can occur in many circumstances, _e.g._, weather change, time-of-day shift, or sensor degradation. For this reason, Test-Time Adaptation (TTA) methods have been widely developed in recent years [35, 37]. Their aim is to adapt a model pre-trained on source data to the current data distribution on the fly during test time, using an unlabeled stream of test data. Initial test-time adaptation methods considered a single domain shift at a time. To better simulate real-world challenges, continual test-time adaptation [39] was recently proposed, which involves constantly adapting the model to new domains and is even more demanding.
An efficient TTA method needs to work well in a wide range of settings. While some domain shifts occur abruptly, several others evolve gradually over time [20]. Additionally, the temporal correlation between consecutive frames violates the i.i.d. assumption. The model should be able to achieve stable performance when handling lengthy sequences, potentially extending indefinitely. Existing approaches are based on updating model parameters using calculated pseudo-labels or entropy regularization [37, 39]. Further, filtering of less reliable samples is often employed to reduce error accumulation and improve computational efficiency [1, 29, 30]. However, all of these methods can become unstable due to the aforementioned factors; as a result, the pseudo-labels become noisier and performance degrades [5]. Without using any source data, the model is prone to _catastrophic forgetting_ [28] of previously acquired knowledge.

Figure 1: Continual test-time adaptation methods evaluated on synthetic (CIFAR-10C) and realistic (CLAD-C) domain shifts. Our method is the only one that consistently improves over the naive strategy of using the (frozen) source model.
Existing methods are mostly evaluated on synthetic datasets or on relatively short sequences [39, 6, 37], and as such it is not known how these methods will behave in real-life scenarios. Therefore, we adapt the autonomous driving benchmark CLAD [36] to the continual adaptation setting. Moreover, we use the SHIFT dataset [34], which is synthetically generated but very realistic, provides very long sequences, and allows us to specifically control for different factors (time of day, weather conditions).
In the proposed evaluation setup, we find that current approaches lack the required stability, as their performance significantly deteriorates compared to the source model (see Figure 1). Additionally, we notice that they struggle to correctly estimate batch norm statistics with temporally correlated data streams and low batch sizes. In our method, we extend the popular self-training framework [39, 37] with a small memory buffer, which is used during adaptation to prevent knowledge forgetting, without relying on the heuristic-based strategies or model weight resets that are often used [30, 39]. Thanks to mixup data augmentation [40], a relatively small number of samples is required. Furthermore, we develop a module for dynamic batch norm statistics adaptation, which interpolates between the statistics of the pretrained model and those computed during deployment, based on the intensity of the domain shift. We call our method AR-TTA, as we improve **A**daptation by using dynamic batch norm statistics and maintain knowledge by **R**epeating samples from the memory buffer combined with mixup data augmentation.
As a result, our proposed method AR-TTA is simple, stable, and works well across a range of datasets with different shift intensities, when using small batches of data and over very long sequences. Our main contributions can be summarized as follows:
* We evaluate and analyze current test-time adaptation methods on realistic, continual domain shift image classification data.
* We propose a simple continual TTA method based on dynamic batch normalization statistics update and a small memory buffer combined with mixup data augmentation.
* Extensive evaluation shows that the proposed method obtains state-of-the-art performance on multiple benchmarks with both artificial distortions and real-life ones from autonomous driving.
## 2 Related Work
**Test-time adaptation (TTA).** Domain adaptation methods can be split into different categories based on what information is assumed to be available during adaptation [38]. While in some scenarios access to some labels in the target distribution is available, the most common setting is _unsupervised domain adaptation_, which assumes that the model has access to labeled source data and unlabeled target data at adaptation time. Popular approaches are based on minimizing the discrepancy between the source and target feature distributions [12, 17]. Alternative approaches are based on self-training, which uses the model's predictions on the target domain as pseudo-labels to guide the model adaptation [42, 22].
Additionally, in _test-time adaptation_ the model needs to adapt to the test-time distribution on the fly, in an _online_ fashion. In the test-time training (TTT) method [35], the model solves self-supervised tasks on the incoming batches of data to update its parameters. TENT [37] updates only the batch normalization layers to minimize prediction entropy. This assumes that updating only the batch norm layers is sufficient to solve the problem, which might not be the case for real-world scenarios. EATA [29] further improves the efficiency of test-time adaptation methods by using only diverse and reliable samples (with low prediction entropy). Additionally, it uses EWC [18] regularization to prevent drastic changes in parameters important for the source domain.
Contrary to the TENT and EATA approaches, CoTTA [39] updates the whole model. To prevent performance degradation it uses exponential weight averaging as well as stochastic model restoration, where randomly selected weights are reset to the source model. SAR [30] further improves by removing noisy test samples with large gradients and adding loss components that encourage model weights to go to a flat minimum. Nevertheless, they also use model reset, to prevent forgetting.
**TTA benchmarks.** The most popular setting for test-time adaptation uses different classes of synthetic corruptions proposed in [14], which are then utilized for test-time adaptation one at a time (thus allowing model resets between different domains). However, in practical applications the target distribution can change perpetually over time, e.g., due to changing weather and lighting conditions, or due to sensor corruptions; hence the setting of continual test-time adaptation was recently introduced [39]. The authors proposed to use a continual version of the corrupted benchmark (that is, without model resets at domain boundaries).
Another popular dataset for continual test-time adaptation is DomainNet [31], which consists of images in different domains (e.g., sketches, infographics). Yet, the distribution shifts arising in the real world may be very different from the synthetic ones. Hence, the CLAD autonomous driving benchmark [36] was recently introduced. It consists of naturally occurring distribution shifts such as changes in weather and lighting conditions, traffic intensity, etc. It
was developed for the supervised Continual Learning scenario. In this work, we use it for test-time adaptation, that is, without using any label information. We also use the SHIFT benchmark [34], a synthetically generated dataset for autonomous driving with realistic discrete and continuous shifts. Similarly to us, CoTTA [39] includes realistic domain shifts, but its test set is very small (1600 images). To sum up, we extend previous TTA work by focusing on realistic continual domain shifts over very long sequences.
**Continual Learning.** Our work is also inspired by continual learning, where the learner is presented with data from different tasks in a sequential fashion. Without access to the data from previous tasks, the model is prone to _catastrophic forgetting_ [28]. Popular approaches use knowledge distillation to regularize changes in the outputs of the model (compared to the model trained on the source task) [21] or regularize changes in model parameters [18]. On the other hand, exemplar-based approaches assume some limited access to the source data, which greatly helps to reduce forgetting [27]. Further, some continual learning approaches focus on the _online_ setting, where each data point can be visited only once and small batch sizes are assumed [2].
## 3 Method
The aim of TTA is to adapt the pre-trained model \(f_{\theta_{0}}\) trained on the labeled source data \((\mathcal{X}^{S},\mathcal{Y}^{S})\) to the ever-changing stream of unlabeled test data batches \(\mathbf{x}^{T}\) on the fly during the evaluation.
Our proposed approach to TTA (AR-TTA) can be divided into three parts. We start the description by introducing the model update procedure in Subsection 3.1. Then, in Subsection 3.2, we explain the usage of experience replay with mixup augmentation. The process of adapting batch normalization statistics is presented in Subsection 3.3. The overview of our method is presented in Figure 2.
### Weight-averaged Consistency
Updating the model's weights during test time is not a trivial task, considering the lack of data labels and the possibility of error accumulation due to noisy training feedback. Moreover, with the update procedure comes the stability-plasticity dilemma. The model should be stable enough to minimize the risk of deteriorating performance, model collapse, and catastrophic forgetting [11]. On the other hand, we want the model to be flexible enough to keep up with domain changes and adapt in time. Following previous works [9, 39], which show effective methods for alleviating these challenges, we propose to employ self-training on pseudo-labels and keep two models, where one is updated by an exponential moving average of the other's weights.
We initialize two identical artificial neural network models, student model \(f_{\theta}\) and teacher model \(f_{\theta^{\prime}}\), with the identical weights obtained by training on source data. For each batch of test data \(\mathbf{x}^{T}_{t}\) at time step \(t\) we generate predictions from both models. Teacher model predictions \(\hat{y^{{}^{\prime}}}_{t}^{T}\) are used as soft pseudo-labels. The student model is updated by the cross-entropy loss between its predictions and the pseudo-labels:
\[\mathcal{L}_{\theta_{t}}(\mathbf{x}^{T}_{t})=-\sum_{c}\hat{y^{{}^{\prime}}}_{ t,c}^{T}\log\hat{y}^{T}_{t,c} \tag{1}\]
where \(\hat{y}^{T}_{t,c}\) is the probability of class \(c\) predicted by the student model.
Next, teacher's weights \(\theta^{{}^{\prime}}\) are updated by exponential moving average of student's weights \(\theta\):
\[\theta^{{}^{\prime}}_{t+1}=\alpha\theta^{{}^{\prime}}_{t}+(1-\alpha)\theta_{t +1} \tag{2}\]
where \(\alpha\) is a smoothing factor.
As mentioned in [39], using a weight-averaged teacher model ensures less noisy pseudo-labels: models averaged over training steps often yield more accurate predictions than the final model, and the added inertia prevents hasty, rapid weight updates based on noisy self-training feedback. Moreover, susceptibility to catastrophic forgetting is decreased, since the weights are a combination of those from past iterations.
We do not limit the weight update only to the affine parameters of the batch normalization layers, as in many other TTA methods [37, 29, 30]; instead, we update the whole model. We argue that adapting only the batch normalization layers does not give the model enough flexibility to perform successfully on varying domains. We confirm this claim experimentally in the appendix and show that updating the whole model provides the best results, compared to fine-tuning only batch-norm layers or different blocks of the model.
The final predictions for the current test batch \(\mathbf{x}^{T}_{t}\) are the classes with the highest probabilities in pseudo-labels generated by the teacher model before the update.
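A minimal PyTorch-style sketch of the update in (1)–(2) is given below. It is our own illustration, not the authors' released code: it omits the mixup replay of Section 3.2 and the dynamic BN statistics of Section 3.3, and the function names, the default smoothing factor, and the assumption that the teacher is kept in evaluation mode are ours.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha):
    """Teacher <- alpha * teacher + (1 - alpha) * student, eq. (2).
    Only parameters are averaged here; BN statistics are handled separately."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

def adaptation_step(student, teacher, optimizer, x_test, alpha=0.999):
    """One test-time step of the weight-averaged consistency update."""
    with torch.no_grad():
        pseudo = teacher(x_test).softmax(dim=1)            # soft pseudo-labels
    logits = student(x_test)
    # cross-entropy against soft pseudo-labels, eq. (1), averaged over the batch
    loss = -(pseudo * logits.log_softmax(dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student, alpha)
    # predictions come from the teacher's pseudo-labels before the update
    return pseudo.argmax(dim=1)
```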
### Experience Replay with Adaptation
During continual test-time adaptation to unlabeled data, the model is exposed to training feedback that most likely differs from what it has learned during source pre-training. Pseudo-labels or entropy minimization feedback are not guaranteed to be accurate, and frequent model updates inevitably lead to significant error accumulation. These factors can cause the model to forget its initial knowledge. Noisy self-training and significant forgetting can even cause the model to collapse, as shown in [30]. For practical applications, the method has to be reliable, and the risk of collapse has to be reduced to a minimum.
To alleviate this issue, we propose to use the class-balanced replay buffer of exemplars during adaptation to
remind the model what it has learned and strengthen its initial knowledge. We take inspiration from continual learning approaches, which show that exemplars are one of the most effective approaches for this task [36, 3, 4, 3]. To fully take advantage of exemplars and make the latent representations of a model more robust for a given task, we follow a few of the continual learning works [41, 24] and propose to use Mixup data augmentation [40].
After completing the pre-training of the source model, we store a number of randomly chosen exemplars from the labeled source data in the memory, with the same number of exemplars for every class. As we show in the appendix, this class-balanced selection works better than a fully random selection of exemplars. In each test-time adaptation iteration, we randomly sample exemplars \(\mathbf{x}_{t}^{S}\), along with their labels \(\mathbf{y}_{t}^{S}\), from memory. The number of sampled exemplars is equal to the batch size. The mixed-up batch of samples \(\tilde{\mathbf{x}}_{t}\) is generated by linearly interpolating samples from the test data with samples from memory:
\[\tilde{\mathbf{x}}_{t}=\lambda\mathbf{x}_{t}^{T}+(1-\lambda)\mathbf{x}_{t}^{S} \tag{3}\]
where \(\lambda\sim\text{Beta}(\psi,\rho)\), for \(\psi,\rho\in(0,\infty)\). Similarly, labels for cross-entropy loss are the result of interpolation between pseudo-labels produced by the teacher model based on the current unmodified test batch \(\hat{y}_{t}^{T}\) and labels \(\mathbf{y}_{t}^{S}\) from the memory, with the same \(\lambda\) parameter value:
\[\tilde{\mathbf{y}}_{t}=\lambda\hat{\mathbf{y^{{}^{\prime}}}}_{t}^{T}+(1- \lambda)\mathbf{y}_{t}^{S} \tag{4}\]
Student model takes augmented batch \(\tilde{\mathbf{x}}_{t}\) as input. Its predictions are compared with interpolated labels \(\tilde{\mathbf{y}}_{t}\) to calculate the loss as described in the previous Subsection 3.1.
A similar approach to mixing exemplars from replay memory with the ones to train on was successfully used in LUMP [24], however, they used this method for the continual learning tasks.
Using experience replay along with the Mixup augmentation helps the model preserve already obtained knowledge. Furthermore, mixing the pseudo-labels with the exemplars' ground-truth labels makes the noisiness of the pseudo-labels less impactful for the adaptation process.
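A short sketch of how the mixed batch and labels in (3)–(4) could be formed is given below (our illustration). The text does not specify whether \(\lambda\) is drawn once per batch or per sample, so drawing it once per batch here is an assumption, and `pseudo_labels` and `memory_y` are assumed to be soft / one-hot label matrices of shape (batch, classes):

```python
import torch

def mixup_replay_batch(x_test, pseudo_labels, memory_x, memory_y, psi=0.4, rho=0.4):
    """Mix the test batch with exemplars sampled from memory, eqs. (3)-(4)."""
    lam = torch.distributions.Beta(psi, rho).sample()             # lambda ~ Beta(psi, rho)
    idx = torch.randint(0, memory_x.size(0), (x_test.size(0),))   # one exemplar per test image
    x_mem, y_mem = memory_x[idx], memory_y[idx]
    x_mix = lam * x_test + (1.0 - lam) * x_mem                    # eq. (3)
    y_mix = lam * pseudo_labels + (1.0 - lam) * y_mem             # eq. (4)
    return x_mix, y_mix
```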
### Dynamic Batch Norm Statistics
Batch normalization [16] (BN) was created to reduce the internal covariate shift occurring during model training. It normalizes the distribution of a batch of input data utilizing running statistics, namely the mean and variance, which are saved after training and used for inference at test time. When testing on out-of-distribution data, the saved statistics are no longer correct, and the normalization process fails to produce data with a standard normal distribution, which leads to poor model performance. Therefore, state-of-the-art test-time adaptation methods [39, 30, 37, 29] usually discard the statistics calculated during training and estimate the data distribution from each batch separately. However, this way of estimating the statistics is flawed, since the sample size is usually too small to correctly estimate the data distribution, depending on the batch size. Furthermore, samples might be temporally correlated (video input), which also causes incorrect statistics estimates. In such cases, the BN statistics from source data might be useful and closer to the actual data distribution than the estimated values. To robustly estimate the correct normalization statistics, we take inspiration from [15] and propose to estimate the BN statistics \(\phi_{t}=(\mu_{t},\sigma_{t})\) at time step \(t\) during test time by linearly interpolating between the saved statistics from source
Figure 2: Our method AR-TTA adapts to the domain of an unlabeled continual stream of data utilizing exemplars of source pre-training data saved in the memory. We keep two identical neural network models with different sets of weights: the teacher model and the student model. We sample one exemplar from memory for each chosen image and mixup image pairs. Similarly, pseudo-labels from the teacher model are mixupped with the labels sampled from memory exemplars. The student model is updated based on cross-entropy loss between its predictions on augmented samples and augmented pseudo-labels. The teacher model is adapted based on an exponential moving average of student’s weights. Predictions for each image are taken from the teacher model.
data \(\phi^{S}\) and calculated values from current batch \(\phi_{t}^{T}\):
\[\phi_{t}=(1-\beta_{ema})\phi^{S}+\beta_{ema}\phi_{t}^{T} \tag{5}\]
where \(\beta_{ema}\) is a parameter that weights the influence of saved and currently calculated statistics.
Since the severity of distribution shift might vary, we need to adequately adjust the value of \(\beta_{ema}\). It should be large in cases when the distribution shift is severe compared to the source domain and low when the distributions are similar. Following [15], we utilize the symmetric KL divergence as a measure of distance between distributions \(D(\phi_{t-1},\phi_{t}^{T})\):
\[D(\phi_{t-1},\phi_{t}^{T})=\frac{1}{C}\sum_{i=1}^{C}KL(\phi_{t-1,i}||\phi_{t, i}^{T})+KL(\phi_{t,i}^{T}||\phi_{t-1,i}) \tag{6}\]
The distance is used to calculate \(\beta_{t}\) at time step \(t\):
\[\beta_{t}=1-e^{-\gamma D(\phi_{t-1},\phi_{t}^{T})} \tag{7}\]
where \(\gamma\) is a scale hyperparameter.
To compensate for the fact that the current distribution can be wrongly estimated and to provide more stability for the adaptation, we take into account previous \(\beta_{1:t-1}\) values and use an exponential moving average for \(\beta_{ema}\) update:
\[\beta_{ema}=(1-\alpha)\beta_{t-1}+\alpha\beta_{t} \tag{8}\]
where \(\alpha\) is a hyperparameter.
The difference between our method and MECTA [15] is that we do not use an exponential moving average of the calculated BN statistics; instead, we keep the statistics from the source data intact. We are motivated by the fact that changing this value causes inevitable forgetting, accumulation of errors in the BN statistics estimates, and susceptibility to temporal correlation. By keeping it constant, we ensure that estimating the statistics on every batch does not degrade performance on domains similar to the source domain, which would otherwise lead to results worse than the frozen source model and undermine the case for using TTA methods at all. At the same time, we allow a slight drift away from the source data statistics by using the exponential moving average of the \(\beta\) parameter, giving enough flexibility for adaptation to severe domain shifts.
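A sketch of how (5)–(8) could be attached to a single BN layer is shown below (our illustration). We read (8) as an exponential moving average of \(\beta\); the class normalizes the input only (the layer's affine transform would be applied afterwards), and all names and the `eps` constants are ours:

```python
import torch

def gauss_kl(mu1, var1, mu2, var2, eps=1e-5):
    """KL( N(mu1, var1) || N(mu2, var2) ), per channel."""
    return 0.5 * (torch.log((var2 + eps) / (var1 + eps))
                  + (var1 + (mu1 - mu2) ** 2) / (var2 + eps) - 1.0)

class DynamicBNStats:
    """Interpolate source BN statistics with test-batch statistics, eqs. (5)-(8)."""

    def __init__(self, mu_src, var_src, gamma=0.1, alpha=0.2, beta_init=0.1):
        self.mu_src, self.var_src = mu_src, var_src
        self.mu_prev, self.var_prev = mu_src.clone(), var_src.clone()
        self.gamma, self.alpha, self.beta_ema = gamma, alpha, beta_init

    def __call__(self, x):                      # x: (B, C, H, W) input of a BN layer
        mu_b = x.mean(dim=(0, 2, 3))
        var_b = x.var(dim=(0, 2, 3), unbiased=False)
        # symmetric KL between previous and current batch statistics, eq. (6)
        d = (gauss_kl(self.mu_prev, self.var_prev, mu_b, var_b)
             + gauss_kl(mu_b, var_b, self.mu_prev, self.var_prev)).mean()
        beta_t = 1.0 - torch.exp(-self.gamma * d)                                 # eq. (7)
        self.beta_ema = (1.0 - self.alpha) * self.beta_ema + self.alpha * beta_t  # eq. (8)
        mu = (1.0 - self.beta_ema) * self.mu_src + self.beta_ema * mu_b           # eq. (5)
        var = (1.0 - self.beta_ema) * self.var_src + self.beta_ema * var_b
        self.mu_prev, self.var_prev = mu, var
        return (x - mu[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + 1e-5)
```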
## 4 Experiments
**Datasets and Benchmarks.** We evaluate the methods on four image classification tasks: CIFAR10-to-CIFAR10C, ImageNet-to-ImageNet-C, the CLAD-C continual learning benchmark [36] adapted to the test-time adaptation setting, and the benchmark created from the SHIFT dataset [34].
CIFAR10-to-CIFAR10C and ImageNet-to-ImageNet-C are widely used tasks in TTA. They involve training the source model on the train split of the clean CIFAR10/ImageNet datasets [8, 19] and test-time adaptation on CIFAR10C/ImageNet-C. CIFAR10C and ImageNet-C consist of images from the clean datasets modified by 15 types of corruptions with 5 levels of severity [14]. They were first used for evaluating the robustness of neural network models and are now widely utilized for testing the adaptation capabilities of TTA methods. We test the adaptation on a standard sequence of the highest corruption severity level 5, frequently utilized by previous approaches [9, 29, 39]. For ImageNet-C we utilize a subset of 5000 samples for each corruption, based on the _RobustBench_ library [7], following [39].
CLAD-C [36] is an online classification benchmark for autonomous driving with the goal of introducing a more realistic test bed for continual learning. Even though, to the best of our knowledge, it has not been used for testing TTA before, we chose this benchmark for its realistic nature and its aim of simulating a more real-world application setup. It consists of natural, temporally correlated, and continuous distribution shifts created by utilizing data from the SODA10M dataset [13]. The images, taken at different locations, times of day, and weather conditions, are chronologically ordered, inducing distribution shifts in labels and domains. The classification task was created by cutting out the annotated 2D bounding boxes of six classes and using them as separate images for classification. Since the benchmark is designed for the continual learning setup, with the model originally trained sequentially on the train sequences, we slightly modify the setup and pre-train the source model on the first train sequence. TTA is then continually tested on the 5 remaining sequences, with a total of 17092 images.
The SHIFT dataset is a synthetic autonomous driving dataset designed for continuous multi-task domain adaptation. It provides multiple types of data from the perspective of a car, including RGB images with various types of annotations. Images are taken in numerous realistic domains simulated in a virtual environment, including different weather conditions and times of day. This dataset is not designed for a classification task. However, given the amount of data and the realism of the dataset, we decided to create a classification setup using the same procedure as for the CLAD-C dataset, utilizing the 2D bounding box annotations. We train the source model on images taken in clear weather during the day and test the adaptation methods on various weather combinations and times of day. We end up with 14 different domains, resulting in a total of 380667 images. The high number of test samples provides a good simulation of what could happen during model deployment in a real-world, ever-changing environment. We call our continual test-time adaptation benchmark on the SHIFT dataset SHIFT-C.
**Methodology.** Taking into account the realistic scenario, we examine the continual test-time adaptation setup in which the model is continually adapted to new domains without any weight resetting to the source model state, unless such resetting is part of a tested method. To simulate a continuous stream of data and the need for the model to adapt quickly to the data it is provided with, we use a low batch size of 10. This is also the batch size commonly used in online continual learning [25]. Considering a practical application on an embedded device, a low batch size could also be a result of limited computational resources and memory constraints, making it impossible to use larger batch sizes. We believe that it is important and more practical for TTA methods to be able to work with a reduced number of samples.
We primarily assess the methods using a straightforward mean classification accuracy metric. Additionally, to make every class equally important in class-imbalanced datasets, we use the average mean class accuracy (AMCA) metric, which computes the per-class accuracies within each domain, averages them over classes, and then averages the result over domains.
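Under this reading, a minimal sketch (ours; the array names are assumptions) of the AMCA computation is:

```python
import numpy as np

def amca(y_true, y_pred, domains, num_classes):
    """Per-class accuracy averaged over classes within each domain,
    then averaged over domains; returned in percent."""
    domain_scores = []
    for d in np.unique(domains):
        m = domains == d
        per_class = [np.mean(y_pred[m][y_true[m] == c] == c)
                     for c in range(num_classes) if np.any(y_true[m] == c)]
        domain_scores.append(np.mean(per_class))
    return 100.0 * np.mean(domain_scores)
```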
The results are averaged between 3 random seeds. Samples from CIFAR10C and ImageNet-C are shuffled. Considering the sequential nature of data in CLAD-C and SHIFT-C benchmarks (video sequences), we did not want to shuffle images. Instead, to get seed-averaged results, we trained 3 source models with 3 different seeds and averaged the results between experiments with different models.
**Baselines.** To evaluate the performance of our method and validate its efficacy in handling realistic domain shifts, we conduct experiments involving four state-of-the-art methods as baselines: TENT-continual [37], EATA [29], CoTTA [39], and SAR [30]. Moreover, we show results for discarding the BN statistics from source data and calculating the statistics for each batch separately (BN stats adapt) [33]. Additionally, we showcase the results obtained from the frozen source model to verify the effectiveness of adaptation (Source).
To provide a fair comparison with methods that do not require saving a small source data memory bank, we also show the results of our proposed method without the usage of replay memory. Furthermore, we show the results of baselines with a simple replay strategy method added in the appendix.
**Implementation Details.** Following other state-of-the-art TTA methods, we use pre-trained WideResnet28/ResNet50 models from the _RobustBench_ [7] model zoo for the CIFAR10-to-CIFAR10C/ImageNet-to-ImageNet-C tasks. On the rest of the benchmarks, we utilize the ResNet50 architecture with weights pre-trained on ImageNet, obtained from the _torchvision_ library [26] and fine-tuned on the source data for the specific benchmark. Images from the CLAD-C and SHIFT-C benchmarks are resized to 224x224 before being processed by the network. For our method, we use an SGD optimizer with a momentum of 0.9. The learning rate is set to 0.00025 for ImageNet-C and 0.001 for the rest of the benchmarks. The default replay memory size is 2000 samples, which is commonly used in continual learning settings [27] and adds only minor storage requirements. We provide experiments with different memory sizes in the appendix. The \(\gamma\) value in Equation 7 is set to 10 for CIFAR10C and ImageNet-C, and 0.1 for CLAD-C and SHIFT-C. The \(\alpha\) value for the exponential moving average of \(\beta_{ema}\) in Equation 8 is set to 0.2. The initial \(\beta_{ema}\) is equal to 0.1. The parameters \(\psi\) and \(\rho\) of the beta distribution, which are used to sample the interpolation parameter \(\lambda\) for mixup augmentation (Equations 3, 4), are both set to 0.4. We provide results for different shapes of the beta distribution in the appendix.
Since the batch size in our experiments is significantly lower than the ones used in the previous TTA works and we use two datasets on which the methods were not tested before, to allow for a fair comparison, we choose the learning rate for each method based on a grid search. The optimizers are chosen following the methods' papers. For datasets that were not used in an EATA [29] paper, we also searched for the most optimal value of EATA's method parameter \(\epsilon\), which is responsible for filtering the redundant samples for model adaptation. The description and all the results of the parameter search can be found in the appendix. For the ease of prototyping and testing, we utilized the _Avalanche_ library for continual learning [23].
### Results
**Artificial Domain Shifts.** The results for the CIFAR10-to-CIFAR10C and ImageNet-to-ImageNet-C tasks are shown in Tables 1 and 2. Artificial domain shifts pose a great challenge for the source models, which achieve only 56.5%/17.1% mean accuracy on CIFAR10C/ImageNet-C. Calculating the BN statistics for each batch separately already significantly improves the result, to 75%/26.9% accuracy on corrupted images. Each of the compared state-of-the-art TTA methods uses the BN stats adapt technique, and therefore their performance improves over it, but the increase in accuracy is not that significant. EATA achieved the best mean accuracy among the tested state-of-the-art approaches, 78.2%/31.5%. Our method AR-TTA outperforms all of the compared techniques, achieving 78.8%/32.0% mean accuracy. This shows the effectiveness of our method on standard continual TTA benchmarks.
**Natural Domain Shifts.** Tests on natural domain shifts involve utilizing CLAD-C and SHIFT-C benchmarks. Results for CLAD-C are shown in Table 3. Calculating BN statistics for each batch does not improve the performance over
the frozen source model and degrades the mean accuracy from 81.3% to 71.1%. Similarly, the state-of-the-art TTA methods achieve significantly lower mean accuracy, compared to the frozen source model, rendering them not effective for natural domain shifts. It suggests that benchmarking such methods on artificial domain shifts in form of corruptions is not a valuable estimate of the TTA method's performance in practical applications. Moreover, it shows that estimating the BN statistics on each batch is not a trivial task, especially considering the temporal correlations in data. Keeping the pre-calculated statistics intact might sometimes be more beneficial for less severe domain shifts, on which the source model performs relatively well. Our method, which uses pre-calculated statistics and exemplars of source data during adaptation, outperformed state-of-the-art methods and achieves higher accuracy than the source model, which shows the effectiveness and adapting capabilities.
The close performance between the SAR and EATA methods, both achieving a mean accuracy of 71.1%, is attributed to the rigorous threshold-based sample filtering in both EATA and SAR. As a result of a coarse hyperparameter grid search (see the appendix), both methods tended to update on a very low number of samples in each parameter configuration. Moreover, SAR employs a model resetting scheme that frequently resets the weights of the model.
The mentioned techniques, although effective for the artificial CIFAR10C and ImageNet-C benchmarks, do not let both methods adapt well to CLAD-C and SHIFT-C test data.
Average mean class accuracy (AMCA) values show that the usage of replay memory might be crucial for high mean per-class accuracy.
Similar conclusions can be drawn from SHIFT-C benchmark results in Table 4. The frozen source model achieves impressive results, while state-of-the-art methods significantly degrade it. Moreover, the adaptation schemes of TENT and CoTTA methods caused the accuracy to be lower than the simple BN stats adaptation approach. Only AR-TTA was able to improve the source model performance during TTA.
### Ablation Study
**Component Analysis.** Table 5 shows the contribution of individual components used in the proposed method. For the initial setup **A**, we used a weight-averaged teacher model to generate pseudo-labels and cross-entropy loss to adapt the model of which the BN statistics from source data
\begin{table}
\begin{tabular}{l|c} Method & Mean \\ \hline Source & 18.1 \\ BN stats adapt & 26.9 \\ TENT-continual [37] & 29.2 \\ EATA [29] & 31.5 \\ COTTA [39] & 15.5 \\ SAR [30] & 30.8 \\ \hline Ours (AR-TTA) w/o replay & 30.0\({}_{\pm 0.45}\) \\ Ours (AR-TTA) & **32.0\({}_{\pm 0.07}\)** \\ \hline \end{tabular}
\end{table}
Table 2: Classification accuracy (%) for the standard ImageNet-to-ImageNetC online continual test-time adaptation task.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\begin{tabular}{c} **\#P** \\ \#P** \\ \end{tabular} } & \multirow{2}{*}{Mean} & \multirow{2}{*}{\begin{tabular}{c} **\#P** \\ \#P** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **\#P** \\ \#P** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **\#P** \\ \#P** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **\#P** \\ \#P** \\ \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{c} **\#P** \\ \#P** \\ \end{tabular} } & \multirow{2}{*}{Mean} \\ \hline Source & 27.7 & 34.3 & 27.1 & 53.1 & 45.7 & 65.2 & 58.0 & 74.9 & 58.7 & 74.0 & 90.7 & 53.3 & 73.4 & 41.6 & 69.7 & 56.5 \\ BN stats adapt & 67.3 & 69.4 & 59.7 & 82.7 & 60.4 & 81.4 & 83.0 & 78.1 & 77.7 & 80.6 & 87.3 & 83.4 & 71.4 & 75.3 & 67.9 & 75.0 \\ TENT-continual [37] & 67.9 & 71.4 & 62.5 & 83.2 & 62.9 & 82.1 & 83.8 & 79.5 & 79.7 & 81.4 & 87.8 & 84.3 & 73.5 & 78.2 & 71.6 & 76.7 \\ EATA [29] & 70.3 & 74.9 & 67.1 & 83.0 & 65.6 & 82.3 & 84.0 & 80.3 & 81.4 & 82.2 & 88.0 & 85.1 & **74.7** & **80.1** & 73.8 & 78.2 \\ CoTTA [39] & **72.5** & **76.4** & **70.5** & 80.6 & 66.8 & 78.3 & 80.1 & 75.8 & 77.0 & 77.1 & 83.8 & 77.3 & 72.0 & 75.5 & 72.2 & 75.7 \\ SAR [30] & 67.4 & 69.6 & 60.8 & 82.6 & 61.4 & 81.5 & 82.8 & 78.1 & 77.7 & 80.5 & 87.4 & 83.4 & 71.5 & 75.2 & 68.2 & 75.2 \\ \hline Ours (AR-TTA) w/o replay & 69.5 & 73.6 & 63.3 & 83.5 & 63.0 & 82.5 & 84.5 & 80.2 & 80.4 & 81.9 & **88.4** & 83.8 & 74.2 & 76.9 & 74.5 & 77.3\({}_{\pm 0.07}\) \\ Ours (AR-TTA) & 69.2 & 74.8 & 66.4 & **84.5** & **67.8** & **83.7** & **85.2** & **81.4** & **82.7** & **83.4** & 88.0 & **84.7** & 73.9 & 78.6 & **77.0** & **78.8\({}_{\pm 0.13}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification accuracy (%) for the standard CIFAR10-to-CIFAR10C online continual test-time adaptation task.
Figure 3: Batch-wise classification accuracy averaged in a window of 100 batches on CLAD-C benchmark for the chosen methods continually adapted to the sequences of data. The ticks on x-axis symbolize the beginning of next sequence and at the same time a different domain. Window for calculating average values is cleared in between sequences. Best viewed in color.
are discarded. Adding a simple replay method (**B**, **E**) by injecting randomly augmented exemplars from memory to the batch in a 1:1 ratio, did not improve the performance on every dataset. It can be seen that mixup data augmentation can boost the performance of a simple replay method (**C**). Moreover, dynamic BN statistics significantly contribute to the accuracy increase (**D**, **E**, **AR-TTA**), especially on the CLAD-C benchmark.
## 5 Conclusion
In this paper, we evaluate existing continual test-time adaptation (TTA) methods in real-life scenarios using more realistic data. Our findings reveal that current state-of-the-art methods are inadequate in such settings, as they fall short of achieving accuracies better than the frozen source model. This raises concerns about the applicability of certain TTA methods in the real world and sheds light on the frequent model resets observed in some approaches. To address these limitations, we propose a novel and straightforward method called AR-TTA, based on the self-training framework. AR-TTA utilizes a small memory buffer of source data, combined with mixup data augmentation, and dynamically updates the batch norm statistics based on the intensity of the domain shift.
Through experimental studies, we demonstrate that the AR-TTA method achieves state-of-the-art performance on various benchmarks. These benchmarks include realistic evaluations with small batch sizes, long test sequences, varying levels of domain shift, as well as artificial scenarios such as corrupted CIFAR10-C. Notably, AR-TTA consistently outperforms the source model, which serves as the ultimate baseline for feasible TTA methods. Our more realistic evaluation of TTA with a variety of different datasets provides a better understanding of their potential benefits and shortcomings.
**Limitations** The main limitation of our method is that we use a memory buffer from the source data, which might be an issue in resource-constrained scenarios or if there are some privacy concerns.
**Impact Statement** Test-time adaptation methods in machine learning might have a significant social impact. By improving accuracy, fairness, and robustness, these methods enhance the effectiveness of machine learning models in real-world applications. They contribute to reducing algorithmic bias, increasing equity, and promoting ethical considerations. However, responsible development and deployment are crucial to ensure positive outcomes and mitigate potential risks. Still, the problem of bias in the source dataset can influence the overall outcome.
\begin{table}
\begin{tabular}{l|c c c c c|c c c c|c c c c|c c c} \multicolumn{13}{c}{\(t\)} & \multicolumn{4}{c|}{dawn/dusk} & \multicolumn{4}{c|}{night} & \multicolumn{4}{c}{Mean} & \multicolumn{4}{c}{AMCA} \\ \hline \multirow{2}{*}{Method} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} & \multirow{2}{*}{MCA} \\ & & & & & & & & & & & & & & & & & & & & & & & & & & & & \\ \hline Source & **97.9** & **98.2** & **97.5** & 92.5 & 93.6 & 94.1 & 94.0 & 93.5 & 91.5 & 89.1 & 89.3 & 90.6 & 89.1 & 90.7 & 93.5 & 89.5 \\ BN stats adapt & 89.1 & 88.9 & 88.0 & 86.2 & 85.3 & 84.8 & 87.3 & 83.5 & 84.8 & 81.3 & 81.2 & 80.3 & 79.6 & 83.5 & 85.1 & 69.9 \\ TENT-continual [37] & 89.6 & 88.8 & 87.5 & 84.6 & 83.3 & 81.2 & 85.0 & 80.7 & 80.2 & 78.0 & 77.0 & 76.1 & 75.7 & 77.6 & 82.7 & 57.6 \\ EATA [29] & 89.1 & 88.9 & 88.0 & 86.2 & 85.3 & 84.8 & 87.4 & 83.6 & 84.9 & 81.4 & 81.4 & 80.3 & 79.7 & 83.7 & 85.1 & 70.5 \\ CoTTA [39] & 88.2 & 87.1 & 84.1 & 80.5 & 78.7 & 76.2 & 80.5 & 74.0 & 74.9 & 71.5 & 70.3 & 67.3 & 64.9 & 66.2 & 77.4 & 47.2 \\ SAR [30] & 89.1 & 88.9 & 88.0 & 86.2 & 85.3 & 84.8 & 87.3 & 83.5 & 84.8 & 81.3 & 81.2 & 80.3 & 79.6 & 83.6 & 85.1 & 69.9 \\ \hline Ours (AR-TTA) w/o replay & 96.4 & 96.5 & 95.3 & 93.2 & 92.2 & 91.9 & 93.2 & 91.4 & 91.8 & 88.7 & 88.7 & 88.6 & 87.5 & 91.2 & 92.4\({}_{\pm 0.25}\) & 83.5\({}_{\pm 0.96}\) \\ Ours (AR-TTA) & 97.7 & 98.0 & 97.4 & **94.3** & **94.2** & **95.5** & **94.8** & **95.2** & **93.1** & **92.3** & **92.7** & **93.0** & **91.4** & **92.6** & **94.8\({}_{\pm 0.03}\)** & **90.2\({}_{\pm 0.24}\)** \\ \hline \end{tabular}
\end{table}
Table 4: Classification accuracy and average mean class accuracy (AMCA) (%) for the SHIFT-C continual test-time adaptation task.
\begin{table}
\begin{tabular}{l|c c c c c|c c|c|c} \hline Method & T1 & T2 & T3 & T4 & T5 & Mean day & Mean night & Mean & AMCA \\ \hline Source & 75.6 & 85.9 & 73.3 & 87.5 & 66.2 & 86.6 & 71.2 & 81.3 & 57.6 \\ BN stats adapt & 73.2 & 69.9 & 75.0 & 75.5 & 59.7 & 72.2 & 69.1 & 71.1 & 48.3 \\ TENT-continual [37] & 73.4 & 69.8 & 76.5 & 76.1 & 59.7 & 72.4 & 69.8 & 71.5 & 47.6 \\ EATA [29] & 73.3 & 69.9 & 75.0 & 75.6 & 59.7 & 72.2 & 69.1 & 71.1 & 48.4 \\ CoTTA [39] & 75.2 & 69.3 & 80.2 & 77.0 & 62.7 & 72.4 & 72.9 & 72.6 & 44.8 \\ SAR [30] & 73.2 & 69.9 & 75.0 & 75.5 & 59.7 & 72.2 & 69.1 & 71.1 & 48.3 \\ \hline Ours (AR-TTA) w/o replay & 76.9 & **86.7** & **81.4** & 87.9 & **73.5** & 87.2 & **77.1** & **83.9\({}_{\pm 0.30}\)** & 59.6\({}_{\pm 2.92}\) \\ Ours (AR-TTA) & **77.2** & **86.7** & 80.0 & **89.6** & 70.7 & **87.8** & 75.7 & 83.7\({}_{\pm 0.64}\) & **63.1\({}_{\pm 3.32}\)** \\ \hline \end{tabular}
\end{table}
Table 3: Classification accuracy and average mean class accuracy (AMCA) (%) for the CLAD-C continual test-time adaptation task.
\begin{table}
\begin{tabular}{l|c|c} \hline Method & CIFAR10C & CLAD-C \\ \hline
**A**: Weight-avg. teacher & 75.7\({}_{\pm 0.07}\) & 71.1\({}_{\pm 0.53}\) \\
**B**: A + Replay memory & 77.3\({}_{\pm 0.16}\) & 69.0\({}_{\pm 0.66}\) \\
**C**: B + Mixup & 78.5\({}_{\pm 0.13}\) & 72.2\({}_{\pm 0.31}\) \\ \hline
**D**: A + Dynamic BN stats & 77.3\({}_{\pm 0.07}\) & 83.8\({}_{\pm 0.82}\) \\
**E**: D + Replay memory & 79.8\({}_{\pm 0.03}\) & 82.8\({}_{\pm 1.09}\) \\ \hline
**AR-TTA (Ours)**: E + Mixup & 78.8\({}_{\pm 0.13}\) & 83.7\({}_{\pm 0.64}\) \\ \hline \end{tabular}
\end{table}
Table 5: Classification accuracy (%) for CIFAR10C and CLAD-C tasks for different configurations of the proposed method.
|
2308.16651 | SoccerNet 2023 Tracking Challenge -- 3rd place MOT4MOT Team Technical
Report | The SoccerNet 2023 tracking challenge requires the detection and tracking of
soccer players and the ball. In this work, we present our approach to tackle
these tasks separately. We employ a state-of-the-art online multi-object
tracker and a contemporary object detector for player tracking. To overcome the
limitations of our online approach, we incorporate a post-processing stage
using interpolation and appearance-free track merging. Additionally, an
appearance-based track merging technique is used to handle the termination and
creation of tracks far from the image boundaries. Ball tracking is formulated
as single object detection, and a fine-tuned YOLOv8l detector with proprietary
filtering improves the detection precision. Our method achieves 3rd place on
the SoccerNet 2023 tracking challenge with a HOTA score of 66.27. | Gal Shitrit, Ishay Be'ery, Ido Yerhushalmy | 2023-08-31T11:51:16Z | http://arxiv.org/abs/2308.16651v1 | # SoccerNet 2023 Tracking Challenge - 3rd place MOT4MOT Team Technical Report
###### Abstract
The SoccerNet 2023 tracking challenge requires the detection and tracking of soccer players and the ball. In this work, we present our approach to tackle these tasks separately. We employ a state-of-the-art online multi-object tracker and a contemporary object detector for player tracking. To overcome the limitations of our online approach, we incorporate a post-processing stage using interpolation and appearance-free track merging. Additionally, an appearance-based track merging technique is used to handle the termination and creation of tracks far from the image boundaries. Ball tracking is formulated as single object detection, and a fine-tuned YOLOv8l detector with proprietary filtering improves the detection precision. Our method achieves 3rd place on the SoccerNet 2023 tracking challenge with a HOTA score of 66.27.
## 1 Introduction
The SoccerNet 2023 tracking challenge presents a unique and challenging task of detecting and tracking both the soccer players and the ball. Tracking multiple objects in a dynamic and fast-paced sport such as soccer is challenging. In this work, we present an approach for formulating player and ball tracking as separate tasks.
## 2 Related Work
Multi-Object Tracking (MOT) is the task of identifying and maintaining multiple object trajectories or tracks from a video stream. In the online scenario, often applied to live video streams, predictions cannot rely on future frames. A common approach is tracking-by-detection [1, 2, 3], which first detects the objects in each frame of the video and then associates these detections over time to form trajectories (tracks). One way to create coherent trajectories is by applying a constant-velocity Kalman Filter (KF) [2]. To better distinguish between objects, OC-SORT [4] added a velocity consistency term, whereas DeepSORT [1] addressed the issue by integrating additional appearance cues. StrongSORT [3] improved the latter by replacing the appearance model with Bag of Tricks (BoT) [5] and adding Camera Motion Compensation (CMC). DeepOC-SORT [6] combined components from both StrongSORT and OC-SORT holistically to benefit from both approaches. When information from future frames is available, for instance when there is a delay between broadcasting and acquiring the video, additional offline methods may be applied. StrongSORT++ [3] proposed to impute missing detections by interpolating and smoothing using a Gaussian process (GSI). Additionally, tracks are merged using the Appearance-Free Linking (AFLink) model.
## 3 Method
In soccer games, ball and player tracking differ significantly: multiple players are present in each frame whereas the ball occurs only once, and the ball can undergo considerable acceleration within short time intervals. Furthermore, ball detection is difficult due to the ball's relatively small size, frequent occlusions, and tendency to blend in with the players' uniforms or the crowd. Hence, our tracking approach is split into separate ball and player tracking. The overall algorithm flow can be seen in Fig. 1.
**Player Tracking** Player tracking was addressed in two steps. In the initial step, an optimized state-of-the-art (SoTA) online multi-object tracker was employed in conjunction with a contemporary object detector. However, since this online approach can neither anticipate future events nor modify past outputs, a post-processing stage was applied to refine the tracking data. The post-processing comprises three phases. First, missing track detections are interpolated using GSI. Secondly, a fine-tuned AFLink model is utilized to merge tracks. The combination of these two techniques is denoted as "++". Last, an appearance-based track merging is employed, assuming that, in our particular setup, new track IDs can only be created or terminated near the image boundaries. Accordingly, an iterative merging process identifies track IDs that were terminated far from the image boundaries; if a new track is generated within a short time span, an attempt is made to merge it with the previously terminated track, provided their appearance similarity remains consistently high throughout most of the track duration.
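A minimal Python sketch of this appearance-based merging heuristic is given below. The boundary margin, the maximal frame gap, the similarity threshold, and the use of the median pairwise cosine similarity are illustrative assumptions, not the exact values or rules used in our submission.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two appearance embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def near_boundary(box, img_w, img_h, margin=50):
    """True if a bounding box (x1, y1, x2, y2) lies close to the image border."""
    x1, y1, x2, y2 = box
    return x1 < margin or y1 < margin or x2 > img_w - margin or y2 > img_h - margin

def merge_interrupted_tracks(tracks, img_w, img_h, max_gap=30, sim_thr=0.7):
    """Iteratively merge a track terminated far from the image boundary with a track
    created shortly afterwards, if their appearance stays consistently similar.
    Each track is a dict with 'start', 'end' (frame indices), 'first_box', 'last_box',
    and 'embs' (a list of per-frame appearance embeddings)."""
    merged = True
    while merged:
        merged = False
        for a in tracks:
            if near_boundary(a['last_box'], img_w, img_h):
                continue  # terminating near the border is considered legitimate
            for b in tracks:
                if b is a or b['start'] <= a['end'] or b['start'] - a['end'] > max_gap:
                    continue
                if near_boundary(b['first_box'], img_w, img_h):
                    continue
                # require high similarity throughout most of the two tracks
                sims = [cosine_sim(ea, eb) for ea in a['embs'] for eb in b['embs']]
                if np.median(sims) > sim_thr:
                    a['end'], a['last_box'] = b['end'], b['last_box']
                    a['embs'] += b['embs']
                    tracks.remove(b)
                    merged = True
                    break
            if merged:
                break
    return tracks
```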
**Ball Tracking** We approached ball tracking by treating it as a single object detection problem. In order to maximize the detection recall, we employed a fine-tuned YOLOv8l [7] detector with a low confidence threshold of 0.05. This configuration yields multiple detections for each frame within the video clip. To enhance the precision of the detections, we leveraged the temporal nature of the data through a series of filtering techniques. First, the center coordinates of the most confident detection in each frame were smoothed using a 3rd-order polynomial with a temporal window size of 51 frames. Subsequently, we retained only the detection closest to this smoothed trajectory if its distance was below 100 pixels. To address any missing detections, the resulting detections were then linearly interpolated to ensure tracking box continuity.
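The filtering cascade can be sketched as follows in Python. The use of a Savitzky-Golay filter for the 3rd-order polynomial smoothing, the handling of frames without detections, and the exact array layout are assumptions for illustration.

```python
import numpy as np
from scipy.signal import savgol_filter

def filter_ball_detections(per_frame_dets, window=51, polyorder=3, max_dist=100.0):
    """per_frame_dets: list (one entry per frame) of (k, 5) arrays [x1, y1, x2, y2, score].
    Returns an (n_frames, 2) array of filtered ball centers."""
    n = len(per_frame_dets)
    idx = np.arange(n)

    def center(det):
        return np.array([(det[0] + det[2]) / 2, (det[1] + det[3]) / 2])

    # 1) trajectory of the most confident detection per frame (NaN if none)
    conf_traj = np.full((n, 2), np.nan)
    for t, dets in enumerate(per_frame_dets):
        if len(dets):
            conf_traj[t] = center(dets[np.argmax(dets[:, 4])])
    valid = ~np.isnan(conf_traj[:, 0])
    for d in range(2):  # fill gaps so the smoother sees a continuous signal
        conf_traj[:, d] = np.interp(idx, idx[valid], conf_traj[valid, d])

    # 2) smooth with a 3rd-order polynomial over a 51-frame window
    smooth = savgol_filter(conf_traj, window_length=window, polyorder=polyorder, axis=0)

    # 3) keep, per frame, the detection closest to the smoothed trajectory (< max_dist)
    out = np.full((n, 2), np.nan)
    for t, dets in enumerate(per_frame_dets):
        if len(dets):
            cents = np.stack([center(d) for d in dets])
            dists = np.linalg.norm(cents - smooth[t], axis=1)
            j = np.argmin(dists)
            if dists[j] < max_dist:
                out[t] = cents[j]

    # 4) linear interpolation over remaining gaps for tracking box continuity
    kept = ~np.isnan(out[:, 0])
    for d in range(2):
        out[:, d] = np.interp(idx, idx[kept], out[kept, d])
    return out
```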
## 4 Experiments
We present a process aimed at improving and assessing the performance of the MOT tracker by optimizing its individual components, namely the appearance model, detector, and tracker type, followed by their integration and evaluation. The optimization process involved fine-tuning each component in isolation and assessing its effectiveness. Subsequently, the optimized components were integrated into the tracker to evaluate the overall MOT performance.
### Data Preparation and Setup
**Player Detector** Our tracker heavily depends on the performance of the player detector. To improve performance on the target domain, a YOLOv8l detection model was fine-tuned to detect players on SoccerNet [8] data, and the Average Precision (AP) at IoU of 0.5 was measured. The image resolution was 576x1024, with a Non-Maxima Suppression (NMS) IoU threshold of 0.45. The bounding boxes of players were extracted from 1710 frames of the tracking training set. These frames were selected by sampling at a rate of 1 frame per second (FPS) from all available training clips. Additionally, 200 manually inspected frames from the tracking test set were selected for validation. We note that the labels were not modified or changed in any manner during this process.
**Ball Detector** A separate YOLOv8l ball detection model was fine-tuned to detect the ball on SoccerNet data and the AP at IoU of 0.5 was measured. The image resolution was 1080x1920. The bounding boxes of balls were extracted from a total of 2084 frames belonging to the tracking training set. Due to errors in the ground-truth labels, a COCO2017 pre-trained detector was used to select only frames with an IoU\(>\)0.7 between the label and the ball detection. This procedure ensures that only accurate labels appear in the training set. Additionally, 200 manually inspected frames from the tracking test set were selected for validation. The labels were not modified or changed in any manner during this process.
**Appearance Model** An appearance model (OSNet-AIN [9]) was fine-tuned to match the target domain. Three different sizes of the model were trained, the smallest being \(\times 0.25\) and the largest \(\times 1\). The crop resolution was 256x128. The Rank-1 accuracy was measured along with the mAP. The appearance dataset contained IDs extracted from the tracking train set. To ensure ID consistency between different clips, the team, track ID, and jersey number metadata were utilized. Tracks without known jersey numbers were discarded from the dataset. The dataset comprises six distinct games, three games for the training set and three games for the validation set, featuring a total of 199 unique IDs, with 123 IDs in the training set and 76 IDs in the validation set. The training set encompasses an average of 1600 images per ID, while the validation set is composed of 10 images per ID, featuring eight gallery and two query images, totaling 760 images. The validation images were drawn from the tracking test set.
Figure 1: Tracking algorithm flow. The ball tracking components are marked in blue and the player tracking components are marked in green. The post-processing steps for the player tracker are marked with a red line.
**Player Tracker** Two different trackers were evaluated with the fine-tuned components. The appearance model was used with cosine similarity. Furthermore, the upper limit of the tracker's performance was assessed by conducting the same experiment using GT boxes. The HOTA metric [10] was used for evaluation on SoccerNet test set.
## 5 Results
**Player Detector** Fine-tuning the player detector on the SoccerNet data improves its accuracy ([email protected] 0.954 to 0.994), following Table 1. After analyzing the failure cases, we found that some errors occur when several players overlap with each other.
**Ball Detector** Fine-tuning the ball detector results in [email protected] of 0.95 and [email protected]:0.95 of 0.71. This indicates that the bounding box produced by the detector is not sufficiently tight. Furthermore, our investigation revealed that the detector struggles in scenarios involving partial occlusion, as well as when the ball's visual characteristics merge with other objects such as the crowd or white shoes.
**Appearance Model** Appearance models of different sizes achieved similar mAP scores, with crop augmentation providing the greatest improvement (Table 2). The comparability of performance can be attributed to a multitude of factors: the relatively low number of identities (199 IDs as opposed to 1501 IDs in Market1501 [11]), which can lead to rapid overfitting, errors in training data that impede model improvement, and the nearly indistinguishable appearances of certain players on the same team.
**Player Tracker** DeepOC-SORT++ achieves slightly better HOTA results than StrongSORT++ (66.0% vs. 65.2%), as shown in Table 3. The appearance merging during post-processing further improves the HOTA by +0.38%. Using GT boxes instead of detections improves the HOTA metric by a notable margin of 21.85 points. The results suggest that the tracking accuracy is related to the detector's precision. This is further supported by the ablation study (see Table 4), demonstrating that the fine-tuned detector has the most substantial impact on the HOTA metric, resulting in a gain of 5.9 points.
**Final Tracker** The final player and ball tracker achieved HOTA of 66.27, DetA of 70.32 and AssA of 62.62 on the challenge set.
|
2309.16578 | Overcoming the Barrier of Orbital-Free Density Functional Theory for
Molecular Systems Using Deep Learning | Orbital-free density functional theory (OFDFT) is a quantum chemistry
formulation that has a lower cost scaling than the prevailing Kohn-Sham DFT,
which is increasingly desired for contemporary molecular research. However, its
accuracy is limited by the kinetic energy density functional, which is
notoriously hard to approximate for non-periodic molecular systems. Here we
propose M-OFDFT, an OFDFT approach capable of solving molecular systems using a
deep learning functional model. We build the essential non-locality into the
model, which is made affordable by the concise density representation as
expansion coefficients under an atomic basis. With techniques to address
unconventional learning challenges therein, M-OFDFT achieves a comparable
accuracy with Kohn-Sham DFT on a wide range of molecules untouched by OFDFT
before. More attractively, M-OFDFT extrapolates well to molecules much larger
than those seen in training, which unleashes the appealing scaling of OFDFT for
studying large molecules including proteins, representing an advancement of the
accuracy-efficiency trade-off frontier in quantum chemistry. | He Zhang, Siyuan Liu, Jiacheng You, Chang Liu, Shuxin Zheng, Ziheng Lu, Tong Wang, Nanning Zheng, Bin Shao | 2023-09-28T16:33:36Z | http://arxiv.org/abs/2309.16578v2 | M-OFDFT: Overcoming the Barrier of Orbital-Free Density Functional Theory for Molecular Systems Using Deep Learning
###### Abstract
Orbital-free density functional theory (OFDFT) is a quantum chemistry formulation that has a lower cost scaling than the prevailing Kohn-Sham DFT, which is increasingly desired for contemporary molecular research. However, its accuracy is limited by the kinetic energy density functional, which is notoriously hard to approximate for non-periodic molecular systems. In this work, we propose M-OFDFT, an OFDFT approach capable of solving molecular systems using a deep-learning functional model. We build the essential nonlocality into the model, which is made affordable by the concise density representation as expansion coefficients under an atomic basis. With techniques to address unconventional learning challenges therein, M-OFDFT achieves a comparable accuracy with Kohn-Sham DFT on a wide range of molecules untouched by OFDFT before. More attractively, M-OFDFT extrapolates well to molecules much larger than those in training, which unleashes the appealing scaling for studying large molecules including proteins, representing an advancement of the accuracy-efficiency trade-off frontier in quantum chemistry.
## 1 Introduction
Density functional theory (DFT) is a powerful quantum chemistry method for solving electronic states hence energy and properties of molecular systems. It is among the most popular choices due to its appropriate accuracy-efficiency trade-off, and has fostered many scientific discoveries [1; 2]. The prevailing Kohn-Sham formulation (KSDFT) [3] solves a system by minimizing the electronic energy as a functional of \(N\) orbital functions \(\{\phi_{i}(\mathbf{r})\}_{i=1}^{N}\), where \(N\) is the number of electrons. Although the orbitals allow explicitly calculating the non-interacting part of kinetic energy, optimizing \(N\) functions deviates from the original idea of DFT [4; 5; 6; 7] to optimize one function, the (one-body reduced) electron density \(\rho(\mathbf{r})\), hence immediately increases the cost scaling by an order of \(N\) (Fig. 1(a)). This higher complexity is increasingly undesired for the current research stage where large-scale system simulations for practical applications are in high demand. For this reason, there is a growing interest in methods following the original DFT spirit, now called _orbital-free DFT_ (OFDFT) [8; 9; 10].
The central task in OFDFT is to approximate the non-interacting part of kinetic energy as a density functional (KEDF) \(T_{\mathrm{S}}[\rho]\). Classical approximations are developed based on the uniform electron gas theory [11; 12; 13; 14; 15], and have achieved many successes of OFDFT for periodic material systems [16; 17; 10]. But the accuracy is still limited for molecules [18; 19; 20; 21], mainly due to that the electron density in molecules is far from uniform.
For approximating a complicated functional, recent triumphant progress in deep machine learning creates new opportunities. Yet, existing explorations for OFDFT are still in an early stage. These methods use a regular grid to represent density as the model input, which is not efficient enough to represent the uneven density in molecular systems. Even an irregular grid requires unaffordably many points for a nonlocal calculation, while the nonlocality has been found indispensable to approximate KEDF [22; 14; 8; 23]. As a result, these works only studied molecules of up to a dozen atoms, either due to the unaffordable cost of a nonlocal calculation [24; 25; 26; 27; 28] or limited accuracy of a (semi-)local approximation [29; 30; 31; 32]. Moreover,
few works showed accuracy on molecules much larger than those used in training; yet such an extrapolation study is imperative, since it is only on molecules too large for other methods to affordably generate abundant data that an OFDFT method can demonstrate the dominating value of its scaling advantage.
In this work, we develop an OFDFT method called **M**-OFDFT that can handle **M**olecules using a deep-learning KEDF model. To account for the nonlocal nature of KEDF with affordable cost, we take the expansion coefficients of the density on an atomic basis set as the model input (Fig. 1(b)), which constitute a much more concise representation than a grid-based representation. Each coefficient represents a density component around an atom, and can be treated as a feature associated to that atom. To process such input, we build a deep-learning model based on the Graphormer architecture [33; 34], a variant of Transformer [35] for processing molecular data. The model iteratively processes features on each atom, with the interaction with features on other atoms through the attention mechanism, which covers the non-local effect. We note that learning a functional model faces unconventional challenges, for which we propose method to generate multiple density datapoints with gradient labels per molecular structure, and techniques to handle geometric invariance and vast gradient range. After the KEDF model is learned, M-OFDFT solves a given molecular system by optimizing the density coefficients, where the KEDF model is used to construct the energy objective (Fig. 1(c)).
We demonstrate the practical utility and advantage in the following aspects. **(1)** M-OFDFT achieves _chemical accuracy_ compared to KSDFT on a range of molecular systems at similar scales to those in training. This is hundreds of times more accurate than classical OFDFT. The optimized density shows a clear shell structure, which is regarded as challenging for an orbital-free approach. **(2)** M-OFDFT achieves an attractive _extrapolation capability_: its per-atom error stays constant or even decreases on increasingly larger molecules, all the way to 10 times (224 atoms) beyond those in training. The absolute error is still much smaller than that of classical OFDFT. In contrast, the per-atom error of end-to-end energy prediction counterparts keeps increasing. M-OFDFT also utilizes a few large-scale data more efficiently after being trained on abundant affordable-scale data. **(3)** With this accuracy and extrapolation capability, M-OFDFT unleashes the scaling advantage of OFDFT for large-scale molecular systems. We find its empirical time complexity is \(O(N^{1.46})\), indeed lower by an order of \(N\) than the \(O(N^{2.49})\) of KSDFT. The absolute time is always shorter, achieving a 27.4-fold speedup on the protein B system (2,750 electrons). In all, M-OFDFT pushes the accuracy-efficiency trade-off frontier in quantum chemistry, and provides a powerful tool for solving large-scale molecular science problems.
## 2 Results
### Overview of M-OFDFT
OFDFT solves the electronic state of a molecular structure \(\mathcal{M}\) by minimizing the electronic energy as a functional of the electron density \(\rho\), which is typically decomposed in the same way as KSDFT: \(E[\rho]=T_{\mathrm{S}}[\rho]+E_{\mathrm{H}}[\rho]+E_{\mathrm{XC}}[\rho]+E_{\mathrm{ext}}[\rho]\), where \(T_{\mathrm{S}}[\rho]\) is the kinetic energy density functional (KEDF) covering the non-interacting kinetic energy, \(E_{\mathrm{H}}[\rho]\) covers the classical internal potential energy, the exchange-correlation (XC) functional \(E_{\mathrm{XC}}[\rho]\) accounts for the rest of the kinetic and internal potential energy, and \(E_{\mathrm{ext}}[\rho]\) is the external potential energy (Supplementary Sec. A.1). The terms \(E_{\mathrm{ext}}[\rho]\) and \(E_{\mathrm{H}}[\rho]\) have exact expressions, and \(E_{\mathrm{XC}}[\rho]\) already has accurate approximations. As for the non-interacting kinetic energy, in KSDFT it can be calculated from the orbital solutions, but as a density functional for conducting OFDFT, the expression \(T_{\mathrm{S}}[\rho]\) is unknown and requires an accurate approximation.
The proposed M-OFDFT uses a deep machine learning model to approximate the KEDF (Fig. 1(b)). For an efficient density representation that allows a nonlocal architecture, we adopt an atomic orbital basis set \(\{\omega_{\mu}(\mathbf{r})\}_{\mu=1}^{M}\) to expand the density \(\rho(\mathbf{r})=\sum_{\mu}\mathbf{p}_{\mu}\omega_{\mu}(\mathbf{r})\), and take the coefficients \(\mathbf{p}\) as the model input. Each basis function \(\omega_{\mu=(a,\tau)}(\mathbf{r})\) depicts a density pattern \(\tau\) around atom \(a\), which aligns with the pattern of electron density in a molecule, where electrons distribute around atoms. Moreover, the basis functions are designed to mimic the nuclear cusp condition [36] for sculpting the sharp density change near a nucleus. They also naturally form a shell structure, _i.e._, concentrate on different distances from the center atom. These features further fit the details of density in molecules, facilitating a highly efficient representation. As a result, the number of basis functions commonly required is thousands of times smaller than the number of grid points.
Under this representation, the KEDF model follows the form \(T_{\mathrm{S},\theta}(\mathbf{p},\mathcal{M})\), where \(\theta\) is the learnable parameters, and the molecular structure \(\mathcal{M}:=\{\mathbf{X},\mathbf{Z}\}\) is required for specifying the locations and types of basis functions, where \(\mathbf{X}:=\{\mathbf{x}^{(a)}\}_{a=1}^{A}\) and \(\mathbf{Z}:=\{Z^{(a)}\}_{a=1}^{A}\) are the coordinates and atomic numbers (types) of the atoms (conformation and constitution). As each coefficient \(\mathbf{p}_{\mu=(a,\tau)}\) can be associated to one atom \(a\), the input \((\mathcal{M},\mathbf{p})\) is a set of pinpointed atoms each with a type and density coefficient features (Fig. 1(b)). To process such input, we build a graph neural network based on the Graphormer
architecture [33; 34], which improves graph-theoretical expressiveness over Transformer [35] by incorporating pairwise features (_e.g._, distance features) into the attention mechanism, the module responsible for nonlocal interactions between density features on two different atoms (Supplementary Sec. B.1). In contrast to commonly used multi-layer perceptrons (MLPs), the Transformer-based model captures "relative relations" rather than "absolute values" of the input features, which generalizes better for varying-length inputs. This nonlocal formulation has a low cost prefactor by virtue of the concise density representation, and is indeed crucial for KEDF according to our results in Supplementary Sec. D.4.2.
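As a concrete illustration of how attention can realize such nonlocal interactions among atom-wise density features, the following PyTorch sketch shows a single attention head with an additive bias computed from interatomic distances, in the spirit of Graphormer. The layer sizes, the Gaussian radial features, and the module name are illustrative assumptions rather than the actual M-OFDFT architecture.

```python
import torch
import torch.nn as nn

class DistanceBiasedAttention(nn.Module):
    """Single-head attention over per-atom feature vectors with an additive bias
    computed from interatomic distances (illustrative sketch)."""
    def __init__(self, d_model=128, n_rbf=32):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.centers = nn.Parameter(torch.linspace(0.0, 10.0, n_rbf), requires_grad=False)
        self.bias = nn.Linear(n_rbf, 1)  # maps radial distance features to a scalar bias

    def forward(self, h, pos):
        # h:   (A, d_model) per-atom features (atom type and density coefficients embedded)
        # pos: (A, 3) atomic coordinates
        dist = torch.cdist(pos, pos)                                  # (A, A) pairwise distances
        rbf = torch.exp(-(dist.unsqueeze(-1) - self.centers) ** 2)    # (A, A, n_rbf)
        attn_bias = self.bias(rbf).squeeze(-1)                        # (A, A)
        scores = self.q(h) @ self.k(h).T / h.shape[-1] ** 0.5 + attn_bias
        weights = torch.softmax(scores, dim=-1)                       # nonlocal mixing weights
        return weights @ self.v(h)                                    # updated per-atom features
```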
Perhaps unexpectedly, learning a functional model is more challenging than conventional machine learning. Primarily, the model is used as an optimization objective. This requires a higher quality than for end-to-end prediction, as errors would accumulate during optimization, deflecting or even diverging the process. The model needs to capture the energy landscape on the coefficient space, for which only one datapoint per molecular structure is far from sufficient. We hence design methods to produce _multiple_ coefficient data points, each also with a _gradient_ (w.r.t the coefficients) label, for each molecular structure (Methods 4.1). Moreover, the input coefficients are tensors equivariant to the rotation of the coordinate system, but the output energy is invariant. To guarantee this geometric invariance in the model, we employ atom-wise _local frames_. They also stabilize the coefficients for the same type of bonds (Methods 4.2). Finally, the model needs to express a physical mechanism by which the output energy increases sharply when the input density deviates from the ground state. To allow the model to express such large gradients, we introduce a series of _enhancement modules_ that balance the sensitivity over coefficient dimensions, rescale the gradient in each dimension, and offset the gradient with a reference (Methods 4.3).
After the \(T_{\mathrm{S},\theta}(\mathbf{p},\mathcal{M})\) model is learned, M-OFDFT solves the electronic energy and density of a given molecular structure \(\mathcal{M}\) through the density optimization procedure (Fig. 1(c)):
\[\min_{\mathbf{p}:\,\mathbf{p}^{\top}\mathbf{w}=N}E_{\theta}(\mathbf{p},\mathcal{M}):=T_{\mathrm{S},\theta}(\mathbf{p},\mathcal{M})+E_{\mathrm{H}}(\mathbf{p},\mathcal{M})+E_{\mathrm{XC}}(\mathbf{p},\mathcal{M})+E_{\mathrm{ext}}(\mathbf{p},\mathcal{M}),\]
where \(E_{\mathrm{H}}\), \(E_{\mathrm{ext}}\) and \(E_{\mathrm{XC}}\) can be computed from \((\mathbf{p},\mathcal{M})\) in the conventional way (Supplementary Sec. A.3.2). The constraint on \(\mathbf{p}\) enforces a normalized density, where \(\mathbf{w}_{\mu}:=\int\omega_{\mu}(\mathbf{r})\,\mathrm{d}\mathbf{r}\) is the basis normalization vector.
Figure 1: **Overview of M-OFDFT.****(a)** Kohn-Sham DFT solves the properties of a molecular structure \(\mathcal{M}\) by optimizing \(N\) orbital functions \(\{\phi_{i}(\mathbf{r})\}_{i=1}^{N}\), while orbital-free DFT only needs to optimize one density function \(\rho(\mathbf{r})\) if the kinetic energy density functional (KEDF) \(T_{\mathrm{S}}[\rho]\) is available, which reduces the complexity by an order of \(N\). **(b)** The proposed M-OFDFT uses a deep-learning model to approximate KEDF, which is learned from data. The model incorporates nonlocal interaction of density over the space, which is made affordable by inputting the concise density representation of expansion coefficients \(\mathbf{p}\) on an atomic basis \(\{\omega_{a,\tau}(\mathbf{r})\}_{a,\tau}\). Each basis function concentrates around an atom, and they altogether span a similar pattern as the density, making the representation concise. **(c)** M-OFDFT solves a molecular structure \(\mathcal{M}\) by optimizing the density coefficients \(\mathbf{p}\), for which the learned KEDF model \(T_{\mathrm{S},\theta}(\mathbf{p},\mathcal{M})\) constitutes the energy objective.
The optimization is solved by gradient descent:
\[\mathbf{p}^{(k+1)}:=\mathbf{p}^{(k)}-\varepsilon\bigg{(}\mathbf{I}-\frac{\mathbf{ w}\mathbf{w}^{\top}}{\mathbf{w}^{\top}\mathbf{w}}\bigg{)}\nabla_{\mathbf{p}}E_{ \theta}(\mathbf{p}^{(k)},\mathcal{M}), \tag{1}\]
where \(\varepsilon\) is a step size, and the gradient is projected onto the admissible plane with respect to the linear constraint. Notably, by directly operating on the density, the per-iteration complexity of M-OFDFT is \(O(N^{2})\) (Supplementary Sec. A.3.2), which is an order of \(N\) less than the \(O(N^{3})\) (with density fitting; Supplementary Sec. A.3.1) of KSDFT.
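A minimal NumPy sketch of this density-optimization loop is given below, assuming a callable that returns \(\nabla_{\mathbf{p}}E_{\theta}\) (e.g., obtained by automatic differentiation through the learned KEDF model plus the analytic terms). The step size, iteration count, and the toy quadratic energy in the usage example are illustrative assumptions.

```python
import numpy as np

def optimize_density(grad_E, p0, w, step=1e-3, n_iter=500):
    """Projected gradient descent for the density coefficients (Eq. (1)).
    grad_E(p) returns dE/dp; w is the basis normalization vector; p0 should
    satisfy p0 @ w = N (number of electrons)."""
    p = p0.copy()
    proj = np.eye(len(w)) - np.outer(w, w) / (w @ w)   # projector onto {dp : dp @ w = 0}
    for _ in range(n_iter):
        p = p - step * proj @ grad_E(p)
    return p

# usage sketch with a quadratic toy energy E(p) = 0.5 * ||p - target||^2
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.abs(rng.normal(size=20))
    target = rng.normal(size=20)
    p0 = rng.normal(size=20)
    p0 = p0 * (10.0 / (p0 @ w))                        # enforce the electron-number constraint
    p_opt = optimize_density(lambda p: p - target, p0, w)
    print("constraint preserved:", np.isclose(p_opt @ w, 10.0))
```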
### M-OFDFT Achieves Chemical Accuracy on Molecular Systems
We first evaluate the performance of M-OFDFT on molecules in similar scales but unseen in training. We generate datasets based on two settings: ethanol structures from the MD17 dataset [37; 38] for studying conformational space generalization, and molecular structures from the QM9 dataset [39; 40] for studying chemical space generalization. Each dataset is split into three parts for the training and validation of the KEDF model, and the test of M-OFDFT. For ease of training, we use the APBE functional [41] as a base KEDF and let the deep-learning model learn the residual (Supplementary Sec. B.4.1).
We evaluate M-OFDFT in terms of the mean absolute error (MAE) from KSDFT results in energy, as well as in the Hellmann-Feynman (HF) force (Supplementary Sec. C.5). The results are \(0.18\,\mathrm{kcal/mol}\) and \(1.18\,\mathrm{kcal/mol}/\mathrm{\SIUnitSymbolAngstrom}\) on ethanols, and \(0.93\,\mathrm{kcal/mol}\) and \(2.91\,\mathrm{kcal/mol}/\mathrm{\SIUnitSymbolAngstrom}\) on QM9 (Supplementary Sec. D.1.1 shows more results). We see M-OFDFT achieves chemical accuracy (\(1\,\mathrm{kcal/mol}\) energy MAE) in both cases.
To show the significance of this result, we compare M-OFDFT with classical OFDFT using well-established KEDFs, including the Thomas-Fermi (TF) KEDF [4; 5] which is exact in the uniform electron gas limit, its corrections \(\mathrm{TF}\)+\(\frac{1}{9}\)vW [12] and TF+vW [42] with the von Weizsacker (vW) KEDF [43], and the base KEDF APBE (Supplementary Sec. C.3). We note that different KEDFs may have different absolute energy biases, so for the energy error we compare the MAE in relative energy. On ethanol structures, the relative energy is taken w.r.t the energy on the equilibrium conformation. On QM9, as each molecule only has one conformation, we evaluate the relative energy between every pair from the 6,095 isomers of \(\mathrm{C_{7}H_{10}O_{2}}\) in the QM9 dataset. These isomers can be seen as different conformations of the same set of atoms [44]. As shown in Fig. 2(a), M-OFDFT still achieves chemical accuracy on relative energy, and is two orders more accurate than classical OFDFT.
As a qualitative investigation of M-OFDFT, we visualize the density on a test ethanol structure optimized by these methods in Fig. 2(b) (Supplementary Sec. D.1.2 shows more). Radial density by spherical integral around the oxygen atom is plotted. We find that the M-OFDFT curves coincide with the KSDFT curve precisely. Particularly, the two major peaks around \(0\,\mathrm{\SIUnitSymbolAngstrom}\) and \(1.4\,\mathrm{\SIUnitSymbolAngstrom}\) correspond to the density of core electrons of the oxygen atom and the bonded carbon atom, while the minor peak in between reflects the density of electrons in the covalent bonds with the hydrogen atom and the carbon atom. M-OFDFT successfully recovers this shell structure, which is deemed difficult for OFDFT. In comparison, the classical OFDFT using the APBE KEDF does not align well with the true density around the covalent bonds. These results suggest that M-OFDFT is a working OFDFT for molecular systems.
### M-OFDFT Extrapolates Well to Larger-Scale Molecules
To wield the advantage of the lower cost scaling of M-OFDFT for a more meaningful impact, we evaluate its accuracy on molecular systems at a scale beyond what is affordable for generating abundant training data. For running on large molecules, we train the deep-learning model targeting the sum of the kinetic and XC energy, to avoid the demanding calculation on the grid (Supplementary Sec. B.4.2). This modification does not lead to an obvious accuracy loss (Supplementary Sec. D.1.1).
To evaluate the significance of the extrapolation performance, we compare M-OFDFT with a natural deep-learning variant that directly predicts the ground-state energy from the molecular structure \(\mathcal{M}\) in an end-to-end manner, which we call M-PES (following "potential energy surface"). We also consider a variant named M-PES-Den that additionally takes the MINAO-initialized density as input, for investigating the effect of density features on extrapolation. Both variants use the same nonlocal model architecture and training settings as M-OFDFT for a fair comparison (Supplementary Sec. C.4).
**QMugs** We first study the extrapolation on the QMugs dataset [45], containing much larger molecules than those in QM9, which have no more than 9 heavy atoms. We train the models on QM9 together with QMugs molecules with no more than 15 heavy atoms, and test the methods on larger QMugs molecules with up to 101 heavy atoms, which are grouped according to the number of heavy atoms into bins of width
5, and are randomly subsampled to ensure the same number (50) of molecular structures in each bin to eliminate statistical effects.
The result is shown in Fig. 3(a). We see that the per-atom MAE of M-OFDFT is always orders of magnitude smaller than that of M-PES and M-PES-Den in absolute value, even though M-PES and M-PES-Den achieve a lower validation error (Supplementary Table 6). More attractively, the error of M-OFDFT stays constant and even decreases (note the negative exponent) when the molecule scale increases, while the errors of M-PES and M-PES-Den keep increasing, even though they use the same nonlocal architecture capable of capturing long-range effects, and M-PES-Den also has a density input. We attribute the qualitatively better extrapolation to appropriately formulating the machine-learning task. The ground-state energy of a molecular structure is the _result_ of an intricate, many-body interaction among electrons and nuclei, leading to a highly challenging function to extrapolate from one region to another. M-OFDFT converts the task into learning the objective function for the target output. The objective only needs to capture the _mechanism_ by which the particles interact, which has a reduced level of complexity, while transferring a large portion of complexity to the optimization process, which optimization tools can handle effectively without an extrapolation issue. Similar phenomena have also been observed recently in machine learning, where learning an objective shows better extrapolation than learning an end-to-end map [46; 47].
To further substantiate the significance of the extrapolation capability of M-OFDFT, we investigate the magnitude by which the training molecule scale must be increased for M-PES and M-PES-Den to achieve the same level of performance as M-OFDFT on a given workload of large-scale molecules. We take 50 QMugs molecules with 50-60 heavy atoms as the extrapolation benchmark, and train the models on a series of equal-sized datasets that include increasingly larger molecules up to 30 heavy atoms. As shown in Fig. 3(b), M-PES and M-PES-Den require at least twice as large molecules in the training dataset (\(30\,\mathit{vs.}\,15\) heavy atoms) to achieve a commensurate accuracy (\(0.068\,\mathrm{kcal/mol}\)) as M-OFDFT provides.
Figure 2: **Results of M-OFDFT compared with classical OFDFT on molecular systems.****(a)** Relative energy and Hellmann-Feynman (HF) force results in mean absolute error (MAE) from KSDFT, with 95% confidence interval bars. **(b)** Visualization of optimized density. Each curve plots the integrated density on spheres with varying radii centered at the oxygen atom in an ethanol structure.
These extrapolation results suggest that M-OFDFT can be applied to systems much larger than training to exploit the scaling advantage, and is more affordable to develop for solving large-scale molecular systems.
Figure 3: **Extrapolation performance of M-OFDFT compared with other deep-learning methods.** Considered are M-PES and M-PES-Den that use deep-learning models to predict the ground-state energy end-to-end. The shades and bars show 95% confidence intervals. **(a)** Mean absolute error (MAE) in per-atom energy on increasingly larger molecules from the QMugs dataset, using models trained on molecules with no more than 15 heavy atoms from QM9 and QMugs datasets. **(b)** Energy error on 50 QMugs molecules with 56-60 heavy atoms, using models trained on a series of datasets containing increasingly larger QMugs molecules up to 30 heavy atoms. The horizontal dotted black line marks the performance of M-OFDFT trained on the first dataset. **(c)** Relative energy error on 50 Chignolin structures, using models trained on all peptides (lengths 2-5). Also shown is the result of the classical OFDFT using APBE. **(d)** Energy error on 1,000 Chignolin structures, using models trained on a series of datasets including increasingly longer peptides. **(e)** Energy error on 50 Chignolin structures, using models trained on all peptides without (’Pretrain’) and with ’Finetune’ on 500 Chignolin structures. Also marked are error reduction ratios by the finetuned models over models trained ’FromScratch’ on the 500 Chignolin structures only.
**Chignolin** An increasingly important portion of the demand for large-scale quantum chemistry calculations comes from biomolecular systems, particularly proteins, which were not touched by OFDFT previously. We assess the capability of M-OFDFT for protein systems on the Chignolin protein (10 residues, 168 atoms after neutralization). We consider the practical setup where it is unaffordable to generate abundant data for the large target system, hence extrapolation is required. We generate training data on smaller-scale systems of short peptide structures containing 2 to 5 residues, cropped from 1,000 Chignolin structures selected from [48]. To account for non-covalent effects, the training data also include systems containing two peptides of lengths 2 and 2, and 2 and 3, where each peptide pair is cropped from the same Chignolin structure. See more details in Supplementary Sec. C.1.4. For this task, we let the model target the total energy for a learning stability consideration (Supplementary Sec. B.4.3).
We first train the model on all available peptides, and compare the relative energy error on Chignolin with other methods in Fig. 3(c). Notably, M-OFDFT achieves a significantly lower per-atom error than the classical OFDFT using the APBE KEDF (\(0.098\,\mathrm{kcal/mol}\)_vs._\(0.684\,\mathrm{kcal/mol}\)), providing an effective OFDFT method for biomolecular systems. M-OFDFT also outperforms the deep-learning variants M-PES and M-PES-Den, indicating a better extrapolation capability. To investigate extrapolation in more detail, we train the deep-learning models on peptides of increasingly larger scale and plot the error on Chignolin in Fig. 3(d) (similar to the setting of Fig. 3(b)). Remarkably, M-OFDFT consistently outperforms the end-to-end energy prediction methods M-PES and M-PES-Den across all lengths of training peptides, and halves the required length for the same level of accuracy. We note the spikes of M-PES and M-PES-Den at peptide length 3 despite extensive hyperparameter tuning, possibly because their greater extrapolation difficulty magnifies the gap between in-scale validation and larger-scale performance in this case.
After being trained on data in accessible scale, which is called "pretraining" in the following context, a deep-learning model for a larger-scale workload can be further improved if a few larger-scale data are available for finetuning. In this situation, a method capable of good extrapolation could be roughly aligned with the larger-scale task in advance using accessible data, more efficiently leveraging the limited larger-scale data, and outperforming the model trained from scratch on these limited data only. To investigate the benefit of M-OFDFT in this scenario, we build a finetuning dataset on 500 Chignolin structures. Results in Fig. 3(e) show that M-OFDFT achieves the most gain from pretraining, reducing the energy error by 35.4% over training from scratch, showing the appeal of extracting a more generalizable rule from accessible-scale data. With finetuning, M-OFDFT still gives the best absolute accuracy. These results suggest that M-OFDFT could effectively handle as large a molecular system as a protein, even without abundant training data on the same large scale.
### M-OFDFT Has a Lower Empirical Time Complexity than KSDFT
After validating the accuracy and extrapolation capability, we now demonstrate the scaling advantage of M-OFDFT empirically. The time cost for running both methods on increasingly large molecules from the QMugs dataset [45] is plotted in Fig. 4. We see the absolute running time of M-OFDFT is always shorter than that of KSDFT, achieving up to 6.7-fold speedup. The empirical complexity of M-OFDFT is \(O(N^{1.46})\), which is indeed at least order-\(N\) less than the empirical complexity \(O(N^{2.49})\) of KSDFT. Supplementary Sec. C.2 details the running setup.
Figure 4: **Empirical time cost of M-OFDFT compared with KSDFT on molecules in various scales.** Each plotted value is the average of running times on molecules whose number of electrons falls in the corresponding bin of width 20.
To further wield the advantage, we run M-OFDFT on two molecular systems as large as proteins: **(1)** the peripheral subunit-binding domain BBL-H142W (PDB ID: 2WXC) [49] containing 2,676 electrons (709 atoms), and **(2)** the K5I/K39V double mutant of the Albumin binding domain of protein B (PDB ID: 1PRB) [50] containing 2,750 electrons (738 atoms). Such a scale exceeds the typical workload of KSDFT [51]. M-OFDFT costs 0.41 and 0.45 hours on the two systems, while using KSDFT costs 10.5 and 12.3 hours, hence a 25.6-fold and 27.4-fold speedup is achieved. Supplementary Sec. D.3 provides more details.
## 3 Conclusion and Discussion
This work has developed M-OFDFT, an orbital-free density functional theory approach that works successfully on molecules. The central task of approximating the kinetic energy density functional (KEDF) is regarded as challenging, especially for molecular systems. We have shown that such an approximation can be achieved much more accurately by modern deep machine learning models with proper architecture design and training techniques. M-OFDFT achieves working accuracy on molecules and shows a desirable extrapolation capability, unleashing the attractive scalability of OFDFT for large molecular systems.
This work introduces a few technical improvements for learning a functional model. Instead of a grid-based representation, we used the density coefficients on an atomic basis as the model input, whose much lower dimensionality allows our construction of a nonlocal architecture to enhance accuracy and extrapolation. Some works [52; 53] on learning the XC functional also adopt the coefficient input, but without the molecular structure input, and hence cannot properly capture inter-atomic density feature interactions. Regarding the additional challenge of learning an objective, we generated _multiple_ data points, each also with a _gradient_ label, for each molecular structure. Although the possibility has been noted by previous works [24; 25], none has fully leveraged such abundant data for training (some only incorporated gradients [29; 30; 32; 28]). There are other ways to regularize the optimization behavior of a functional model [54; 55; 56; 53], but our trials in Supplementary Sec. D.4.4 show that they are not as effective. To express intrinsically large gradients, we introduce enhancement modules in addition to a conventional neural network. For stable density optimization using a learned model, prior works [24; 25; 27; 29] used projection onto the training-data manifold in each step, while M-OFDFT only needs the initialization to be on the manifold (Methods 4.4).
While statistical guarantees are established for in-distribution generalization [57], reliable extrapolation remains an open challenge and has long hindered the application of machine learning in the science domain [58; 59]. This work has demonstrated improved extrapolation by choosing an appropriate formulation of quantum chemistry: learning a density functional extrapolates qualitatively better than direct energy prediction. Incorporating exact properties of the KEDF into the model also benefits extrapolation. We have geometric invariance built into the model using local frames. Nevertheless, it is not always straightforward to gain benefits from these properties, since some would introduce more training challenges or unintended model capacity restrictions. For example, we tried the von Weizsacker KEDF as the base KEDF, which introduces positivity to the residual model [60, Thm. 1.1], but the resulting gradient labels are too large to learn effectively. The KEDF also has a scaling property, but it cannot be translated into an exact equation under an atomic basis (Supplementary Sec. A.5).
For better extrapolation, another possibility from the machine learning perspective is using more data and a larger model size with a proper architecture. Recent progress in large language models [61; 62] has shown the emergence of the capacity to solve seemingly all language tasks given large enough data and models. A similar trend for the Graphormer architecture is hinted at by a recent study [63] on equilibrium distributions, indicating an opportunity to further improve the universality of the functional model.
In summary, M-OFDFT represents an advancement of the frontier of accuracy-efficiency trade-off in quantum chemistry, creating a new powerful tool for exploring large and complex molecules with a higher level of detail and scale.
## 4 Methods
In response to the challenges beyond conventional (deep) machine learning, we describe methodological details of KEDF model training (Methods 4.1), additional designs for geometric invariance (Methods 4.2) and large gradient capacity (Methods 4.3), and density optimization strategies (Methods 4.4) of M-OFDFT.
### Training the KEDF Model
Although learning the KEDF model \(T_{\mathrm{S},\theta}(\mathbf{p},\mathcal{M})\) can be converted to a supervised machine learning task, it is more challenging than the conventional form. The essential difference is rooted in the way that the model is used: instead of as an end-to-end mapping to predict the kinetic energy of \((\mathbf{p},\mathcal{M})\) queries, the model is used as the objective to optimize the density coefficients \(\mathbf{p}\) for a given molecular structure \(\mathcal{M}\) (Fig. 1(c)). To eliminate instability and achieve accurate optimization result, the model is required to capture how to vary with \(\mathbf{p}\) for a fixed \(\mathcal{M}\), _i.e._, the optimization landscape on the coefficient space. Conventional data format \(\{\mathcal{M}^{(d)},\mathbf{p}^{(d)},T_{\mathrm{S}}^{(d)}\}_{d}\) does not effectively convey such information, since only one labeled \(\mathbf{p}\) data point is seen for each \(\mathcal{M}\). Hence the first requirement on training data is _multiple_ coefficient data points per structure, following the format \(\{\mathcal{M}^{(d)},\{\mathbf{p}^{(d,k)},T_{\mathrm{S}}^{(d,k)}\}_{k}\}_{d}\). On such data, the model is trained by minimizing:
\[\sum_{d}\sum_{k}\Bigl{|}T_{\mathrm{S},\theta}(\mathbf{p}^{(d,k)},\mathcal{M}^{ (d)})-T_{\mathrm{S}}^{(d,k)}\Bigr{|}. \tag{2}\]
After some trials, we found this is still not sufficient. The trained model, although it accurately predicts the kinetic energy value, still decreases the electronic energy in density optimization (Eq. (1)) even when starting from the ground-state density. This indicates the gradient \(\nabla_{\mathbf{p}}T_{\mathrm{S},\theta}(\mathbf{p},\mathcal{M})\) w.r.t the coefficients is still not accurately recovered. We hence also desire a _gradient_ label for each data point, which constitutes data in the format \(\{\mathcal{M}^{(d)},\{\mathbf{p}^{(d,k)},T_{\mathrm{S}}^{(d,k)},\nabla_{\mathbf{p}}T_{\mathrm{S}}^{(d,k)}\}_{k}\}_{d}\). As only the projected gradient matters for density optimization following Eq. (1), the gradient data is used for training the model by minimizing:
\[\sum_{d}\sum_{k}\bigg\|\bigg(\mathbf{I}-\frac{\mathbf{w}^{(d)}\mathbf{w}^{(d)\top}}{\mathbf{w}^{(d)\top}\mathbf{w}^{(d)}}\bigg)\Big(\nabla_{\mathbf{p}}T_{\mathrm{S},\theta}(\mathbf{p}^{(d,k)},\mathcal{M}^{(d)})-\nabla_{\mathbf{p}}T_{\mathrm{S}}^{(d,k)}\Big)\bigg\|. \tag{3}\]
The gradient label provides additional information on the local landscape near each coefficient data point. As the model is used in density optimization only through its gradient, the gradient data directly stabilizes and regularizes density optimization, and enforces the stationary-point condition for correct convergence. Supplementary Sec. D.4.1 verifies the improvement empirically through an ablation study.
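To make the training objective concrete, a minimal PyTorch-style sketch combining the energy term of Eq. (2) and the projected-gradient term of Eq. (3) is shown below. The batch layout (all datapoints of one molecular structure sharing the same basis and normalization vector), the field names, and the loss weighting are illustrative assumptions.

```python
import torch

def kedf_training_loss(model, batch, grad_weight=1.0):
    """Energy plus projected-gradient loss for one molecular structure.
    model(p, M) returns the predicted kinetic energy for each coefficient datapoint."""
    p = batch["coeffs"].requires_grad_(True)     # (K, M_dim) density coefficients p^(d,k)
    M = batch["structure"]                       # atom types and coordinates of M^(d)
    w = batch["basis_norms"]                     # (M_dim,) basis normalization vector w^(d)

    t_pred = model(p, M)                         # (K,) predicted kinetic energies
    energy_loss = (t_pred - batch["kinetic_energy"]).abs().mean()    # Eq. (2)

    # gradient of the predicted energy w.r.t. the coefficients
    g_pred, = torch.autograd.grad(t_pred.sum(), p, create_graph=True)
    proj = torch.eye(len(w)) - torch.outer(w, w) / (w @ w)
    grad_loss = (proj @ (g_pred - batch["kinetic_grad"]).T).norm(dim=0).mean()  # Eq. (3)

    return energy_loss + grad_weight * grad_loss
```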
To generate such multiple-coefficient and gradient-labeled data, we note that it is tractable by running the conventional KSDFT on each molecular structure \(\mathcal{M}^{(d)}\), which conducts a self-consistent field (SCF) iteration. The rationale is that the task in each SCF step \(k\) is to solve a non-interacting fermion system in an effective one-body potential constructed from previous steps. The ground-state wavefunction solution is a Slater determinant specified by the \(N\) orbital solutions in that step, from which the non-interacting kinetic energy \(T_{\mathrm{S}}^{(d,k)}\) can be directly calculated. The corresponding density coefficients \(\mathbf{p}^{(d,k)}\) can be calculated from these orbitals by density fitting [64]. For the gradient label, since \(\mathbf{p}^{(d,k)}\) represents the ground-state density of the non-interacting system, it minimizes the energy of the non-interacting system as a function of the density coefficients, \(T_{\mathrm{S}}(\mathbf{p},\mathcal{M}^{(d)})+\mathbf{p}^{\top}\mathbf{v}_{\mathrm{eff}}^{(d,k)}\), where \(\mathbf{v}_{\mathrm{eff}}^{(d,k)}\) is the effective potential in SCF step \(k\) in vector form under the atomic basis. This indicates \(\nabla_{\mathbf{p}}T_{\mathrm{S}}(\mathbf{p}^{(d,k)},\mathcal{M}^{(d)})=-\mathbf{v}_{\mathrm{eff}}^{(d,k)}\) up to the normalization projection. Supplementary Sec. A.2 elaborates more on the reasoning, and Supplementary Sec. A.4 provides calculation details, including an efficient implementation to generate the gradient label.
In our implementation of M-OFDFT, the atomic basis for representing the density is taken as the even-tempered basis set [65] with tempering ratio \(\beta=2.5\). For generating data, restricted-spin KSDFT is conducted at the PBE/6-31G(2df,p) level, which is sufficient for the considered systems, which are uncharged, in near-equilibrium conformation, and only involve light atoms (up to fluorine).
### Geometric Invariance
Another challenge beyond conventional machine learning is that the target physical functional exhibits symmetry w.r.t transformations on the input \((\mathbf{p},\mathcal{M}=\{\mathbf{X},\mathbf{Z}\})\) arising from the translation and rotation of the molecule. This is formally referred to as \(\mathrm{SE}(3)\)-invariance, following "3-dimensional special Euclidean group" that comprises these transformations. This is because the non-interacting kinetic energy of electrons does not change with the translation and rotation of the molecule, but the input atomic coordinates \(\mathbf{X}\) do, and the input density coefficients \(\mathbf{p}\) also change with the rotation. The change of \(\mathbf{p}\) is due to that the electron density rotates with the molecule, but the atomic basis functions do not, since their orientations are aligned with the (global) coordinate system, a.k.a frame. Formally, such input features are geometric vectors and tensors that change equivariantly with the translation and/or rotation of the molecule. Subsequently, the model is expected to have this \(\mathrm{SE}(3)\)-invariance built-in. This allows the model to learn the essential dependency of the energy on the density irrespective of geometric variability, reducing the problem space, and facilitating data efficiency and effective training. The invariance
also enhances generalization and extrapolation performance, as an important physical property is always guaranteed [66; 67].
For the invariance w.r.t atomic coordinates \(\mathbf{X}\), the neural network model of Graphormer is naturally \(\mathrm{SE}(3)\)-invariant, since the model only uses relative distances of atom pairs for later processing, which are inherently invariant w.r.t the translation and rotation of the molecule. To ensure the invariance of the model w.r.t the density coefficients \(\mathbf{p}\), we introduce a transformation on \(\mathbf{p}\) under _local frames_ to make invariant coefficient features. Each local frame is associated to an atom, and specifies the orientation of atomic basis functions on that atom. It is determined by the relative positions among nearby atoms, hence the basis function orientations rotate with the molecule and the density, making the density coefficients under the local frame invariant. Specifically, the local frame on the atom located at \(\mathbf{x}_{a}^{(0)}\) is determined following previous works (_e.g._, [68; 69]): the x-axis unit vector \(\hat{\mathbf{x}}:=\mathrm{Normalize}(\mathbf{x}_{a}^{(1)}-\mathbf{x}_{a}^{(0)})\) is pointed to its nearest heavy atom located at \(\mathbf{x}_{a}^{(1)}\), then the z-axis is pointed to \(\hat{\mathbf{z}}:=\mathrm{Normalize}\big{(}\hat{\mathbf{x}}\times(\mathbf{x }_{a}^{(2)}-\mathbf{x}_{a}^{(0)})\big{)}\), where \(\mathbf{x}_{a}^{(2)}\) is the coordinates of the second-nearest heavy atom not collinear with the nearest one, and finally the y-axis is pointed to \(\hat{\mathbf{y}}:=\hat{\mathbf{z}}\times\hat{\mathbf{x}}\) following a right-handed system. See Supplementary Sec. B.2 for more details.
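The construction above can be illustrated with a short NumPy sketch; the function name is hypothetical, and selecting the nearest and second-nearest non-collinear heavy atoms is assumed to be done by the caller.

```python
import numpy as np

def local_frame(x0, x1, x2):
    """Right-handed local frame on the atom at x0: x-axis toward the nearest heavy
    atom at x1, z-axis from the cross product with the second-nearest non-collinear
    heavy atom at x2, y-axis completing the frame. Returns a 3x3 matrix whose rows
    are the frame axes."""
    def normalize(v):
        return v / np.linalg.norm(v)
    x_hat = normalize(x1 - x0)
    z_hat = normalize(np.cross(x_hat, x2 - x0))
    y_hat = np.cross(z_hat, x_hat)
    return np.stack([x_hat, y_hat, z_hat])

# Density coefficients expressed in this frame are invariant to a global rotation,
# because the atomic positions and the frame axes rotate together with the molecule.
```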
Moreover, the local frame approach offers an additional benefit: the coefficient features are stabilized for local molecular substructures of the same type, _e.g._, a bond or a functional group. Such substructures on one molecule may have different orientations relative to the whole molecule, but the electron density on them is naturally close, up to a rotation. Other invariant implementations, _e.g._, using an equivariant global frame [70; 71] or processing tensorial input invariantly [72; 73; 74; 75], bind the basis orientations on different atoms together, so the resulting coefficients on the substructures appear vastly different. In contrast, using local frames, basis orientations on different atoms are decoupled, and since they are determined only by nearby atoms, the basis functions rotate from one substructure to another accordingly. Hence, the resulting density coefficients are aligned together, and their difference only indicates the minor density fluctuation on the same type of substructure, not the different orientations of the copies. This makes it much easier for the model to identify that such local density components follow the same pattern and contribute similarly to the energy. Supplementary Sec. B.2 provides an illustrative explanation. We numerically demonstrate the benefit in Supplementary Figs. 9-10: using local frames instead of an equivariant global frame significantly reduces the variance of both density coefficients and gradients on atoms of each type. In particular, on most basis functions of hydrogen, the coefficient and gradient scales are reduced by over 60%. This significantly stabilizes the training process and immediately reduces the training error, resulting in a considerable improvement of overall performance, as empirically verified in Supplementary Sec. D.4.3.
### Enhancement Modules for Vast Gradient Range
After reducing the geometric variability of the data using local frames, the raw gradient values still span a vast range, which conventional neural networks are not designed for (_e.g._, [76]) and which indeed caused training difficulties in our trials. This is an intrinsic challenge for learning a physical functional, since we require non-ground-state densities in the data, which increase the energy steeply. The large gradient range cannot be trivially reduced by conventional data normalization techniques, since its scale is tied to the scales of the energy and the coefficients: downscaling the gradient would either proportionally downscale the energy values, which requires a higher prediction resolution, or inverse-proportionally upscale the coefficients, which is also numerically unfriendly to process. To handle this challenge, we introduce a series of enhancement modules that allow expressing a vast gradient range, including dimension-wise rescaling, a reparameterization of the density coefficients, and an atomic reference module to offset the large mean of the gradient.
**Dimension-wise Rescaling.** We first make data normalization more flexible, trading off coefficient and gradient scales dimension-wise. Since the number of coefficient dimensions varies across molecules, we propose to center and rescale the coefficients using biases \(\bar{\mathbf{p}}_{Z,\tau}\) and factors \(\lambda_{Z,\tau}\), each specific to one coefficient/gradient dimension \(\tau\) associated with one _atom type_ (_i.e._, chemical element) \(Z\) but not one atom. The bias \(\bar{\mathbf{p}}_{Z,\tau}:=\mathrm{mean}\{\mathbf{p}_{a,\tau}^{(d,k)}\}_{a:Z^{(a)}=Z,\;k,\;d}\) for \((Z,\tau)\) is the average over coefficient values in dimension \(\tau\) on all atoms of type \(Z\) in all molecular structures in the training dataset. After centering the coefficients using the bias (which does not affect gradients), the scaling factor \(\lambda_{Z,\tau}\) is determined by upscaling the centered coefficient and simultaneously inverse-proportionally downscaling the gradient, until the gradient reaches a chosen target scale \(s_{\text{grad}}\) or the coefficient exceeds a chosen maximal scale \(s_{\text{coeff}}\). In equation form:
\[\lambda_{Z,\tau}=\begin{cases}\min\left\{\frac{\max\_\text{grad}_{Z,\tau}}{s_{\text{grad}}},\,\frac{s_{\text{coeff}}}{\text{std\_coeff}_{Z,\tau}}\right\},&\text{if }\max\_\text{grad}_{Z,\tau}>s_{\text{grad}},\\ 1,&\text{otherwise},\end{cases} \tag{4}\]
where the scales of the gradient and coefficient for \((Z,\tau)\) are measured on the dataset by the maximum of the gradient, \(\max\_\text{grad}_{Z,\tau}:=\max\{\nabla_{\mathbf{p}_{a,\tau}}T_{\mathbf{S}}^{(d,k)}\}_{a:Z^{(a)}=Z,\ k,\ d}\), and the standard deviation of the coefficient, \(\text{std\_coeff}_{Z,\tau}:=\text{std}\{\mathbf{p}_{a,\tau}^{(d,k)}\}_{a:Z^{(a)}=Z,\ k,\ d}\). Using the rescaling factors, each centered coefficient is rescaled by \(\mathbf{p}_{a,\tau}^{\prime}:=\lambda_{Z^{(a)},\tau}\mathbf{p}_{a,\tau}\), and each gradient by \(\nabla_{\mathbf{p}_{a,\tau}}T_{\mathbf{S}}^{\prime}:=\nabla_{\mathbf{p}_{a,\tau}}T_{\mathbf{S}}\,/\,\lambda_{Z^{(a)},\tau}\) (\(\lambda_{Z,\tau}>1\) in most cases).
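The following sketch shows how the rescaling factor of Eq. (4) could be computed for one (atom type, dimension) pair; the function and variable names are ours and the target scales are placeholders.

```python
import numpy as np

def rescale_factor(grads, coeffs, s_grad, s_coeff):
    """Rescaling factor lambda_{Z,tau} of Eq. (4) for one (Z, tau) pair.

    grads, coeffs: all gradient / coefficient values for this pair over
    the training set; s_grad, s_coeff: chosen target and maximal scales.
    """
    max_grad = np.max(np.abs(grads))   # gradient scale (max magnitude)
    std_coeff = np.std(coeffs)         # coefficient scale
    if max_grad > s_grad:
        return min(max_grad / s_grad, s_coeff / std_coeff)
    return 1.0

# Applying the factor: centered coefficients are upscaled and gradients are
# downscaled by the same factor, p' = lam * (p - p_bar), grad' = grad / lam,
# so that their product (the energy scale) is preserved.
```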
**Natural Reparameterization.** On quite a few dimensions, both the coefficient and gradient scales are large, making dimension-wise rescaling ineffective. We hence introduce _natural reparameterization_, applied before rescaling, to balance the rescaling difficulties across dimensions and thereby reduce the worst-case difficulty. The unbalanced scales come from the different sensitivities of the density function to different coefficient dimensions: the change of the density function induced by a coefficient change \(\Delta\mathbf{p}\) is measured by the L2-metric in function space, \(\int\lvert\Delta\rho(\mathbf{r})\rvert^{2}\,\mathrm{d}\mathbf{r}\), which turns out to be \(\Delta\mathbf{p}^{\top}\mathbf{W}\Delta\mathbf{p}\), in which different dimensions contribute with different weights since the overlap matrix \(\mathbf{W}_{\mu\nu}:=\int\omega_{\mu}(\mathbf{r})\omega_{\nu}(\mathbf{r})\,\mathrm{d}\mathbf{r}\) therein is generally anisotropic. The reparameterized coefficients \(\tilde{\mathbf{p}}\) are expected to contribute equally across the dimensions: \(\int\lvert\Delta\rho(\mathbf{r})\rvert^{2}\,\mathrm{d}\mathbf{r}=\Delta\tilde{\mathbf{p}}^{\top}\Delta\tilde{\mathbf{p}}\). We hence take:
\[\tilde{\mathbf{p}}:=\mathbf{M}^{\top}\mathbf{p},\]
where \(\mathbf{M}\) is a square matrix satisfying \(\mathbf{M}\mathbf{M}^{\top}=\mathbf{W}\). See Supplementary Sec. B.3.2 for more details. This reparameterization also leads to natural gradient descent [77] in density optimization, which is known to converge faster than vanilla gradient descent.
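As a small sketch of this transformation (the function name is ours; a Cholesky factor is just one convenient choice for \(\mathbf{M}\)):

```python
import numpy as np

def natural_reparameterize(p, W):
    """Natural reparameterization p_tilde = M^T p with M M^T = W.

    W is the symmetric, positive-definite overlap matrix of the basis.
    """
    M = np.linalg.cholesky(W)   # lower-triangular factor, W = M @ M.T
    return M.T @ p, M

# In the new variables the density metric becomes isotropic,
# int |d_rho|^2 dr = dp^T W dp = dp_tilde^T dp_tilde, so plain gradient
# descent on p_tilde corresponds to natural gradient descent on p.
```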
**Atomic Reference Module.** Recall that in dimension-wise rescaling, the large bias of the coefficients can be offset by the mean over a dataset, but this does not reduce the bias scale of the gradient labels. To further improve the coefficient-gradient scale trade-off, we introduce an _atomic reference module_:
\[T_{\mathrm{AtomRef}}(\mathbf{p},\mathcal{M}):=\bar{\mathbf{g}}_{\mathcal{M}}^ {\top}\mathbf{p}+\bar{T}_{\mathcal{M}},\]
which is linear in the coefficients \(\mathbf{p}\) and whose output is added to the neural network output as the kinetic energy value. By this design, the gradient of the atomic reference model \(\nabla_{\mathbf{p}}T_{\mathrm{AtomRef}}(\mathbf{p},\mathcal{M})=\bar{\mathbf{g}}_{\mathcal{M}}\) is a constant, which offsets the target gradient for the neural network to capture, effectively reducing the scale of the gradient labels and facilitating neural network training. The weights \(\bar{\mathbf{g}}_{\mathcal{M}}:=\mathrm{concat}\{\bar{\mathbf{g}}_{Z^{(a)},\tau}\}_{\tau,\,a\in\mathcal{M}}\) and bias \(\bar{T}_{\mathcal{M}}:=\sum_{a\in\mathcal{M}}\bar{T}_{Z^{(a)}}+\bar{T}_{\mathrm{global}}\) of the linear model are constructed by tiling and summing per-type statistics, which are derived over all atoms of each type in a dataset. The per-type gradient statistic is defined by \(\bar{\mathbf{g}}_{Z,\tau}:=\mathrm{mean}\{\nabla_{\mathbf{p}_{a,\tau}}T_{\mathbf{S}}^{(d,k)}\}_{a:Z^{(a)}=Z,\ k,\ d}\), which represents the average response of \(T_{\mathbf{S}}\) to a change of the coefficients on an atom of type \(Z\). The per-type bias statistics \(\{\bar{T}_{Z}\}_{Z}\) and \(\bar{T}_{\mathrm{global}}\) are fit by least squares. See Supplementary Sec. B.3.3 for more details.
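A schematic version of this linear reference term, with our own container names for the per-type statistics, is:

```python
import numpy as np

def atom_ref_energy(p_per_atom, atom_types, g_bar, T_bar, T_global):
    """Atomic reference energy T_AtomRef(p, M) = g_bar_M^T p + T_bar_M.

    p_per_atom: coefficient vector of each atom; g_bar[Z], T_bar[Z]:
    per-type mean-gradient vector and bias; T_global: global bias.
    """
    energy = T_global
    for p_a, Z in zip(p_per_atom, atom_types):
        energy += np.dot(g_bar[Z], p_a) + T_bar[Z]
    return energy

# The term is linear in p, so its gradient is the constant vector g_bar
# tiled over atoms; subtracting this constant offsets the large mean of
# the gradient labels that the neural network would otherwise have to fit.
```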
The final KEDF model is constructed from these enhancement modules and the neural network model in the following way (Supplementary Fig. 6(a)): the density coefficients are first transformed under local frames and processed by natural reparameterization; the processed coefficients, through one branch, are fed to the atomic reference module to calculate the reference part of the output energy, and, through another branch, are processed by dimension-wise rescaling and then input to the neural network model, which produces the remaining part of the output energy. Comparative results in Supplementary Secs. B.3 and D.4.3 highlight the empirical benefits of each module.
### Density Optimization
In the deployment stage, M-OFDFT solves for the ground state of a given molecular structure \(\mathcal{M}\) by minimizing the electronic energy as a function of the density coefficients \(\mathbf{p}\), where the learned KEDF model \(T_{\mathrm{S},\theta}(\mathbf{p},\mathcal{M})\) is used to construct the energy function (Fig. 1(a)). As described in Results 2.1, we use gradient descent to optimize \(\mathbf{p}\) (Eq. (1)), since it is unnatural to formulate the optimization problem as a self-consistent iteration. Gradient descent has also been used in KSDFT, where it bears the merit of being more stable [78].
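Schematically, the density-optimization loop is plain gradient descent on the coefficients; the sketch below omits details such as the step-size schedule and the constraint that keeps the electron number fixed, and the function names are placeholders.

```python
def optimize_density(p_init, energy_fn, grad_fn, lr=1e-2, n_steps=500):
    """Gradient-descent density optimization (cf. Eq. (1)).

    energy_fn(p): total electronic energy, with the learned KEDF model
    supplying the kinetic term; grad_fn(p): its gradient w.r.t. p.
    """
    p = p_init.copy()
    for _ in range(n_steps):
        p = p - lr * grad_fn(p)
    return p, energy_fn(p)
```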
A subtlety in density optimization using a learned functional model is that the model may be confronted with densities far from the training-data manifold (or "out of distribution" in machine-learning terms), which may lead to unstable optimization. Such an issue has been observed in previous machine-learning
OFDFT [24; 25; 27], which mitigates the problem by projecting the density onto the training-data manifold in each optimization step. A similar phenomenon is also observed in M-OFDFT. As shown in Fig. 5, when starting from the MINAO initialization [79], which is common for KSDFT, the density optimization process leads to an obvious gap from the target KSDFT energy. We note that the initial MINAO density inherently lies off the manifold: each density entry in the training data comes from the eigensolution to an effective one-electron Hamiltonian matrix, which exactly solves an effective non-interacting fermion system (Supplementary Sec. A.2), while the MINAO density comes from the superposition of orbitals of each atom in isolation, which is a different mechanism.
We hence propose using two other initialization methods to resolve the mismatch. The first approach is to use an established initialization that solves an eigenvalue problem, for which we choose the Hückel initialization [80; 81]. As shown in Fig. 5, although the Hückel density shows a much larger energy error than the MINAO density at initialization, it ultimately leads the optimization process to converge closely to the target energy.
The second choice is to project the MINAO density onto the training-data manifold, which we call ProjMINAO. In contrast to previous methods, M-OFDFT conducts optimization in the coefficient space, which varies with molecular structure, so the training-data manifold of coefficients is unknown for an unseen molecular structure. We hence use another deep-learning model \(\Delta\mathbf{p}_{\theta}(\mathbf{p},\mathcal{M})\) to predict the correction required to project the input coefficients \(\mathbf{p}\) towards the ground-state coefficients \(\mathbf{p}^{*}\) of the input molecular structure \(\mathcal{M}\), which are always on the manifold. See Supplementary Sec. B.5.2 for details. From Fig. 5, we see that ProjMINAO initialization indeed brings the optimization curve close to the target energy, even better than the Hückel initialization. Note that even though ProjMINAO already closely approximates the ground-state density, density optimization still continues to improve the accuracy. This suggests a potential advantage over end-to-end ground-state density prediction followed by energy prediction from the ground-state density, which may also encounter extrapolation challenges similar to M-PES and M-PES-Den. Remarkably, Fig. 5 indicates that M-OFDFT only requires an on-manifold initialization and does not need projection in each optimization step, suggesting better robustness than previous methods. M-OFDFT in Results 2 is conducted using ProjMINAO, although using Hückel still achieves reasonable accuracy; see Supplementary Sec. D.1.1. Supplementary Sec. B.5.1 provides the corresponding curves in density error and a comparison with classical KEDFs.
Figure 5: **Typical density optimization curves of M-OFDFT for a QM9 molecule with different initialization methods.** MINAO, the common KSDFT initialization, leads the optimization to a large gap from the target energy, since it is not from the eigensolution to an effective Hamiltonian matrix, hence lies off the training-data manifold (out of distribution). Hückel initialization solves an eigenvalue problem, which indeed converges the curve with a much smaller energy error of 5.34 \(\mathrm{kcal/mol}\). ProjMINAO initialization uses a deep-learning model to project the MINAO density onto the training-data manifold, which also converges the curve and achieves the best result of 0.60 \(\mathrm{kcal/mol}\) energy error. The inset figure highlights the role of density optimization even though the ProjMINAO density is close to the ground-state density.
## Acknowledgements
We thank Paola Gori Giorgi, William Chuck Witt, Sebastian Ehlert, Zun Wang, Livae Cheng, Jan Hermann and Ziteng Liu for insightful discussions and constructive feedback; Xingheng He and Yaosen Min for suggestions on protein preprocessing; Yu Shi for suggestions and feedback on model design and optimization; and Jingvun Bai for helping with figure design.
## Author information
### Author contributions
C.Liu led the research under the support from B.Shao and N.Zheng. C.Liu, S.Zheng and B.Shao conceived the project. H.Zhang, S.Zheng and J.You designed and implemented the deep-learning model. S.Liu, C.Liu, H.Zhang and J.You derived and implemented methods for data generation, the enhancement modules, training pipeline, and density optimization. H.Zhang and S.Liu conducted the experiments. Z.Lu and T.Wang contributed to the experiment design and evaluation protocol. C.Liu, H.Zhang, S.Liu and S.Zheng wrote the paper with inputs from all the authors.
### Corresponding authors
Correspondence to Chang Liu, Shuxin Zheng, and Bin Shao.
|
2309.15071 | Sensitivity Analysis of Simulation-Based Inference for Galaxy Clustering | Simulation-based inference (SBI) is a promising approach to leverage high
fidelity cosmological simulations and extract information from the
non-Gaussian, non-linear scales that cannot be modeled analytically. However,
scaling SBI to the next generation of cosmological surveys faces the
computational challenge of requiring a large number of accurate simulations
over a wide range of cosmologies, while simultaneously encompassing large
cosmological volumes at high resolution. This challenge can potentially be
mitigated by balancing the accuracy and computational cost for different
components of the the forward model while ensuring robust inference. To guide
our steps in this, we perform a sensitivity analysis of SBI for galaxy
clustering on various components of the cosmological simulations: gravity
model, halo-finder and the galaxy-halo distribution models (halo-occupation
distribution, HOD). We infer the $\sigma_8$ and $\Omega_m$ using galaxy power
spectrum multipoles and the bispectrum monopole assuming a galaxy number
density expected from the luminous red galaxies observed using the Dark Energy
Spectroscopic Instrument (DESI). We find that SBI is insensitive to changing
gravity model between $N$-body simulations and particle mesh (PM) simulations.
However, changing the halo-finder from friends-of-friends (FoF) to Rockstar can
lead to a biased estimate of $\sigma_8$ based on the bispectrum. For galaxy
models, training SBI on more complex HOD leads to consistent inference for less
complex HOD models, but SBI trained on simpler HOD models fails when applied to
analyze data from a more complex HOD model. Based on our results, we discuss
the outlook on cosmological simulations with a focus on applying SBI approaches
to future galaxy surveys. | Chirag Modi, Shivam Pandey, Matthew Ho, ChangHoon Hahn, Bruno R'egaldo-Saint Blancard, Benjamin Wandelt | 2023-09-26T17:08:24Z | http://arxiv.org/abs/2309.15071v1 | # Sensitivity Analysis of Simulation-Based Inference for Galaxy Clustering
###### Abstract
Simulation-based inference (SBI) is a promising approach to leverage high fidelity cosmological simulations and extract information from the non-Gaussian, non-linear scales that cannot be modeled analytically. However, scaling SBI to the next generation of cosmological surveys faces the computational challenge of requiring a large number of accurate simulations over a wide range of cosmologies, while simultaneously encompassing large cosmological volumes at high resolution. This challenge can potentially be mitigated by balancing the accuracy and computational cost for different components of the forward model while ensuring robust inference. To guide our steps in this, we perform a sensitivity analysis of SBI for galaxy clustering on various components of the cosmological simulations: the gravity model, the halo-finder and the galaxy-halo distribution models (halo-occupation distribution, HOD). We infer \(\sigma_{8}\) and \(\Omega_{m}\) using galaxy power spectrum multipoles and the bispectrum monopole, assuming a galaxy number density expected from the luminous red galaxies observed using the Dark Energy Spectroscopic Instrument (DESI). We find that SBI is insensitive to changing the gravity model between \(N\)-body simulations and particle mesh (PM) simulations. However, changing the halo-finder from friends-of-friends (FoF) to Rockstar can lead to a biased estimate of \(\sigma_{8}\) based on the bispectrum. For galaxy models, training SBI on a more complex HOD leads to consistent inference for less complex HOD models, but SBI trained on simpler HOD models fails when applied to analyze data from a more complex HOD model. Based on our results, we discuss the outlook on cosmological simulations with a focus on applying SBI approaches to future galaxy surveys.
keywords: cosmological parameters from LSS -- Machine learning -- cosmological simulations -- galaxy surveys +
Footnote †: journal: Physics Letters
## 1 Introduction
The three-dimensional distribution of galaxies provides a powerful means to characterize the nature of dark matter and dark energy, to measure the sum of the neutrino masses and to test theories of gravity on cosmological scales. This has been the focus of various existing, ongoing, and planned galaxy redshift surveys, including SDSS-III BOSS (Dawson et al., 2013), Subaru Prime Focus Spectrograph (PFS; Takada et al., 2014; Tamura et al., 2016), Dark Energy Spectroscopic Instrument (DESI; Collaboration et al., 2016, 2016; Abareshi et al., 2022), the ESA _Euclid_ satellite mission (Laureijs et al., 2011), and the NASA Nancy Grace Roman Space Telescope (Roman; Spergel et al., 2015; Wang et al., 2022). However, as galaxies are a complex and biased tracer of the underlying density field, the complicated process of galaxy formation limits the ease of extracting the cosmological information from the galaxy surveys. While the clustering amplitude of the galaxy density field can be measured to percent-level precision, it cannot straightforwardly be related to the clustering amplitude of the matter density field. Traditional methods of cosmological analysis have also largely been based on using only two- or three-point clustering statistics and analytic models based on perturbation theory (PT) (Philcox and Ivanov, 2022; D'Amico et al., 2022; Chen et al., 2022). As a result, these can access only linear and quasi-linear scales and are unable to exploit the full information from galaxy redshift surveys.
Over the last few years, simulation-based inference (SBI), also called likelihood-free inference or implicit-likelihood inference, has emerged as a promising approach to overcome these limitations of traditional analysis (Alsing et al., 2018, 2019; Jeffrey et al., 2021; Hahn et al., 2022). This approach uses high fidelity cosmological simulations (or forward models1) to directly model the cosmological observables in full detail. The latest SBI methods combine these simulations with neural density estimation approaches to infer the cosmological parameters efficiently. Using cosmological forward models allows us to use any higher-order summary statistics of the data such as bispectrum, wavelet scattering coefficients, \(k\)-nearest neighbors or even machine-learnt optimal statistics that can be evaluated in the simulations (_e.g._ Banerjee and Abel, 2021; Eickenberg et al., 2022; Valogiannis and Dvorkin, 2022; Naidoo et al., 2022). It also enables us to push beyond quasi-linear scales while robustly accounting for observation systematics such as imaging, completeness, fiber-collisions, etc., in our modeling (Hahn et al., 2017; Hahn et al., 2023). Meanwhile, since we use neural density estimators, we do not need to assume a Gaussian distribution for the data likelihood but can instead learn the target distributions from the simulations themselves (Hahn et al., 2019). We refer the readers to Cranmer et al. (2020) for a review on SBI. This method has also recently been applied to analyze survey data for weak lensing in Jeffrey et al. (2021) and galaxy clustering data (Hahn et al., 2023).
However, scaling SBI approaches to the next generation of surveys is not straightforward. SBI uses numerical simulations to build a model for analyzing data. Thus the accuracy and robustness of inference with SBI depends to a large extent on- i) the accuracy of the simulators and ii) the number of simulations used to train the SBI procedure. Accounting for both these criteria simultaneously can be challenging. If the underlying simulator does not accurately model the observed data, then the inference is not reliable (Cannon et al., 2022). This is known as model misspecification, and the only way to safeguard against it is by using the most accurate simulations for analysis. However, this makes these simulations increasingly computationally expensive, and hence for a fixed computational budget, there is a trade-off between the accuracy and the number of these simulations. This challenge is further exacerbated with the increasing volumes of cosmological surveys, and probing observables like emission-line galaxies that increasingly reside in lower mass halos, thus requiring higher resolution simulations. Both of these factors make the simulations more expensive for a given accuracy threshold. To put things in context, the largest simulation suite currently available for training SBI for galaxy clustering (Quijote simulations, Villaescusa-Navarro et al., 2020; Hahn et al., 2023) consists only of \(1~{}(\mathrm{Gpc}/h)^{3}\) in volume, which is smaller than the SDSS-III BOSS survey, and has a coarse resolution of \(1~{}\mathrm{Mpc}/h\). Given the current status, scaling SBI approaches to the scale and fidelity required in the future can be computationally prohibitive and requires strategic planning.
**Motivation-** We take first steps towards investigating the simulations requirements for scaling SBI approaches to the next generation of galaxy clustering surveys, and study the sensitivity of SBI to the different components of the forward models used in cosmological simulations. Our goal is to ensure the robustness of inference while balancing the component models to potentially ease the computational requirements. This is motivated by the following observation-the different stages (component models) of simulations have very different computational cost and accuracy. Specifically, for dark-matter only simulations for galaxy clustering, there are three stages in the forward model- i) evolution of dark matter under gravity, ii) finding dark matter halos, and iii) populating these halos with observed galaxies. The gravity evolution is the most computationally expensive part of the simulation, but we are also the most confident in our understanding of the underlying physics. On the other hand, we are the most uncertain about the halo-galaxy connection models, having to infer and marginalize over its parameters during the analysis. This interplay leads us to ask the question- do we need the most accurate models of the gravity evolution if we are uncertain about other components of the model, such as how to populate galaxies in the halos? Does it bias our results if we do not use the most accurate model for all components of the forward models? A sensitivity analysis of SBI to the different components of the cosmological forward models will answer these questions.
Covering all aspects of this sensitivity analysis is beyond the scope of a single work as the number of cases to investigate increases combinatorially with different components of the forward model, summary statistics, and parameters considered. As a result, here we will focus only on the two traditional summary statistics of galaxy clustering- power spectrum multipoles and bispectrum, but push to smaller scales than the current PT-based analyses (Ivanov et al., 2020; Philcox and Ivanov, 2022; D'Amico et al., 2022). We will focus on only two cosmological parameters-\(\Omega_{m}\) and \(\sigma_{8}\), which are well constrained by these statistics. We will consider two component models for each of the aforementioned three stages of these simulations- gravity evolution, halo-finders, and galaxy occupation and study their impact on inference.
We begin in Section 2 by describing the different forward models we will consider for the sensitivity analysis. We describe the simulation data used for each of these models in Section 3 and outline our simulation-based inference methodology in Section 4. Finally we present our results in Section 5 and discuss implications in Section 6.
## 2 Forward models
In this section, we describe the different models that we will consider for each of the three stages of cosmological simulations. For every stage, we implement two different component models- a simple, often computationally cheap model, and a more complex, often computationally expensive model. Our end-to-end simulations will then consist of all possible combinations of these component models.
### Gravity Models
The first step in a cosmological simulation is to evolve dark matter particles under gravity from their initial conditions set at earlier times, to their final distribution at the time of observations. This evolution is generally the most computationally expensive part of the simulations. Here we will consider two different gravity simulations commonly used in cosmology.
**i) \(N\)-body simulations-** These are the most accurate simulations to evolve cold dark matter (CDM) particles under gravity, _e.g._, Garrison et al. (2021); Springel (2005). \(N\)-body simulations accurately estimate gravitational forces for particles on all scales, including the particle-particle interactions on the smallest scales at every time-step, and the evolution is simulated with very small (often adaptive) time-stepping for many hundreds of time-steps.
We will use the Quijote \(N\)-body simulations (Villaescusa-Navarro et al., 2020), which simulate \(1024^{3}\) CDM particles in a \(1~{}\mathrm{Gpc}/h\) box with the TreePM Gadget-III code, initialized at \(z=127\) using 2LPT and gravitationally evolved until \(z=0.5\). Each of these simulations requires approximately 5000 CPU hours.
**ii) Particle-mesh simulations-** Particle-mesh (PM) simulations trade off accuracy for speed compared to the \(N\)-body simulations. These estimate the gravitational forces by interpolating CDM particles onto a uniform force grid. As a result, they lose information on scales smaller than the grid resolution but are able to solve the Poisson equation using highly efficient fast Fourier transforms. Thus, these simulations are accurate only on large scales but can be more than 100x cheaper than the \(N\)-body simulations (e.g. Tassev et al., 2013; Feng et al., 2016). Recent GPU implementations of PM simulations further increase these computational gains (Modi et al., 2021; Li et al., 2022).
For this work, we will use the FastPM particle-mesh scheme (Feng et al., 2016). In each simulation, we evolve \(1024^{3}\) CDM particles on a force grid of \(2048^{3}\) for 10 time-steps, starting from \(z=10\) until
\(z=0\). Each simulation required 200 CPU hours, a factor of 10 less than the Quijote simulations.
### Halo Model
The next step in cosmological simulations is to find high-density regions called dark matter halos, where the dark matter particles have self-collapsed under gravity. These regions serve as sites for galaxy formation. In this work, we will use two halo-finders commonly used in the community (Knebe et al., 2011).
**i) Friends-of-friends (FoF)-** FoF is a cluster-finding algorithm, where the clusters represent halos in this context. Operationally, FoF finds the clusters in the simulation as follows- if two particles, two clusters, or a particle and a cluster are separated by a distance smaller than a pre-defined distance (linking-length), then they are merged to form a bigger cluster (halo). We use the 3-D FoF halo-finder implemented in NBodykit (Hand et al., 2018). By default, this uses a linking-length of \(0.2\,l_{p}\) where \(l_{p}\) is the mean inter-particle distance2.
Footnote 2: In 3-D FoF, all the distances are measured only in the three dimensional position space as opposed to a 6-D phase space.
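For reference, a schematic nbodykit call for the FoF step above might look as follows; the exact argument names can differ between package versions, and the input catalog, particle mass, and minimum-member cut are placeholders rather than the settings used in this work.

```python
# Hypothetical sketch of the FoF halo-finding step with nbodykit.
from nbodykit.lab import FOF

fof = FOF(particle_catalog, linking_length=0.2, nmin=20)  # 0.2 x mean separation
halos = fof.to_halos(particle_mass, cosmology, redshift)  # positions, velocities, masses
```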
**ii) Rockstar-** The Rockstar algorithm is a more sophisticated phase-space algorithm for finding halos. We only give an intuition of the algorithm here and refer the reader to the original paper (Behroozi et al., 2013) for further details. Briefly, the Rockstar halo finder starts by identifying FoF halos in 3-D position space with a large linking length. It then iteratively refines these clusters using both the positions and velocities of individual CDM particles, pruning those which are inconsistent with the expected phase-space distribution. These halos are generally considered to be more realistic than FoF halos. The Rockstar halo-finder also estimates physical properties of the halos, such as spin and concentration, which are not estimated by FoF.
### Galaxy models
In CDM simulations, dark matter halos need to be populated with galaxies. This is usually done with a statistical framework called the halo-occupation distribution (HOD; Berlind & Weinberg, 2002; Zheng et al., 2007). The HOD provides a prescription for determining the number of galaxies, as well as their positions and velocities, within every halo. The flexibility and accuracy of this framework relate to the number of parameters in the HOD prescription, which need to be inferred and marginalized over during analysis. Other approaches to populate galaxies in CDM simulations, such as sub-halo abundance matching (SHAM) and semi-analytic models (Somerville & Dave, 2015), require additional information from the simulations such as the sub-halo distribution and merger trees, but this makes the forward simulations significantly more expensive. Hence, here we will focus on using only the following two HOD models.
**i) Zheng07 model-** The standard HOD model of Zheng et al. (2007) assumes that the galaxy occupation depends only on the halo mass, \(M_{h}\). This model has five free HOD parameters which determine the number of central and satellite galaxies: \((\log M_{\rm min},\sigma_{\log M},\log M_{0},\log M_{1},\alpha)\). Central galaxies are placed at the center of their halos and assigned the same velocity as the halo. Satellite galaxies are assigned positions and velocities sampled from an NFW profile (Navarro et al., 1997).
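For orientation, the mean occupation functions of this model take the standard form below; this is only a reference sketch of the occupation statistics, not the code used to build our catalogs.

```python
import numpy as np
from scipy.special import erf

def zheng07_mean_occupation(log_M, log_Mmin, sigma_logM, log_M0, log_M1, alpha):
    """Mean central and satellite occupations of the Zheng et al. (2007) HOD."""
    n_cen = 0.5 * (1.0 + erf((log_M - log_Mmin) / sigma_logM))
    M, M0, M1 = 10.0**log_M, 10.0**log_M0, 10.0**log_M1
    n_sat = n_cen * (np.clip(M - M0, 0.0, None) / M1) ** alpha
    return n_cen, n_sat

# Centrals are Bernoulli draws with probability n_cen placed at the halo
# center; satellite counts are Poisson draws with mean n_sat, with
# positions and velocities sampled from the halo's NFW profile.
```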
**ii) Zheng07ex model-** Our second model extends the standard HOD model by including additional parameters to model assembly, concentration, and velocity biases, leading to a total of 10 free HOD parameters (Hahn et al., 2023). These are implemented using the decorated HOD prescription of Hearin et al. (2016). The assembly bias parameters (\(A_{c}\), \(A_{s}\)) modify the number of galaxies based on halo concentration. The concentration bias (\(\eta_{\rm conc}\)) modifies the positions of satellite galaxies to allow deviations from the NFW profile of their halos. Lastly, the central and satellite velocity biases (\(\eta_{c}\), \(\eta_{s}\)) re-scale the velocities of central and satellite galaxies with respect to the host halo. This HOD model was used for a recent SBI analysis of a subset of BOSS galaxies in the South Galactic Cap in Hahn et al. (2022).
### End-to-end forward models
We combine the aforementioned component models in all possible combinations to generate simulations with different end-to-end forward models for training the SBI procedure. However, there are two caveats-
1) Given the two gravity, halo-finding, and HOD models each, we can have a maximum of 8 LH with different forward models. However, in practice, we use only 6 of these as the Rockstar halo-finder is not compatible with the PM simulations in its default settings. Due to the missing small-scale forces in PM simulations, the CDM particles are less clustered in phase space and Rockstar with default configuration aggressively prunes these particles resulting in inaccurate halo mass function and clustering. While it may be possible to overcome this by modifying Rockstar, it is out of scope for this work.
2) FoF halo-finder does not estimate halo concentration accurately. Thus in our FoF catalogs, it is instead estimated using analytic mass-concentration formulas from Dutton & Maccio (2014). As a result, in the Zheng07ex model, the assembly bias parameter does not capture bias based on halo assembly but instead only results in a different dependence on halo mass than is included in the standard Zheng07 HOD model. However this caveat should not affect our conclusions.
## 3 Data
In this section, we combine the component models described in the previous section to generate training datasets for simulation-based inference.
### Simulations
Our simulated data consist of galaxy catalogs in redshift space at \(z=0.5\). The average number density of galaxies is \(\bar{n}=4\times 10^{-4}\) (\(h\)/Mpc)\({}^{3}\) with an average satellite fraction of 20%. We expect a similar level of comoving galaxy number density from the luminous red galaxies (LRG) observed using the DESI survey (Zhou et al., 2023), though our estimate of the satellite fraction is approximately 5-10% higher compared to expectations from DESI LRGs (Yuan et al., 2023; Berti et al., 2023). In SBI, we need a training dataset to learn the relationship between the observed data and the underlying cosmology parameters over a wide range. Thus, for each of the 6 composite forward models described above, we generate mock galaxy catalogs on a Latin-hypercube (LH) of cosmologies.
For the \(N\)-body simulations, we use the publicly available Quijote LH subset (Villaescusa-Navarro et al., 2020). It consists of 2000
simulations varying 5 cosmology parameters over the prior range
\[\Omega_{\rm m}\sim\mathcal{U}[0.1,0.5],\quad\sigma_{8}\sim \mathcal{U}[0.6,1.0],\quad\Omega_{\rm b}\sim\mathcal{U}[0.03,0.07],\] \[n_{s}\sim\mathcal{U}[0.8,1.2],\quad h\sim\mathcal{U}[0.5,0.9]\]
For an exact comparison, we generated the PM simulations using the same cosmological parameters and Gaussian initial density fields as the Quijote LH. In both cases, we use 1500 of these simulations for training, 200 for validation and 300 for testing SBI.
Next, we find halos in these simulations. For the \(N\)-body simulations, we use both Rockstar and FoF. For the PM simulations, we only use FoF for the reasons explained in section 2.4.
Finally, for each of these three cases, we populate the halo catalogs with galaxies using the 2 HOD models described above. For each halo catalog, we sample 20 different HOD parameter values, resulting in a total of 40,000 galaxy catalogs per forward model. 7 of these HOD parameters are sampled from the following fixed priors to be consistent with previous SBI analysis for galaxy clustering (Hahn et al., 2023)
\[\alpha\sim\mathcal{U}[0.4,1.0],\quad\sigma_{\log M}\sim\mathcal{ U}[0.3,0.5],\] \[\eta_{\rm conc}\sim\mathcal{U}[0.2,2.0],\quad\eta_{c}\sim \mathcal{U}[0.0,0.7],\quad\eta_{s}\sim\mathcal{U}[0.2,2.0],\] \[A_{c}\sim\mathcal{N}(0,0.2)\;\text{over}\;[-1,1],\quad A_{s} \sim\mathcal{N}(0,0.2)\;\text{over}\;[-1,1].\]
For the 3 mass-based HOD parameters, we define priors that vary with cosmology (\(\theta\)) as follows
\[\log M_{\rm min}\sim\mathcal{U}[\log M_{\rm min}^{\theta}\pm 0.15],\] \[\log M_{0}\sim\mathcal{U}[\log M_{0}^{\theta}\pm 0.2],\] \[\log M_{1}\sim\mathcal{U}[\log M_{1}^{\theta}\pm 0.3]\]
For each cosmology, \(M_{\rm min}^{\theta}\), \(M_{0}^{\theta}\) and \(M_{1}^{\theta}\) are set to ensure that the
Figure 1: _Comparison of summary statistics for different forward models_: We show the ratio of summary statistics for galaxy catalogs generated by varying one stage of the forward model, as indicated by the column titles, while keeping the other two stages fixed. The three rows show the ratios for the power spectrum monopole (top), quadrupole (middle) and bispectrum (showing the equilateral configuration only for clarity, bottom) respectively. The three colors show three different HOD realizations (different parameter values) for the same cosmology. HOD parameters are kept consistent across the columns. The first column shows the ratio for FastPM and \(N\)-body simulations (with FoF halo-finder and 10-parameter Zheng07-ex HOD model), the second column for simulations with FoF and Rockstar (with \(N\)-body gravity and Zheng07-ex HOD model), and the third column varies the HOD model between the 5-parameter Zheng07 and 10-parameter Zheng07-ex model (for \(N\)-body simulation with Rockstar halo finder).
number density of the generated galaxy catalogs is close to the target number density of \(\bar{n}=4\times 10^{-4}\). This increases sample efficiency over using the same priors for all the cosmologies, which would need to be quite broad. We estimate \(M_{\rm min}^{\theta}\), \(M_{0}^{\theta}\) and \(M_{1}^{\theta}\) as follows- given the target number density \(\bar{n}\) and an average satellite fraction of 0.2, we estimate the average number of centrals \(\bar{N}_{\rm cen}\). For every cosmology, we use this to determine the halo mass \(M_{h}\) above which the number of halos is the same as \(\bar{N}_{\rm cen}\) and set \(\log M_{\rm min}^{\theta}=\log M_{0}^{\theta}=\log M_{h}\). With this, we then set \(\log M_{1}^{\theta}\) to match the average number of satellites assuming a fiducial value of \(\alpha=0.7\).
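A minimal sketch of this prior-centering step, under our own naming and with the \(\log M_{1}^{\theta}\) tuning omitted (in the text it is matched to the mean satellite number assuming \(\alpha=0.7\)), could read:

```python
import numpy as np

def mass_anchors(halo_masses, box_volume, n_target=4e-4, f_sat=0.2):
    """Centres of the cosmology-dependent priors on logM_min and logM_0.

    halo_masses: halo masses of one simulation (Msun/h);
    box_volume: simulation volume in (Mpc/h)^3.
    """
    n_cen = n_target * (1.0 - f_sat) * box_volume       # target central count
    sorted_masses = np.sort(halo_masses)[::-1]
    M_h = sorted_masses[int(n_cen) - 1]                  # mass threshold matching the count
    log_Mmin = log_M0 = np.log10(M_h)
    return log_Mmin, log_M0
```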
### Summary statistics
In this work, we restrict ourselves to analyzing only the power spectrum multipoles \(P_{\ell}(k)\) for \((\ell=0,2,4)\) and bispectrum monopole \(B_{0}(k_{1},k_{2},k_{3})\). The power spectrum multipoles are measured with fast Fourier transforms using Nbodykit (Hand et al., 2018) on a \(512^{3}\) mesh. These multipoles are measured in the range \(k\in[0.007,0.5]\)\(h\)/Mpc, in bins of width \(\Delta k=2\pi/1000\,h\) Mpc\({}^{-1}\). This leads to a data vector of 79\(\times\)3 power spectrum coefficients. During training and testing, we also add to the power spectrum monopole a randomly sampled shot-noise contribution beyond the Poisson shot noise \(S_{n}\sim\mathcal{U}[10^{3},10^{4}]\), and marginalize over it during inference. This is done to be consistent with previous \(P_{\ell}(k)\) analyses (Hahn et al., 2023; Beutler et al., 2017; Ivanov et al., 2020; Kobayashi et al., 2021). However, we found that our conclusions remain the same without it.
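For concreteness, a schematic nbodykit measurement of the multipoles might look as follows; the call is illustrative (the catalog name and mesh options are placeholders) rather than the exact analysis script.

```python
# Hypothetical sketch of the multipole measurement with nbodykit.
import numpy as np
from nbodykit.lab import FFTPower

dk = 2 * np.pi / 1000.0                                   # bin width in h/Mpc
mesh = galaxy_catalog.to_mesh(Nmesh=512, compensated=True)
result = FFTPower(mesh, mode='2d', poles=[0, 2, 4],
                  dk=dk, kmin=0.007, los=[0, 0, 1])
P0, P2, P4 = (result.poles['power_%d' % ell].real for ell in (0, 2, 4))
```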
The bispectrum is measured on a \(360^{3}\) mesh using the pySpectrum python package3, which implements the Scoccimarro (2015) redshift-space bispectrum estimator. We measure the bispectrum in triangle configurations defined by \(k_{1},k_{2},k_{3}\) bins of width \(\Delta k=3k_{f}\), where \(k_{f}=2\pi/(1000\,h^{-1}{\rm Mpc})\) is the fundamental mode. We impose the same scale cut \(k_{\rm max}=0.5\)\(h\)/Mpc as for the power spectrum, and this leaves us with 1980 triangle configurations.
Footnote 3: [https://github.com/changhoonhahn/pySpectrum](https://github.com/changhoonhahn/pySpectrum)
We compare the summary statistics of our galaxy catalogs for different forward models in Fig. 1. In each column, we vary one component of the simulation at a time and show the ratio of the three summary statistics- monopole, quadrupole and bispectrum (rows)-for the two different models considered for each component. For consistency, all the lines of the same color have the same HOD parameters (except that the Zheng07 model does not include the 5 additional bias parameters of the extended model). The largest difference is caused by varying the HOD model between the 5- and 10-parameter models. However, even with the same HOD model and parameters, changing the gravity model or halo-finder can lead to 10-20% differences in the quadrupole and bispectrum.
## 4 Simulation-based inference
Next, we outline the details of our simulation-based inference pipeline using the Latin-hypercubes generated in the previous section as the training datasets.
**Methodology-** We have generated a training dataset of \((\mathbf{\theta},\mathbf{x})\) pairs where \(\mathbf{\theta}\) denotes the cosmology and HOD parameters, and \(\mathbf{x}\) denotes the corresponding observations i.e. the power spectrum multipoles and bispectrum. To infer the posterior \(p(\mathbf{\theta}|\mathbf{x})\), we train a conditional neural density estimator \(q_{\mathbf{\phi}}(\mathbf{\theta}|\mathbf{x})\) with parameters \(\mathbf{\phi}\) which are fit by maximizing the log-probability of the model parameters conditioned on the data over this training dataset.
**Implementation-** We use the SNPE-C algorithm implemented in the sbi4 package to train masked auto-regressive flows (MAF, Papamakarios et al. (2017)) as conditional neural density estimators and learn the posterior \(q_{\mathbf{\phi}}(\mathbf{\theta}|\mathbf{x})\sim p(\mathbf{\theta}|\mathbf{x})\). For robustness, we train 400 networks for each data-statistic by varying the hyperparameters corresponding to the width and the number of layers in a single MAF block, the number of MAF blocks, the learning rate, and the batch size. We use the Weights-and-Biases5 package for this hyperparameter exploration. After training, we collect the 10 neural density estimators with the best validation loss and use them as an ensemble, _i.e._ we construct a mixture distribution with uniform weighting to approximate the posterior. For posterior inference over a test observation \(\mathbf{x}^{\prime}\), we query the trained ensemble estimator \(q_{\mathbf{\phi}^{*}}\) to generate samples from the posterior, i.e. \(\mathbf{\theta}\sim q_{\mathbf{\phi}^{*}}(\mathbf{\theta}|\mathbf{x}^{\prime})\).
Footnote 4: [https://github.com/mackelah/sbi](https://github.com/mackelah/sbi)
Footnote 5: [https://wandb.ai/site](https://wandb.ai/site)
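A minimal sketch of one such training run with the sbi package is shown below; the hyperparameter values and tensor names are placeholders, and in practice 400 such runs are launched and the 10 with the best validation loss are combined into a uniformly weighted ensemble.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import posterior_nn

# theta, x: training parameters and summary statistics; prior: a torch
# distribution over (cosmology + HOD) parameters; all are placeholders.
maf = posterior_nn(model='maf', hidden_features=64, num_transforms=5)
inference = SNPE(prior=prior, density_estimator=maf)
inference.append_simulations(torch.as_tensor(theta, dtype=torch.float32),
                             torch.as_tensor(x, dtype=torch.float32))
density_estimator = inference.train(training_batch_size=64, learning_rate=5e-4)
posterior = inference.build_posterior(density_estimator)
samples = posterior.sample((10_000,), x=torch.as_tensor(x_obs, dtype=torch.float32))
```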
**Validation-** To validate that our posteriors are well-specified, we use our trained ensemble to predict the cosmology parameters over the held-out test dataset from the same forward model as was used for training the ensemble. We use these samples to do coverage tests as described in Talts et al. (2020); Hahn et al. (2023), and verify that all the rank histograms are uniformly distributed within the rank scatter. We will show the corresponding coverage plots in the next section. Note that this is a necessary but not a sufficient test to ensure that the posteriors are well calibrated. Furthermore, since we use the same forward model for training and testing the SBI procedure in this validation, it does not test for model misspecification.
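A compact sketch of the rank statistic underlying these coverage tests (function and variable names are ours) is:

```python
import numpy as np

def rank_statistics(posterior, test_theta, test_x, n_samples=500):
    """Rank of each true parameter among posterior samples (Talts et al. 2020).

    A flat rank histogram over the test set indicates calibrated posteriors;
    pronounced U- or bell-shapes signal over- or under-confidence.
    """
    ranks = []
    for theta_true, x_obs in zip(test_theta, test_x):
        samples = posterior.sample((n_samples,), x=x_obs).numpy()
        ranks.append((samples < theta_true).sum(axis=0))  # one rank per parameter
    return np.array(ranks)
```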
## 5 Results
We now perform the sensitivity analysis of SBI by looking at the impact of using different component models in training and testing the SBI procedure.
**Setup-** We have generated mock data from six different forward models. We will use these to vary one of the three components (gravity model, halo-finder and HOD model) at a time between the two choices that are described in Section 2, while keeping the other two components fixed. In each case, we will consider inference in two scenarios- when the test data is generated from the same forward model as the training dataset, and when the test data is generated from another forward model which varies one of the three components. The first scenario validates that our SBI procedure has been trained properly and our posteriors are well calibrated, while the second scenario gauges the impact of model misspecification.
In all cases, we infer the five cosmology and all HOD parameters using the power spectrum multipoles and bispectrum. However, for the sake of clarity, we present the results only for \(\Omega_{m}\) and \(\sigma_{8}\), which are the two parameters best constrained by these statistics. We present our results in the form of residuals, i.e. the difference between the true values and the inferred mean estimates of the parameters over the held-out test dataset, as well as the one-standard-deviation uncertainties of the posterior. Additionally, we also show the coverage plots to verify whether the posteriors are well-calibrated, when relevant. In all the figures, we will use blue (and orange) to show the results for the case when SBI is trained and tested on the same (and different) forward model.
### Gravity models
We begin by investigating the impact of varying gravity model between the \(N\)-body and PM simulations. The halo-finder is fixed to FoF since, as discussed earlier, Rockstar halo-finder is incompatible with PM simulations. The HOD model is fixed to 10-parameter Zheng07-ex model.
In Fig. 2(a), we show the residuals for SBI trained on both gravity models when the true data is generated from the \(N\)-body simulations. For both the summary statistics (rows) and parameters
Figure 2: _Gravity models_: Varying gravity models for training SBI between the correct forward model (\(N\)-body Quijote, in blue) and the alternate forward model (FastPM simulations, in orange). The halo-finder is fixed to FoF and the galaxy model is 10-parameter Zheng07-extended HOD. In all cases, the test-data is generated from the Quijote \(N\)-body simulations.
(columns), the residuals are consistent, _indicating that we are not sensitive to model misspecification in this case_. This suggests that marginalizing over the HOD parameters due to the uncertainty in galaxy models indeed outweighs the refinements that happen at small scales with using more accurate gravity models. We note that there is a slight negative slope in the \(\sigma_{8}\) residuals with power spectrum. This effect is consistent with the bounded prior on \(\sigma_{8}\), and would likely go away with a broader prior (relative to the constraint level). However since the same trends exist in both the FastPM and Quijote posteriors, ensuring that the predictive posteriors are consistent, our conclusions regarding model misspecification still hold.
In Fig. 2(b), we show the coverage plots, indicating that all the posteriors are also well-calibrated and do not underestimate or overestimate the posterior widths. Though not shown here, we have checked for consistency that the same conclusions hold when the test observations are generated from PM simulations instead of \(N\)-body simulations, with the other components kept the same. Overall, these results are promising as they indicate that, at least for this particular experimental setting, one could generate cheaper training data from PM simulations to infer parameters for mock data generated from the expensive \(N\)-body simulations.
Figure 3: _Halo finders_: Varying halo-finders for training SBI between Rockstar (blue) and FoF (orange). In all cases, the test-data is generated from the Rockstar halo-finder. The gravity model is fixed to \(N\)-body and the galaxy model is 10-parameter Zheng07-extended HOD.
### Halo-finders
Next, we vary the halo-finder in the simulations between FoF and Rockstar. The gravity model is fixed to \(N\)-body simulation and the HOD model is fixed to 10-parameter Zheng07-extended model.
Figs. 3(a) and 3(b) show the residuals and coverage plots for SBI trained on the two halo finders and applied to test data generated from the Rockstar halo finder. In all cases considered, the posterior for \(\Omega_{m}\) seems to be well-calibrated and unbiased. For \(\sigma_{8}\), the posteriors are unbiased when the summary statistic is the power spectrum. However, when we use the bispectrum, SBI trained on Rockstar halos infers well-calibrated posteriors for Rockstar data, but SBI trained on FoF halos consistently under-predicts \(\sigma_{8}\). While not shown here, we observe similar results when the test data is generated from FoF catalogs: all \(\Omega_{m}\) posteriors and the \(\sigma_{8}\) posteriors inferred from the power spectrum are well calibrated, but \(\sigma_{8}\) inferred with the bispectrum from SBI trained on Rockstar catalogs is consistently biased high.
Together, these results clearly indicate that the bispectrum statistic is sensitive to differences in the halo-finder when inferring \(\sigma_{8}\), and that SBI suffers from model misspecification. We note that similar analyses were conducted in the robustness tests of SimBIG (Hahn et al., 2023). The test sets Test I and Test II of SimBIG were designed to assess the
Figure 4: _Galaxy Model I_: Varying the halo-galaxy occupation model between the 5-parameter (blue) and 10-parameter HOD model (orange). In all cases, the test-data is generated from the 5-parameter HOD. The gravity model is fixed to \(N\)-body and we use Rockstar halos.
sensitivity of an SBI model trained with Rockstar to the choice of the halo finder (FoF and CompaSO). However, a direct comparison is not possible since other components of the forward models were varied simultaneously (for Test I, the HOD model was also changed to the 5-parameter Zheng07 HOD model, while for Test II the gravity model was changed to Abacus). These tests were also done only on a single cosmology. Despite this, similar robustness issues were observed for wavelet scattering statistics in Regaldo-Saint Blancard et al. (2023), which forced them to use aggressive scale-cuts to mitigate model misspecification.
### Galaxy models
Finally, we change the galaxy occupation model for the simulations between the 5-parameter Zheng07 and 10-parameter Zheng07-extended HOD models. The gravity model is fixed to \(N\)-body and we use Rockstar halo-finder.
We begin by considering the test data generated from the 5-parameter HOD model in Figs. 4(a) and 4(b). SBI trained on either of the HOD models gives consistent inference for both parameters and for either of the summary statistics. This is not completely surprising given that the 5-parameter HOD model is a subset of the 10-parameter
Figure 5: _Galaxy Model II:_ Varying the galaxy model between the 10-parameter (blue) and 5-parameter HOD model (orange). In all cases, the test-data is generated from 10-parameter HOD. The gravity model is fixed to \(N\)-body and we use Rockstar halos.
HOD model: it can simply be recovered by setting the assembly, concentration, and velocity bias parameters to zero.
We turn to the more interesting case in Figs. 5(a) and 5(b), where the test data is generated from the 10-parameter HOD model. In this case, SBI trained on the correct forward model results in well-calibrated posteriors for both parameters from both summary statistics. However, for SBI trained on the 5-parameter HOD, both the power spectrum and the bispectrum suffer from model misspecification, albeit to different degrees. While the posterior inferred from the power spectrum is still sometimes consistent with the truth, the bispectrum almost always leads to incorrect posteriors for both parameters. This suggests that when trained on a simplistic galaxy occupation model, SBI struggles to do inference on data from more complex galaxy models, and this is aggravated as the summary statistics used become more informative.
Based on the results of this and the previous section, it is clear that access to accurate galaxy models will likely be the limiting factor in moving forward with all the methods that try to construct models for small scales using cosmological simulations (e.g. SBI, machine learning and emulator-based approaches; Yuan et al., 2022).
## 6 Discussion and Outlook
We have taken the first steps towards a sensitivity analysis of SBI for galaxy clustering to answer the question- how sensitive are we to different components of our simulations? Studies like this are necessary to scale SBI approaches for the future cosmological surveys, especially as these surveys increase in volume and require higher resolution simulations to model observables. It is becoming increasingly urgent to consider the trade-offs between accuracy and the number of simulations that can be run to generate training datasets.
In this work, we have considered the problem of constraining \(\sigma_{8}\) and \(\Omega_{m}\) from galaxy catalogs using the power spectrum and bispectrum statistics. We have varied three components of the forward simulations- gravity evolution, halo-finders and galaxy occupation- and investigated their impact on inference. We find that inference in the current setup is not sensitive to changing the gravity model between \(N\)-body and particle mesh simulations. However, surprisingly, changing the halo-finder between FoF and Rockstar leads to a biased estimate of \(\sigma_{8}\) with the bispectrum. For varying galaxy models, SBI results in consistent inference when trained on the 10-parameter HOD model and tested on the 5-parameter HOD model, but not the other way round. When trained on the 5-parameter HOD and tested on the 10-parameter model, both the power spectrum and the bispectrum can lead to biased results, but the degree of bias for the bispectrum is much larger than for the power spectrum.
We summarize these findings below with discussions on a more general outlook for SBI in large scale structures.
\(\bullet\) We have demonstrated that with 2,000 cosmology simulations, carefully sampling HOD parameters to maximize sampling efficiency, and properly combining neural density estimators into ensembles, we are able to obtain well calibrated posteriors for galaxy clustering analysis with simulation-based inference. Hence for most intents and purposes, we are not limited anymore by methodological challenges in using SBI for cosmological parameter inference, at least for the realistic configurations discussed in this work. Moving forward, the primary factor in driving the quality of simulation-based inference will be the forward models used for the simulations.
\(\bullet\) We find that using particle-mesh simulations instead of \(N\)-body simulations does not lead to any biases in inference with power spectrum and bispectrum when combined with FoF halos and HOD with assembly bias. Taken on its own, this has the potential of making the computational cost of SBI comparable to traditional analyses where similar number of simulations are required to estimate the covariance matrix (Beutler et al., 2017).
\(\bullet\) However, as we move towards more powerful statistics like the bispectrum, wavelet coefficients, learnt neural summary statistics, etc. to extract more information in cosmology, we become increasingly sensitive to model misspecification in our simulators. For instance, as shown in the examples in Section 5, the bispectrum can lead to biased results under model misspecification when the power spectrum does not. Hence we argue that the _robustness_ of our inference is becoming a more challenging problem than developing summary statistics for optimal inference.
\(\bullet\) While we have focused on SBI as a specific tool for inference, the challenge of robustness is faced by all methods that use simulations for building a data-model (i.e. most machine learning or emulator based frameworks (Yuan et al., 2022)) on small scales where simulations can be unreliable. Since SBI learns the full likelihood (or the posterior) distribution of the data, it is simply more suited to highlight these issues than the approaches which learn only the mean prediction and assume a Gaussian likelihood.
\(\bullet\) To do a sensitivity analysis of SBI, it is important to consider the end-to-end simulations rather than separately gauging the accuracy of every component. For instance, changing the halo-finder from FoF to Rockstar can cause up to \(\sim\)20% bias in both the quadrupole and bispectrum statistics. However, marginalizing over the HOD parameters results in consistent posteriors for the former, while the HOD parameterization is not flexible enough to do the same for the latter.
\(\bullet\) Testing pipelines end-to-end can also lead to surprising results, for instance finding that our inference is not sensitive to the gravity model but is sensitive to the halo-finder6.
Footnote 6: which was surprising at least for the authors.
\(\bullet\) This also serves to guide the new methodologies being developed to accelerate forward simulations (Dai and Seljak, 2021; Lanzieri et al., 2022; Jamieson et al., 2022) i.e. while it is important to report the accuracy of the simulated summary statistics, it is non-trivial to translate these to the expected results of doing inference using these accelerated simulations.
\(\bullet\) SBI for galaxy clustering is the most sensitive to the galaxy models. Hence for robustness, while one can train SBI on the most flexible HOD parameterization (Hahn et al., 2023), we still lack validation data for making sure that our inference is not susceptible to model misspecification. This cannot be done on models with less complex HOD parameterization. To build confidence, we need access to simulations that can accommodate different halo-galaxy occupation models such as complex HOD parameterizations, subhalo-abundance matching and semi-analytic models (Wechsler and Tinker, 2018; Yuan et al., 2022; Contreras et al., 2021; Nguyen et al., 2023; Modi and Philcox, 2023), both for training and validating our inference on the scales of future surveys.
\(\bullet\) Finally, we note that we have performed a sensitivity analysis only for two summary statistics (power spectrum multipoles and bispectrum) in inferring only two cosmology parameters (\(\Omega_{m}\) and \(\sigma_{8}\)). The results here cannot be directly translated to other statistics and parameters (for instance, there are configurations where \(\Omega_{m}\) is well-constrained and unbiased even when \(\sigma_{8}\) is not). However, we argue that such a sensitivity analysis should be performed for any SBI-based data analysis to ensure that our inference is reliable. In the same vein, our work also motivates further research to develop filters beyond simple scale-cuts to make SBI analyses with higher-order statistics more robust to model misspecification. Our approach provides a straightforward template to study this.
## Acknowledgements
FastPM simulations were run on the KNL nodes on the Cori supercomputer at NERSC. MH and SP are supported by the Simons Collaboration on Learning the Universe. The Center for Computational Astrophysics at the Flatiron Institute is supported by the Simons Foundation. We would like to thank the Implicit Likelihood working group of Learning the Universe collaboration for useful discussions.
## Data Availability
The Quijote data is publicly available here. We plan to make the FastPM simulations available through the same channel. Access to the summary statistics used in this work can be requested by reaching out to any of the authors. The code used for the analysis is available here.
|
2309.03613 | Evaluating ChatGPT as a Recommender System: A Rigorous Approach | Large Language Models (LLMs) have recently shown impressive abilities in
handling various natural language-related tasks. Among different LLMs, current
studies have assessed ChatGPT's superior performance across manifold tasks,
especially under the zero/few-shot prompting conditions. Given such successes,
the Recommender Systems (RSs) research community have started investigating its
potential applications within the recommendation scenario. However, although
various methods have been proposed to integrate ChatGPT's capabilities into
RSs, current research struggles to comprehensively evaluate such models while
considering the peculiarities of generative models. Often, evaluations do not
consider hallucinations, duplications, and out-of-the-closed domain
recommendations and solely focus on accuracy metrics, neglecting the impact on
beyond-accuracy facets. To bridge this gap, we propose a robust evaluation
pipeline to assess ChatGPT's ability as an RS and post-process ChatGPT
recommendations to account for these aspects. Through this pipeline, we
investigate ChatGPT-3.5 and ChatGPT-4 performance in the recommendation task
under the zero-shot condition employing the role-playing prompt. We analyze the
model's functionality in three settings: the Top-N Recommendation, the
cold-start recommendation, and the re-ranking of a list of recommendations, and
in three domains: movies, music, and books. The experiments reveal that ChatGPT
exhibits higher accuracy than the baselines on books domain. It also excels in
re-ranking and cold-start scenarios while maintaining reasonable
beyond-accuracy metrics. Furthermore, we measure the similarity between the
ChatGPT recommendations and the other recommenders, providing insights about
how ChatGPT could be categorized in the realm of recommender systems. The
evaluation pipeline is publicly released for future research. | Dario Di Palma, Giovanni Maria Biancofiore, Vito Walter Anelli, Fedelucio Narducci, Tommaso Di Noia, Eugenio Di Sciascio | 2023-09-07T10:13:09Z | http://arxiv.org/abs/2309.03613v2 | # Evaluating ChatGPT as a Recommender System: A Rigorous Approach
###### Abstract
In recent times, the popularity of large language models in the field of artificial intelligence has soared. These models demonstrate impressive abilities to comprehend and respond effectively to natural language requests, significantly contributing to various natural language-related tasks. The prompt-based learning approach, the emerging strategy of repurposing pretrained models without the need for additional training, has played a crucial role in enabling the application of general-purpose language models to specific tasks with minimal resources. By leveraging this approach, the full potential of large language models is unlocked, leading to improved sentence generation precision and generalization. Consequently, research communities are deeply exploring the capabilities of large language models across various dedicated tasks, culminating in the highly acclaimed ChatGPT.
Despite extensive research on large language models, their potential in the context of a recommendation scenario remains relatively underexplored. This study seeks to address this gap by investigating ChatGPT's capabilities as a zero-shot recommender system. Specifically, we aim to evaluate its ability to utilize user preferences to provide helpful recommendations, rerank existing recommendation lists, leverage information from similar users, and effectively handle cold start situations. To assess ChatGPT's performance, we conduct a comprehensive experimental evaluation using three datasets (MovieLens Small, Last.FM, and Facebook Book).
In this evaluation, we compare ChatGPT's performance against standard recommendation algorithms, representing the current state-of-the-art in the field. Furthermore, we compare ChatGPT's performance with other Large Language
Models, including GPT-3.5 and PaLM-2, for the recommendation task. To measure the effectiveness of the recommendations, we employ widely-used evaluation metrics, such as Mean Average Precision (MAP), Recall, Precision, F1, normalized Discounted Cumulative Gain (nDCG), Item Coverage, Expected Popularity Complement (EPC), Average Coverage of Long Tail (ACLT), Average Recommendation Popularity (ARP), and Popularity-based Ranking-based Equal Opportunity (PopREO).
Through a meticulous exploration of ChatGPT's abilities in the scenario of recommender systems, our study seeks to enrich the growing body of research concerning the versatility and potential applications of large language models.
The code used for conducting the experiments in this study is publicly accessible.1
Footnote 1: [https://github.com/sisinflab/Recommender-ChatGPT](https://github.com/sisinflab/Recommender-ChatGPT)
ChatGPT, Recommender Systems, Evaluation, Zero-Shot
## 1 Introduction
The increasing growth of social network applications and digital platforms highlights the vital role of information sharing and retrieval in our everyday lives. With human and corporate activities generating a vast amount of data, particularly in text format, the web has become a treasure trove of valuable information. Needs, opinions, and knowledge are quickly and efficiently expressed through natural language (NL) sentences. To effectively manage and automatically analyze and understand the abundance of this textual content, natural language processing (NLP) algorithms are essential [1]. In such a way, automatic systems can interface with users to understand their intents and offer personalized services. A prime example of the applications of NLP is seen in information filtering systems. These systems aim to alleviate the well-known information overload problem impacting users' digital experience: sifting through a massive amount of data to find valuable information [2]. By implementing NLP algorithms, these systems can efficiently assist users in finding relevant information and items in the vast sea of data available.
Recently, researchers have found that incorporating interactive systems to engage users during their research leads to more accurate outcomes [3]. This explains the great diffusion of conversational agents (CAs) such as Amazon Alexa, Google Assistant, Microsoft Cortana, and Apple Siri [4]. Language models (LMs) play a crucial role with CAs and have garnered significant attention. An LM is a method for estimating the probability distribution of linguistic units such as words, sentences, and entire documents [5]. The natural progression of LMs led to the development of large language models (LLMs), which are pre-trained on vast unlabelled corpora. These LLMs exhibit remarkable adaptation capabilities in various downstream tasks [6].
Numerous CAs based on LLMs have emerged, aiming for better performance, wider scope, and reduced harm. Notable examples include BARD1, Vicuna [7], and Alpaca [8], each implementing unique features like wider accessibility for Alpaca and
cost-effectiveness for Vicuna. In this context, ChatGPT2 gained particular attention. ChatGPT is a conversational agent fine-tuned from the GPT-3.5 pre-trained generative transformer [6] and continuously updated through reinforcement learning with human feedback [9]. With its robust architecture and vast knowledge from extensive training, ChatGPT delivers highly relevant and informative answers to users engaging with it, enriched with convincing explanations and supporting facts. As a result, researchers have taken a keen interest in exploring ChatGPT's potential in various applications [10], tailoring it for diverse and specific tasks [11]. In particular, there has been a notable increase in interest from researchers in exploring its potential for the recommendation task [12; 13]. However, most of these studies have predominantly focused on ChatGPT's fairness or have only built initial Recommender Systems (RSs) based on its LLM, rather than conducting a thorough and comprehensive evaluation of its effectiveness. Consequently, there is a need for further exploration and in-depth evaluations of ChatGPT's capabilities as a recommender system to fully understand its potential and applicability in this specific scenario.
Footnote 2: [https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/)
In this work, we promote a holistic analysis of ChatGPT's ability to naturally act as an RS, conducting a meticulous and reproducible investigation based on a well-structured experimental setup. Through this approach, we aim to identify and highlight the fundamental characteristics of an efficient RS. Since ChatGPT is a black-box model subject to continuous changes, this work reports empirical results of ChatGPT-3.5 evaluations performed in May 2023. In detail, we evaluate its ability to exploit user preferences in offering practical recommendations, benefiting from similar users' information.
Our main contributions are multifaceted. Firstly, we develop a rigorous pipeline of prompt-based experimental settings, ensuring a fair comparison and accurate positioning of ChatGPT alongside state-of-the-art RS baselines. Secondly, we unveil the natural capabilities of ChatGPT in recommending items to users based on their preferences, showcasing diverse and interesting behaviour across different domains, such as movies, music, and books.
In pursuit of these goals, we address the following research questions:
* **RQ1:** Is ChatGPT able to recommend items with a quality comparable to the state-of-the-art recommendation models? This question is further divided into sub-questions: (**RQ1a**: How much is ChatGPT accurate compared with the state-of-the-art? **RQ1b**: How much is ChatGPT proposing diverse and novel recommendations compared with the state-of-the-art? **RQ1c**: How much is ChatGPT biased compared with the state-of-the-art? **RQ1d**: Which type of recommender system is ChatGPT more similar to?)
* **RQ2:** Is ChatGPT able to exploit user preferences to re-rank a recommendation list?
* **RQ3:** Does the substantial amount of knowledge utilized to train ChatGPT compensate for the absence of a complete user history in a cold-start scenario?
In our investigation, we deliberately avoid applying the Prompt Engineering approach. Instead, we design a unique prompt tailored to each experimental setup. This choice
allows us to isolate the effect of prompt engineering from recommendation performance, thereby establishing a lower bound for further investigations. We employ ChatGPT using a Zero-Shot approach. Zero-shot refers to the ability of an LLM to perform tasks different from the one learned during its training phase.
Our primary focus is to uncover the inherent capabilities of the standard ChatGPT3 as a recommender system. We recognize the potential impact of diverse prompts on the outcomes, and we acknowledge the necessity for a dedicated and targeted investigation, which exceeds the scope of our current study. Our primary objective is to comprehensively evaluate vanilla ChatGPT's capabilities as a recommender system. Instead of presenting an optimized version with best performance, we focus on assessing its inherent abilities in this context.
Footnote 3: With the term ‘standard (or vanilla) ChatGPT’, we refer to a model that operates without employing further artefacts that improve its performance. Such artefacts include prompt learning approaches, where prompts are a set of instructions that are learned to customise, enhance, or refine the capabilities of an LLM [14].
We validate our study with an exhaustive analysis that engages three diverse datasets (i.e. MovieLens [15], Last.FM [16], and Facebook Book4) and a broad spectrum of baseline recommender algorithms and LLMs. To ensure reproducibility, we utilize the Elliot framework [17] to compute baseline performances, and we provide our code as an accessible GitHub repository5.
Footnote 4: [https://2015.eswc-conferences.org/program/semwebeval.html](https://2015.eswc-conferences.org/program/semwebeval.html)
Footnote 5: [https://github.com/sisinflab/Recommender-ChatGPT](https://github.com/sisinflab/Recommender-ChatGPT)
To the best of our knowledge, this study marks the first endeavour to analyze zero-shot ChatGPT in comparison to LLMs and specialized Recommender Systems without employing prompt engineering or in-context examples, making our investigation unique in its approach.
## 2 Related Work
Pretrained Foundation Models (PFMs) are powerful general models effectively studied and exploited in various fields such as natural language processing, computer vision, and graph learning [18]. PFMs use large amounts of data and can be fine-tuned for several downstream applications. ChatGPT is an excellent example of a PFM application and is fine-tuned from the generative pre-trained Transformer GPT-3.5 using the Reinforcement Learning with Human Feedback approach [9, 19], which has become a promising way to align Large Language Models (LLMs) with human intent [20].
Since re-training such models requires enormous computational resources, prompt learning [21] is a helpful technique for adapting pre-trained models without the cost of a fine-tuning procedure.
Accordingly, it emerged that the great potential of a pre-trained Transformer is to perform novel tasks for which it was not targeted during training [22] through especially tailored prompts. In detail, prompt learning relies on a suite of appropriate prompts, either hard text templates [6] or soft continuous embeddings [23], to reformulate the downstream tasks. Kojima et al. [24] give additional insights on the advantages that refined prompting approaches bring to the solution of such downstream tasks through LLMs. Recommendation is one of those downstream tasks, but the investigations are at a very early stage (which is why most of the cited works are only available as preprints).
Li et al. [22] propose two prompt learning approaches to exploit the rich knowledge contained in pre-trained language models (i.e., GPT-2) for recommendation explanation generation. Extensive experiments demonstrate the effectiveness of their approach in generating high-quality explanations as measured by text quality and explainability metrics.
GPT-2 is also leveraged in [25] for building a recommender system that uses prompts to reformulate the session-based recommendation task to a multi-token cloze task. The method is evaluated on a movie recommendation dataset in zero-shot and fine-tuned settings with limited training data. In the zero-shot setting, the Pretrained Language Model (PLM)-based method outperforms a random recommendation baseline, but under-performs traditional recommender systems such as GRU4Rec [26].
GPT-2 is also the basis of GPT4Rec [27], a flexible framework that generates hypothetical "search queries" given item titles in a user's history and then retrieves items for recommendation by searching these queries. GPT4Rec combines GPT-2, to learn both item and user embeddings in the language space and the BM25 search engine to retrieve items for recommendation.
Similar to GPT-3, M6 [28] is an existing large-scale industrial pretrained language model on which M6-Rec [28] is based. The M6-Rec framework unifies various tasks in an industrial recommender system. It is able to perform retrieval, ranking, zero-shot recommendation, explanation generation, personalised content creation, and conversational recommendation by representing user behaviour data as plain texts and converting the tasks to either language understanding or generation. The authors verify the M6-Rec's ability to perform zero-shot ranking on three datasets of different domains, and they demonstrate that it can match the performance of a traditional ID-based ranker trained on a million samples.
Another unified PLM-based framework which integrates the item recommendation into the generation process is RecInDial [29]. RecInDial finetunes the powerful PLMs like DialoGPT [30] together with a Relational Graph Convolutional Network (RGCN) to encode the node representation of an item-oriented knowledge graph. Besides, the authors design a vocabulary pointer mechanism to unify the response generation and item recommendation into the existing PLMs. Extensive experiments on the Conversational Recommender System (CRS) benchmark dataset REDIAL show that RecInDial significantly outperforms the state-of-the-art methods.
Wang and Lim [31] propose a 3-step prompting strategy that guides GPT-3 to carry out subtasks that capture the user's preferences, select representative previously watched movies, and recommend a ranked list of 10 movies. The proposed approach is evaluated on the MovieLens 100K dataset, and it shows strong zero-shot performance, even outperforming some strong sequential recommendation models trained on the entire training dataset.
P5 (Pretrain, Personalized Prompt, and Predict Paradigm) [32], proposed by Geng et al., demonstrates that it is possible to learn multiple recommendation-related tasks by formulating these problems as prompt-based natural language tasks, where
user-item information and corresponding features are integrated with personalised prompt templates as model inputs. P5 is able to make predictions in a zero-shot or few-shot manner and largely reduces the necessity for extensive fine-tuning. In the same vein, Personalized prompt-based recommendation (PPR) [33] implements a prompt generator based on user profiles for cold-start recommendations. Evaluations on three large-scale datasets in few-shot and zero-shot scenarios confirm that PPR significantly improves both scenarios.
Gao et al. [34] are the first to use ChatGPT to improve recommendations given by a traditional recommender system. They exploit the conversations with ChatGPT to inject the users' preferences and refine the recommendations generated by the existing system. However, they do not investigate the core capabilities of vanilla ChatGPT and its behaviour in a recommendation scenario against state-of-the-art RSs.
As demonstrated by the highly recent literature, there is great excitement in the research community to investigate the potential of PFM and LLM to support recommendation tasks. However, to the best of our knowledge, this is the first extensive investigation comparing zero-shot prompt-based recommendations, precisely the capabilities of vanilla ChatGPT, to other LLMs (e.g. PaLM2 [35, 36] or GPT-3.5 [6]) and traditional content-based/collaborative filtering approaches with the twofold aim of assessing the most similar paradigm and the effectiveness in cold-start settings without making use of prompt engineering.
## 3 ChatGPT as a Recommender
Chat Generative Pretrained Transformer (ChatGPT) is an advanced generative pretrained Large Language Model (LLM) developed by OpenAI. It is fine-tuned from the generative pre-trained transformer GPT-3.5 [6]. The development process involves three crucial steps: supervised model training on dialogue data, training optimization, and model fine-tuning. Through these steps, ChatGPT emulates human language learning and knowledge acquisition while also aligning with user intent. The model aims to encompass both explicit intentions, such as following instructions, as well as implicit intentions, such as maintaining truthfulness and avoiding bias or any other harmful behavior [20].
With its language understanding and text-generation capabilities, ChatGPT is trained on massive data to possess a broad knowledge. It can engage in continuous multi-round conversations based on contexts and has a particular writing ability to support various tasks such as art creation, technical transfer, office learning, and logical reasoning [37].
As an autoregressive generative language model, ChatGPT processes input sentences (questions/requests) along with the preceding exchange of messages, treated as a sequence of words. It computes the response by generating a series of terms, identifying the most probable ones to form a coherent reply. The output is constructed word by word, with each word selection depending on the context and the preceding words, ensuring a coherent and contextually relevant answer.
Formally, given the sequence of terms as input \(\mathbf{x}=[x_{1},x_{2},...,x_{n}]\), the output \(\mathbf{y}=[y_{1},y_{2},...,y_{m}]\) results from:
\[p(\mathbf{y}|\mathbf{x})=\prod_{i=1}^{m}p(y_{i}|\mathbf{x},y_{k<i})\]
where \(k=0\) identifies the prediction case of the first term of the answer \(y_{1}\), which takes support only from the input words \(\mathbf{x}\)[38]. Hence, the vast knowledge learned by ChatGPT during its training dictates the probability computed at inference time, making it highly efficient in solving the intended task.
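To make this factorisation concrete, the following minimal sketch (not ChatGPT's actual decoder; the toy `next_token_distribution` merely stands in for the model's softmax output) generates a reply one term at a time, each choice conditioned on the input \(\mathbf{x}\) and the terms produced so far.

```python
# Minimal sketch of autoregressive decoding: the model repeatedly picks the
# most probable next token given the input x and the tokens generated so far,
# i.e. it evaluates p(y_i | x, y_<i) at every step.

def next_token_distribution(x, y_prefix):
    # Toy stand-in for the LLM's conditional distribution; a real model would
    # compute a softmax over its whole vocabulary from the full context.
    vocab = ["I", "recommend", "Inception", "(2010)", "<eos>"]
    idx = min(len(y_prefix), len(vocab) - 1)
    return {tok: (0.9 if i == idx else 0.1 / (len(vocab) - 1))
            for i, tok in enumerate(vocab)}

def generate_reply(x, max_len=50, eos="<eos>"):
    y = []
    for _ in range(max_len):
        probs = next_token_distribution(x, y)   # p(y_i | x, y_<i)
        y_i = max(probs, key=probs.get)         # greedy choice (temperature -> 0)
        if y_i == eos:
            break
        y.append(y_i)
    return y

print(generate_reply(["The", "user", "likes", "Interstellar"]))
```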
Furthermore, ChatGPT is explicitly designed to excel in user-oriented tasks, leading us to presume that its vanilla Large Language Model (LLM) is naturally more inclined to address user-related assignments, such as item recommendation, compared to other state-of-the-art LLMs. However, it is crucial to recognize that the primary goal of ChatGPT is not solely to discern individual user preferences and provide tailored item recommendations. Instead, its broader objective lies in understanding and generating human-like text across a wide range of tasks and contexts. While ChatGPT may demonstrate proficiency in user-related tasks due to its user-centric design, it is not exclusively optimized for such specific functions. Its capabilities extend beyond user recommendations and encompass various language tasks, making it a versatile language model suitable for diverse applications.
Therefore, we adopt the prompt learning paradigm, which allows us to target the scenario we want to explore without specialising its vanilla model behaviour (i.e., without fine-tuning). Specifically, the prompt learning paradigm, also known as prompting, is a method of conditioning LLMs through well-designed sentences to reach goals different from the ones targeted during training [14].
Given a pre-trained and fixed LLM \(\mathcal{L}\), a test set of input \(X\) and output \(Y\) to assess a specific task, the prompting approach transforms each sample of \(X\) through a template \(\mathcal{T}\), which will guide \(\mathcal{L}\) in generating the predictions \(\hat{Y}\) close to \(Y\). The template \(\mathcal{T}\) assumes the following form:
\[\mathcal{T}=prefix[X]suffix[\hat{Y}]\]
where \(\hat{Y}\) is a slot later filled by the \(\mathcal{L}\) predictions, while \(prefix\) and \(suffix\) identify some text specifically designed to guide \(\mathcal{L}\) in solving the task of interest. This paradigm constitutes the foundations of Prompt Engineering, which targets the prompts' search problem in optimising the downstream tasks where a given LLM is applied.
To this purpose, White et al. [39] defined a comprehensive catalogue of prompt patterns to enhance prompt engineering for ChatGPT, and we find that the "_persona pattern_" particularly fits our purposes (cf. Section 4). Nevertheless, crafting a prompt that enables ChatGPT to reach high performance in the recommendation task is out of the scope of this study.
In our experimental setup, we adopt a single, straightforward prompt to evaluate the inherent capabilities of ChatGPT in performing recommendations6. By doing so, we avoid any biased influence that may arise from complex prompt designs. This configuration highlights the potential for future applications of ChatGPT as a Recommender System (RS) in subsequent analyses.
Footnote 6: As previously stated in the introduction, creating an optimized prompt to maximize ChatGPT’s performance in the recommendation task falls outside the scope of this study.
Furthermore, according to Adomavicius and Tuzhilin [40], the recommendation problem can be defined as the task of maximising a utility function. In this context, we extend their definition by considering the utility function \(u\) as a measure of how useful an item \(s\) is to a recommendation context \(c\), represented as \(u:C\times S\to R\), where \(R\) is a totally ordered set. Here, the recommendation context \(c\) is intended as an information collection composed of the user profile, i.e. the list of her preferred items \(S_{\text{user}}\), and the items recommended by the system at recommendation time \(S_{\text{rec}}\), s.t. \(C=\{S_{\text{user}},S_{\text{rec}}\}\). The primary objective is to select, for each recommendation context \(c\in C\), the item \(s^{\prime}\in S\) that maximises its utility. More formally:
\[\forall c\in C,\ s_{c}^{\prime}=\operatorname*{arg\,max}_{s\in S}u(c,s).\]
The item thus recommended will increase the set of suggested items \(S_{\text{rec}}\), hence expanding the recommendation context \(c\), to calculate the subsequent suggestions.
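As an illustration of this formalism, the sketch below (with a purely hypothetical utility function) builds a top-N list greedily: at each step it selects the catalogue item with the highest utility for the current context \(c\) and then folds that item back into \(S_{\text{rec}}\).

```python
# Sketch of the utility-maximisation view of recommendation: the context c
# pairs the user's liked items with the items already recommended; each step
# takes argmax_s u(c, s) over the remaining catalogue and grows S_rec.

def utility(context, item):
    # Hypothetical utility u(c, s); a real system would estimate it from
    # collaborative signals, content features, or a learned model.
    liked, recommended = context
    return len(set(item) & set("".join(liked))) - 0.1 * len(recommended)

def recommend(liked_items, catalogue, n=10):
    recommended = []
    candidates = [s for s in catalogue if s not in liked_items]
    for _ in range(min(n, len(candidates))):
        context = (liked_items, recommended)
        best = max((s for s in candidates if s not in recommended),
                   key=lambda s: utility(context, s))
        recommended.append(best)
    return recommended

print(recommend(["Whiplash"], ["Birdman", "La La Land", "Moonlight"], n=2))
```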
In contrast, a language model estimates the probability distribution of a sentence, which is represented as a sequence of words, symbols, tokens, or token sequences. This estimation involves modeling the probability of the next word given the preceding words [41].
Formally, consider a text sequence of length \(T\) with tokens \(x_{1},x_{2},...,x_{T}\) forming a sentence. The language model aims to compute the probability \(P(x_{1},x_{2},...,x_{T})\) of observing all these tokens in the given order, namely estimating the entire sequence's joint probability. Applying the chain rule, this probability is formally defined as:
\[P(x_{1},x_{2},...,x_{T})=\prod_{t=1}^{T}P(x_{t}|x_{1},...,x_{t-1})\]
Specifically, an ideal language model would possess the ability to independently generate natural text by sequentially selecting individual tokens [42]:
\[x_{t}\sim P(x_{t}|x_{t-1},...,x_{1})\]
The intriguing parallel between modelling a Recommender System (RS) and a generative language system becomes evident. Both systems learn to predict the most probable next item (in RS) or word (in language models) based on the preceding context of items (RS) or words (language models) they encounter. This similarity, coupled with ChatGPT's training phase dedicated to comprehending and addressing human intents and requests, has been a driving force behind our decision to investigate ChatGPT's potential as a Recommender System.
To reach this goal, we outline here two prompt examples we submit to ChatGPT:
1. \(prefix=\) "_Given a user, as a recommender system, provide recommendations_", \(X=\) "_The user_ {_user_id_} _likes the following items:_ {_item_list_}", \(suffix=\) "_Give me back 50 recommendations_".
2. \(prefix=\) "_Given a user, as a recommender system, provide recommendations_", \(X=\) "_The user_ {_user_id_} _likes the following items:_ {_item_list_}", \(suffix=\) "_Re-rank me the following list:_ {_list_to_be_re-ranked_}".
It is important to mention that the items in our experiment can encompass movies, books, or songs, depending on the specific dataset being used. However, we deliberately avoid explicitly indicating the recommendation domain in the prompt. The _item_list_ is individually computed for each user present in the dataset. Please refer to the forthcoming section for more comprehensive insights and detailed information.
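A minimal sketch of how such prompts can be assembled per user is reported below; the wording mirrors the two examples above, while the helper names and the assumed data layout (a list of item names with optional ratings) are our own.

```python
# Sketch of the two prompt variants used in the experiments: top-N
# recommendation and re-ranking. `rated_items` is assumed to be a list of
# (item_name, rating) pairs taken from the training split; ratings are
# omitted for implicit-feedback datasets.

PREFIX = "Given a user, as a recommender system, provide recommendations. "

def user_profile(user_id, rated_items):
    listed = ", ".join(f"{name} {rating}/5" if rating is not None else name
                       for name, rating in rated_items)
    return f"The user {user_id} likes the following items: {listed}. "

def top_n_prompt(user_id, rated_items, n=50):
    return PREFIX + user_profile(user_id, rated_items) + f"Give me back {n} recommendations."

def rerank_prompt(user_id, rated_items, candidates):
    return (PREFIX + user_profile(user_id, rated_items)
            + "Re-rank me the following list: " + ", ".join(candidates))

print(top_n_prompt(1, [("Mad Max: Fury Road (2015)", 5), ("Whiplash (2014)", 4)]))
```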
## 4 Experimental Setting
The primary objective of this study is to assess the core capabilities of the vanilla ChatGPT model as a recommender system. We design four experimental setups to achieve this goal, defining the crucial elements that enable ChatGPT to act as a recommender system. We can effectively address the research questions guiding this analysis by comparing the outcomes with state-of-the-art baselines across different application domains.
In each scenario, we utilize the "_persona pattern_" prompt as introduced by White et al. [39] to guide ChatGPT in performing recommendations. This pattern is designed to provide the Large Language Model (LLM) with a "persona" to aid in selecting the types of output to generate and the specific details to focus on. Referring to the previous prompt formalism, an example of a persona prompt might be \(prefix=\) "Act like a \(P\)", \(suffix=\) "give me the results the \(P\) will return", and \(X\) will be filled with the data on which the LLM should operate, simulating the persona \(P\).
Practically, one complete example of a prompt employed in our experiments is: _"Given a user, as a recommender system, provide recommendations. The user 1 likes the following movies: Mad Max: Fury Road (2015) 5/5, Whiplash (2014) 4/5, etc. Give me back 50 recommendations."_.
This prompt formulation aligns with the central objective of this study, which is to explore the capabilities of vanilla ChatGPT in emulating recommender systems. Our focus is not to identify the optimal ChatGPT configuration for the recommendation task. Instead, we craft a prompt that includes only essential information required to guide ChatGPT in tackling this task. As a result, the outcome of this experiment defines a lower bound, laying the foundation for future research to concentrate on the potential development of recommender systems based on ChatGPT.
Prompts enriched with more data, such as information about the type of recommender system to emulate, domain-specific details, etc., might yield significantly different results and performances. These variations necessitate dedicated investigations to effectively conduct prompt engineering, but this requires exploring prompts in the latent space of the Large Language Model (LLM), for which direct access to the ChatGPT model is essential. However, currently, such access is limited to API calls, which poses a constraint on the prompt engineering process.
We have run the following experiments through the OpenAI ChatGPT3.5-turbo API7, setting the temperature parameter to zero to ensure the reproducibility of our work (i.e., ChatGPT generates the same answers). Also, we adapt the persona pattern prompt to fit the token limit imposed by the API (4,096 tokens per exchanged message, including the tokens that compose the ChatGPT response).
Footnote 7: [https://platform.openai.com/docs/guides/chat](https://platform.openai.com/docs/guides/chat)
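The sketch below shows what a single query might look like with the pre-1.0 `openai` Python client available at the time of the study; the prompt-building helper is the hypothetical one sketched earlier, and the API key is a placeholder.

```python
# Sketch of one recommendation query to the gpt-3.5-turbo chat API with
# temperature fixed to 0 so that repeated runs return the same answer.
# The prompt plus the answer must fit within the 4,096-token budget.
import openai  # pre-1.0 client interface

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask_chatgpt(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for reproducibility
    )
    return response["choices"][0]["message"]["content"]
```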
In ChatGPT, the training set was used only for generating the prompt for a given user, while the test set was used to compute the metrics. Accordingly, we did not transfer any other information to ChatGPT, and we assumed that it acquired enough knowledge to generate recommendations during its initial training. An analogous approach was employed with the other LLMs under consideration. In contrast, the recommender baselines were trained using a conventional methodology, leveraging the training and test sets through the recommendation framework Elliot [17].
In order to establish a correspondence between the recommendations produced by ChatGPT and the items in the test set, we employed the Damerau-Levenshtein distance algorithm (from the difflib Python library) on the titles. Remarkably, despite slight discrepancies between some generated titles and their corresponding item names in the dataset, all items were accurately identified, preventing any significant instances of hallucination. In other words, within the confined output of 50 elements, ChatGPT did not generate any hallucinated content, i.e., items that appear plausible but are incorrect.
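One way to realise this matching step is sketched below with Python's difflib, whose built-in fuzzy matcher scores string similarity and returns the closest catalogue title above a threshold; the threshold value and helper names are our own assumptions.

```python
# Sketch of mapping generated titles back to catalogue items via fuzzy string
# matching; titles with no sufficiently close catalogue entry would be flagged
# as potential hallucinations (none were observed in the reported experiments).
import difflib

def match_titles(generated_titles, catalogue_titles, cutoff=0.7):
    matched, unmatched = [], []
    for title in generated_titles:
        close = difflib.get_close_matches(title, catalogue_titles, n=1, cutoff=cutoff)
        if close:
            matched.append((title, close[0]))   # (generated title, catalogue item)
        else:
            unmatched.append(title)
    return matched, unmatched

print(match_titles(["Mad Max Fury Road"], ["Mad Max: Fury Road (2015)", "Whiplash (2014)"]))
```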
We conduct our experiments through four configurations. In the first configuration, we quantitatively analyze the quality of recommendations provided by ChatGPT in an unrestricted scenario. Specifically, for each user on the selected datasets, we start a dialogue session with ChatGPT providing the set of items she liked in the past and requesting a list of the top 50 items it would recommend, sorted by relevance. In this setting, we refrain from influencing the output in any way through the prompt, allowing ChatGPT to autonomously comprehend the domain and offer its recommendations accordingly.
To enable ChatGPT to compute recommendations, we retrieve the names of each item, such as movie titles for MovieLens, authors' names for Last.FM, and book titles for Facebook Book. This allows ChatGPT to access its knowledge for generating recommendations. As for the baselines, Collaborative Filtering (CF) RSs have information on users' and items' IDs, while Content-Based Filtering (CBF) RSs learn about the item content through their genres.
By comparing the metrics, the first configuration helps us understand ChatGPT's ability to recommend items and determine which type of data it leverages, whether it's collaborative, popularity-based, or content-based. Additionally, this analysis allows us to ascertain that ChatGPT does not generate hallucinated content while shedding light on which type of RS it bears greater similarity to.
The second and third configurations of the experiments focus on assessing ChatGPT's proficiency in re-ranking a list of items using the user profile. In the second
configuration, the list of items is pre-determined and comprises the most popular items in the dataset. On the other hand, in the third configuration, the list of items is generated based on the preferences of each user's nearest neighbours. The objective of these two configurations is to determine whether ChatGPT can proficiently utilize the user preferences to accurately re-rank the list of recommendations.
The fourth and final configuration of the experiments is designed to specifically evaluate ChatGPT's performance with users in a cold start scenario. For this experiment, we identify users with the smallest number of interactions from each dataset and utilize them to assess ChatGPT's performance compared with the baseline.
The derived considerations are presented and discussed in Section 5.
### Dataset
To provide answers to our research questions, we used three state-of-the-art datasets, each one belonging to a different domain (Music, Books, and Movies). Their statistics are reported in Table 1. Below is a brief description of each dataset:
* The _MovieLens_ dataset is widely exploited in the RS community [15]. Different versions are available online8, but the one used in our study was collected from the MovieLens website and contains ratings for movies on a 1-5 scale. Footnote 8: [https://grouplens.org/datasets/movielens/](https://grouplens.org/datasets/movielens/)
* The _Facebook Books_ dataset has been released for the Linked Open Data challenge co-located with _ESWC 20159_, and it refers to the book domain. Only implicit feedback is available here, but for each item, we exploit the items-feature mappings available at this link10 to retrieve the data about the book titles, genres, and authors. Footnote 9: [https://2015.eswc-conferences.org/program/semwebeval.html](https://2015.eswc-conferences.org/program/semwebeval.html)
* The _Last.FM_ dataset corresponds to user-artist plays on Last.fm online music system released during _HETRec20111 Workshop_[16]. It contains social networking, tagging, artists, and music-listening information from a set of \(2,000\) users. As for Facebook Books, we fetch titles and genre information through the aforementioned mapping. Footnote 10: [https://github.com/sisinflab/LinkedDatasets/](https://github.com/sisinflab/LinkedDatasets/)
It is essential to note that due to the token limit of the ChatGPT API (i.e., 4,096 tokens), certain preprocessing steps were required for the MovieLens data. The objective was to handle users with interaction histories exceeding this limit effectively. Consequently, users with interaction histories surpassing a specific threshold (i.e., 230 interactions) were excluded from the dataset, resulting in the filtered MovieLens dataset, denoted as \(\dagger\). In the second and third experimental configurations, this threshold was further reduced (i.e., 200 interactions), leading to another MovieLens dataset, indicated as \(\ddagger\). This adjustment was crucial to have a prompt for the entire list of items for re-ranking purposes.
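A sketch of this filtering step (and of the cold-start user selection used in the fourth configuration) might look as follows; the data layout and the number of cold-start users are assumptions, while the thresholds mirror the ones stated above.

```python
# Sketch of the preprocessing driven by the 4,096-token API limit: users whose
# interaction histories exceed a threshold are dropped (230 interactions for
# the first filtered MovieLens variant, 200 for the second); cold-start users
# are simply those with the fewest interactions.

def filter_long_histories(user_histories, max_interactions=230):
    # user_histories: dict {user_id: list of (item, rating) interactions}
    return {u: h for u, h in user_histories.items() if len(h) <= max_interactions}

def cold_start_users(user_histories, k=20):
    # hypothetical selection of the k users with the fewest interactions
    return sorted(user_histories, key=lambda u: len(user_histories[u]))[:k]
```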
### Metrics
To ensure a comprehensive evaluation that sheds light on ChatGPT's behaviour in a recommendation scenario, we carefully select the following metrics implemented in the Elliot framework:
* _Accuracy metrics_: We utilize various metrics, including Hit Ratio (HR), Mean Average Precision (MAP), Recall, Precision, and F1, to robustly quantify the relevance of the recommended items. Furthermore, we employ the normalized Discounted Cumulative Gain (nDCG) metric to evaluate the quality of the ranking in the recommendation list. Higher scores in these metrics indicate better recommendations, while lower scores suggest the opposite.
* _Coverage and novelty metrics_: These metrics provide valuable insights into the selection of items from the catalogue and the novelty of the recommendations offered to the users. Specifically, ItemCoverage quantifies the extent to which items are recommended, encompassing the coverage of the entire catalogue (i.e., the fraction of all available items that can potentially be recommended). On the other hand, the Gini score assesses the distribution of items, shedding light on the diversity of recommendations. Additionally, the Expected Popularity Complement (EPC) metric measures the expected number of relevant recommended items that were not previously seen by the user, reflecting the system's ability to introduce novelty in the recommendations. For all the metrics higher values indicate superior performance.
* _Bias metrics_: Through the use of these metrics, our goal is to uncover the extent to which the recommendations generated by the system are influenced by biases. The Average Coverage of Long Tail (ACLT) metric enables us to evaluate the exposure that long-tail items receive in the entire recommendation process, providing insights into what fraction of the long-tail items the recommender has successfully covered [43]; higher values indicate better recommendations. On the other hand, the Average Recommendation Popularity (ARP) metric assesses the average rating popularity of the recommended items across testing users [44], helping us understand the distribution of popularity among the recommendations. Furthermore, the Popularity-based Ranking-based Equal Opportunity (PopREO) metric measures the bias that items in one or more groups may face, particularly concerning their lower recommendation probabilities when users express interest in these items [45]. Moreover, lower values of ARP and PopREO signify less biased recommendations. By utilizing these metrics, we aim to shed light on the presence and impact of biases in the recommendation process.

**Table 1:** A comparative analysis of dataset characteristics before and after pre-processing, comprising interactions, user and item counts, dataset sparsity, and quantity of available content.

| Dataset | Interactions (before) | Users (before) | Items (before) | Sparsity (before) | Interactions (after) | Users (after) | Items (after) | Sparsity (after) | Content type | Content features |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MovieLens | 100836 | 610 | 9724 | 98.30% | 51576 | 603 | 1862 | 95.41% | genre | 20 |
| MovieLens \(\dagger\) | 100836 | 610 | 9724 | 98.30% | 44309 | 603 | 1862 | 96.05% | genre | 20 |
| MovieLens \(\ddagger\) | 100836 | 610 | 9724 | 98.30% | 42456 | 603 | 1861 | 96.22% | genre | 20 |
| LAST.FM | 60872 | 1883 | 5280 | 99.39% | 38733 | 1797 | 1104 | 98.05% | genre | 9748 |
| FB Books | 18978 | 1398 | 2933 | 99.53% | 12496 | 1398 | 1979 | 99.55% | genre, author | 1970 |
* _Similarity index_: These metrics are also selected to assess the degree of similarity between the recommendation lists produced by ChatGPT and the baselines. Specifically, the Jaccard index evaluates the intersection of two given sets, offering insights into the common items recommended by both ChatGPT and the baselines. On the other hand, the Kendall index examines the same intersection but also takes into account the item positions, providing a measure of how closely the rankings of recommended items align between ChatGPT and the baselines. By leveraging these metrics, we aim to understand how ChatGPT's recommendation behaviour compares to various types of Recommender Systems. Greater values of these indices indicate a higher degree of similarity in the recommendations.
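To illustrate the two similarity indices, the following sketch compares two recommendation lists: the Jaccard index is computed on the item sets, while Kendall's tau (here via scipy) is computed on the ranks of the items the two lists share; the alignment strategy for partially overlapping lists is our own assumption.

```python
# Sketch of the similarity indices used to compare a ChatGPT list with a
# baseline list: Jaccard over the two item sets, Kendall's tau over the
# relative ranks of the items appearing in both lists.
from scipy.stats import kendalltau

def jaccard(list_a, list_b):
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def kendall_on_shared(list_a, list_b):
    shared = [item for item in list_a if item in set(list_b)]
    if len(shared) < 2:
        return None  # tau is undefined with fewer than two shared items
    ranks_a = [list_a.index(item) for item in shared]
    ranks_b = [list_b.index(item) for item in shared]
    tau, _ = kendalltau(ranks_a, ranks_b)
    return tau

chatgpt_list = ["Inception", "Interstellar", "Memento", "Dunkirk"]
baseline_list = ["Memento", "Inception", "Tenet", "Interstellar"]
print(jaccard(chatgpt_list, baseline_list), kendall_on_shared(chatgpt_list, baseline_list))
```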
### Baselines
This section provides an outline of the foundational baselines that we deem essential for positioning ChatGPT within the current state-of-the-art of RSs. For this purpose, we select Collaborative Filtering (CF) and Content-Based Filtering (CBF) RSs as baselines to cover all the possible behaviours that ChatGPT may assume. To ensure the reproducibility of our experiments, we employ the following state-of-the-art models provided within the Elliot framework:
* Random, a non-personalized algorithm that randomly recommends items according to a uniform distribution.
* MostPopular, a non-personalized algorithm that recommends the same item list to all users, ranking items from most to least popular. Since it is known to show good performance because of statistical biases in the data [46], it is an important baseline to compare against [47].
* ItemKNN[48] is an item-based implementation of the K-nearest neighbours algorithm that finds the K-nearest item neighbours based on a similarity function like the Cosine Vector Similarity [49] and Pearson Correlation [50]. The items in the neighbourhood are then used to predict a score for each user-item pair.
* UserKNN[51], a user-based implementation of the K-nearest neighbours algorithm, similar to ItemKNN.
* RP3beta [52], a simple graph-based method conceptually similar to ItemKNN.
* EASER [53], a linear model which works like a shallow autoencoder.
* AttributeItemKNN [54], a model that represents each item as a vector of weights computed through a TF-IDF model.
* AttributeUserKNN [54], a model that represents the users as a vector of weights instead of the items.
* VSM (Vector Space Model fed with Knowledge Graph information) [55], a simple Vector Space Model in which the relevance is computed as the normalized TF-IDF value for each pair (item, feature). Thus, the recommendation list is computed based on the user-item similarities.
To give a comprehensive picture of ChatGPT's position in the state of the art, we also compare its performance with the following LLM baselines, which solve the recommendation task in the same way designed for ChatGPT:
* GPT-3.5 [12], denoted as text-davinci-003, a pure large language model created by OpenAI, distinct from ChatGPT in that it is not chat-oriented.
* PaLM-2[13], specifically text-bison@001, is a state-of-the-art language model developed by Google.
Other models were considered during our experimentation but eventually discarded:
* LLaMA [56], a large language model developed by Meta. In our study, we aimed to evaluate LLaMA models of diverse scales, including those with 7B and 13B parameters. However, during the testing phase, we encountered challenges in generating an ordered list of 50 recommendations, resulting in incomparable outcomes, as the recommendation list was not provided. Subsequently, we attempted to explore models with parameter sizes of 33B and 65B. Regrettably, our specific hardware infrastructure posed substantial limitations, making it unsuitable for effectively running these extensive-scale models. This issue underscores the critical challenges associated with hardware resources when dealing with such large-scale language models.
* Alpaca[8], a fine-tuned LLaMA 7B model developed by Stanford University. However, its current input capacity of 512 tokens proved insufficient to feed the model with the user history.
## 5 Results
In this section, we conduct an in-depth analysis of the experimental evaluation results to assess ChatGPT's performance as a recommender system (RS). The primary objective of this analysis is to ascertain whether ChatGPT's recommended item lists align more closely with a collaborative filtering, content-based, or hybrid approach.
We carried out a series of experiments on three popular recommendation datasets: MovieLens, LAST.FM, and Facebook Books, considering accuracy, coverage, novelty and bias metrics. A complete description of the prompt, datasets, metrics, and baselines is detailed in Section 4.
The section is organized around the three research questions defined in the Introduction. Specifically, we address RQ1 in subsections 5.1, 5.2, 5.3, and 5.4, RQ2 in subsection 5.5, and RQ3 in subsection 5.6. The results are shown in Tables 2-10, comparing ChatGPT against the baselines.
### RQ1a: How much is ChatGPT accurate compared with the state-of-the-art?
**MovieLens.** Tables 2 and 3 report the accuracy of the competing models. We can observe that the UserKNN and ItemKNN models exhibit superior accuracy, which aligns with recent research on reproducibility [57, 58]. We expected this outcome since the MovieLens dataset does not hold much movie content information except for their genre, leading to lower performance for the Content-based Filtering (CBF) models.

[Tables 2 and 3: nDCG, HR, and related accuracy metrics at cutoffs 10, 20, and 50 for all competing models on the evaluated datasets; best values in bold.]
Although ChatGPT-3.5 outperforms the CBF and random models, it falls short of MostPop, primarily due to the strong popularity bias present in the dataset, and it does not surpass the collaborative filtering models. Nevertheless, outperforming traditional CBF models such as AttributeItemKNN (A-ItemKNN) and VSM shows that its performance is not random and could be an indication of the extensive content knowledge it acquired during training.
While ChatGPT does not occupy the highest position in the ranking, its accuracy is within the same order of magnitude as the collaborative models, except for the top-3 UserKNN, ItemKNN, and \(\text{RP}^{3}\beta\). Increasing the cutoff value to 20 and 50 gives us further insights into the models' behaviour. The results remain relatively consistent with the cutoff of 10, indicating that the recommendation list size does not particularly impact the top-performing models (UserKNN, ItemKNN, and \(\text{RP}^{3}\beta\)) and ChatGPT.
Surprisingly, with a cutoff set to 50, GPT-3.5 slightly outperforms ChatGPT in nDCG, HR, and the other accuracy metrics. This indicates that GPT-3.5 makes better use of the increasing context it obtains for each item in generating the recommended list. This result could be attributed to GPT-3.5's main objective of generating sentences with strong semantic consistency. Conversely, ChatGPT might compromise a bit on semantic consistency to achieve better conversational results, prioritising the effectiveness of user interaction. However, ChatGPT still outperforms the other LLMs in terms of MAP. To conclude, PaLM performs the worst among the LLMs, but better than the CBF approaches. This may depend on its training data, which covers movies but may not include extensive information on the most popular ones.
**LAST.FM.** Concerning the music domain, the results of the top-performing models, namely UserKNN, ItemKNN, and \(\text{RP}^{3}\beta\), are aligned with those of the MovieLens Small dataset.
Regarding recommenders that rely on genre-based content, the available content for this particular dataset is of a significantly larger magnitude, with 9,748 genres compared to the 20 genres found in the MovieLens dataset. Consequently, the Content-Based Filtering (CBF) models exhibit remarkable improvement compared to the MovieLens dataset, indicating the effectiveness of the approach. Furthermore, when comparing the performance of CBF models with ChatGPT, the former outperform the latter, benefiting from their inherently recommendation-oriented nature.
Due to their performance similarity, defining distinct groups that separate model performance is intricate. However, if such groups exist, it is likely that \(\text{RP}^{3}\beta\), ItemKNN, \(\text{EASE}^{R}\), and UserKNN would be included in the same group, as they exhibit the highest comparable performance. Surprisingly, the same ranking holds for all cutoff values, except for \(\text{EASE}^{R}\) which slightly outperforms ItemKNN at cutoff 20 for the HR.
Such a phenomenon may depend on the smaller number of items in the Last.FM dataset (1,104 artists compared to MovieLens' 1,862 movies), making the similarity computation
more challenging. ChatGPT and the other LLMs show relatively low performance, surpassing only MostPop and Random models.
The poor results of MostPop suggest that, in the Last.FM dataset, the popularity bias is not so evident, while ChatGPT's underwhelming performance may have several underlying causes. The main cause might stem from the scarcity of music-related data or inadequate music review information in its training corpus. Nevertheless, the internal mechanisms of ChatGPT remain somewhat enigmatic, leaving room for speculation.
It is possible that the issue lies in the complexity of the recommendation task, and ChatGPT may not effectively utilise most of the content available to improve its performance. It is worth noting that all the LLMs demonstrate similar performance, which is still comparable to the state-of-the-art baselines. In particular, PaLM on average surpasses ChatGPT except for the HR metric, showing its ability to operate well in the music domain.
**Facebook Books.** It is noteworthy that ChatGPT consistently outperforms the other models for cutoff values of 10, 20, and, in most cases, 50. ChatGPT not only outperforms the other large language models but does so by a considerable margin. This performance cannot be attributed to popularity: the MostPop model ranks last and performs poorly, indicating a limited popularity bias in this dataset.
However, although the other LLMs can also take advantage of the large availability of textual content, the results demonstrate that they are less effective than ChatGPT at exploiting it.
Furthermore, it is worth mentioning that incorporating content from curated sources such as Linked Data enhances the performance of content-based models significantly [55, 59, 60]. Accordingly, content-based models (A-ItemKNN, VSM, and A-UserKNN) outperform collaborative models (RP\({}^{3}\beta\), UserKNN, ItemKNN, and EASE\({}^{R}\)), which could also be attributed to the limited and less significant collaborative information in the dataset. This result may be due to the dataset's small number of users (1,398), which leads to a scarcity of identifiable patterns in the collaborative data. Some results support this intuition. Specifically, ItemKNN achieves an nDCG@10 of 0.317 and 0.254 in the MovieLens and LAST.FM datasets, while it only reaches an nDCG@10 of 0.02873 in the Facebook Books dataset.
**Conclusion.** The answer to the research question "How much is ChatGPT accurate compared with the state-of-the-art?" is: ChatGPT solves the recommendation task remarkably accurately. Even in its vanilla version, thus without applying techniques to enhance its performance, ChatGPT reaches results comparable with state-of-the-art RSs and distinguishes itself from the other LLMs. In such a way, our empirical results open up new avenues of thinking and suggest that "user-oriented" LLMs like ChatGPT have the potential to revolutionise the recommendation scenario.
### RQ1b: How much is ChatGPT proposing diverse and novel recommendations, compared with the state-of-the-art?
In this analysis, we aim to assess the extent of diversity and novelty in the recommendations generated by ChatGPT through the results collected in Table 4. This study, and consequently the Table, specifically examines i) the Item Coverage metric to quantify aggregate diversity by assessing the number of suggested items, ii) the Gini index to estimate the variation across the suggested lists, and iii) the EPC metric as an indicator of the novelty of the recommended items.
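For reference, the sketch below shows one simple way these three measures can be computed from the recommendation lists. It is a simplified illustration rather than the exact implementation used in the evaluation framework (for instance, the canonical EPC also applies rank discounts and relevance weighting, which are omitted here), and the names `rec_lists`, `item_popularity`, and `catalog_size` are assumptions, not taken from the experimental code.

```python
import numpy as np

def item_coverage(rec_lists):
    """Number of distinct items appearing in at least one recommendation list."""
    return len({item for items in rec_lists.values() for item in items})

def gini_index(rec_lists, catalog_size):
    """Gini index of item exposure across all lists (0 = even exposure, 1 = concentrated)."""
    exposure = np.zeros(catalog_size)
    for items in rec_lists.values():
        for item in items:
            exposure[item] += 1
    exposure = np.sort(exposure)
    cum = np.cumsum(exposure)
    n = catalog_size
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def epc(rec_lists, item_popularity):
    """Simplified Expected Popularity Complement: mean (1 - normalised popularity) of recommended items."""
    max_pop = max(item_popularity.values())
    complements = [1 - item_popularity.get(item, 0) / max_pop
                   for items in rec_lists.values() for item in items]
    return float(np.mean(complements))
```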
**MovieLens.** In this domain, the Random model obtains the highest value in terms of ItemCoverage, while the MostPop model performs poorly, aligning with their standard behaviours reported in the literature. Surprisingly, the EASE\({}^{R}\) model exhibits a trend similar to MostPop, indicating an evident inefficiency when applied to datasets affected by popularity bias. The Attribute-based ItemKNN (A-ItemKNN) model emerges as the second-best performer, although its nDCG performance is underwhelming. Such an outcome derives from the model's focus on recommending items that reflect the average content liked by the user rather than composing a highly relevant list (in this domain, popular items have a higher rank, while niche movies result in less pertinent recommendations).
In contrast, ItemKNN demonstrates an effective balance between leveraging diversity (ItemCoverage) and accuracy (nDCG), likely due to its reliance on collaborative information. However, ChatGPT's performance remains mediocre despite its poor accuracy results, and this trend persists even when considering a cutoff of 20. Surprisingly, when the cutoff is raised to 50, all models, except ChatGPT, exhibit a significant coverage of items. This observation leads to an intriguing question: Could ChatGPT possess specialized knowledge about certain movies or reviews, making it less familiar with a broader range of movies beyond the 759 it is most familiar with? Notably, ChatGPT's diversity is only half of what UserKNN achieves.
Furthermore, upon scrutinizing other large language models (LLMs) like PaLM-2 and GPT-3.5, it becomes apparent that they demonstrate comparable behaviour to ChatGPT. Consequently, the architectural characteristics of these models and their shared task of predicting the most probable token likely play a significant role in shaping this pattern. Such LLMs seem to prioritize a limited set of highly probable items, which could account for the similarity in their performance. This observation emphasizes the necessity of broadening our investigations to encompass a wider array of LLMs, aiming to understand their behaviour and potential limitations comprehensively.
**LAST.FM.** Similar outcomes are observed when examining the LAST.FM dataset. The Random model achieves the highest coverage, with AttributeItemKNN (A-ItemKNN) following closely behind. However, ItemKNN remains the best overall model. As for ChatGPT, it aligns with the top-performing models to some extent, but its coverage falls below expectations, reaching only 1,099 when evaluated with a cutoff of 50. Nevertheless, it appears that ChatGPT may have a higher upper limit for overall artists proposed compared to MovieLens, indicating a broader knowledge of the music domain and a less influential popularity bias.
Regarding the Gini index, a similar pattern is observed as in the previous dataset, although ChatGPT demonstrates relatively better performance. Additionally, ChatGPT exhibits remarkable efficiency in terms of EPC with a cutoff of 50, outperforming all other models except PaLM-2. This finding suggests that the one thousand items covered by ChatGPT may not necessarily align with the most popular ones, indicating its potential for promoting novelty.
It is worth noting that all LLM models demonstrate higher EPC values when evaluated with a cutoff of 50, emphasizing the significance of this particular cutoff in
encouraging novelty in the recommended items.
**Facebook Books.** In the domain of books, ChatGPT continues to exhibit outstanding performance, especially in terms of novelty, despite covering a relatively small number of books (1,029 with a cutoff of 50). Interestingly, other large language models (LLMs) also demonstrate a similar trend to ChatGPT, recommending novel and less popular items.
The exact reason behind ChatGPT's exceptional performance in this context remains uncertain, giving rise to several plausible hypotheses. One possibility is that ChatGPT's training data includes a substantial amount of book reviews, thereby
enhancing its capacity to provide diverse and innovative recommendations. Moreover, ChatGPT might effectively leverage collaborative information or thoroughly exploit the content of items to deliver well-targeted recommendations.

Table 4: Diversity (ItemCoverage, Gini) and novelty (EPC) results for all models at cutoffs 10, 20, and 50 on MovieLens, LAST.FM, and Facebook Books.
To validate these hypotheses and attain a deeper understanding of ChatGPT's exceptional novelty in the Facebook Books dataset, subsequent experiments will thoroughly investigate and analyze ChatGPT's performance.
**Conclusion.** In summary, the question "How much is ChatGPT proposing diverse and novel recommendations, compared with the state-of-the-art?" has a context-dependent answer. Indeed, the level of diversity and novelty exhibited by ChatGPT varies depending on the considered dataset and domain. In its vanilla version, ChatGPT tends to demonstrate lower diversity but higher novelty when dealing with Books, and it showcases good novelty in the realm of Music. These findings imply that ChatGPT and other LLMs have the potential to serve as effective RSs with higher levels of novelty and satisfactory diversification.
### RQ1c: How much is ChatGPT biased compared with the state-of-the-art?
Evaluating the presence of bias in ChatGPT and comparing its behaviour with other state-of-the-art models requires an analysis of several factors. Bias can manifest in various forms, including cultural, gender, racial, and ideological aspects, and it is essential to assess how each factor influences the recommendations. These biases can be ingrained in the training data, influenced by underlying algorithms, and reflected in the model's ability to generate fair and unbiased recommendations.
Given the black-box nature of ChatGPT, which limits access to its internal workings, our evaluation focuses on examining the presence of biases in the recommendations it generates. This evaluation involves scrutinizing the recommendations for potential biases based on the identified factors. The results of this experiment are reported in Table 5.
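As a rough guide to how the popularity-bias measures discussed below are usually computed, the sketch assumes the common definitions of ACLT (average number of long-tail items per list) and ARP (average training popularity of the recommended items). The 20% short-head threshold and the helper names are assumptions, and PopREO, which compares exposure between popular and unpopular item groups, is omitted for brevity.

```python
import numpy as np

def short_head(item_popularity, head_fraction=0.2):
    """Items in the most popular `head_fraction` of the catalogue (assumed threshold)."""
    ranked = sorted(item_popularity, key=item_popularity.get, reverse=True)
    return set(ranked[: int(len(ranked) * head_fraction)])

def aclt(rec_lists, item_popularity, head_fraction=0.2):
    """Average Coverage of Long-Tail items per list (higher = less popularity bias)."""
    head = short_head(item_popularity, head_fraction)
    return float(np.mean([sum(item not in head for item in items)
                          for items in rec_lists.values()]))

def arp(rec_lists, item_popularity):
    """Average Recommendation Popularity of the recommended items (lower = less bias)."""
    return float(np.mean([np.mean([item_popularity.get(item, 0) for item in items])
                          for items in rec_lists.values()]))
```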
**MovieLens.** As highlighted in the previous sections, the MovieLens dataset exhibits a significant popularity bias. To contextualize our discussion, we use MostPop as a reference point, given that its ACLT value of 0 indicates the highest bias.
EASE\({}^{R}\) demonstrates a biased trend similar to MostPop, which may be attributed to its simple yet effective architecture that proposes highly relevant recommendations. Content-based models such as VSM and AttributeItemKNN, in contrast, exhibit different patterns, with generally poor and unreliable performance in recommending relevant items. UserKNN reveals a high popularity-bias trend, whereas ItemKNN leans more towards long-tail items, further supporting the previous findings on lower diversity but higher novelty (refer to 5.2).
The evaluation of ChatGPT's performance in the movies domain indicates that it outperforms most collaborative approaches (excluding ItemKNN) and other tested large language models (LLMs). However, when compared to content-based (CB) models, it does not achieve superior results. These observations point to the presence of
bias in LLMs, which is further corroborated by the findings from the novelty measures analysis (refer to 5.2).
When considering cutoffs 20 and 50, the models demonstrate similar behaviour, with Attribute-based ItemKNN emerging as the least biased model at cutoff 20. ChatGPT and the other large language models (LLMs) show a noteworthy ability to avoid significant bias whenever they can recommend a longer list of items. Hence, they are able to recommend niche items to users based on their content interests once the most popular ones have already been proposed. This may imply that LLMs combine "safe" recommendations based on an item's popularity with "riskier" ones.
Table 5: Popularity-bias results (ACLT, ARP, PopREO) for all models at cutoffs 10, 20, and 50 on MovieLens, LAST.FM, and Facebook Books.
**LAST.FM.** Our analysis in the music domain confirms a lower dataset bias compared to the MovieLens dataset. This conclusion is supported by the better performance of EASE\({}^{R}\), which is significantly different from the MostPop approach.
Moreover, ChatGPT demonstrates performance on par with the leading models in ACLT and ARP, outperforming all other models except ItemKNN in PopREO. This consistent trend persists when evaluating the same metrics at cutoff 20 and cutoff 50. Furthermore, as expected, the Random and AttributeItemKNN models produce high values (i.e., indicating a low bias) across all evaluations.
In addition, the PaLM-2 and GPT-3.5 results follow ChatGPT's, further supporting the assumption that an LLM fine-tuned on conversational data reaches higher performance in user-related tasks.
**Facebook Books.** Similarly to the previous analysis, ChatGPT once again exhibits remarkable performance in the domain of books. Possible motivations for this impressive performance could be attributed to ChatGPT's higher specialization in the book domain, but further exploration is needed to understand the underlying factors.
However, when comparing ChatGPT to other content-based models (A-ItemKNN, VSM, and A-UserKNN), it shows relatively worse results in terms of ACLT, indicating its higher susceptibility to popularity bias. This finding raises questions about how popularity bias affects ChatGPT's book recommendations and warrants further investigation.
This observation is further supported by ARP, where AttributeItemKNN and VSM exhibit significantly lower values. ChatGPT may still rely on collaborative data, although some content information is utilized in performing recommendations.
Moreover, when considering PopREO, AttributeItemKNN and VSM demonstrate a better tradeoff, while the other large language models (LLMs) exhibit behaviour similar to ChatGPT. There are marginal differences in ACLT and ARP, where PaLM outperforms ChatGPT and GPT-3.5. These differences may depend on PaLM's lower specialization in the book domain or on its lesser exploitation of collaborative data.
However, despite being influenced by popularity bias, ChatGPT performs comparably to other recommendation systems, and these trends remain consistent across various cutoff thresholds.
**Conclusion.** Finally, a noteworthy level of bias is evident when comparing ChatGPT to the state-of-the-art models. ChatGPT demonstrates varying degrees of popularity bias across different datasets and exhibits the behaviour of recommending popular items. Similar observations can be made for GPT-3.5 and PaLM-2. These findings highlight the need for intensive efforts to address and mitigate bias in the final recommendations generated by ChatGPT and other LLMs.
### RQ1d: Which type of recommender system is ChatGPT more similar to?
The previous analysis positioned ChatGPT and other Large Language Models (LLMs) within the state-of-the-art. However, it remains unclear whether these models behave
as Content-based (CB) or Collaborative Filtering systems. In this section, we aim to address this uncertainty by analyzing the recommendations generated by ChatGPT and interpreting them in the context of state-of-the-art recommender models. The results of this evaluation are presented in Table 6, offering a comprehensive overview of the recommendation-set similarity (Jaccard index) and ranking similarity (Kendall's Tau coefficient) between the recommendation lists of ChatGPT and those generated by other recommendation system (RS) algorithms.
Table 6: Similarity between ChatGPT's recommendation lists and those of the baseline recommenders, measured with the Jaccard index and Kendall's Tau at cutoffs 10, 20, and 50 on MovieLens, LAST.FM, and Facebook Books.
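For reference, the two similarity measures can be computed per user as in the sketch below and then averaged over users; how items that appear in only one of the two lists are handled is an assumption of this sketch, and the actual evaluation may follow a different convention.

```python
from scipy.stats import kendalltau

def jaccard(list_a, list_b):
    """Set overlap between two recommendation lists, ignoring rank."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def kendall_on_common_items(list_a, list_b):
    """Kendall's Tau over the ranks of the items shared by the two lists."""
    common = [item for item in list_a if item in list_b]
    if len(common) < 2:
        return 0.0
    ranks_a = [list_a.index(item) for item in common]
    ranks_b = [list_b.index(item) for item in common]
    tau, _ = kendalltau(ranks_a, ranks_b)
    return tau
```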
**MovieLens.** The analysis of the Jaccard index uncovers intriguing findings concerning the similarity of ChatGPT to other recommender models on the MovieLens dataset. Notably, ChatGPT, EASE\({}^{R}\), and MostPop exhibit the highest similarity, indicating that ChatGPT tends to recommend popular items. However, going beyond these two models, ChatGPT's behaviour aligns more closely with collaborative-based models like EASE\({}^{R}\), RP\({}^{3}\beta\), AttributeUserKNN (a hybrid model), and UserKNN.
The observed similarity in ChatGPT's behaviour raises the question of whether it is primarily influenced by popularity bias or the collaborative information it gathers. It is worth noting that while ChatGPT suggests the same items for recommendation, their ranking differs. This disparity underscores ChatGPT's unique position in the state-of-the-art, positioned between content-based (CB) and collaborative-based filtering (CBF) recommendation systems.
The analysis of the Kendall metric reveals interesting insights about ChatGPT's behavior at different cutoffs. At cutoff 10, ChatGPT exhibits the highest similarity to AttributeUserKNN, indicating a notable alignment between the two models. However, this similarity decreases significantly at cutoffs 20 and 50. As a result, for the MovieLens dataset, ChatGPT primarily acts as a recommender system suggesting popular items while incorporating some content-based information, especially at cutoff 10. In contrast, at cutoffs 20 and 50, ChatGPT relies more on popularity and collaborative elements rather than content-based methods.
**LAST.FM.** The previous experiments conducted on the LAST.FM dataset revealed a consistent trend of low popularity bias for ChatGPT. This finding suggests that ChatGPT lacks specialized knowledge of the music domain, highlighting intriguing aspects of the LLM's performance in the recommendation task.
A detailed analysis of both similarity metrics further confirms the validity of the earlier assumption. The results show that ChatGPT's recommended lists have low similarity with the MostPop-generated lists but demonstrate high similarity with collaborative filtering models, i.e. RP\({}^{3}\beta\) and UserKNN.
The observed low similarity with Content-based models provides additional evidence that ChatGPT has limited knowledge in the music domain. However, at cutoff 10, RP\({}^{3}\beta\), UserKNN, and AttributeUserKNN continue to be the most comparable models to ChatGPT. This finding implies that ChatGPT demonstrates greater similarity to hybrid and collaborative models rather than content-based ones, challenging the initial hypothesis that ChatGPT relies solely on content data.
This pattern remains consistent at cutoff 20 and 50, with the Kendall metric indicating that ChatGPT demonstrates purely collaborative behaviour at cutoff 50.
**Facebook Books.** As demonstrated in Section 5.3 ("How much is ChatGPT biased compared with the state-of-the-art?"), the analysis shows that ChatGPT is susceptible to popularity bias in the Facebook Books dataset. The Jaccard index analysis further validates this observation, indicating that ChatGPT exhibits the highest similarity to MostPop, followed by EASE\({}^{R}\)(Jaccard @10/@20/@50 and Kendall's Tau @50). This empirical evidence confirms our hypothesis that ChatGPT tends to prioritize collaborative-popularity data.
However, AttributeUserKNN stands out as the first non-collaborative model with similarities to ChatGPT, indicating a potential hybrid behaviour with occasional peaks of popularity influence. This pattern persists at cutoff 50, while at cutoff 20 (Kendall's Tau), AttributeUserKNN emerges as the most similar model to ChatGPT.
This pattern remains consistent at cutoff 20 and 50, and with the Kendall metric.
**Conclusion.** These findings underscore ChatGPT's tendency to align with hybrid and collaborative recommender models, showcasing its preference for a balanced approach rather than relying solely on content-based methods or popularity.
Specifically, in cases where content data is excessive or insufficient to make a recommendation, ChatGPT shows higher similarity with collaborative models, as demonstrated with Jaccard on MovieLens and LAST.FM.
This observation sheds new light on ChatGPT's unique behaviour within the realm of recommendation systems, revealing results and features that have not been previously explored.
### RQ2: Is ChatGPT able to exploit user preferences to re-rank a recommendation list?
In the previous experiments, ChatGPT has demonstrated the capability to recommend new items. Nonetheless, further investigation is required to assess its ability to understand user preferences and deliver personalised recommendations. To address this, we conducted two experiments:
* Experiment 1: Re-ranking a fixed list of the most popular items.
* Experiment 2: Dynamically generating personalised lists for each user starting from the preferences of their nearest neighbours.
The first task was to re-rank a list of items. Nevertheless, ChatGPT went beyond the assigned task. Indeed, ChatGPT replaced less relevant items from the fixed list by introducing new and different items. This behaviour was observed less frequently in the second experiment.
The accuracy, diversity, novelty, and bias evaluation results for both the re-ranking experiments are presented in Tables 7, 8, and 9.
**Re-ranking a Most Popular list.** To explore the re-ranking setting, we generated a fixed list of the fifty most popular items for each dataset. We first evaluated this list in its original form across all users, and then provided it to ChatGPT together with the user profile to generate and evaluate a personalised re-ranked list.
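A minimal sketch of how such a re-ranking request could be built is shown below; the wording, the function name, and the formatting are hypothetical and do not reproduce the exact prompt submitted to ChatGPT in this experiment.

```python
def build_rerank_prompt(user_profile_items, candidate_items):
    """Hypothetical prompt asking the model to re-rank a fixed candidate list for one user."""
    profile = "\n".join(f"- {title}" for title in user_profile_items)
    candidates = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(candidate_items))
    return (
        "The user liked the following items in the past:\n"
        f"{profile}\n\n"
        "Re-rank the candidate items below from most to least relevant for this user, "
        "returning only the ordered list:\n"
        f"{candidates}"
    )
```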
This investigation reveals that ChatGPT significantly improves the overall recommendations through re-ranking, using the MostPop results as a baseline. Across all datasets and the majority of cutoff values, ChatGPT's improvement is twofold: (i) a higher Gini index, suggesting its ability to diversify recommendations, and (ii) a higher EPC, indicating the successful introduction of novel items. However, it is
important to note that this improvement varies for the Facebook Books dataset, with a cutoff of 50.
The enhanced ranked list suggests that ChatGPT effectively uses the provided user preferences to personalise and re-rank the recommendations. Additionally, it demonstrates its ability to introduce new items, as evidenced by a higher ItemCoverage metric value.
**Re-rank on nearest neighbors' preferences.** We conducted a second experiment to gain deeper insights into ChatGPT's ability to re-rank. Unlike the previous one, where a fixed list was used for all users, we generated personalized lists for each user and tasked ChatGPT with re-ranking them. We carefully curated these lists by selecting the five highest-rated items from each nearest neighbour, creating a tailored list of fifty items for each user. To ensure objectivity and minimize ranking bias based
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline & & \multicolumn{6}{c}{**MovieLens**} \\ \cline{2-10} & **model** & **nDCG\(\uparrow\)** & **Recall\(\uparrow\)** & **Precision\(\uparrow\)** & **ACLT\(\uparrow\)** & **ARP\(\downarrow\)** & **PopREO\(\downarrow\)** & **ItemCoverage\(\uparrow\)** & **Gini\(\uparrow\)** & **EPC\(\uparrow\)** \\ \hline \multirow{2}{*}{**cutoff 10**} & MostPop & 0.0805 & 0.03473 & 0.06812 & 0.0000 & **108.63205** & **1** & 25 & 0.00614 & 0.05819 \\ & ChatGPT-3.5 & **0.14880** & **0.07446** & **0.12265** & **0.00336** & 153.13389 & **1** & **80** & **0.01202** & **0.09903** \\ \hline \multirow{2}{*}{**cutoff 20**} & MostPop & 0.08946 & 0.07117 & 0.06980 & 0.00000 & **105.21535** & **1** & 47 & 0.01256 & 0.05713 \\ & ChatGPT-3.5 & **0.13959** & **0.10780** & **0.09010** & **0.02181** & 133.63866 & **1** & **133** & **0.01880** & **0.08066** \\ \hline \multirow{2}{*}{**cutoff 50**} & MostPop & 0.10983 & 0.15480 & **0.05054** & 0.00000 & **107.36097** & **1** & 50 & 0.02452 & 0.05832 \\ & ChatGPT-3.5 & **0.13479** & **0.14450** & 0.01758 & **0.13423** & 108.35477 & **1** & **246** & **0.02758** & **0.07017** \\ \hline \hline \multirow{2}{*}{**cutoff 10**} & **model** & **nDCG\(\uparrow\)** & **Recall\(\uparrow\)** & **Precision\(\uparrow\)** & **ACLT\(\uparrow\)** & **ARP\(\downarrow\)** & **PopREO\(\downarrow\)** & **ItemCoverage\(\uparrow\)** & **Gini\(\uparrow\)** & **EPC\(\uparrow\)** \\ \cline{2-10} & Nearest Neighbors & 0.10568 & 0.06681 & 0.09983 & **1.51678** & **65.15302** & **0.72954** & **1146** & **0.28106** & 0.08347 \\ \cline{2-10} & ChatGPT-3.5 & **0.20359** & **0.11696** & **0.16594** & 0.22483 & 119.277131 & 0.98989 & 533 & 0.056791 & **0.14229** \\ \hline \multirow{2}{*}{**cutoff 20**} & Nearest Neighbors & 0.12622 & 0.13109 & 0.09740 & **2.96141** & **65.03809** & **0.73239** & **1346** & **0.30116** & 0.08229 \\ & ChatGPT-3.5 & **0.20495** & **0.18005** & **0.12970** & 1.20973 & 92.73143 & 0.90861 & 1009 & 0.15534 & **0.12133** \\ \hline \multirow{2}{*}{**cutoff 50**} & Nearest Neighbors & 0.19933 & **0.33021** & **0.090584** & **7.29027** & **64.85725** & 0.71754 & **1533** & **0.31651** & 0.08150 \\ & ChatGPT-3.5 & **0.22823** & 0.28522 & 0.08399 & 5.58054 & 69.56444 & **0.73156** & 1501 & 0.29852 & **0.10086** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Experiment 2 and 3 - A comparative analysis of ChatGPT-3.5 metrics after the re-rank on the MostPop list, and personalized list with various baselines, with cutoff at 10, 20, and 50 on MovieLens. The best results are highlighted in bold.
Table 8: A comparative analysis of ChatGPT-3.5 metrics after the re-rank on the MostPop list and the personalized list, with cutoffs at 10, 20, and 50 on LAST.FM.
on neighbour similarity, we further introduced disorder by randomizing the item order in the list.
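A possible construction of these neighbour-based candidate lists is sketched below; the cosine-similarity neighbourhood, the counts (ten neighbours, five items each), the de-duplication step, and the fixed random seed are assumptions made for illustration and may differ from the exact procedure used in the experiment.

```python
import random
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def neighbour_candidates(ratings, user_idx, n_neighbours=10, items_per_neighbour=5, seed=0):
    """Candidate list built from the top-rated items of the user's nearest neighbours."""
    sims = cosine_similarity(ratings)[user_idx]
    sims[user_idx] = -np.inf                      # exclude the user themself
    neighbours = np.argsort(-sims)[:n_neighbours]
    candidates = []
    for n in neighbours:
        top_items = np.argsort(-ratings[n])[:items_per_neighbour]
        for i in top_items:
            if int(i) not in candidates:
                candidates.append(int(i))
    random.Random(seed).shuffle(candidates)       # randomise order to avoid similarity-based ranking bias
    return candidates[:50]
```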
By using Nearest Neighbors and Cosine Similarity as a reference, the results obtained from ChatGPT show its ability to "understand" user preferences. Across all datasets and cutoff values, ChatGPT achieves higher values in nDCG and EPC, resulting in an improved ranking and an introduction of novel items in the recommended lists. Additionally, the model exhibits lower Item Coverage but higher Popularity Bias (PopREO), along with a decrease in the Gini value. These findings suggest that the re-ranked list generated by ChatGPT tends to shift towards the most popular items, limiting diversification. However, the higher performance in nDCG and the above observations reaffirm ChatGPT's ability to offer personalized item suggestions to each user.
**Conclusion.** In summary, our exploration focuses on ChatGPT's ability to utilize user profiles for re-ranking recommendations, leading to enhanced personalization. We investigate two scenarios: one involving a fixed list of the most popular items and the other based on preferences from nearest neighbours. The findings emphasize ChatGPT's effectiveness in personalizing recommendations based on user profiles. Additionally, in scenarios with limited items (e.g., the first experimental scenario), ChatGPT showcases its reliance on its knowledge, offering the potential to address the Cold Start Problem.
RQ3: Does the substantial amount of knowledge utilized to train ChatGPT compensate for the absence of a complete user history in a cold-start scenario?
To investigate the performance of LLMs in cold-start scenarios, we employ a systematic two-step approach. First, we identify cold-start users by dividing users into quartiles based on the number of their past interactions. Subsequently, we use the lower quartile as a
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline & & \multicolumn{6}{c}{**Facebook Books**} \\ \cline{2-10} & **model** & **nDCG\(\dagger\)** & **Recall\(\dagger\)** & **Precision\(\dagger\)** & **ACLT\(\dagger\)** & **ARP\(\dagger\)** & **PopREO\(\dagger\)** & **ItemCoverage\(\dagger\)** & **Gini\(\dagger\)** & **EPC\(\dagger\)** \\ \hline \multirow{2}{*}{**cutoff 10**} & MostPop & 0.00091 & 0.00257 & 0.00037 & **3.61866** & **74.86106** & 1.00000 & 17 & 0.00420 & 0.00018 \\ & ChatGPT-3.5 & **0.01232** & **0.01954** & **0.00432** & 0.31845 & 90.87187 & **0.43095** & **331** & **0.01368** & **0.00452** \\ \hline \multirow{2}{*}{**cutoff 20**} & MostPop & 0.00328 & 0.01016 & 0.00110 & **9.59589** & **45.57179** & 0.65500 & 27 & 0.00864 & 0.00071 \\ & ChatGPT-3.5 & **0.01552** & **0.02965** & **0.00324** & 1.97024 & 70.71079 & **0.40133** & **430** & **0.01785** & **0.00374** \\ \hline \multirow{2}{*}{**cutoff 50**} & MostPop & 0.01148 & **0.04449** & **0.00190** & **20.90883** & **33.07245** & 0.39975 & 49 & 0.02107 & 0.00150 \\ & ChatGPT-3.5 & **0.01743** & 0.03734 & 0.00161 & 0.64504 & 57.15838 & **0.03657** & **531** & **0.02535** & **0.00291** \\ \hline \hline \multirow{2}{*}{**cutoff 10**} & model & **nDCG\(\dagger\)** & **Recall\(\dagger\)** & **Precision\(\dagger\)** & **ACLT\(\dagger\)** & **ARP\(\dagger\)** & **PopREO\(\dagger\)** & **ItemCoverage\(\dagger\)** & **Gini\(\dagger\)** & **EPC\(\dagger\)** \\ \cline{2-10} & Nearest Neighbors & 0.01191 & 0.02122 & 0.00500 & **1.84203** & **36.50306** & **0.64570** & **1630** & **0.26445** & 0.03069 \\ & ChatGPT-3.5 & **0.02993** & **0.04455** & **0.00975** & 0.5074 & 64.05953 & 0.72869 & 698 & 0.07020 & **0.01111** \\ \hline \multirow{2}{*}{**cutoff 20**} & Nearest Neighbors & 0.01928 & 0.04483 & 0.00514 & **3.73475** & **36.74467** & 0.77165 & **1887** & **0.27770** & 0.00481 \\ & ChatGPT-3.5 & **0.03390** & **0.05754** & **0.00621** & 1.69094 & 51.54417 & **0.73359** & 1237 & 0.14129 & **0.00811** \\ \hline \multirow{2}{*}{**cutoff 50**} & Nearest Neighbors & 0.03017 & **0.08816** & **0.00397** & **7.43277** & **36.75897** & 0.77610 & **2033** & **0.28374** & 0.00475 \\ & ChatGPT-3.5 & **0.03739** & 0.07111 & 0.00308 & 4.61472 & 42.91002 & **0.71285** & 1751 & 0.23747 & **0.00624** \\ \hline \hline \end{tabular}
\end{table}
Table 9: Experiment 2 and 3 - A comparative analysis of ChatGPT-3.5 metrics after the re-rank on the MostPop list, and personalized list with various baselines, with cutoff at 10, 20, and 50 on Facebook Books. The best results are highlighted in bold.
filter on the test set, creating a subset of users representing the cold-start users. This enables us to evaluate all models and datasets under similar cold-start conditions, and the accuracy, diversity, novelty, and bias results are presented in Table 10.
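As a sketch, the cold-start selection described above can be implemented as follows; the mapping format and the use of the 25th percentile as the lower-quartile threshold are the only assumptions.

```python
import numpy as np

def cold_start_users(train_interactions):
    """Users whose number of training interactions falls in the lowest quartile."""
    counts = {user: len(items) for user, items in train_interactions.items()}
    threshold = np.percentile(list(counts.values()), 25)
    return {user for user, c in counts.items() if c <= threshold}

# Example: keep only cold-start users in the test set before computing the metrics
# cold_test = {u: items for u, items in test_set.items() if u in cold_start_users(train_set)}
```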
In our evaluation, considering metrics such as nDCG, Recall, and Precision, LLMs consistently obtain the top three positions across all datasets. Notably, ChatGPT-3.5 excels in the music and books domains, while PaLM-2 demonstrates superior performance in the movie domain. However, these models do not exhibit notable results considering other metrics, with the exception of the novelty metric (EPC). Nevertheless, it is evident that ChatGPT (and other LLMs) achieve remarkable performance in generating recommendations even in cold-start situations, outperforming state-of-the-art models.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{6}{c}{**MovieLens**} \\ \cline{2-9}
**Model** & **nDCG\(\uparrow\)** & **Recall\(\uparrow\)** & **Precision\(\uparrow\)** & **ACLT\(\uparrow\)** & **ARP\(\downarrow\)** & **PopREQ\(\downarrow\)** & **ItemCoverage\(\uparrow\)** & **Gini\(\uparrow\)** & **EPC\(\uparrow\)** \\ \cline{2-9}
**PaLM-2** & **0.14032** & 0.07665 & **0.10596** & 0.30877 & 104.70760 & 0.98709 & 277 & 0.02758 & **0.09883** \\ \cline{2-9}
**ChatGPT-3.5** & 0.08719 & 0.09375 & 0.04211 & 0.43421 & 129.05592 & 1.00000 & 211 & 0.02664 & 0.03797 \\ \cline{2-9}
**GPT-3.5** & 0.08648 & **0.09686** & 0.04474 & 0.34211 & 132.22092 & 1.00000 & 211 & 0.03071 & 0.03858 \\ \cline{2-9}
**RP\({}^{\beta}\)** & 0.03052 & 0.05000 & 0.01974 & 0.69737 & 67.09013 & 1.00000 & 518 & 0.15417 & 0.01539 \\ \cline{2-9}
**ItemKNN** & 0.02710 & 0.03575 & 0.01645 & 1.51974 & 62.83618 & 1.00000 & 652 & 0.20589 & 0.01468 \\ \cline{2-9}
**UserKNN** & 0.02329 & 0.03191 & 0.01447 & 0.99342 & 61.06711 & 0.61702 & 596 & 0.18760 & 0.01284 \\ \cline{2-9}
**EASE\({}^{\pi}\)** & 0.02253 & 0.03388 & 0.01711 & 0.00000 & 94.94227 & 1.00000 & 142 & 0.03556 & 0.01287 \\ \cline{2-9}
**MostPop** & 0.01803 & 0.02763 & 0.01184 & 0.00000 & 97.65724 & 1.00000 & 24 & 0.00763 & 0.00895 \\ \cline{2-9}
**AttributeUserKNN** & 0.01376 & 0.01941 & 0.00921 & 1.24342 & 59.71184 & 1.00000 & 648 & 0.21535 & 0.00750 \\ \cline{2-9}
**AttributeItemKNN** & 0.01201 & 0.01513 & 0.00592 & 5.31579 & 22.73355 & 0.17391 & 957 & 0.37738 & 0.00611 \\ \cline{2-9}
**VSM** & 0.00568 & 0.00800 & 0.00395 & 4.69079 & 23.93616 & **0.00264** & 583 & 0.19462 & 0.00349 \\ \cline{2-9}
**Random** & 0.00400 & 0.00647 & 0.00263 & **5.44737** & **21.70658** & 0.24752 & **1034** & **0.43162** & 0.00221 \\ \hline \hline \multicolumn{9}{c}{**LAST.FM**} \\ \cline{2-9}
**Model** & **nDCG\(\uparrow\)** & **Recall\(\uparrow\)** & **Precision\(\uparrow\)** & **ACLT\(\uparrow\)** & **ARP\(\downarrow\)** & **PopREQ\(\downarrow\)** & **ItemCoverage\(\uparrow\)** & **Gini\(\uparrow\)** & **EPC\(\uparrow\)** \\ \cline{2-9}
**ChatGPT-3.5** & **0.20311** & **0.21970** & **0.09859** & 1.92525 & 90.67174 & 0.31012 & 708 & 0.19659 & **0.11847** \\ \cline{2-9}
**GPT-3.5** & 0.17498 & 0.17811 & 0.08101 & 1.33131 & 100.79920 & 0.42954 & 570 & 0.13548 & 0.10251 \\ \cline{2-9}
**PaLM-2** & 0.17121 & 0.19503 & 0.08948 & 1.39056 & 93.32864 & 0.37234 & 588 & 0.13837 & 0.10235 \\ \cline{2-9}
**UserKNN** & 0.02378 & 0.04819 & 0.02000 & 2.21616 & 75.09649 & 0.05705 & 1067 & 0.36914 & 0.02013 \\ \cline{2-9}
**VSM** & 0.03030 & 0.04199 & 0.01960 & 1.90303 & 83.36566 & 0.08187 & 902 & 0.26529 & 0.01823 \\ \cline{2-9}
**RP\({}^{\beta}\)**\(\beta\)** & 0.02892 & 0.04178 & 0.01980 & 2.68687 & 70.10788 & 0.11289 & 1090 & 0.38758 & 0.01764 \\ \cline{2-9}
**ItemKNN** & 0.02769 & 0.03865 & 0.01798 & 4.14545 & 48.40222 & 0.04236 & 1319 & 0.56435 & 0.01689 \\ \cline{2-9}
**AttributeItemKNN** & 0.02566 & 0.03475 & 0.01616 & 4.93939 & 42.10768 & **0.01351** & 1373 & 0.06725 & 0.01511 \\ \cline{2-9}
**AttributeUserKNN** & 0.02169 & 0.03044 & 0.01495 & 4.21616 & 46.46586 & 0.26246 & 1121 & 0.40568 & 0.01344 \\ \cline{2-9}
**EASE\({}^{\pi}\)** & 0.02139 & 0.02990 & 0.01414 & 0.56162 & 105.56162 & 0.53712 & 617 & 0.16076 & 0.01311 \\ \cline{2-9}
**MostPop** & 0.01494 & 0.02175 & 0.01051 & 0.00000 & 146.27939 & 1.00000 & 21 & 0.00722 & 0.00889 \\ \cline{2-9}
**Random** & 0.00392 & 0.00572 & 0.00242 & **5.59394** & **32.74949** & 0.12686 & **1467** & **0.69767** & 0.00220 \\ \hline \hline \multicolumn{9}{c}{**Facebook Books**} \\ \cline{2-9}
**Model** & **nDCG\(\uparrow\)** & **Recall\(\uparrow\)** & **Precision\(\uparrow\)** & **ACLT\(\uparrow\)** & **ARP\(\downarrow\)** & **PopREQ\(\downarrow\)** & **ItemCoverage\(\uparrow\)** & **Gini\(\uparrow\)** & **EPC\(\uparrow\)** \\ \cline{2-9}
**ChatGPT-3.5** & **0.04871** & **0.06809** & **0.01539** & 1.92245 & 55.01169 & 0.37060 & 635 & 0.04169 & **0.01858** \\ \cline{2-9}
**PaLM-2** & 0.03975 & 0.05399 & 0.01263 & 2.38039 & 51.86428 & 0.34845 & 535 & 0.03969 & 0.01551 \\ \cline{2-9}
**GPT-3.5** & 0.03689 & 0.05440 & 0.01262 & 1.30787 & 71.96433
**Conclusion.** To summarize, ChatGPT and other Large Language Models can recommend items in cold-start scenarios. However, these models show limitations in Bias and Coverage metrics, while displaying strong performance in Novelty and Accuracy.
## 6 Conclusion
This study delves into the potential of ChatGPT as a recommender system, conducting an extensive experimental investigation on three datasets: MovieLens Small, Last.FM, and Facebook Books. We compare ChatGPT's performance against various state-of-the-art recommender systems (RSs), such as UserKNN, ItemKNN, RP\({}^{3}\beta\), EASE\({}^{R}\), AttributeItemKNN, AttributeUserKNN, and VSM.
With the first experiment, we aim to understand how accurate ChatGPT is in performing recommendations. Thus, we compared it with state-of-the-art solutions, including other Large Language Models, and observed that it accurately solves the recommendation task. Even without any improvements through prompt engineering, ChatGPT produces results comparable to state-of-the-art Recommender Systems and stands out from other Large Language Models.
Then, our examination explored the diversity and novelty of ChatGPT in suggesting new items to users, and our findings have shown different conclusions depending on the domain. In its vanilla version, ChatGPT reached lower diversity but higher novelty when dealing with Books, while it showcased acceptable novelty in the Music domain.
Furthermore, our investigation aimed to reveal the potential bias inherent in ChatGPT's recommendations. We showed that ChatGPT exhibits varying degrees of popularity bias across the different datasets, tending to recommend popular items. We also observed an analogous biased behaviour in GPT-3.5 and PaLM-2.
Deepening the analysis of the recommendation abilities of ChatGPT, we further explored analogies with existing recommendation paradigms. By comparing the lists of suggested items produced by ChatGPT with those from the baselines, we discovered a tendency to align with hybrid and collaborative recommenders. Thus, our results revealed that ChatGPT's capabilities go beyond mere content information in selecting the items to suggest.
We were also interested in examining how ChatGPT leverages user profiles to re-rank existing recommendations. We examined two settings: the first with a fixed list of the most popular items and the second based on the user's nearest neighbours. The findings demonstrated ChatGPT's efficacy in further personalizing recommendations based on user profiles.
Finally, we investigated whether ChatGPT can address the cold-start scenario. The experimental results showed that ChatGPT can still provide relevant suggestions, outperforming the other state-of-the-art recommenders.
In summary, our findings revealed that ChatGPT exhibited characteristics of a hybrid recommender system, leveraging collaborative and content-based information. Additionally, ChatGPT showcased domain-specific knowledge, particularly in books and movies domains. Moreover, it tended to recommend popular items and exhibited a remarkable proficiency in handling the cold start problem.
Despite its contributions, this study has certain limitations. Firstly, it does not delve into techniques for prompt-engineering, such as Chain-of-Thought or Tree-of-Thought, which could potentially enhance the quality of recommendations. Secondly, the rapid emergence of new LLMs has introduced additional baselines that were not considered in the current results. Lastly, the exploration of recommendations in scenarios with abundant user information was not feasible due to the context limits of the ChatGPT API.
Considering the promising outcomes of this study, our future research aims to delve deeper into ChatGPT's performance in the recommendation task, with a focus on accurately designing a recommender framework that incorporates ChatGPT to enhance recommendation performance. To achieve this objective, we plan to explore potential improvements in the evaluation by investigating prompt engineering techniques and domain-specific fine-tuning methodologies.
|
2309.13400 | A note on nonlinear diffusive equations on Poincaré half plane | In this paper we show some explicit results regarding non-linear diffusive
equations on Poincaré half plane. We obtain exact solutions by using the
generalized separation of variables and we also show the meaning of these
results in the context of the general theory of the invariant subspace method. | Roberto Garra, Francesco Maltese | 2023-09-23T15:22:17Z | http://arxiv.org/abs/2309.13400v1 | # A note on nonlinear diffusive equations on Poincare half plane
###### Abstract
In this paper we show some explicit results regarding non-linear diffusive equations on Poincare half plane. We obtain exact solutions by using the generalized separation of variables and we also show the meaning of these results in the context of the general theory of the invariant subspace method.
_Keywords:_ Nonlinear diffusion equation, Hyperbolic geometry, Exact solutions.
## 1 Introduction
The analysis of exact solutions for nonlinear diffusive equations plays a central role in applications, for example in the physics of porous media [8]. There are many different methods to find particular interesting exact solutions of nonlinear PDEs, for example the invariant subspace method [1] or the generalized separation of variables [5, 6]. In the recent paper [2], the authors study the non-linear time-fractional equation
\[\frac{\partial^{\nu}}{\partial t^{\nu}}u(\eta,t)=\Delta_{H}u^{n}-u=\frac{1}{ \sinh\eta}\frac{\partial}{\partial\eta}\sinh\eta\frac{\partial u^{n}}{\partial \eta}-u,\quad n>1, \tag{1.1}\]
involving the time-fractional derivative \(\partial^{\nu}/\partial t^{\nu}\) in the sense of Caputo. Motivated by this study, here we consider two general classes of nonlinear equations in the hyperbolic plane, showing that it is possible to construct interesting exact solutions for these equations by using simple methods.
We first consider a more general formulation of the equation (1.1), that is
\[\widehat{O}_{t}u(\eta,t)=\Delta_{H}u^{n}-u=\frac{1}{\sinh\eta}\frac{\partial} {\partial\eta}\sinh\eta\frac{\partial u^{n}}{\partial\eta}-u,\quad n>1, \tag{1.2}\]
where \(\widehat{O}_{t}\) is a linear differential operator acting on the variable \(t\). We show that this equation admits elementary exact solutions that can be constructed by separation of variables. These results can be proved rigorously by using, for example, the invariant subspace method. We consider some particular interesting cases in the wide family of nonlinear equations (1.2), namely when the linear operator \(\widehat{O}_{t}\) is a first-order time derivative, a Laguerre derivative, or a fractional derivative. In the second part of the paper we consider the following family of nonlinear equations in the hyperbolic plane
\[\widehat{O}_{t}u(\eta,t)=u\Delta_{H}u=\frac{u}{\sinh\eta}\frac{\partial}{ \partial\eta}\sinh\eta\frac{\partial u}{\partial\eta}. \tag{1.3}\]
This case is particularly interesting, since it admits useful non-trivial exact solutions.
Essentially, there are few investigations about nonlinear fractional equations on the hyperbolic plane. Our main aim is to provide some ideas for the construction of exact solutions for some families of interesting nonlinear equations. By using similar methods, many other problems can be solved.
## 2 Construction of exact solutions for nonlinear diffusive equations on Poincare half plane
### A first interesting case
There are many families of nonlinear equations that can be solved by means of the invariant subspace method (see [1] for the details). As far as we know, there are few studies about the application of this method to solve nonlinear equations in the hyperbolic space. We show the potential utility of this method, starting from a generalization of the time-fractional nonlinear diffusive equation on the Poincaré half plane studied in [2], that is
\[\widehat{O}_{t}u(\eta,t)=\Delta_{H}u^{n}-u=\frac{1}{\sinh\eta}\frac{\partial}{ \partial\eta}\sinh\eta\frac{\partial u^{n}}{\partial\eta}-u,\quad n>1. \tag{2.4}\]
We have the following result.
**Theorem 1**.: The equation (2.4) admits a solution of the form
\[u(\eta,t)=f(t)\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}, \tag{2.5}\]
where \(f(t)\) is a solution of the equation
\[\widehat{O}_{t}f(t)=-f(t)\]
and \(c_{1}\) and \(c_{2}\) are real constants such that \(c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}>0\).
Proof.: We observe that
\[\frac{\partial}{\partial\eta}\sinh\eta\frac{\partial u^{n}}{\partial\eta}=f^{n}(t)\frac{\partial}{\partial\eta}\sinh\eta\frac{\partial}{\partial\eta}\bigg{[}c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}\bigg{]}=0.\]
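Indeed, the spatial term vanishes because of the elementary identity
\[\frac{d}{d\eta}\ln(\tanh\frac{\eta}{2})=\frac{1}{2\sinh\frac{\eta}{2}\cosh\frac{\eta}{2}}=\frac{1}{\sinh\eta},\qquad\text{so that}\qquad\sinh\eta\frac{\partial}{\partial\eta}\bigg{[}c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}\bigg{]}=c_{1},\]
which is constant in \(\eta\) and therefore has vanishing derivative with respect to \(\eta\).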
Thus, in this case, \(u^{n}(\eta,t)\) belongs to the kernel of \(\Delta_{H}\). From this observation, we can insert the function \(u(\eta,t)\) into the equation (2.4) and we have
\[\widehat{O}_{t}\bigg{(}f(t)\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}\bigg{)} =\frac{1}{\sinh\eta}\frac{\partial}{\partial\eta}\sinh\eta\frac{\partial}{ \partial\eta}\bigg{[}f(t)\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}\bigg{]} ^{n}-f(t)\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}\]
\[\bigg{(}\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}\bigg{)}\cdot\widehat{O}_ {t}f(t)=-f(t)\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}\]
\[\widehat{O}_{t}f(t)=-f(t)\]
By hypothesis on \(f(t)\) we obtain the claimed result.
By means of Theorem 1 we can provide an exact solution for (2.4) in different interesting cases. For example, if \(\widehat{O}_{t}\) coincides with the Caputo fractional derivative of order \(\beta\in(0,1)\), that is,
\[\widehat{O}_{t}u(\eta,t)=\frac{\partial^{\beta}u}{\partial t^{\beta}}=\frac{1}{ \Gamma(1-\beta)}\int_{0}^{t}(t-\tau)^{-\beta}\frac{\partial u}{\partial\tau}d\tau, \tag{2.6}\]
we have that the function
\[u(\eta,t)=E_{\beta}(-t^{\beta})\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}, \tag{2.7}\]
is a solution for the non-linear time-fractional equation
\[\frac{\partial^{\beta}}{\partial t^{\beta}}u(\eta,t)=\frac{1}{\sinh\eta}\frac{ \partial}{\partial\eta}\sinh\eta\frac{\partial u^{n}}{\partial\eta}-u,\quad n >1. \tag{2.8}\]
In (2.7) we denoted by \(E_{\beta}(-t^{\beta})\) the one-parameter Mittag-Leffler function, i.e. the eigenfunction of the Caputo fractional derivative (see e.g. [3]).
For \(\beta=1\) we recover the solution
\[u(\eta,t)=e^{-t}\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}, \tag{2.9}\]
for the classical equation
\[\frac{\partial}{\partial t}u(\eta,t)=\frac{1}{\sinh\eta}\frac{\partial}{ \partial\eta}\sinh\eta\frac{\partial u^{n}}{\partial\eta}-u,\quad n>1. \tag{2.10}\]
Another interesting case is when \(\widehat{O}_{t}=\frac{\partial}{\partial t}t\frac{\partial}{\partial t}\) that is the so-called Laguerre derivative (see for example [7]). In this case we have that the function
\[u(\eta,t)=C_{0}(-t)\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}, \tag{2.11}\]
is a solution for the equation
\[\frac{\partial}{\partial t}t\frac{\partial}{\partial t}u(\eta,t)=\frac{1}{ \sinh\eta}\frac{\partial}{\partial\eta}\sinh\eta\frac{\partial u^{n}}{\partial \eta}-u,\quad n>1. \tag{2.12}\]
In equation (2.11), the function \(C_{0}\) is a Bessel-type function, i.e.
\[C_{0}(t)=\sum_{k=0}^{\infty}\frac{t^{k}}{k!^{2}} \tag{2.13}\]
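Indeed, applying the Laguerre derivative term by term to \(C_{0}(\lambda t)\) gives
\[\frac{\partial}{\partial t}t\frac{\partial}{\partial t}\sum_{k=0}^{\infty}\frac{\lambda^{k}t^{k}}{k!^{2}}=\sum_{k=1}^{\infty}\frac{k^{2}\lambda^{k}t^{k-1}}{k!^{2}}=\lambda\sum_{j=0}^{\infty}\frac{\lambda^{j}t^{j}}{j!^{2}}=\lambda C_{0}(\lambda t),\]
so that, in particular, \(f(t)=C_{0}(-t)\) satisfies \(\widehat{O}_{t}f=-f\), as required by Theorem 1.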
By using this approach we can construct exact solutions for many different nonlinear PDEs involving the hyperbolic Laplacian. For example, an interesting outcome inspired by [4] is given by the following result
**Proposition 1**.: The nonlinear equation
\[\frac{\partial u}{\partial t}-\frac{i\omega}{\alpha}u=\frac{1}{\sinh\eta} \frac{\partial}{\partial\eta}\sinh\eta\frac{\partial u^{n}}{\partial\eta}, \tag{2.14}\]
admits the completely periodic separated-variable solution
\[u(\eta,t)=\exp\left(\frac{i\omega}{\alpha}t\right)\sqrt[n]{c_{1}\ln(\tanh\frac{\eta}{2})+c_{2}}. \tag{2.15}\]
The proof can be simply obtained by direct substitution.
### The second case
**Theorem 2**.: The nonlinear diffusion equation
\[\widehat{O}_{t}u(\eta,t)=\frac{u}{\sinh\eta}\frac{\partial}{\partial\eta}\sinh \eta\frac{\partial u}{\partial\eta} \tag{2.16}\]
admits an explicit solution of the form
\[u(\eta,t)=f_{1}(t)\ln(\sinh\eta)+f_{2}(t)\ln(\tanh\frac{\eta}{2})+f_{3}(t), \tag{2.17}\]
where \(f_{1}(t)\) is a solution of the nonlinear equation
\[\widehat{O}_{t}f_{1}(t)=f_{1}^{2}(t), \tag{2.18}\]
while \(f_{2}\) and \(f_{3}\) satisfy the equations
\[\widehat{O}_{t}f_{2}=f_{1}f_{2},\quad\widehat{O}_{t}f_{3}=f_{1}f_{3}. \tag{2.19}\]
Proof.: We observe that
\[\frac{\partial}{\partial\eta}\sinh\eta\frac{\partial}{\partial\eta}\bigg{[}f_ {2}(t)\ln(\tanh\frac{\eta}{2})+f_{3}(t)\bigg{]}=0, \tag{2.20}\]
and
\[\frac{f_{1}(t)\ln(\sinh\eta)}{\sinh\eta}\frac{\partial}{\partial\eta}\sinh \eta\frac{\partial f_{1}\ln(\sinh\eta)}{\partial\eta}=f_{1}^{2}\ln(\sinh\eta). \tag{2.21}\]
Therefore, if we search for a solution of the form
\[u(\eta,t)=f_{1}(t)\ln(\sinh\eta)+f_{2}(t)\ln(\tanh\frac{\eta}{2})+f_{3}(t) \tag{2.22}\]
we have, by substitution
\[\ln(\sinh\eta)\widehat{O}_{t}f_{1}(t)+\ln(\tanh\frac{\eta}{2})\widehat{O}_{t} f_{2}(t)+\widehat{O}_{t}f_{3}(t)=f_{1}^{2}\ln(\sinh\eta)+f_{1}f_{2}\ln(\tanh \frac{\eta}{2})+f_{1}f_{3}. \tag{2.23}\]
Therefore the function (2.22) is a solution if
\[\widehat{O}_{t}f_{1}(t)=f_{1}^{2},\quad\widehat{O}_{t}f_{2}=f_{1}f_{2},\quad \widehat{O}_{t}f_{3}=f_{1}f_{3}.\]
as stated.
_Remark_.: Following the theory of the invariant subspace method, we have used the fact that the equation (2.16) admits the invariant subspace \(W^{3}=\langle 1,\ln(\sinh\eta),\ln(\tanh\frac{\eta}{2})\rangle\).
We now consider some interesting particular cases that are related to classical mathematical models. First of all we consider the classical nonlinear diffusive case
\[\frac{\partial u}{\partial t}=\frac{u}{\sinh\eta}\frac{\partial}{\partial\eta }\sinh\eta\frac{\partial u}{\partial\eta}. \tag{2.24}\]
In view of the previous Theorem, we can construct an exact solution by solving the following simple ODEs
\[\frac{d}{dt}f_{1}(t)=f_{1}^{2},\quad\frac{d}{dt}f_{2}=f_{1}f_{2},\quad\frac{d }{dt}f_{3}=f_{1}f_{3}.\]
Therefore we have that a solution of the equation (2.24) is given by
\[u(\eta,t)=\frac{\ln(\sinh\eta)}{t_{0}-t}+c_{1}\frac{\ln(\tanh\frac{\eta}{2})}{ t_{0}-t}+\frac{c_{2}}{t_{0}-t}. \tag{2.25}\] |
2309.03175 | Gender-specific Machine Translation with Large Language Models | While machine translation (MT) systems have seen significant improvements, it
is still common for translations to reflect societal biases, such as gender
bias. Decoder-only Large Language Models (LLMs) have demonstrated potential in
MT, albeit with performance slightly lagging behind traditional encoder-decoder
Neural Machine Translation (NMT) systems. However, LLMs offer a unique
advantage: the ability to control the properties of the output through prompts.
In this study, we leverage this flexibility to explore LLaMa's capability to
produce gender-specific translations. Our results indicate that LLaMa can
generate gender-specific translations with translation accuracy and gender bias
comparable to NLLB, a state-of-the-art multilingual NMT system. Furthermore,
our experiments reveal that LLaMa's gender-specific translations rely on
coreference resolution to determine gender, showing higher gender variance in
gender-ambiguous datasets but maintaining consistency in less ambiguous
contexts. This research investigates the potential and challenges of using LLMs
for gender-specific translations as an instance of the controllability of
outputs offered by LLMs. | Eduardo Sánchez, Pierre Andrews, Pontus Stenetorp, Mikel Artetxe, Marta R. Costa-jussà | 2023-09-06T17:24:06Z | http://arxiv.org/abs/2309.03175v2 | # Gender-specific Machine Translation with Large Language Models
###### Abstract
Decoder-only Large Language Models (LLMs) have demonstrated potential in machine translation (MT), albeit with performance slightly lagging behind traditional encoder-decoder Neural Machine Translation (NMT) systems. However, LLMs offer a unique advantage: the ability to control the properties of the output through prompts. In this study, we harness this flexibility to explore LLMa's capability to produce gender-specific translations for languages with grammatical gender. Our results indicate that LLMa can generate gender-specific translations with competitive accuracy and gender bias mitigation when compared to NLLB, a state-of-the-art multilingual NMT system. Furthermore, our experiments reveal that LLMa's translations are robust, showing significant performance drops when evaluated against opposite-gender references in gender-ambiguous datasets but maintaining consistency in less ambiguous contexts. This research provides insights into the potential and challenges of using LLMs for gender-specific translations and highlights the importance of in-context learning to elicit new tasks in LLMs.
## 1 Introduction
Decoder-only Large Language Models (LLMs) have shown Machine Translation (MT) capabilities inferior to but competitive with encoder-decoder Neural Machine Translation (NMT) systems [11, 13, 14, 15, 16]. However, LLMs have been proven to allow for more control over the properties of the output [1, 17, 18]. While NMT models are trained to accept a single sequence and output its translation, LLMs make it possible to condition the output format with a prompt.
The task of providing two translations for a gender-ambiguous source has been addressed mainly through post-editing, the most popular solution being Google Translate's post-translation gender rewriter [14]. The proposed system produces a single sentence that is then switched into the opposite gender using a second language-specific model. This approach is limited by having to train language-specific gender-switching models and the breadth of patterns it can cover.
Given the flexibility of prompting, we explore the capacity of LLMs to produce gender-specific translations for languages with grammatical gender, while maintaining a robust translation quality and minimizing gender bias. We use in-context examples (ICEs) to elicit the task of translating from a single source to two gender-specific targets (Figure 1).
We evaluate the quality of the gender-specific translations on two aspects: translation accuracy (measured in BLEU points) and gender bias mitigation (measured in coreference resolution accuracy).
Figure 1: Example of gender-specific translation template with two ICEs. The bold text denotes the prompt and the blue text denotes the model’s output. While the source sentence “I have friends who are orphans.” is ambiguous, i.e., it isn’t possible to infer the gender of the direct object of the sentence, the two produced sentences have clear grammatical gender markers, so that they can be translated as “I have (male) friends who are orphans.” and “I have (female) friends who are orphans.” respectively.
We show that it is possible to generate gender-specific translations with accuracy and gender bias mitigation competitive with NLLB, the SOTA massively multilingual NMT system at the time of writing this paper. LLaMa achieves on average under 2 BLEU points less than NLLB; when only accounting for languages explicitly included in LLaMa, this gap narrows to less than 1 BLEU point. We also demonstrate the robustness of the gender-specific translation method, showing steep decreases in performance when using the opposite gender as an evaluation reference in a highly gender-ambiguous dataset (MultilingualHolisticBias), but exhibiting almost no variance in less gender-ambiguous datasets (FLoRes).
The remainder of the article is organized as follows: Section 2 gives insight into related work on MT with LLMs and their gender biases, Section 3 outlines the proposed methodology, Section 4 lists the employed datasets, models, and evaluation metrics, Section 5 discusses experiments and results, and Section 6 highlights the conclusions of our work.
## 2 Related Work
**In-context learning.** In-context learning (ICL) is a well-known emergent ability of LLMs (Brown et al., 2020; Chowdhery et al., 2022), where new tasks are learned using only a few supervised examples in the form of demonstrations (Dong et al., 2023). Given a set of \(n\) labeled examples \(E=\{(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{n},y_{n})\}\), a prompt head \(H\), a templating function \(T(x,y)\) and an incomplete templating function \(\hat{T}(x)\), the output \(\hat{y}\) corresponding to the input \(\hat{x}\) is calculated according to the following formula:
\[\hat{y}=\operatorname*{arg\,max}_{y^{\prime}\in Y}P(y^{\prime}\mid X)\]
with
\[X=H\oplus\bigoplus_{i=1}^{n}T(x_{i},y_{i})\oplus\hat{T}(\hat{x})\]
where \(Y\) denotes the set of all possible outputs and \(\oplus\) denotes the concatenation operation.
Several authors have investigated how the choice of ICEs impacts the performance of LLMs. Min et al. (2022) explore the role of ICEs and show that ground truth demonstrations are not necessary for effective ICL and that other aspects, such as label space, input text distribution, and sequence format, are instead the key drivers of task performance. Agrawal et al. (2023) analyze the impact of factors such as the choice and number of examples on the output translation quality. They show that a single good prompt optimized to maximize translation quality on the development dataset can elicit learned information from the pre-trained language model. Tanwar et al. (2023) study ICL in a cross-lingual setting, proposing a prompt construction strategy aimed at mitigating the lack of alignment between input and output spaces in multilingual LLMs. Wei et al. (2023) examined the reliance of LLMs on the ICEs as the scale of the models grew. They find that larger models are able to override semantic priors from their pre-trained dataset more easily than smaller models, rendering the increasing capacity to rely on ICL an emergent ability in itself.
**MT and controlled output with LLMs.** A few papers have evaluated the quality of MT using different models and GPT-based commercial products, such as PaLM (Chowdhery et al., 2022), XGLM (Agrawal et al., 2023), GLM (Zhang et al., 2023), BLOOM (Bawden and Yvon, 2023), OPT (Zhu et al., 2023) or ChatGPT (Jiao et al., 2023; Hendy et al., 2023). They conclude that the translation quality comes close but remains behind the performance of NMTs. Using LLMs can, however, allow for more control over the properties of the output without further finetuning, such as specifying the language variety and style of the translation (Garcia et al., 2023), producing terminology-constrained translations (Moslem et al., 2023) or using an iterative prompting process to clarify ambiguities in the source sentence (Pilault et al., 2023). Challenges persist in the area of hallucinations (Zhang et al., 2023; Guerreiro et al., 2023) and in performance in low-resource languages (Bawden and Yvon, 2023; Zhu et al., 2023).
**Bias in LLMs.** There has been extensive work on bias in Natural Language Processing (Blodgett et al., 2020; Pennington et al., 2014; Bolukbasi et al., 2016; Caliskan et al., 2017; Costa-jussa, 2019). In particular, there has been growing interest in detecting bias in LLMs. Lin et al. (2021) propose a benchmark to evaluate truthfulness in LLMs while Gehman et al. (2020) present a dataset to discover neural degeneration that leads to toxic outputs. Shaikh et al. (2023) expand the previous work to analyze how chain-of-thought (CoT)
prompting (Wei et al., 2023) increases a model's likelihood to produce biased outputs previously mitigated through alignment.
**Gender Bias in MT.** More narrowly, some authors have worked on analyzing and mitigating biases in Machine Translation. prates2018multi studied the bias of the commercial translation system Google Translate and found that it yields male defaults much more frequently than what would be expected from US demographic data. costa-jussa2022 investigate the role of model architecture in the level of gender bias, while Mechura (2022) looks at the source sentences and elaborates a taxonomy of the features that induce gender bias into the translations. Others have looked more closely at the challenge of gender bias mitigation. stafanovics2020multi, seman2020multi assume that it is not always possible to infer all the necessary information from the source sentence alone and propose a method that uses word-level annotations containing information about the subject's gender to decouple the task of performing an unbiased translation from the task of acquiring gender-specific information. saunders2020multi treat the mitigation as a domain adaptation problem, using transfer learning on a small set of trusted, gender-balanced examples to achieve considerable gains with a fraction of the from-scratch training costs. fleisig2022multi develop a framework to make NMT systems suitable for gender bias mitigation through adversarial learning, adjusting the training objective at fine-tuning time. Finally, wang2022multi focus on existing biases in person name translation, applying a data augmentation technique consisting of randomly switching entities, obtaining satisfactory results.
## 3 Proposed Methodology
Given the capacity of LLMs to produce text given a context, we leverage LLMs' ICL abilities to elicit gender-specific translations via prompting. Our template consists of the English name of the source language between square brackets, followed by the source sentence. This is followed by two translations, each preceded by the English name of the target language. The translations represent, respectively, the masculine and the feminine translation of the source sentence. The three sentences are joined by a newline character ('\(\backslash\)n'). An example of a gender-specific prompt can be found in Figure 1.
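As an illustration, such a prompt could be assembled as follows; the exact brackets, separators and example translations below are assumptions, since only the overall layout of the template is specified here:

```python
def gender_specific_prompt(ices, source_sentence, src_lang="English", tgt_lang="Spanish"):
    """Build a prompt from in-context examples (ICEs) that map one source
    sentence to a masculine and a feminine translation, joined by newlines."""
    blocks = []
    for src, masc, fem in ices:
        blocks.append(f"[{src_lang}] {src}\n[{tgt_lang}] {masc}\n[{tgt_lang}] {fem}")
    # The query block is left open so the model completes the two translations.
    blocks.append(f"[{src_lang}] {source_sentence}\n[{tgt_lang}]")
    return "\n".join(blocks)


ices = [("I have friends who are orphans.",
         "Tengo amigos que son huérfanos.",    # masculine (illustrative)
         "Tengo amigas que son huérfanas.")]   # feminine (illustrative)
print(gender_specific_prompt(ices, "What do you think about Hasidic children?"))
```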
We compare the accuracy of the gender-specific translations produced by an LLM with the translations produced by an NMT model on the MultilingualHolisticBias dataset, which contains gender-specific translations from English to 25 languages. We verify the robustness of our approach by inverting the gender of the reference and calculating the decrease in translation accuracy. We also compare the gender bias of the gender-specific translations of an LLM with respect to an NMT model by evaluating coreference resolution accuracy on the WinoMT and BUG datasets. Finally, we generate gender-specific translations with FLoRes, a general-purpose dataset, and verify the similarity between both translations in a setting where very little gender ambiguity is expected when translating from English to other languages.
## 4 Experimental framework
### Data
**MultilingualHolisticBias.** To extend bias analyses beyond commonly studied languages, Costa-jussà et al. (2023) provided expert-curated translations of a subset (325 sentences) of Holistic Bias (Smith et al., 2022) into 50 languages. Additionally, MultilingualHolisticBias presents separate translations for each noun class or grammatical gender for those languages that make use of them. An example of an entry of the MultilingualHolisticBias dataset can be found in Table 1. For this study, we used the subset of languages that make use of grammatical genders or noun classes, as it allows us to benchmark the models' output against gender-specific translations. We have also verified that the selected 25 languages have a very high correlation between grammatical gender and natural gender, allowing us to establish a relationship between gender bias and the accuracy of coreference resolution in a model. A complete list of languages used from the MultilingualHolisticBias dataset can be found in Appendix A.
**WinoMT & BUG.** Aimed at estimating the gender bias of an MT system, Stanovsky et al. (2019) concatenate the Winogender (Rudinger et al., 2018) and WinoBias (Zhao et al., 2018) coreference test sets for 8 languages where the grammatical gender tends to be aligned with the biological gender (Craig, 1986; Mucchi-Faina, 2005; Corbett, 2007), naming the new dataset WinoMT. This benchmark relies on language-specific automatic measures for
alignment and morphological analysis, therefore it doesn't need gold translations for evaluation. Stanovsky et al.'s (2019) dataset was expanded by Levy et al. (2021) to include 108K diverse real-world English sentences in a new benchmark dubbed BUG. Given the size of the BUG dataset, we only use the GoldBUG split, made up of 1700 uniformly sampled sentences across every pattern and domain described in the dataset. The list of languages into which the WinoMT & BUG datasets are translated in this work can be found in Appendix A.
**FLoRes.** The Facebook Low Resource (FLoRes) MT benchmark is a massively multilingual general domain dataset introduced by Guzman et al. (2019) and Goyal et al. (2021), and expanded by the NLLB Team et al. (2022). In its latest version, it includes 200 languages. In this work, we use FLoRes's test set to validate the generalization of our results beyond highly ambiguous datasets such as MultilingualHolisticBias and WinoMT + BUG.
**Usage of datasets.** We use MultilingualHolisticBias to generate gender-specific translations in 25 languages with LLaMa and compare them with NLLB outputs. We translate the WinoMT and BUG datasets to 7 languages and we leverage Stanovsky et al.'s (2019) reference-less evaluation method to measure the gender bias of the translations both across models and across genders. We translate both WinoMT's stereotypical and non-stereotypical datasets (where gender correspondence is aligned or not with stereotypical and non-stereotypical gender role assignments, as defined by Zhao et al. (2018), who use statistics provided by the US Department of Labor2). We also use BUG's gold dataset to verify overall (stereotypical/non-stereotypical balanced) performance.
Footnote 2: [https://www.bls.gov/cpsa4t1.htm](https://www.bls.gov/cpsa4t1.htm)
Given the structure of the templates of the MultilingualHolisticBias sentences, it is possible that the model learns to force the grammatical gender of every word into masculine or feminine, rather than only setting the gender when there is ambiguity. In a general domain, where ambiguities are not prevalent, the masculine and the feminine translations should have minimal variations among each other. We translate FLoRes's test set to a group of languages and verify the similarity between masculine and feminine outputs.
### Models
We compare the performance of two models with different architectures on the task of MT: LLaMA (Touvron et al., 2023), a decoder-only model and NLLB (NLLB Team et al., 2022), an encoder-decoder model.
We use the NLLB-200 version with 3 billion parameters. For generation, we follow the standard setting (beam search with beam size 5, limiting the translation length to 100 tokens).
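For illustration, a generation call along these lines could look as follows with the Hugging Face transformers library; the checkpoint identifier and language codes below are illustrative assumptions, while the beam size and length limit follow the setting described above:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Checkpoint identifier assumed; "eng_Latn"/"spa_Latn" are NLLB-200 language codes.
name = "facebook/nllb-200-3.3B"
tokenizer = AutoTokenizer.from_pretrained(name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("What do you think about Hasidic children?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("spa_Latn"),  # target language
    num_beams=5,     # beam search with beam size 5
    max_length=100,  # limit the translation length to 100 tokens
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```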
For LLaMa we use the 7 billion parameters version. We prompt LLaMa with ICEs (Figure 1) to elicit the translation task.
### Evaluation
To facilitate comparison with previous works using MultilingualHolisticBias (Costa-jussa et al., 2023), we use the sacrebleu implementation of spBLEU (Goyal et al., 2021) to compute the translation quality with 'add-k = 1' smoothing. When using MultilingualHolisticBias, to avoid test set contamination, we employ a leave-one-out cross-validation (LOOCV) strategy to ensure no example is used as ICE of itself. This is possible because the test set sentences are independent of each other. We run evaluations with masculine, feminine, and both references. The first two evaluations are meant to establish the preference of the translation toward each gender, while the third evaluation establishes the correctness of the translation, regardless of the employed gender.

\begin{table}
\begin{tabular}{l|l|l|l|l|l}
 & source sentence & masculine & feminine & neutral & generic \\ \hline
est & What do you think about Hasidic children? & - & - & - & Mida arvate lastest, kes on hassidiid? \\ \hline
spa & What do you think about Hasidic children? & ¿Qué piensas de los niños jasídicos? & ¿Qué piensas de las niñas judías jasídicas? & - & - \\ \hline
kat & What do you think about Hasidic children? & - & - & - & - \\
\end{tabular}
\end{table}
Table 1: Example of entries in the full MultilingualHolisticBias dataset. From the 50 languages present in MultilingualHolisticBias, we selected those that exhibit grammatical gender differences, like Spanish (spa). The sentence in Georgian (kat) has been transcribed into Latin script by the authors.
Together with BLEU, we use Stanovsky et al.'s (2019) reference-less coreference resolution metric as a way of determining the level of gender bias in the translations.
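As a concrete illustration of the BLEU computation described above, a minimal sketch with sacrebleu could look as follows; the flores200 tokenizer option used to obtain spBLEU and the example strings are assumptions:

```python
import sacrebleu

# Illustrative hypothesis/reference pair; in practice these are the model
# outputs and the gender-specific references of MultilingualHolisticBias.
hypotheses = ["Tengo amigos que son huérfanos."]
references = [["Tengo amigos que son huérfanos."]]  # one reference stream

score = sacrebleu.corpus_bleu(
    hypotheses,
    references,
    smooth_method="add-k",  # the 'add-k = 1' smoothing mentioned above
    smooth_value=1,
    tokenize="flores200",   # SentencePiece tokenization used for spBLEU (assumed)
)
print(score.score)
```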
## 5 Analysis and Results
### Producing gender-specific translations with LLaMa
We investigate the capability of LLaMa to learn to produce gender-specific translations with ICL. We prompt LLaMa with 5, 16, and 32 ICEs produced with the masculine and feminine translations from MultilingualHolisticBias. In case no masculine or feminine translation is available for a given example, we use the neutral translation. Hereinafter all experiments are performed with this setting.
For NLLB, we calculate three BLEU scores on the output: one with the masculine reference, one with the feminine reference and one with both. In the case of LLaMa, we calculate two BLEU scores for each gender-specific output: one with the corresponding gender's reference and one with both references, for a total of four BLEU scores per generation.
After prompting LLaMa with the gender-specific template shown in Figure 1, we find that LLaMa's performance surpasses NLLB for all four reported metrics in 7 out of 25 languages, with NLLB surpassing LLaMa in 10 out of 25 languages. If we exclude the languages on which LLaMa hasn't been explicitly trained, we find that LLaMa surpasses NLLB in 5 out of 14 languages, while NLLB surpasses LLaMa in 4 out of 14 languages. Overall, BLEU scores for LLaMa are on average less than 2 points lower than with NLLB, and less than 1 point lower when only the languages explicitly included in LLaMa are considered. We summarize the results in Table 2.
\begin{table}
\begin{tabular}{c c c c} & NLLB & LLaMa & \(\Delta\) \\ \cline{2-4} arb & 31.65 & 10.42 & -21.23 \\ bel & 10.92 & 16.82 & 5.90 \\ bul & 44.22 & 12.54 & -31.68 \\ \hline cat & 37.93 & 56.87 & 18.94 \\ ces & 19.52 & 17.13 & -2.39 \\ dan & 54.32 & 49.69 & -4.63 \\ \hline deu & 23.62 & 41.81 & 18.19 \\ ell & 37.39 & 17.62 & -19.77 \\ fra & 52.59 & 68.36 & 15.77 \\ \hline ita & 31.69 & 35.52 & 3.83 \\ lit & 26.90 & 14.81 & -12.09 \\ lvs & 15.24 & 11.25 & -3.99 \\ \hline mar & 19.76 & 23.42 & 3.66 \\ nld & 24.66 & 24.16 & -0.50 \\ por & 37.30 & 27.76 & -9.54 \\ \hline ron & 22.11 & 20.08 & -2.03 \\ rus & 35.46 & 30.59 & -4.87 \\ slk & 40.89 & 25.09 & -15.80 \\ \hline slv & 33.11 & 3.30 & -29.81 \\ spa & 57.60 & 67.33 & 9.73 \\ swe & 45.94 & 36.20 & -9.74 \\ \hline tam & 14.25 & 10.65 & -3.60 \\ tha & 14.63 & 19.17 & 4.54 \\ ukr & 17.72 & 32.51 & 14.79 \\ \hline urd & 9.92 & 26.43 & 16.51 \\ \hline
**Average** & & & **-1.96** \\
**Average*** & & & **-0.93** \\ \end{tabular}
\end{table}
Table 2: Average BLEU scores across the four evaluation metrics (masculine reference, feminine reference, masculine to both references, feminine to both references) for gender-specific generation in LLaMa-7B compared to NLLB-3B. To make the results comparable, we have included the evaluation of NLLB with both references twice in the reported averages. In the case of LLaMa, we selected the best result out of the three sets of experiments with different numbers of ICEs (5, 16 and 32). The underlined values denote the best-performing model per language. The value **Average*** denotes the average among only the languages explicitly present in LLaMa.
We see a clear trend (with exceptions) of performance increase as the number of ICEs grows, which suggests that further gains are still possible. In some cases, jumping from 16 to 32 ICEs makes LLaMa miss the task specification, hallucinating a set of incoherent characters, or partially or totally failing to produce an output. The similarity of MultilingualHolisticBias's ICEs could play a big role in the competitiveness of the performance, which highlights the importance of relevant examples for ICL. The full results can be found in Appendix B.
Given these positive results, we wanted to estimate the significance of the gender-specificity when translating. We inverted the references for each gender of the LLaMa output (evaluating masculine translations with their corresponding feminine reference and vice versa). We found a steep decrease in performance with the wrong references (Table 3), which suggests a strong gender-specificity of the dataset with respect to the BLEU score and validates the significance of the experiments performed above. Detailed numbers are available in Appendix B.
### Gender bias evaluation of LLaMa
To assess the gender bias of LLaMa's gender-specific translations, we translate some of the test sets of WinoMT and BUG, as described in Section 4.1, with MultilingualHolisticBias as ICEs. These test sets present examples that require unambiguous coreference resolution or grammatical gender utilization regardless of stereotypical associations. Stanovsky et al. (2019) and Levy et al. (2021) found that several (encoder-decoder) NMTs are significantly prone to translate based on gender stereotypes rather than more meaningful context. We verify to which degree these errors are reproduced by LLaMa in gender-specific translations. As a baseline, we translate the selected WinoMT and BUG test sets with NLLB as well.
When performing the translation of WinoMT's and BUG's datasets, we found that the phenomenon of empty or incomplete (only one of our two genders) outputs continues to occur. Since a gender bias analysis is not defined over an empty sentence, we excluded incomplete outputs for our gender bias evaluation.
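For concreteness, the accuracy and \(\Delta_{G}\) values reported below could be computed along the following lines, assuming the gold and predicted genders of the annotated entities have already been extracted (the alignment and morphological analysis steps of the evaluation toolkit are abstracted away, and the helper is illustrative):

```python
from sklearn.metrics import f1_score

def gender_bias_metrics(gold, pred):
    """Accuracy and Delta_G (the F1-score gap between male and female nouns),
    given per-entity gold and predicted gender labels."""
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    f1_male = f1_score(gold, pred, pos_label="male", average="binary")
    f1_female = f1_score(gold, pred, pos_label="female", average="binary")
    return accuracy, f1_male - f1_female

gold = ["male", "female", "female", "male"]
pred = ["male", "male", "female", "male"]
print(gender_bias_metrics(gold, pred))
```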
\begin{table}
\begin{tabular}{c c c c} & right ref. & wrong ref. & \(\Delta\) \\ \cline{2-4} arb & 10.31 & 3.03 & -7.28 \\ bel & 16.72 & 12.46 & -4.26 \\ bul & 12.36 & 6.57 & -5.79 \\ \hline cat & 56.32 & 28.96 & -27.36 \\ ces & 16.98 & 13.73 & -3.25 \\ dan & 49.6 & 35.91 & -13.69 \\ \hline deu & 41.52 & 28.39 & -13.13 \\ ell & 17.42 & 7.52 & -9.9 \\ fra & 68.13 & 55.45 & -12.68 \\ \hline ita & 35.18 & 13.3 & -21.88 \\ lit & 14.54 & 4.78 & -9.76 \\ lvs & 11.14 & 2.13 & -9.01 \\ \hline mar & 23.38 & 4.32 & -19.06 \\ nld & 24.14 & 14.43 & -9.71 \\ por & 27.66 & 19.13 & -8.53 \\ \hline ron & 19.7 & 14.23 & -5.47 \\ rus & 30.43 & 23.7 & -6.73 \\ slk & 24.7 & 10.19 & -14.51 \\ \hline slv & 2.24 & 4.05 & 1.81 \\ spa & 67.18 & 40.69 & -26.49 \\ swe & 35.75 & 22.05 & -13.7 \\ \hline tam & 22.71 & 3.62 & -19.09 \\ tha & 23.81 & 3.72 & -20.09 \\ ukr & 32.19 & 19.84 & -12.35 \\ \hline urd & 26.3 & 20.09 & -6.21 \\ \hline
**Average** & & & **-11.93** \\ \end{tabular}
\end{table}
Table 3: Average BLEU score across a number of ICEs and gender of using the right gender reference for evaluation _vs_ using the wrong reference. The underlined values denote the best-performing evaluation.
We report balanced results for the stereotypical dataset in Table 4. We find that LLaMa tends to perform better on high-resource languages it has likely been more exposed to, with NLLB surpassing LLaMa 4 out of 7 times and LLaMa outperforming NLLB 3 out of 7 times. In all cases, except for Russian and Ukrainian, both systems have a higher-than-random accuracy, with improved performance on the male side over the female side.
When analyzing the results for the anti-stereotypical dataset (Table 5), the performance of LLaMa is almost on par with NLLB, albeit almost always inferior. Overall, the average difference in accuracy between the two systems is little more than 1%.
However, the performance on BUG's gold dataset, a balanced dataset constructed from a more diverse domain, favors NLLB over LLaMa in almost every language, except for Italian and Spanish, as can be seen in Table 6.
It is interesting to see that, in all three analyzed sets, the difference in performance between LLaMa's male and female outputs is almost non-existent, suggesting a reliance on coreference resolution during the gender-specific translation task, rather than forcing the gender of the nouns in the translation. Our results suggest very little variance across languages, gender outputs, and number of ICEs, further backing the hypothesis that when prompting LLaMa with the proposed gender-specific template, it performs a deeper analysis than simple gender matching, relying on mechanisms such as coreference resolution to determine each of the outputs.
### Generalizing to the general domain: generating gender-specific translations from FLoRes dataset
We already verified in Subsection 5.1 the high significance of gender for the accuracy of the outputs. We also assess the difference in performance for each produced gender when there aren't major gender ambiguities to translate. In this case, a robust model should not have significant differences between both genders. We translate FLoRes's test set into a subset of the languages used in previous experiments. Given that FLoRes is a general domain dataset, ambiguities should not be prevalent and both outputs should tend to be the same. We use MultilingualHolisticBias as ICEs, reproducing the setting in Sub-section 5.1 and we compare the BLEU scores of both outputs. The list of languages we translate into for this experiment can be found in Table 10.
\begin{table}
\begin{tabular}{l c c c c c c} & \multicolumn{2}{c}{NLLB} & \multicolumn{2}{c}{LLaMa (m.)} & \multicolumn{2}{c}{LLaMa (f.)} \\ \cline{2-7} & acc. (\(\uparrow\)) & \(\Delta_{G}\) (\(\downarrow\)) & acc. & \(\Delta_{G}\) & acc. & \(\Delta_{G}\) \\ \cline{2-7} arb & **69.7** & 18.1 & 46.8 & 48.0 & 46.8 & 47.9 \\ ces & **71.9** & 16.9 & 39.2 & 25.2 & 39.2 & 25.2 \\ deu & **79.3** & 24.6 & 78.3 & 32.5 & 78.3 & 32.5 \\ \hline ita & 50.5 & 27.1 & **52.3** & 39.9 & **52.3** & 39.9 \\ spa & **53.5** & 33.7 & **53.3** & 37.4 & **53.3** & 37.4 \\ rus & **47.5** & 20.9 & 40.9 & 40.4 & 40.9 & 40.4 \\ \hline ukr & **45.6** & 24.9 & 43.2 & 11.5 & 43.2 & 11.5 \\ \hline \end{tabular}
\end{table}
Table 6: Noun gender prediction accuracy on the subset of BUG’s gold dataset’s fully generated gender-specific translations with LLaMa, compared to NLLB’s prediction accuracy. \(\Delta_{G}\) denotes the F1-score difference between male and female nouns.
\begin{table}
\begin{tabular}{l c c c c c c} & \multicolumn{2}{c}{NLLB} & \multicolumn{2}{c}{LLaMa (m.)} & \multicolumn{2}{c}{LLaMa (f.)} \\ \cline{2-7} & acc. (\(\uparrow\)) & \(\Delta_{G}\) (\(\downarrow\)) & acc. & \(\Delta_{G}\) & acc. & \(\Delta_{G}\) \\ \cline{2-7} arb & **80.6** & 7.2 & 55.0 & 22.6 & 55.1 & 22.8 \\ ces & **76.0** & 10.0 & 55.3 & 15.6 & 55.3 & 15.6 \\ deu & 78.0 & 7.1 & **80.9** & 6.9 & **80.9** & 6.9 \\ \hline ita & 58.6 & 20.0 & **63.3** & 19.7 & **63.3** & 19.6 \\ spa & 69.3 & 13.2 & **77.5** & 5.9 & **77.5** & 6.0 \\ rus & **55.3** & 37.9 & 47.5 & 40.2 & 47.0 & 39.6 \\ \hline ukr & **43.4** & 36.5 & 38.5 & 22.9 & 38.5 & 22.8 \\ \end{tabular}
\end{table}
Table 4: Noun gender prediction accuracy on the subset of WinoMT’s stereotypical dataset’s fully generated gender-specific translations with LLaMa, compared to NLLB’s prediction accuracy. \(\Delta_{G}\) denotes the F1-score difference between male and female nouns.
\begin{table}
\begin{tabular}{l c c c c c c} & \multicolumn{2}{c}{NLLB} & \multicolumn{2}{c}{LLaMa (m.)} & \multicolumn{2}{c}{LLaMa (f.)} \\ \cline{2-7} & acc. (\(\uparrow\)) & \(\Delta_{G}\) (\(\downarrow\)) & acc. & \(\Delta_{G}\) & acc. & \(\Delta_{G}\) \\ \cline{2-7} arb & **49.4** & 31.2 & 45.7 & 33.4 & 45.5 & 36.6 \\ ces & **55.0** & 30.3 & 50.3 & 7.7 & 50.2 & 7.6 \\ deu & **59.5** & 16.4 & 52.1 & 24.1 & 52.1 & 24.1 \\ \hline ita & **36.7** & 31.5 & 35.4 & 24.9 & 35.3 & 24.8 \\ spa & **47.1** & 38.8 & 45.6 & 32.7 & 45.6 & 32.7 \\ rus & **32.5** & 4.1 & 29.9 & 0.9 & 29.9 & 0.9 \\ \hline ukr & 31.9 & 36.4 & **45.0** & 8.2 & **45.0** & 8.3 \\ \end{tabular}
\end{table}
Table 5: Noun gender prediction accuracy on the subset of WinoMT’s anti-stereotypical dataset’s fully generated gender-specific translations with LLaMa, compared to NLLB’s prediction accuracy. \(\Delta_{G}\) denotes the F1-score difference between male and female nouns.
The results show minor differences between both genders, suggesting a robust gender-specific generation based on coreference resolution rather than on mechanically switching the grammatical gender of the words of the sentence. This result is aligned with what experiments in Sub-section 5.2 indicated.
## 6 Conclusions
In this paper, we explored the capabilities and limitations of decoder-only LLMs in producing gender-specific translations. Our study encompassed a range of experiments across 25 languages and four datasets using LLaMa-7B, with NLLB as a baseline.
To gauge LLaMa's capacity for gender-specific translations, we employed gender-specific templates in our prompts. We observed that LLaMa outperforms NLLB in 7 out of 25 languages, while NLLB outperforms LLaMa in 10 out of 25, with NLLB scoring under 2 BLEU points on average above LLaMa and under 1 BLEU point when only measuring languages explicitly included in LLaMa. This suggests that it is possible to elicit accurate gender-specific translations with carefully constructed prompt templates and highly relevant ICEs, but more work is required to get LLMs on par with encoder-decoder models such as NLLB.
To analyze the gender bias of the gender-specific translations produced by LLaMa, we employed an automatic reference-less metric based on correct coreference resolution. We show that, while exhibiting slightly higher gender bias than NLLB translations, LLaMa's gender-specific translations consistently ranked above random for coreference resolution, suggesting a competitive baseline to improve the task of gender-specific MT.
Furthermore, we examined LLaMa's translation performance on a general-domain benchmark (FLORes). We found that LLaMa's translations tended to converge to a similar output for both masculine and feminine genders, reaffirming the model's robustness to contextual ambiguity.
In conclusion, our study shed light on the capabilities of LLMs to produce gender-specific translations, and the role of ICEs to elicit the task successfully. Our results indicate that LLaMa can generate gender-specific translations with competitive accuracy and reduced gender bias when compared to NLLB. Our experiments also reveal that LLaMa's translations are robust, showing significant performance drops when evaluated against opposite-gender references in gender-ambiguous datasets but maintaining consistency in less ambiguous contexts. While the quality of the gender-specific translations is competitive, they continue to fall slightly behind NMT models in accuracy and gender bias. These shortcomings should be addressed if the advantages and flexibility of decoder-only models are to be leveraged as an alternative to encoder-decoder models for Machine Translation.
## Limitations
Even though we performed a diverse set of experiments, some limitations arise due to the vastness of the research space we're dealing with. The study heavily relies on the effectiveness of prompt engineering, specifically in providing accurate ICEs. The conclusions drawn are thus constrained by the quality and relevance of the prompts used. Variations in prompt structure or content could yield different results. Moreover, the gender bias dataset used for fine-tuning can inherently introduce biases, impacting the model's behavior in gender-related translation scenarios.
\begin{table}
\begin{tabular}{l r r r r r r r r r} \multicolumn{1}{c}{} & \multicolumn{1}{c}{5} & \multicolumn{3}{c}{16} & \multicolumn{3}{c}{32} \\ \cline{2-11} \multicolumn{1}{c}{} & \multicolumn{1}{c}{masc.} & \multicolumn{1}{c}{fem.} & \multicolumn{1}{c}{\(\Delta\)} & \multicolumn{1}{c}{masc.} & \multicolumn{1}{c}{fem.} & \multicolumn{1}{c}{\(\Delta\)} & \multicolumn{1}{c}{masc.} & \multicolumn{1}{c}{fem.} & \multicolumn{1}{c}{\(\Delta\)} \\ \hline bel & 1.07 & 1.11 & 0.04 & 1.25 & 1.29 & 0.04 & 0.00 & 2.29 & 2.29 \\ cat & 25.97 & 25.02 & 0.95 & 26.07 & 25.27 & 0.80 & 26.62 & 26.23 & 0.39 \\ ces & 16.47 & 16.42 & 0.05 & 15.15 & 14.92 & 0.23 & 15.55 & 15.36 & 0.19 \\ \hline dan & 25.05 & 25.01 & 0.04 & 25.45 & 25.05 & 0.40 & 25.31 & 25.09 & 0.22 \\ deu & 26.33 & 25.54 & 0.21 & 26.47 & 23.74 & 2.73 & 28.14 & 24.12 & 4.02 \\ fra & 37.39 & 37.04 & 0.35 & 38.20 & 37.74 & 0.46 & 37.73 & 37.21 & 0.52 \\ \hline ita & 24.23 & 23.88 & 0.35 & 24.96 & 24.48 & 0.48 & 25.44 & 23.50 & 1.94 \\ lit & 2.17 & 2.02 & 0.15 & 2.03 & 1.99 & 0.04 & 2.23 & 2.18 & 0.05 \\ lvs & 1.60 & 1.57 & 0.03 & 1.80 & 1.69 & 0.11 & 1.73 & 1.64 & 0.09 \\ \hline nld & 23.08 & 22.53 & 0.55 & 23.74 & 22.04 & 1.70 & 23.21 & 18.26 & 4.95 \\ por & 39.28 & 38.40 & 0.88 & 38.88 & 37.89 & 0.99 & 39.11 & 37.87 & 1.24 \\ ron & 24.38 & 24.18 & 0.20 & 25.31 & 25.17 & 0.14 & 25.07 & 24.92 & 0.15 \\ \hline rus & 19.82 & 19.86 & 0.04 & 21.53 & 21.46 & 0.07 & 21.59 & 21.59 & 0.00 \\ spa & 24.45 & 24.05 & 0.40 & 24.57 & 23.78 & 0.79 & 25.03 & 23.77 & 0.74 \\ swe & 30.07 & 29.26 & 0.81 & 30.41 & 29.67 & 0.74 & 30.76 & 30.25 & 0.51 \\ \hline tha & 2.10 & 1.74 & 0.36 & 1.69 & 1.50 & 0.19 & 0.00 & 0.00 & 0.00 \\ ukr & 19.63 & 19.59 & 0.04 & 19.53 & 19.27 & 0.26 & 19.26 & 19.04 & 0.22 \\ \hline \multicolumn{1}{c}{**avg**} & \multicolumn{1}{c}{**0.31**} & \multicolumn{1}{c}{**0.63**} & \multicolumn{1}{c}{**1.02**} \\ \hline & & & & & & & & & \\ \end{tabular}
\end{table}
Table 7: BLEU scores for each output of LLaMa’s gender-specific translation on FLoRes’s testset. \(\Delta\) denotes the difference between male and female translations. Since FLoRes’s sentences are not expected to contain a high rate of ambiguity, a correct translation should tend to be identical in both outputs. Results are given for 5, 16 and 32 in-context examples.
The study focuses on a particular model, LLaMa, leaving out an exploration of alternative architectures that could yield different results. A notable constraint is the version of the model employed in this study. At the time the experiments were performed, the latest version of LLaMa was LLaMa-1 (simply known as LLaMa). At the time of publication of this paper, the latest LLaMa version is LLaMa-2 (Touvron et al., 2023b). The results with LLaMa-2 are likely to be greater than or equal to those with LLaMa-1, opening up the possibility of more nuanced conclusions about the performance of LLMs with respect to encoder-decoder NMT systems. Additionally, LLaMa-2 includes an instruction-finetuned version, which could mitigate some of the issues encountered (e.g., hallucinations) and could reduce the number of ICEs needed to elicit the task of gender-specific translation.
The study primarily offers quantitative analysis, often overlooking qualitative insights that could provide a richer understanding of the effectiveness and limitations of gender-related translations. Additionally, the evaluation focuses on short contexts, which might not encapsulate the complexities of real-world, long-form content. Moreover, the evaluation method of BLEU can excessively penalize errors in short sentences, leading to some distortions in the results of the experiments with MultilingualHolisticBias. A character-level metric like chrF (Popovic, 2015) could be used in future works to address this shortcoming.
In conclusion, while our study contributes valuable insights into the landscape of gender-related translations and bias mitigation in the context of machine learning, its limitations underscore the need for continued research. Adapting to diverse contextual gender nuances, improving evaluation methodologies, exploring alternative model architectures, addressing biases comprehensively, and considering wider ethical and societal implications are all directions that future research could take to build upon the foundation laid by this study.
## Ethics Statement
The evaluation of translation quality heavily relies on both automatic metrics and subjective human assessments. While metrics like BLEU offer quantitative insights, they might not capture the entirety of translation quality nuances. Human judgments are subjective and can vary based on individual preferences and biases, potentially affecting the overall evaluation process.
The understanding of nuanced gender contexts is intricate and can be challenging even for humans. The study tends to approach gender in a binary manner, which might not account for social perceptions among some of the users of these languages. This limitation is inherent in the current state of the field and warrants future investigations into better representation and handling of gender-related nuances.
Furthermore, although the MultilingualHolisticBias dataset presents an awareness of other biases like racial bias, it predominantly addresses gender bias, potentially overlooking a spectrum of biases present in language models. Moreover, the stereotypical and non-stereotypical datasets were built based on the US Department of Labor data. Since we work with a variety of world languages, the proportions stated on these datasets might not reflect the realities of the users of the wide range of languages employed in this study.
|
2309.12078 | Clustering-based Domain-Incremental Learning | We consider the problem of learning multiple tasks in a continual learning
setting in which data from different tasks is presented to the learner in a
streaming fashion. A key challenge in this setting is the so-called
"catastrophic forgetting problem", in which the performance of the learner in
an "old task" decreases when subsequently trained on a "new task". Existing
continual learning methods, such as Averaged Gradient Episodic Memory (A-GEM)
and Orthogonal Gradient Descent (OGD), address catastrophic forgetting by
minimizing the loss for the current task without increasing the loss for
previous tasks. However, these methods assume the learner knows when the task
changes, which is unrealistic in practice. In this paper, we alleviate the need
to provide the algorithm with information about task changes by using an online
clustering-based approach on a dynamically updated finite pool of samples or
gradients. We thereby successfully counteract catastrophic forgetting in one of
the hardest settings, namely: domain-incremental learning, a setting for which
the problem was previously unsolved. We showcase the benefits of our approach
by applying these ideas to projection-based methods, such as A-GEM and OGD,
which lead to task-agnostic versions of them. Experiments on real datasets
demonstrate the effectiveness of the proposed strategy and its promising
performance compared to state-of-the-art methods. | Christiaan Lamers, Rene Vidal, Nabil Belbachir, Niki van Stein, Thomas Baeck, Paris Giampouras | 2023-09-21T13:49:05Z | http://arxiv.org/abs/2309.12078v1 | # Clustering-based Domain-Incremental Learning
###### Abstract
We consider the problem of learning multiple tasks in a continual learning setting in which data from different tasks is presented to the learner in a streaming fashion. A key challenge in this setting is the so-called "catastrophic forgetting problem", in which the performance of the learner in an "old task" decreases when subsequently trained on a "new task". Existing continual learning methods, such as Averaged Gradient Episodic Memory (A-GEM) and Orthogonal Gradient Descent (OGD), address catastrophic forgetting by minimizing the loss for the current task without increasing the loss for previous tasks. However, these methods assume the learner knows when the task changes, which is unrealistic in practice. In this paper, we alleviate the need to provide the algorithm with information about task changes by using an online clustering-based approach on a dynamically updated finite pool of samples or gradients. We thereby successfully counteract catastrophic forgetting in one of the hardest settings, namely: domain-incremental learning, a setting for which the problem was previously unsolved. We showcase the benefits of our approach by applying these ideas to projection-based methods, such as A-GEM and OGD, which lead to task-agnostic versions of them. Experiments on real datasets demonstrate the effectiveness of the proposed strategy and its promising performance compared to state-of-the-art methods.
This work is supported by the project ULEARN "Unsupervised Lifelong Learning" and co-funded under the grant number 316080 of the Research Council of Norway.
## 1 Introduction
_Continual learning_ can be described as the ability to continually learn over time by accommodating new knowledge while retaining previously learned experiences Thrun (1998), Parisi et al. (2019). We humans typically have no problem with retaining old experiences while at the same time being able to learn new tasks. For example: when a child learns to ride a bike, she does not forget the previous experience of learning how to walk.
In sharp contrast, standard machine learning algorithms typically assume that independent and identically distributed (i.i.d.) training examples of a task are given and use Empirical Risk Minimization (ERM) to learn a model for the task Vapnik (1999). While this approach can be naturally extended to the setting in which samples arrive in an online fashion, when the task changes the conditional distribution of the data given the task also changes. As a consequence, the performance of the model on previously learned tasks significantly degrades when trained on new tasks, a phenomenon known as _catastrophic forgetting_.
Existing methods that deal with catastrophic forgetting often assume that the moment the task changes and the identity of the task are known at training time. For instance, Averaged Gradient Episodic Memory (A-GEM) Chaudhry et al. (2018) and Orthogonal Gradient Descent (OGD) Farajtabar et al. (2020) counteract catastrophic forgetting by solving a constrained optimization problem for each task change, which ensures that the loss function: a) decreases on the current task and b) does not increase on previous tasks. The constraints on previous tasks are enforced by storing either _labeled data samples_ (A-GEM) or _model gradients_ (OGD) from previous tasks as new tasks incrementally arrive. Thus, knowledge of a task change is needed to both solve the constrained optimization problem and update the pool of stored samples or gradients. Moreover, both A-GEM and OGD use a pool size that grows with the number of tasks, making memory requirements prohibitive for a large number of tasks. While such memory requirements could be reduced by maintaining a constant and finite memory, this would inevitably lead to catastrophic forgetting as the number of tasks grows.
The aforementioned weaknesses raise two critical questions:
1. _Can we develop a memory and projection-based continual learning algorithm that does not require knowledge of task boundaries?_
2. _Can we address catastrophic forgetting more effectively for a large number of tasks while maintaining a constant and finite amount of memory?_
**Paper contributions.** In this work, we address these questions by proposing an online clustering-based approach that renders standard projection-based continual learning algorithms task-agnostic. This approach successfully counteracts forgetting in the setting of domain-incremental learning, a setting for which this problem was previously unsolved van de Ven et al. (2022). The proposed approach is generic and can be applied to different projection-based algorithms. To showcase its merits, we focus on the A-GEM and OGD algorithms and propose two new task-agnostic versions called Task Agnostic Averaged Gradient Episodic Memory (TA-A-GEM) and Task Agnostic Orthogonal Gradient Descent (TA-OGD). These algorithms reduce the amount of forgetting when training on different tasks without the need to know any task boundaries and identities. This is achieved by dynamically updating the pool of _labeled data samples_ (A-GEM) or _model gradients_ (OGD) each time a new batch becomes available. In addition, unlike A-GEM and OGD, which store a growing number of samples or gradients as the number of tasks increases, leading to prohibitive memory requirements in practical scenarios, the proposed TA-A-GEM and TA-OGD methods have constant and finite memory requirements by keeping a finite number of samples or gradients throughout the training process. To achieve this, TA-A-GEM and TA-OGD leverage the structure of the training data, which are now grouped into clusters of samples or gradients.
Figure 1: After the task-incremental method is finished with the training on task \(T_{k}\), the memory (containing either labeled data samples in the case of A-GEM or model gradients in the case of OGD) is updated. This method is made domain-incremental by using an online clustering-based approach for updating the memory while keeping its size fixed.
Specifically, for each new batch, we first uniformly draw samples or gradients from the current batch and use them to initialize a predefined number of clusters, using the samples or gradients as the cluster centers. After initialization, new samples or gradients are assigned to the cluster center with minimum \(\ell_{2}\) distance. To keep a constant memory, when the maximum cluster size is reached we remove less informative cluster members and update the cluster center with the average of the cluster members.
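As an illustration, a minimal sketch of this online clustering update is given below, assuming items (flattened labeled samples or gradients) arrive one at a time; the class interface and the choice of evicting the member farthest from its center as the "less informative" one are illustrative assumptions rather than the exact rule used by TA-A-GEM and TA-OGD:

```python
import numpy as np

class ClusteredMemory:
    """Fixed-size pool of items (samples or gradients) organized into clusters."""

    def __init__(self, num_clusters, max_cluster_size):
        self.num_clusters = num_clusters
        self.max_cluster_size = max_cluster_size
        self.clusters = []  # each cluster is a list of vectors
        self.centers = []   # running mean of each cluster

    def add(self, item):
        # Until the predefined number of clusters is reached, new items start clusters.
        if len(self.clusters) < self.num_clusters:
            self.clusters.append([item])
            self.centers.append(item.copy())
            return
        # Assign the item to the cluster center with minimum l2 distance.
        k = int(np.argmin([np.linalg.norm(item - c) for c in self.centers]))
        self.clusters[k].append(item)
        if len(self.clusters[k]) > self.max_cluster_size:
            # Evict the member farthest from the center (an assumed notion of
            # "less informative"), keeping the memory size constant.
            far = int(np.argmax([np.linalg.norm(m - self.centers[k])
                                 for m in self.clusters[k]]))
            self.clusters[k].pop(far)
        # Update the cluster center with the average of the cluster members.
        self.centers[k] = np.mean(self.clusters[k], axis=0)

memory = ClusteredMemory(num_clusters=10, max_cluster_size=20)
memory.add(np.random.randn(128))
```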
In short, this paper makes the following contributions:
* We propose a generic clustering-based method for successfully extending projection-based continual learning algorithms to a task-agnostic context. We focus on two state-of-the-art projection-based algorithms i.e., A-GEM and OGD showing that the proposed strategy enjoys the merits of memory and projection-based methods Farajtabar et al. (2020); Lopez-Paz and Ranzato (2017); Doan et al. (2020) without requiring knowledge of the task identity or task changes.
* By leveraging the structure of the data from previously seen tasks, we can retain the information needed to address catastrophic forgetting, such as training data (A-GEM) or model gradients (OGD), while keeping the memory-size finite via a simple and efficient clustering procedure. We thus depart from the standard approach of OGD and A-GEM, which demand a growing amount of memory as new tasks sequentially arrive, which is impractical in real-world scenarios.
* We provide extensive experimental results for different continual learning settings on various datasets showing the promising performance of the proposed task-agnostic algorithms (TA-A-GEM and TA-OGD) compared to state-of-the-art methods.
## 2 Related Work
This section starts with an explanation of the three types of incremental learning. It then reviews the stability-plasticity dilemma, which continual learning methods have to face. Moreover, we present the main ideas of memory- and projection-based continual learning approaches, the class to which the proposed TA-A-GEM and TA-OGD methods belong, as well as the main advances in task-agnostic continual learning. Finally, we review the recent works leveraging representation learning for deriving efficient continual learning algorithms.
### Domain-incremental learning
In continual learning, different tasks can arrive in sequence. The learner must therefore learn new tasks incrementally. This is referred to as _incremental learning_. Three types of incremental learning can be specified: _task-incremental learning_, _domain-incremental learning_ and _class-incremental learning_ van de Ven et al. (2022). In task-incremental learning, the task identity is known to the learner during the training and testing phase. In domain-incremental learning, the task identity is not known to the learner at both training and testing time. In class-incremental learning, the learner must learn to identify a growing number of classes. Since we focus on a scenario where the number of classes is static and the task identity is not known during training and testing, we focus on the _domain-incremental_ setting. Alleviating catastrophic forgetting in such a scenario is an important unsolved challenge van de Ven et al. (2022).
### The Stability-Plasticity Dilemma
The balancing act between being able to gain new knowledge while assuring old knowledge is not lost is referred to as the _stability-plasticity dilemma_Mermillod et al. (2013). Continual learning approaches can be categorized in three major trends based on how the stability-plasticity dilemma is handled De Lange et al. (2021); Parisi et al. (2019). The first trend is to use the concept of _regularization_ of synaptic plasticity, where the plasticity of important weights is constrained in order to retain old skills, like the Memory Aware Synapses used in a continual setting in Aljundi et al. (2019). Elastic Weight Consolidation (EWC) is a seminal work of this class. When a new task arrives, EWC learns the optimal weights for this task, while penalizing changes of the weights towards values that are far from the optimal ones for the previous task Kirkpatrick et al. (2017). Several other variants of EWC have appeared in the literature and we refer the readers to De Lange et al. (2021) for a detailed review. The second trend is _expansion_Rusu et al. (2016); Aljundi et al. (2017); Mehta et al. (2021); Douillard et al. (2022), where a neural network is expanded by allocating new neural resources in order to gain new skills, while leaving old neurons unchanged in order to retain old skills. Finally, according to the third trend, which is _repetition_, old information is repeatedly fed to the network, along with new information. This can be implemented by applying a complementary learning system for integrating old and new skills and applying experience replay, or by simply mixing old and new data in the training step. In the literature, various approaches of the so-called replay-based methods which rely on the principle of repetition have come to the scene. These methods make use of memory resources and vary in the strategy they follow Rebuffi et al. (2017); Lopez-Paz and
Ranzato (2017); Shin et al. (2017); Chaudhry et al. (2019); Aljundi et al. (2019); van de Ven et al. (2020); Koh et al. (2021); Ye and Bors (2022).
This paper uses the terms "replay-based" and "memory-based" interchangeably because they represent similar concepts. Still, we tend to favor "replay-based" when a method stores samples from the dataset and "memory-based" when it stores different information. The proposed TA-A-GEM builds on A-GEM Chaudhry et al. (2018), which stores samples from the training set, and can thus be considered "replay based". The proposed TA-OGD builds on OGD Faratibar et al. (2020), and thus, in principle, falls into the category of memory-based methods since it stores gradients. At the same time, the proposed TA-A-GEM and TA-OGD use a projected gradient step and, hence, are also a projection-based approach. Note that this projection step implicitly regularizes the weights; therefore, A-GEM and OGD bear similarities with the regularization-based methods. Next, we elaborate on the specific class of memory-based and projection-based continual learning algorithms.
### Memory-based and Projection-based Continual Learning Methods
Over the last few years, several memory-based and projection-based methods have been proposed in the literature, Lopez-Paz and Ranzato (2017); Farajtabar et al. (2020). These make use of memory for storing information from the past, which helps to update the model towards non-forgetting directions. The goal is to address catastrophic forgetting by means of imposing certain constraints on the weight-updating process. Many different approaches have appeared in the literature over the last few years. In Lopez-Paz and Ranzato (2017), the authors propose to update weights in directions that do not increase the loss function values on samples of previously seen tasks. The resulting algorithm, dubbed Gradient Episodic Memory (GEM), thus stores a predefined number of gradients of the loss function corresponding to old tasks, Chaudhry et al. (2018); Lopez-Paz and Ranzato (2017). These are then used for updating the model by solving a constrained optimization problem. Orthogonal Gradient Descent (OGD) Farajtabar et al. (2020) stores a growing number of gradients of the model corresponding to old tasks' samples. In the weight update step, it projects its loss gradient to a direction that is orthogonal to all stored gradients. Specifically, gradients of the loss are projected on the orthogonal basis spanned by the stored gradients. In doing so, directions that increase forgetting of past tasks are excluded when the model learns a new task. This assumes however that the stored gradients remain relevant, even when the weights of the model move during the training process, thus arriving at a different point in the configuration space in which older tasks can have different gradients. Averaged Gradient Episodic Memory (A-GEM) Chaudhry et al. (2018) solves this problem by storing labeled data samples instead of gradients. It projects the loss gradient orthogonal to a reference gradient that is calculated at every training step from a subset of the stored labeled data. Though showing promising performance in addressing catastrophic forgetting, memory-based and projection-based methods suffer from two fundamental weaknesses: a) they require the moment of task change to be available in order to know when the memory should be updated, and b) memory cost should either scale with the number of tasks, e.g., in OGD Farajtabar et al. (2020), which is infeasible in real-world scenarios, or the stored data per task will decrease as in the case of GEM Lopez-Paz and Ranzato (2017), which also hinders the ability of the algorithm to address forgetting when it encounters a large number of tasks.
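To make the projection mechanism concrete, the sketch below shows an A-GEM-style update step; the flattening of model gradients and the sampling of the memory batch are omitted, and the code is a simplified illustration rather than the exact implementation of Chaudhry et al. (2018):

```python
import torch

def agem_project(grad, ref_grad):
    """Project the current loss gradient so the update does not increase the
    loss on a batch drawn from episodic memory (A-GEM-style rule)."""
    dot = torch.dot(grad, ref_grad)
    if dot < 0:  # the proposed step conflicts with the stored data
        grad = grad - (dot / torch.dot(ref_grad, ref_grad)) * ref_grad
    return grad

# Usage: flatten the model gradients, project them against the reference
# gradient computed on the memory batch, then write them back before the
# optimizer step.
g = torch.randn(1000)
g_ref = torch.randn(1000)
g = agem_project(g, g_ref)
```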
### Task Agnostic Continual Learning
Task boundaries and identities are rarely available in practical continual learning applications. In light of this, various task-agnostic continual learning methods have been proposed in the literature. In Harrison et al. (2020), the authors propose an auxiliary mechanism to detect tasks while counteracting forgetting. The resulting method operates in a task-agnostic environment showing promising empirical performance. Several other approaches have been proposed in the same spirit Caccia et al. (2020); He et al. (2019). Another line of work hinges on online learning ideas completely neglecting task identity or the need to know the moment of task change. In Zeno et al. (2018), the authors propose Bayesian Gradient Descent (BGD), an online variational Bayes approach in which model parameters with low variance are considered more important for previous tasks and, thus, are less updated. The opposite holds for parameters with high variance (hence high uncertainty). A similar idea for task-free continual learning appeared in Aljundi et al. (2019). Namely, the authors modified the so-called Memory Aware Synapses (MAS) algorithm in Aljundi et al. (2018), in order to operate in a task-agnostic online learning setup. For, they use an importance weight regularizer which penalizes changes to model parameters which negatively affect model performance on prior tasks. Finally, in Jin et al. (2020) the authors propose an online task-agnostic memory-based method. The main idea is to edit the stored-in-memory gradients used for addressing forgetting by solving an optimization problem in an online fashion. Recently, the idea of using self-supervised representations for task-agnostic continual learning was proposed in Pham et al. (2021), showing promising empirical performance.
Though the emergence of clustering in episodic memory has been recently acknowledged in the child development literature Horn et al. (2021), to the best of our knowledge, the proposed TA-A-GEM and TA-OGD are the first algorithms
that use online clustering for dynamically updating the memory of continual learning methods. While we focus on A-GEM and OGD, the adopted strategy could be applied to other memory-based and task-dependent continual learning approaches for allowing them to operate in task-agnostic environments.
### Representation Learning
Representation learning aims to find insightful data representations by exploiting their structure Ma et al. (2022). Recently, learned representations have been at the heart of several continual learning algorithms. In Chaudhry et al. (2020), the authors employed low-rank orthogonal subspace representations of the model parameters formulating continual learning as an optimization over the Stiefel manifold problem. The reported results showed promising performance and the ability of the approach to counteract forgetting. In Guo et al. (2022), _holistic_ representations learned via a mutual information maximization criterion were employed in the continual learning setting. The method can learn feature representations of the current task that are useful for the future tasks, hence leading to models that are more robust to forgetting. In Doan et al. (2020), a variant of the projection-based OGD method was proposed. The main idea is to perform principal component analysis on the set of stored gradients of the model and keep only the most informative principal components. However, the work in Doan et al. (2020), still assumes that task changes are provided to the algorithms and batch processing is utilized. Hence it is far from our proposed online clustering-based task-agnostic algorithms.
## 3 Proposed Approach
We assume that the \(n\) tasks \(\{T_{i}\}_{i=1}^{n}\) arrive sequentially and that during task \(T_{k}\) the data from tasks \(T_{i}\) for \(i<k\) are not presented to the learner. Each task consists of pairs of data points \((x,y)\in T_{k}\), where \(x\in\mathbb{R}^{d}\) is the input and \(y\) is a label. Here we assume that each task is a classification task and that all classification tasks share the same classes \(j=1,\dots,c\), where \(c\) is the number of classes. Therefore, we can represent \(y\in\mathbb{R}^{c}\) as a one-hot class encoding vector, i.e., \(y_{j}=1\) when \(j\) is the class label and \(y_{j}=0\) otherwise. We denote the network model as \(f(x;w)\in\mathbb{R}^{c}\), where \(w\in\mathbb{R}^{p}\) denotes the \(p\)-dimensional weights (parameters) of the network and \(f_{j}(x;w)\) is the \(j\)-th logit corresponding to the \(j\)-th class. The model is trained to predict the class label for input \(x\).
The proposed Task Agnostic Averaged Gradient Episodic Memory (TA-A-GEM) and Task Agnostic Orthogonal Gradient Descent (TA-OGD) methods rely on the forgetting counteracting mechanisms of Averaged Gradient Episodic Memory (A-GEM) Chaudhry et al. (2018) and Orthogonal Gradient Descent (OGD) Farajtabar et al. (2020), respectively. Next, we briefly describe the main ideas behind A-GEM and OGD and refer the reader to the Appendix or Chaudhry et al. (2018) and Farajtabar et al. (2020) for further details.
Both A-GEM and OGD assume the identity \(k_{t}\) of the task \(T_{k_{t}}\) at time step \(t\) is known. The empirical loss, during time step \(t\), with a batch size \(|T_{k_{t}}|\), is given by,
\[L_{t}(w)=\frac{1}{|T_{k}|}\sum_{(x,y)\in T_{k}}L_{(x,y)}(w), \tag{1}\]
where the per sample loss \(L_{(x,y)}(w)\) is assumed to be the cross-entropy, which is defined as
\[L_{(x,y)}(w)=-\sum_{j=1}^{c}y_{j}\log\left(\frac{\exp f_{j}(x;w)}{\sum_{m=1}^ {c}\exp f_{m}(x;w)}\right). \tag{2}\]
Both A-GEM and OGD use a pool of samples to counteract the catastrophic forgetting. The difference is that OGD stores network gradient, while A-GEM stores training data.
### Clustering-based Task Agnostic A-GEM (TA-A-GEM) and OGD (TA-OGD)
Figure 1 shows our strategy to convert a task-aware task-incremental projection algorithm to a task-agnostic domain-incremental algorithm. Task-incremental projection algorithms like A-GEM and OGD keep a pool of samples from either the training data or model gradients, respectively. This pool of samples is used to mitigate catastrophic forgetting of previous tasks through projection. When the algorithm is finished with training on one task, it stores samples from this task before it starts training on the new task. In this way, it ensures that the samples in the pool are relevant for previous tasks when addressing forgetting. However, this comes at the cost of _requiring to know the moment a task changes_. In our approach, we make this process task-agnostic by updating the pool of samples during the process of training, i.e. _the pool of samples is updated every time the model is trained on a batch._ This removes the need to know
the moment the task changes but introduces the problem that the size of the pool now grows more rapidly. However, _our goal is to keep the memory requirements constant in the number of tasks_. Hence, a strategy is necessary to decide which samples should be added to the pool and which ones should be removed during the updating process. Our strategy aims to select stored samples in a way that addresses forgetting all previous tasks in the most efficient way while being constrained by constant and finite pool size. Because we aim for a true task-agnostic setting, all tasks are made to have the same label space, so the task identity can not be inferred from the labels.
Next, we detail the proposed online clustering-based approach that consists of the following four steps:
1) _Initialization:_ We first set the number of clusters \(Q\) and consider the first \(Q\) samples becoming available as the centers \(\boldsymbol{\mu}_{i},q=1,2,\ldots,Q\) of these clusters.
2) _Cluster assignment:_ A new sample \(\mathbf{z}_{p}\) (corresponding to a training sample in the case of A-GEM or gradient logit in the case OGD) is assigned to the cluster \(q^{*}\) that minimizes the \(\ell_{2}\) norm i.e.,
\[q^{*}=\operatorname{argmin}_{q\in\{1,2,\ldots,Q\}}\|\mathbf{z}_{p}- \boldsymbol{\mu}_{q}\|_{2}^{2} \tag{3}\]
3) _Memory update:_ The size of each cluster is predefined, and once the maximum size has been reached, for new samples to that assigned to that cluster an equal number of older samples residing in the cluster should be removed. Note that the process of accepting/rejecting new samples and deciding which "old" samples to delete could be implemented using information-theoretic criteria or rejection sampling-based ideas. Here, in an effort to simplify the approach and make it computationally efficient, we follow a first-in-first-out (FIFO) approach. This dictates that samples that arrived first in the cluster are the first to be removed. Note that the strategy followed ensures that samples corresponding to a task with information distinct from other tasks will not be deleted from the pool. This will occur since these samples will "live" within clusters that will not be updated and thus remain unaffected by the memory updating process.2
Footnote 2: Empirical findings reported in the Appendix corroborate our hypothesis.
3) _Update of cluster means:_ Once samples are assigned to the clusters and the memory has been updated, the cluster means are re-computed i.e.,
\[\boldsymbol{\mu}_{q}=\frac{1}{P}\sum_{p=1}^{P}\mathbf{z}_{p}^{q},\hskip 28.452756pt \forall i=1,2,\ldots,N, \tag{4}\]
where \(P\) denotes the size of the clusters and \(\mathbf{z}_{p}^{q}\) the \(p_{th}\) element of cluster \(q\). For the case of the task-agnostic version of A-GEM, i.e., TA-A-GEM, we have \(\mathbf{z}_{p}\equiv\mathbf{x}_{p}\in\tilde{M}_{t}\) (where \(t\) here denotes the batch index) whereas for the task-agnostic OGD algorithm (TA-OGD) \(\mathbf{z}_{p}\equiv\nabla f_{j}(\mathbf{x}_{p},w_{i}^{*})\). Our clustering-based strategy is depicted at Fig. 2, while a pseudo-code of the algorithm is given in the Appendix.
_A single or a different pool for each class?_ A possible complication that can occur is that more similarity exists between samples of the same class that are of a different task than between different classes of the same task. If this happens, _class_ information will be well represented in the pool, but _task_ information can be easily lost. Since class labels of the samples are available, a way to get around that issue and disentangle the class from task information is to use a different pool for each class. In that case, samples are first assigned to a pool based on their class label. Then, the procedure described above is independently followed for each pool. It is worth noting that this is critically important for the task-agnostic version of A-GEM (TA-A-GEM) since the pool contains training samples of different classes. Samples corresponding to the same class but different tasks, e.g., a digit and its rotated version might be close in the input space. As a result, if a single pool is used, those two samples will be assigned to the same cluster, and hence task information will be lost. This phenomenon is more likely not to be observed in the case of TA-OGD since clustering takes place in the space of model gradients, which are sufficiently separated for different tasks even for samples corresponding to same classes.
_The role of hyperparameters:_ The choice of hyperparameters, such as the number of clusters \(Q\) and their size, is important. A large number of clusters \(Q\), allows more task and class diversity to be stored in different clusters in memory. The size of the clusters should be large enough so it can capture the essence of a specific task. However, the size of \(Q\) and the cluster size should be kept as small as possible to reduce the memory footprint. A trade-off can be made where \(Q\) is large, and the cluster size is small versus using a small \(Q\) with a large cluster size. In addition, we follow an adaptive strategy for the learning rate of the projected gradient step. Note that this is a form of task detection that our method does not necessarily need. Our focus is to create a truly task-agnostic method without any task detection. Specifically, the learning rate \(\eta^{t}\) at iteration \(t\) decreases as follows:
\[\eta^{t}=a\eta^{t-1}, \tag{5}\]
where \(a<0\), when the loss function is _smoothly_ increasing for a given number of iterations. This allows the algorithm to update the weights of the model following a non-increasing path for the loss function. Moreover, when a sudden
increase is observed, then the learning rate is reset to its initial value (therefore increases), i.e., \(\eta^{t}=\eta_{ini}\). The reasoning behind this rule is that spikes of the loss most likely imply task-change and therefore, a higher learning rate can help to move fast along decreasing directions of the loss corresponding to the new task. Empirical results on the effect of the sampling rate, the number, and the size of clusters on the performance of the proposed method, and more details on the adaptive updating process of learning rate, are provided in Section 4 and Appendix.
## 4 Experiments
We divide the experiments into two main classes: a) the _disjoint tasks experiment_ and b) _the continuous change experiments_. The task-aware methods are notified of the task change, while the task-agnostic methods do not get this information. In the continuous change experiments, discrete tasks still exist, but task boundaries are no longer clearly defined. Details on the experimental setting can be found in the Appendix. Since there is no clear point that a task-aware method can be notified, only task-agnostic methods are included in this experiment. For both methods, all tasks are made to have the same label space, since it should not be possible to infer the task identity from the labels. In cases where the label spaces are disjoint, the labels are cast to the same label space. Since no task identity is provided during training, the method is tested in a domain-incremental setting van de Ven et al. (2022). Following empirical observations, we use the learning rate scheduler described in Section 3.3 for the case of OGD and the proposed task-agnostic version of it i.e., TA-OGD. The network used for training is a multi-layer perceptron (MLP) with two hidden layers of 200 nodes. To compare the performance of the tested methods, we use three metrics: a) The _validation accuracy_, b) The _average validation accuracy_ over all tasks trained on thus far and c) The amount of _forgetting_. For an exact mathematical definition of these quantities, we refer to the Appendix. To create separate tasks from existing datasets, three task generation mechanisms are implemented: a) task permutation, b) task rotation and c) class splitting. For the details of this task generation, we refer to the Appendix.
### Disjoint tasks experiment
Table 1 shows the results of the first class of experiments. It shows the average accuracy over all tasks trained thus far, thereby capturing both the ability to remember old tasks and the ability to learn new tasks. The average accuracy was then averaged over 20 epochs, then over five runs. Plots of these results can be found in the Appendix. Our proposed TA-OGD and TA-A-GEM algorithms significantly outperform the state-of-the-art task-agnostic BGD algorithm, Zeno et al. (2018), on the MNIST Deng (2012), Fashion MNIST Xiao et al. (2017) and NOT MNIST datasets. Moreover, their performance is comparable to BGD on CIFAR10 and SVHN. Focusing on MNIST, Fashion MNIST and NOT MNIST, we observe that at the _permutation experiments_, no remarkable differences can be seen among the methods. This can be explained by the fact that the baseline SGD method shows little signs of forgetting in the first place. For the _rotation experiments_, A-GEM is a clear winner, it is however not task-agnostic. On MNIST and NOT MNIST, TA-OGD and TA-A-GEM are moderately effective at mitigating forgetting. On Fashion MNIST however, TA-A-GEM is clearly the best method among all the tested task-agnostic methods. We attained the most remarkable results on the _class split experiments_. On MNIST, both TA-OGD and TA-A-GEM clearly outperform the other task-agnostic methods. On Fashion MNIST, TA-A-GEM's performance is even on par with A-GEM, while on NOT MNIST, TA-OGD takes the crown by performing on par with A-GEM, which is a task-aware method.
Figure 2: The clustering mechanism to add training set samples / model gradient samples to the memory by matching it to the closest cluster (pink cluster), as used by TA-A-GEM / TA-OGD.
### Continuous task change experiment
The results of the _continuous change experiments_ are extremely similar to the results in the _disjoint tasks experiments_. They can be found in the Appendix. These experiments show that the proposed TA-OGD and TA-A-GEM fare just as well in the challenging setting where task boundaries are blurred.
### Effectiveness of the clustering-based procedure
In order to demonstrate the benefits obtained by the proposed clustering-based approach, we compared the performance of TA-A-GEM with and without clustering. To deactivate clustering we skipped the cluster assignment step and new samples were randomly to allocated clusters. Similarly to our approach, an equal number of old samples of update clusters is removed to keep the memory size constant. For this experiment, a MLP was trained on Fashion MNIST, with the task split segmentation. All settings are the same as in the _disjoint tasks experiments_.
Figure 3 and 4 show the content of each cluster during training time. Each horizontal line corresponds to a cluster. Each task is associated with a unique color, which represents the oldest task information that is present in the cluster. The horizontal line changes color the moment that the last information of the oldest task disappears from the cluster. Then, the second oldest task information becomes the new oldest task information. The moment that a new task starts -not available to the algorithms- is indicated by a black vertical line. As it can be observed in Figs 3 and 4, clustering helps in keeping a greater variety of task information in the gradient pool, with samples from Task 0 or Task 1 still being present in clusters even after the end of training on samples from Task 4. On the other hand, the use of random cluster assignment results in information of old task being almost immediately lost after a task change, thus illustrating the merits of our proposed clustering-based approach.
counteract catastrophic forgetting without providing knowledge of a task change and the need of a growing amount of memory. Extensive experimental results provided in section 4.3 and the Appendix show the benefits of our clustering-based method. As a future direction, we aspire to explore more sophisticated, yet computationally efficient, methods for the clustering and memory update step. Our goal is also to illustrate the merits of our method on larger networks such as a ResNet He et al. (2016), or more complicated datasets such as ImageNet Deng et al. (2009). It is worth noting that our proposed method is generic hence we also intend to inquire its application as an off-the-shelf tool to other projection-based methods.
|
2309.11571 | Modeling Quasar Proximity Zones in a Realistic Cosmological Environment
with a Self-consistent Light Curve | We study quasar proximity zones in a simulation that includes a
self-consistent quasar formation model and realistic IGM environments. The
quasar host halo is $10^{13}\ M_{\mathrm{\odot}}$ at $z=6$, more massive than
typical halos studied in previous work. Between $6<z<7.5$, the quasar
luminosity varies rapidly, with a mean magnitude of $M_{UV,mean}=-24.8$ and the
fluctuation reaching up to two orders of magnitude. Using this light curve to
post-process the dense environment around the quasar, we find that the
proximity zone size ($R_{p}$) ranges between $0.5-5$ pMpc. We show that the
light curve variability causes a similar degree of scatter in $R_{p}$ as does
the density fluctuation, both of which result in a standard deviation of $\sim
0.3$ pMpc). The $R_{p}$ traces the light curve fluctuations closely but with a
time delay of $\sim 10^4\ \mathrm{yr}$, breaking the correspondence between the
$R_{p}$ and the contemporaneous $M_{UV}$. This also indicates that we can only
infer quasar activity within the past $\sim 10^4$ years instead of the
integrated lifetime from $R_{p}$ in the later part of cosmic reionization.
Compared with the variable light curve, a constant light curve underestimates
the $R_{p}$ by 13% at the dim end ($M_{UV}\sim -23.5$), and overestimates the
$R_{p}$ by 30% at the bright end ($M_{UV}\sim -26$). By calculating the $R_{p}$
generated by a number of quasars, we show that variable light curves predict a
wider $R_{p}$ distribution than lightbulb models, and readily explain the
extremely small $R_{p}$ values that have been observed. | Yihao Zhou, Huanqing Chen, Tiziana Di Matteo, Yueying Ni, Rupert A. C. Croft, Simeon Bird | 2023-09-20T18:15:51Z | http://arxiv.org/abs/2309.11571v1 | Modeling Quasar Proximity Zones in a Realistic Cosmological Environment with a Self-consistent Light Curve
###### Abstract
We study quasar proximity zones in a simulation that includes a self-consistent quasar formation model and realistic IGM environments. The quasar host halo is \(10^{13}\ M_{\odot}\) at \(z=6\), more massive than typical halos studied in previous work. Between \(6<z<7.5\), the quasar luminosity varies rapidly, with a mean magnitude of \(M_{\rm UV,mean}=-24.8\) and the fluctuation reaching up to two orders of magnitude. Using this light curve to post-process the dense environment around the quasar, we find that the proximity zone size (\(R_{\rm p}\)) ranges between 0.5-5 pMpc. We show that the light curve variability causes a similar degree of scatter in \(R_{\rm p}\) as does the density fluctuation, both of which result in a standard deviation of \(\sim 0.3\) pMpc. The \(R_{\rm p}\) traces the light curve fluctuations closely but with a time delay of \(\sim 10^{4}\) yr, breaking the correspondence between the \(R_{\rm p}\) and the contemporaneous \(M_{\rm UV}\). This also indicates that we can only infer quasar activity within the past \(\sim 10^{4}\) years instead of the integrated lifetime from \(R_{\rm p}\) in the later part of cosmic reionization. Compared with the variable light curve, a constant light curve underestimates the \(R_{\rm p}\) by 13% at the dim end (\(M_{\rm UV}\sim-23.5\)), and overestimates the \(R_{\rm p}\) by 30% at the bright end (\(M_{\rm UV}\sim-26\)). By calculating the \(R_{\rm p}\) generated by a number of quasars, we show that variable light curves predict a wider \(R_{\rm p}\) distribution than lightbulb models, and readily explain the extremely small \(R_{\rm p}\) values that have been observed.
keywords: quasars: supermassive black holes - intergalactic medium - radiative transfer - galaxies: high-redshift
## 1 Introduction
A bright quasar at high redshift usually creates a large region, commonly referred to as a 'quasar proximity zone', where the ionizing radiation contributed from the quasar significantly exceeds the cosmic ionizing background. Within a quasar proximity zone, the hydrogen neutral fraction is considerably lower than typical regions in the Universe. At \(z>6\), these are the only regions where we can observe non-zero Lyman \(\alpha\) transmitted flux (Bajllik et al., 1988; Cen & Haiman, 2000; Wyithe et al., 2005; Bolton & Haehnelt, 2007, 2007, 2008). As a result, quasar proximity zones are unique windows for probing the distant universe.
One key observational measurement related to a quasar proximity zone is its size, which is traditionally defined in quasar spectra as the distance from the systematic Lyman \(\alpha\) line center to the first point where the transmitted flux drops below 10% of the continuum level after being smoothed by a 20A top-hat kernel (Fan et al., 2006; Carilli et al., 2010; Eilers et al., 2017, 2020; Mazzucchelli et al., 2017; Ishimoto et al., 2020). Fan et al. (2006) compiled the first large sample of \(z\gtrsim 6\) quasar spectra and measured a proximity zone size \(R_{\rm p}\sim 8\) Mpc at \(z=6\) for quasars of magnitude \(M_{1450}=27\). Carilli et al. (2010) analyzed the proximity zone sizes of 27 quasars with more accurate redshift measurements. In the last decade, the number of high redshift quasar spectra with well-measured Lyman \(\alpha\) proximity zone sizes has grown considerably (Reed et al., 2017; Banados et al., 2018; Matsuoka et al., 2019; Wang et al., 2019). For example, Eilers et al. (2017) studied quasar proximity zones in the redshift range \(5.77<z\leq 6.54\) with a homogeneous analysis of 34 medium resolution spectra, and Ishimoto et al. (2020) presented measurements of the proximity zone size for 11 low-luminosity (\(-26.16\leq M_{1450}\leq-22.83\)) quasars at \(z\sim 6\). An unexpected result from these observations obtained in recent years is that some quasars display very small proximity zones, such as the \(R_{\rm p}\sim 0.37\) Mpc measured in Eilers et al. (2021). One possible explanation for several small \(R_{\rm p}\) seen in \(z\gtrsim 7\) quasar spectra is that a large amount of hydrogen at \(z\sim 7\) is still neutral; e.g., Miralda-Escude & Rees 1998; Mortlock et al. 2011; Bolton et al. 2011; Bosman & Becker 2015a; Banados et al. 2018; Davies et al. 2018; Wang et al. 2020; Yang et al. 2020; Bosman & Becker 2015b; Greig et al. 2017, 2022). However, the population of small \(R_{\rm p}\) quasars at \(z\sim 6\) remains perplexing.
The sizes of quasar proximity zones could provide valuable insights into quasar activity and cosmic reionization, an epoch when the IGM transitioned from a mostly neutral state into a mostly ionized state. Several physically motivated (semi-)analytic models of proximity zones have been proposed, which lead to scaling relations between \(R_{\rm p}\) and the intrinsic properties of quasars (e.g., the lumi
nosity and the lifetime) as well as the surrounding IGM. In a nearly neutral universe, the proximity zone size is closely related to the size of the quasar ionized bubble, which is sensitive to the ionization fraction of the local IGM and the total number of ionizing photons the quasar emits (Cen & Haiman, 2000; Haiman & Cen, 2001):
\[R_{\rm ion}=\left(\frac{3\dot{N}t_{\rm q}}{4\pi\,r_{\rm H}\,x_{\rm H}}\right)^{ 1/3}, \tag{1}\]
where \(\dot{N}\) is the emitted ionizing photon rate, \(t_{\rm q}\) is the quasar lifetime, \(n_{\rm H}\) is the hydrogen number density and \(x_{\rm H_{1}}\) is the neutral hydrogen fraction. On the other hand, if the quasar is embedded in an already ionized IGM, Bolton & Haehnelt (2007a) showed that the proximity zone size quickly reaches a maximum \(R_{\rm p}^{\rm max}\) which is independent on the neutral fraction:
\[\begin{split} R_{\rm p}^{\rm max}&=\frac{3.14}{ \Delta_{\rm lim}}\,\left(\frac{\dot{N}}{2\times 10^{57}\,\,{\rm s}^{-1}} \right)^{1/2}\,\left(\frac{T}{2\times 10^{4}\,\,{\rm K}}\right)^{0.35}\\ &\times\left(\frac{\tau_{\rm lim}}{2.3}\right)^{1/2}\,\left(\frac {\alpha^{-1}\,\left(\alpha+3\right)}{3}\right)^{-1/2}\left(\frac{1+z}{7} \right)^{-9/4}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\
Figure 1: Illustration of the host environment of the quasar prior to the RT post-processing. The first two rows are the snapshots of the gas density fields, with the quasar positioned at the center of each panel, at \(z=7.5\) (upper) and \(z=6.0\) (middle). For regions with high density, the color hue is set by the hydrogen neutral fraction \(x_{\rm H}\). The sidelbar transitions from red to blue, representing the progression from ionized to neutral states. The left two panels display cubical boxes with a comoving side length of \(10\,h^{-1}\)Mpc, and the two right panels depict the zoomed-in central regions of diameter \(1\,h^{-1}\)Mpc. The blue, yellow, green, and red straight lines in the left panels show the directions of 4 lines of sight used in this work. In the bottom panel, we show the gas density contrast \(\Delta_{\rm g}\) (\(=\rho/\bar{\rho}\)) along all 48 directions at \(z=7.5\) in grey, with the four examples in the upper panels highlighted in blue, yellow, green, and red, respectively.
our quasar-hosting halo is \(5\,\sigma_{0}\) with a peculiar velocity \(\mathbf{v}=\mathbf{0}\,\mathrm{km\,s^{-1}}\), where \(\sigma_{0}\) is the density variance after convolution with a Gaussian kernel of size \(R_{\mathrm{G}}=1\ h^{-1}\) Mpc. For full details of the underlying formalism of the CR simulation, we refer the reader to van de Weygaert & Bertschinger (1996); Ni et al. (2021).
The box contains \(2\times 352^{3}\) particles, with a mass resolution \(M_{\mathrm{DM}}=1.2\times 10^{7}M_{\odot}\ h^{-1}\) for the dark matter particles and \(M_{\mathrm{gas}}=2.4\times 10^{7}M_{\odot}\ h^{-1}\) for the gas particles in the ICs. Star particles have a mass of \(M_{\odot}=6\times 10^{5}M_{\odot}\ h^{-1}\). The gravitational smoothing length is \(1.8\ h^{-1}\)ckpc for both dark matter and gas particles. The cosmological model is consistent with the nine-year _Wilkinson Microwave Anisotropy Probe_ (WMAP) data (Hinshaw et al., 2013) (\(\Omega_{0}=0.2814\), \(\Omega_{\Lambda}=0.7186\), \(\Omega_{\mathrm{b}}=0.0464\), \(\sigma_{8}=0.82\), \(h=0.697\), \(n_{\mathrm{s}}=0.971\)).
### Physics: hydrodynamics and sub-grid modelling
Most of the model physics implemented in the CR simulation is the same as in BLUETIDES (Feng et al., 2016). The gravity is solved with the treePM approach. The pressure-entropy formulation of SPH is adopted to solve the Euler equations (Read et al., 2010; Hopkins, 2013). The density estimator uses a quintic kernel to reduce the noise in the SPH density and gradient estimation (Liu & Liu, 2010). A range of sub-grid models are applied to simulate galaxy and black hole formation and associated feedback processes. Radiative cooling from both primordial gas (Katz et al., 1996) and metals (Vogelsberger et al., 2014) is considered. Star formation is implemented based on the multiphase star formation model (Springel & Hernquist, 2003), but incorporating several effects described in Vogelsberger et al. (2013). The formation of molecular hydrogen is computed according to the prescription of Krumholz & Gnedin (2011), and its effect on star formation at low metallicities is considered. Type II supernova wind feedback is included, using the same model as in the Illustris simulation (Nelson et al., 2015; Okamoto et al., 2010). Their wind speeds are assumed to be proportional to the local one-dimensional dark matter velocity dispersion \(\sigma_{\mathrm{DM}}\): \(v_{\mathrm{w}}=\kappa_{\mathrm{w}}\ \sigma_{\mathrm{DM}}\), where \(v_{\mathrm{w}}\) is the wind speed, and the dimensionless parameter \(\kappa_{\mathrm{w}}=3.7\)(Vogelsberger et al., 2013).
Black hole growth and AGN feedback are modeled in the same way as in the _MassiveBlack I & II_ simulations, based on the black hole subgrid model developed in Springel et al. (2005) and Di Matteo et al. (2005). Black holes are seeded with an initial seed mass of \(M_{\mathrm{seed}}=5\times 10^{5}M_{\odot}\ h^{-1}\) in halos with a mass larger than \(5\times 10^{10}M_{\odot}\ h^{-1}\). Note that our choice of seed mass is close to that expected from direct collapse scenarios (e.g., Latif et al., 2013; Schleicher et al., 2013; Ferrara et al., 2014), but our seeding scheme makes no direct assumption for the black hole seed formation mechanism.
The gas accretion rate of the black hole is given by the Bondi-Hoyle rate (Bondi & Hoyle, 1944):
\[\dot{M}_{\mathrm{B}}=4\pi\,G^{2}\ M_{\mathrm{BH}}^{2}\,\rho_{\mathrm{BH}}\left( c_{\mathrm{s}}^{2}+v_{\mathrm{vel}}^{2}\right)^{-3/2}, \tag{3}\]
where \(c_{\mathrm{s}}\) is the local sound speed, \(\rho_{\mathrm{BH}}\) is the gas density around the quasar, and \(v_{\mathrm{vel}}\) is the velocity of the black hole relative to the surrounding gas. Super-Eddington accretion is allowed with an upper limit of twice the Eddington accretion rate \(\dot{M}_{\mathrm{Edd}}\). Therefore the black hole accretion rate \(\dot{M}_{\mathrm{BH}}\) is determined by \(\dot{M}_{\mathrm{BH}}=\min\left(\dot{M}_{\mathrm{B}},2\dot{M}_{\mathrm{Edd}}\right)\). With a radiative efficiency \(\eta=0.1\)(Shakura & Sunyaev, 1973), the black hole radiates with a bolometric luminosity \(L_{\mathrm{bol}}\) proportional to the accretion rate: \(L_{\mathrm{bol}}=\eta\ \dot{M}_{\mathrm{BH}}c^{2}\). Five percent of the radiated energy is thermally coupled to the gas residing within twice the radius of the SPH smoothing kernel of the black hole particle, which is typically about \(1\%\sim 3\%\) of the virial radius of the halo.
The patchy reionization model (Battaglia et al., 2013) is not included in the CR simulation because of the small box, instead the reionization is assumed to occur instantaneously. The applied global ionization history is consistent with that in BLUETIDES, and reionization is almost completed at redshift \(z=8\) (Fig. 2 in Feng et al. (2016)). Consequently, the gas in the snapshots we use herein (\(z\leqslant 7.5\)) is originally highly ionized.
### Line of sight gas densities
The projected gas density fields around the quasar at \(z=7.5\) and \(z=6.0\) are shown in the first two rows of Fig. 1, with the quasar located at the center of each panel. Regions with high density are color-coded by the hydrogen ionization fraction \(x_{\mathrm{H}1}\), from red to blue indicating ionized (\(x_{\mathrm{H}1}\lesssim 10^{-4}\)) to neutral (\(x_{\mathrm{H}1}\sim 0.1\)), as shown by the color bars. The left two panels display the regions 10 \(h^{-1}\)Mpc in width centered on the quasar, while the right two panels depict the central zoomed-in regions of width \(1\ h^{-1}\)Mpc.
We use HEALPY2 to cast 48 evenly spaced lines of sight, starting from the position of the quasar, and employ the SPH formalism to calculate the gas properties \(f(\mathbf{x})\) (e.g., density, velocity, ionization fraction) at position \(\mathbf{x}\) on the line of sight (Liu & Liu, 2010):
Footnote 2: [https://github.com/healpy/healpy](https://github.com/healpy/healpy)
\[\left<f(\mathbf{x})\right>=\sum_{j=1}^{N}\frac{m_{j}}{\rho_{j}}f\left(\mathbf{x}_{j} \right)\ W\left(\mathbf{x}-\mathbf{x}_{j},q\right), \tag{4}\]
where \(\sum_{j}\) is the sum over all the neighboring gas particles within the smoothing length \(q\), and \(W\) is the quintic density kernel. \(m_{j}\), \(\rho_{j}\), and \(\mathbf{x}_{j}\) are the mass, density, and position of each particle, respectively. Taking advantage of the periodic boundary conditions of the simulation box, we extend each line of sight to a length of 40 \(h^{-1}\)Mpc. The sightlines are drawn significant off-axes (not parallel to the \(x\), \(y\) or \(z\) axes), ensuring that none of them travel through the massive halo again. The spatial resolution is set to be 30 comoving kpc, equivalent to \(\sim 4\) kpc in proper units. We indicate the directions
Figure 2: The gas density contrast (\(\Delta_{\mathrm{b}}\)) PDF for the lines of sight from the CR simulation used in this work (blue), and those from the CROC simulation (orange) used in Chen & Gnedin (2021). Each pixel is 4 pkpc in length, and all the data are drawn from \(0.1\sim 2\) pMpc regions from the quasar in the \(z=6.5\) snapshot.
of 4 lines of sight in the left panels in Fig. 1 (blue, yellow, green, and red straight lines). In the bottom panel, we show the gas density contrast \(\Delta_{\rm g}(=\rho/\bar{\rho};\) where \(\bar{\rho}\) is the mean gas density of the Universe) at \(z=7.5\) for all the 48 lines of sight. The four examples shown in the upper panels are also included, represented by correspondingly colored lines. These \(\Delta_{\rm g}\) profiles enable us to see that the density peaks in different lines of sight are located at different radii, and that the density fluctuations along each direction span about two orders of magnitude.
One of the unique properties of our simulation is the constrained initial conditions. It is thus useful to compare our sightlines with one of the simulations studied in Chen & Gnedin (2021), which models a more commonly occurring type of region without constrained conditions. Chen & Gnedin (2021) studied sightlines drawn from the B40E CROC simulation, where the box size is \(40\,h^{-1}\) cMpc on each side. The CROC simulation is run with the Adaptive Refinement Tree code (Kravtsov, 1999; Kravtsov et al., 2002; Rudd et al., 2008), with a base resolution of \(39\,h^{-1}\)kpc and a peak resolution of \(100\) pc. All the sightlines are drawn from halos with dark matter mass larger than \(1.5\times 10^{11}\)\(M_{\odot}\). Another major difference is that the CROC simulation models reionization by star particles self-consistently, resulting in a volume-weighted neutral fraction \(\langle v_{\rm H1}\rangle_{\rm v}=0.13\) at \(z=7.33\) and \(\langle v_{\rm H1}\rangle_{\rm v}<6.7\times 10^{-4}\) after \(z=6.7\).
In Figure 2, we compare the density contrast of pixels, each 4 pkpc in length, in the same range from 0.1-2 pMpc between our CR simulation and CROC at \(z=6.5\). We display the density contrast Probability Density Function (PDF) for the lines of sight drawn from the CR simulation (blue curve) and that for the CROC (orange curve) in Fig. 2. It can be seen that our lines of sight have many more pixels with high density: the fraction of points with \(\Delta_{\rm g}>100\) is
Figure 4: The statistics for the quasar light curves with \(6\leq z\leq 7.5\). Left panel: the PDF of the ionizing photon rate contrast \(\dot{N}/\dot{N}_{\rm mean}\), where \(\dot{N}_{\rm mean}\) is the mean of the ionizing photon number rate \(\dot{N}\) emitted by the quasar. Right panel: the dimensionless power spectrum \(kP(k)\) for the time evolution of \(\dot{N}/\dot{N}_{\rm mean}\).
Figure 3: The mass evolution (blue curve; using right y-axis) and the ionizing photon number emitted per second by the accretion disk (black curve; left y-axis) of the brightest quasar in the simulation. The red solid line represents the photon number rate corresponding to twice the Eddington limit (upper limit set by our simulation). The yellow curve denotes the averaged photon number rate \(\dot{N}_{\rm mean}\) computed over a time kernel of 5 Myr. The upper axis indicates the age of the Universe at a given redshift.
one order of magnitude larger than that in CROC. This difference is probably because the quasar host halo in the CR simulation, which reaches a mass of \(\sim 10^{13}\,M_{\odot}\) at \(z=6\), is much more massive than the halos selected in Chen & Gnedin (2021), whose halo masses are \(\gtrsim 1.5\times 10^{11}\,M_{\odot}\). In fact, the halo chosen in this work is more massive than almost all the halos in previous simulations of proximity zones, for example, \(M_{\rm h}=2.5\times 10^{12}\,h^{-1}\,M_{\odot}\) in Keating et al. (2015), \(M_{\rm h}\gtrsim 10^{11.5}\,M_{\odot}\) in Davies et al. (2020), and \(M_{\rm h}<10^{12}\,M_{\odot}\) in Satyauchi et al. (2023). We test how this difference in the surrounding density field affects the resultant \(R_{\rm p}\) in Appendix A using a constant lightbul model. Considering that black holes were added by hand at the center of the halo in most previous studies, our lines of sight, which are self-consistently drawn from the host halo of the black hole particle, reflect the environment of the high-redshift quasar more realistically.
### Quasar light curve
To convert the bolometric luminosity \(L_{\rm bol}\) computed in Section 2.2 to the UV ionizing photon number emitted by the quasar per second (\(\dot{N}\)), we follow the standard procedure (see, also Chen & Gnedin, 2021) and use a power-law spectral energy distribution (SED) from 1450 A to 912 A: \(L_{\nu}\propto\nu^{-\alpha}\) with a spectral index \(\alpha=1.5\), normalized by \(L_{\rm bol}\). This leads to \(M_{\rm UV}\) upon applying the appropriate bolometric correction (Fontanot et al., 2012):
\[M_{\rm UV}=-2.5\log_{10}\frac{L_{\rm bol}}{f_{\rm B}\mu_{\rm B}}+\Delta_{\rm B,\,UV}+34.1, \tag{5}\]
where \(f_{\rm B}=10.2\), \(\mu_{\rm B}=6.7\times 10^{14}\,\rm Hz\), and \(\Delta_{\rm B,\,UV}=-0.48\). We assume the escape fraction for the quasar is \(f_{\rm esc}=100\%\), consistent with the large escape fractions inferred from observations (Eilers et al., 2021; Stevans et al., 2014; Worseck et al., 2014). The total ionizing photon rate is then given by \(\dot{N}=\int_{13.6\,eV}^{\infty}L_{\nu}/h^{\nu}\,d\nu\), which translates \(M_{\rm UV}=-26.66\) to \(\dot{N}=1\times 10^{57}\,\rm s^{-1}\), and \(M_{\rm UV}=-27\) to \(\dot{N}=1.36\times 10^{57}\,\rm s^{-1}\).
We compute \(\dot{N}\) for the quasar and show the light curve (black curve) in Fig. 3, compared with \(\dot{N}_{\rm Edd}\) (red curve), which represents the photon number rate corresponding to \(2\dot{M}_{\rm Edd}\), the upper limit of the accretion rate. The yellow curve indicates the photon number rate averaged using a 5 Myr top-hat kernel (\(\dot{N}_{\rm mean}\)). In this work, we focus on the light curve within \(\delta<z\leq 7.5\), a period during which the average quasar luminosity (\(\dot{N}_{\rm mean}\)) reaches a plateau. The quasar has a mean magnitude of \(M_{\rm UV,mean}=-24.8\) in this redshift range, which is comparable to currently observed quasars.
The quasar exhibits significant variation in the light curve as opposed to maintaining a fixed \(\dot{N}\). We display the PDF (left panel) and the dimensionless power spectrum \(kP(k)\) (right panel) of \(\dot{N}/\dot{N}_{\rm mean}\) in Fig. 4, which shows that the quasar experiences variation in \(\dot{N}\) spanning over two orders of magnitude. The characteristic fluctuation timescale, which is indicated by the peak of the power spectrum, is around \(t=1\) Myr.
### Radiative transfer code
To interpret the effect of quasar radiation on the surrounding IGM, we carry out one-dimensional RT in the manner of Chen & Gnedin (2021). This is implemented by postprocessing the CR simulation. The RT code solves the time-dependent ionization and recombination of H i, He i, He ii including the effect of quasar photoionizing radiation and the cosmic ionizing background. Temperature evolution is also calculated considering recombination cooling, collisional ionization cooling, collisional excitation cooling, and inverse Compton cooling, as well as the expansion of the Universe. One improvement in the code compared to previous work (Bolton & Haehnelt, 2007; Davies et al., 2020) is the implementation of an adaptive prediction-correction scheme, which is motivated by the vastly different temporal behavior of gas at different distances from the quasar.
The quasar light curve from the simulation is passed to the first cell, and the neutral fraction H i, He i, He ii and temperature are evolved with an adaptive scheme. At each adaptive time step, the transmitted ionizing spectrum is passed to the next cell as the incidental spectrum to evolve the next cell. These operations are executed iteratively for consecutive cells along the line of sight. For a more complete explanation of the RT code, we direct readers to Chen & Gnedin (2021).
In order to compute the Lyman \(\alpha\) flux spectra, we convolve the absorption contributed by all cells with the approximate Voigt profile proposed by Tepper-Garcia (2006). This profile is calculated using the hydrogen neutral fraction \(x_{\rm H_{1}}\) and gas temperature output from the RT simulation, in conjunction with the velocity and the density field from the CR simulation.
## 3 Results
### Evolution of \(R_{\rm p}\)
We present the post-processed spectra obtained from the RT code for a single line of sight at \(z=7\) in Fig. 5. The gas density contrast \(\Delta_{\rm g}\) is shown in the first row and the temperature prior to quasar activation (i.e., \(t_{\rm evol}=0\) Myr) is shown in the third row with the black dashed line. The transmitted flux and IGM temperature at \(t_{\rm evol}=1\) Myr (blue), 5 Myr (orange), 15 Myr (green), 20 Myr (red), and 25 Myr (purple) are depicted in the second and third rows, respectively, with the corresponding \(\dot{N}\) values enumerated in the legend of the second panel. A subplot in the second row provides an overview of the light curve variation along with the selected \(\dot{N}\) marked by stars. The traditional definition of the proximity zone edge is the point where the Lyman \(\alpha\) flux first drops below 10% after being smoothed by a 20A top-hat window. We show the smoothed flux (solid curves) as well as the flux threshold 10% (horizontal grey dashed line) in the second panel, and the \(R_{\rm p}\) is indicated by the intersection of the horizontal line and the solid curves. It is noteworthy that the higher the stars marked in the light curve, i.e., larger instantaneous \(\dot{N}\), the higher the corresponding flux is. This implies that the levels of the Lyman \(\alpha\) flux, and consequently the proximity zone size \(R_{\rm p}\), are primarily determined by \(\dot{N}\) while showing no correlation with \(t_{\rm evol}\).
The profiles of H i/He ii/He ii fractions (solid blue/orange/green curves) at \(t_{\rm evol}=5\) Myr are displayed in the bottom row, compared with the background profile (dashed lines). In the original CR simulation, helium is predominantly found in the He ii state. As the quasar's radiation ionizes the He ii, a 'He ii proximity zone' emerges, and the energy injected from the He ii reionization heats the surrounding IGM, which is known as the 'thermal proximity effect' (Bolton et al., 2010, 2012; Meiksin et al., 2010). This phenomenon also enhances the size of the Lyman \(\alpha\) proximity zone since the H i fraction is partially temperature-dependent (Davies et al., 2020). As observed in the third panel of Fig. 5, the heated region, unlike \(R_{\rm p}\), continues to expand monotonically with the increasing \(t_{\rm evol}\), suggesting its potential application in the estimation of quasar lifetimes (see Section 4.1).
In order to fully exploit the 21 snapshots within the redshift range:
\(6<z_{\rm snap,i}\leqslant 7.5\) (\(1\leqslant i\leqslant 21\); \(z_{\rm snap,i+1}<z_{\rm snap,i}\)), we conduct an RT calculation on the \(i\)th snapshot for a period of
\[t_{\rm evol,i}=t_{\rm age}(z_{\rm snap,i+1})-t_{\rm age}(z_{\rm snap,i}), \tag{6}\]
where \(t_{\rm age}(z)\) is the age of the universe at redshift \(z\), \(z_{\rm snap,i+1}\) is the redshift of the subsequent snapshot and \(z_{\rm snap,22}=6\). When we concatenate the next snapshot at \(z_{\rm snap,i}\) (\(i>1\)), we draw the density and velocity along the lines of sight from the new snapshot, which has not been post-processed, while using the temperature and the ionization fraction of H \(\rm\,\mu\)/He \(\rm\,\mu\) output by the RT code from the previous snapshot. We then concatenate these evolution segments, each reflecting the \(P_{\rm P}\) during \(t_{\rm age}(z_{\rm snap,i})\sim t_{\rm age}(z_{\rm snap,i+1})\) interval.
Figure 5: The spectra computed by the RT post-processing code for one line of sight drawn from the snapshot at \(z=7\). Panels from top to bottom show the gas density contrast \(\Delta_{\rm B}\) (\(=\rho/\bar{\rho}\)), the Lyman \(\alpha\) transmitted flux, the gas temperature, and the fraction of H \(\rm\,\mu\)/He \(\rm\,\mu\). The same color scheme is used in both the flux and temperature panels to illustrate the outputs from the RT simulation at \(t_{\rm evol}=1\) Myr (blue), 5 Myr (orange), 15Myr (green), 20 Myr (red), and 25 Myr (purple). In the second row, a subplot provides an overview of the light curve along with the \(\dot{N}\) for the selected \(t_{\rm evol}\) marked with colored stars, whose values are enumerated in the legend. The solid curves represent the spectra smoothed by a 20 Å depth-hat kernel, with the dashed curves giving the original flux profiles, and the horizontal dotted line demonstrates the 10% flux threshold. In the third panel, the black dashed curve depicts the temperature at \(t_{\rm evol}=0\) Myr (i.e., no quasar radiation). In the bottom row, H \(\rm\,\mu\)/He \(\rm\,\mu\)/He \(\rm\,\mu\) fraction (solid blue/orange/green curves) at \(t_{\rm evol}=5\) Myr are compared with the background ionization fractions(dashed curves).
Such a stitching operation is implemented across all the lines of sight, each maintaining a fixed direction throughout the entire redshift range. We estimate the uncertainty in the resultant \(R_{\rm p}\) caused by the breaking of the continuous evolution of the IGM caused by this procedure in Appendix B. We find it tiny compared to the \(R_{\rm p}\) scatter caused by the underlying density fluctuations.
We capture the variations in proximity zone sizes across 48 directions with a variable light curve for \(\sim 240\) Myr after the quasar's activation. Fig. 6 presents the evolution of \(R_{\rm p}\) with a time resolution of \(\Delta t_{\rm evol}=0.25\) Myr for 6 sample sightlines (blue curves in the upper six panels) as well as an average value for 48 directions (blue curve in the bottom row). As a comparison, we also plot the quasar light curves \(\dot{N}\) using the red curves. The blue shaded area in the bottom panel depicts the 16-84th percentile scatter among different directions. The vertical grey dash lines indicate the redshifts of available snapshots \(z_{\rm snap,i}\). Since the reionization is virtually completed at \(z=7.5\), the scatter between different lines of sight is entirely attributable to the underlying density field at a specific point in time (Lidz et al., 2006). On the other hand, the fluctuations in the quasar light curve contribute significantly to the variability in \(R_{\rm p}\) for an individual direction within short time frames. The extent of this variability in \(R_{\rm p}\) depends on the specific density field. This is evidenced by a comparison of the proximity zone evolution between sightlines 6 and 7 (the second and third rows in Fig. 6): the same light curve fluctuations result in changes of \(\Delta R_{\rm p}\sim 1.7\) Mpc for sightline 7 within 0.25 Myr, while for sightline 6, the changes are nearly all less than \(\Delta R_{\rm p}\sim 0.5\) Mpc.
We can quantify the influence of the underlying density fluctuations by computing the standard deviation of proximity zone sizes (\(\sigma_{R_{\rm p}}\)) across sightlines for the same redshift. We find the mean of this standard deviation \(\left(\left\langle\sigma_{R_{\rm p}}\right\rangle\right)\) is 0.28 Mpc across the entire redshift range. On the other hand, to show the influence of light curve variability, we compute the mean of \(R_{\rm p}\) (\(\left\langle R_{\rm p}\right\rangle\)) across all sightlines at a
Figure 6: The evolution of the proximity zone size \(R_{\rm p}\) with a time resolution of \(\Delta t_{\rm evol}=0.25\) Myr. The top 6 panels depict the \(R_{\rm p}\) (blue curve; left y-axis) for 6 lines of sight, compared with the quasar light curve \(\dot{N}\) (red curves, right y-axis). The vertical grey dash lines denote the redshift of available snapshots \(z_{\rm snap,i}\), and the upper axis indicates the age of the Universe at a given redshift. The bottom panel shows the average proximity zone \(\left\langle R_{\rm p}\right\rangle\) (blue curve) and the 16-84th percentile scatter \(\sigma_{\rm R_{\rm p}}\) (blue shaded area) across 48 directions for the specific redshift. The standard deviation of \(\left\langle R_{\rm p}\right\rangle\), which represents the scatter caused by the light curve variation, is 0.33 Mpc. And the mean of \(\sigma_{\rm R_{\rm p}}\) for the entire redshift range, indicating the influence of density fluctuations, is 0.28 Mpc.
specific redshift, and then calculate the standard deviation of these mean \(R_{\rm p}\) values for different redshifts (\(\sigma_{\rm(R_{\rm p})}\)), which is 0.33 Mpc. This illustrates that compared to the density fluctuations, the variations in the light curve have a similar influence, or slightly larger, on the scatter of \(R_{\rm p}\) values.
### \(R_{\rm p}\) - \(M_{\rm UV}\) scaling relation
In Fig. 7, the lower left panel displays the proximity zone size \(R_{\rm p}\) as a function of the quasar's instantaneous magnitude \(M_{\rm UV}\). The median \(R_{\rm p}\) across the entire redshift range (\(6.0\leq z\leq 7.5\)) for 48 lines of sight is represented by the blue solid curve. The best power-law fit \(R_{\rm p}\) - \(M_{\rm UV}\) scaling relation is found to be \(\log R_{\rm p}\propto-0.13\,M_{\rm uv}\), i.e., \(R_{\rm p}\propto L^{0.32}\) (blue dotted curve). The upper and the right panels show the marginal distribution of \(M_{\rm UV}\) and \(R_{\rm p}\): \(M_{\rm UV,mean}=-24.8\), lines represent the mean values of \(M_{\rm UV}\) and \(R_{\rm p}\): \(M_{\rm UV,mean}=-24.8\), \(R_{\rm p,mean}=1.37\) pMpc. The \(M_{\rm UV}\) histogram shows that the quasar is relatively faint (\(M_{\rm UV}\gtrsim-25\)) over most of its lifetime, and only becomes luminous (\(M_{\rm UV}\lesssim-26\)) occasionally.
For comparison with observational results, in Fig.7 we plot the observed proximity zone size for quasars in the range \(6\leq z\leq 6.5\) measured by Eilers et al. (2017, 2020) (black dots) and Ishimoto et al. (2020) (green dots). We also show the minimum/maximum and median values yielded by our simulation within the same redshift range using red dashed lines and a solid line, respectively. Our simulation reproduces the \(R_{\rm p}\) range for most of the observed quasars with \(M_{\rm UV}>-26\). The inability to yield \(R_{\rm p}<0.5\) Mpc could be due to the failure to resolve LLSs or DLAs. On the bright end, our simulation predicts smaller \(R_{\rm p}\) than some of the observational measurements. This probably stems from the quasar's low average luminosity, which we discuss in more detail in Section 4.2.
Our derived \(R_{\rm p}\) - \(M_{\rm UV}\) scaling relation exhibits a flatter trend compared to the \(R_{\rm p}\propto L^{0.5}\) predicted by Bolton & Haehnelt (2007a) for an idealized ionized IGM based on a semi-analytical model (see equation 2). One possible reason for the disparity is that the \(M_{\rm UV}\) for our simulated quasar is non-uniformly distributed across a broad redshift range, as shown in the upper panel of Fig. 7. The \(R_{\rm p}\) - \(M_{\rm UV}\) scaling relation is redshift-dependent, which can be seen in Fig. A1, where the lower redshift environment typically produces more extensive proximity zones. According to our simulation, the optimal fit is \(\log R_{\rm p}\approx\log(3.20\ {\rm Mpc})+0.36\,[-0.4(M_{\rm UV}+27)]\) at \(6.0\leq z\leq 6.5\) and \(\log R_{\rm p}\approx\log(2.41\ {\rm Mpc})+0.41\,[-0.4(M_{\rm UV}+27)]\) at \(7.3\leq z\leq 7.5\). The slope remains shallower than that in Bolton & Haehnelt (2007a), even when the data is constrained within a narrower redshift band. Such a weaker dependence of \(R_{\rm p}\) on the instantaneous magnitude probably originates from the variation in the light curve, which breaks the correspondence between the \(R_{\rm p}\) and the contemporaneous \(M_{\rm UV}\) as we discuss in Section 4.1.
## 4 Discussion
### Response of \(R_{\rm p}\) to variable light curve
Several recent studies have focused on constraining quasar lifetimes using proximity zone sizes under the assumption of a lightbulb model (Morey et al., 2021; Khrykin et al., 2021). In this section, we briefly discuss the influence of a variable light curve on quasar lifetime estimation.
We start by exploring the response behavior of \(R_{\rm p}\) to the quasar light curve. We plot a 5 Myr duration in the evolution of \(R_{\rm p}\) starting from \(z_{\rm snap}=7.160\) in Fig. 8, with the blue curve representing the average value for 48 lines of sight and the shaded area showing the 16-84th percentile scatter. The subplots zoom in to 80 kyr time spans around four peaks in the light curve (red dotted lines in subplots) and their corresponding \(R_{\rm p}\) peaks (blue dotted lines in subplots). We label the time lags between the light curve peaks and the \(R_{\rm p}\) peaks in each subplot, which are 12, 9, 12, 14 kyr, respectively. They are comparable to the hydrogen equilibrium time at the edge of the proximity zone: \(t_{\rm eq}^{\rm H_{1}}=1/\Gamma^{\rm H_{1}}\sim 10^{4}\) yr, where \(\Gamma^{\rm H_{1}}\) is the photoionization rate of hydrogen (Bolton & Haehnelt, 2007b; Eilers et al., 2018; Davies et al., 2020). This illustrates that \(R_{\rm p}\) traces the fluctuations in the light curve closely but with a short delay of \(\sim 10^{4}\) yr, which breaks the correspondence between \(R_{\rm p}\) and the contemporaneous \(M_{\rm UV}\).
Previous studies have extensively discussed the implications of quasar proximity zone size for their 'lifetime,' using a lightbulb model. This model describes a quasar turning on suddenly, with its luminosity remaining constant thereafter. Our simulated light curve features some episodes where the luminosity increases nearly by a factor of \(\times 4\) within \(10^{3}\) years (e.g., at 830 kyr in the first zoom-in panel and at 3945 kyr in the last zoom-in panel, as shown in Fig. 8). Therefore, these sudden jumps in luminosity can be viewed as the beginning of an 'episode', a term previous studies have used to describe the quasar's episodic lifetime (Eilers et al., 2017, 2021). However, we note that the light curve varies rapidly and seldom behaves like a lightbulb for more than a few times \(10^{3}\) years. By the time \(10^{4}\) yr have passed, denoted as the typical \(R_{\rm p}\) delay time, the quasar's luminosity has already changed significantly. Moreover, there are many periods during which the quasar luminosity evolves
Figure 7: The dependence of proximity zone size \(R_{\rm p}\) on quasar instantaneous magnitude \(M_{\rm UV}\). Solid curves in the lower left panel show median \(R_{\rm p}\) as a function of \(M_{\rm UV}\) for \(6.0\leq z\leq 7.5\) (blue), which is the entire redshift range for this study, and \(6.0\leq z\leq 6.5\) (red). The blue dotted curve indicates the best power-law fit \(R_{\rm p}\) – \(M_{\rm UV}\) scaling relation: \(R_{\rm p}\propto L^{0.32}\). Observational measurements from Eilers et al. (2017), Eilers et al. (2020) and Ishimoto et al. (2020) are displayed by the black and green dots, which are made for quasars at \(6.0\leq z\leq 6.5\). The minimum and maximum simulated values for \(6.0\leq z\leq 6.5\) are depicted with the red dashed curves for comparison. The upper and the right panels present the marginal distribution of \(M_{\rm UV}\) and \(R_{\rm p}\), respectively. The blue dash-dot lines represent the mean values of the \(M_{\rm UV}\) and the \(R_{\rm p}\) for \(6.0\leq z\leq 7.5\): \(M_{\rm UV,mean}=-24.8\), \(R_{\rm p,mean}=1.37\) pMpc.
slowly (e.g., as seen in the second and third zoom-in panels of Fig. 8), making the 'episodic lifetime' ill-defined.
In the latter half of reionization, the integrated lifetime of the quasar can hardly be measured solely based on the size of the proximity zone. The \(R_{\rm p}\) value only informs us about the quasar's luminosity within a span of \(10^{4}\) years (as seen in Fig. 8), while the integrated lifetime of our quasar has been hundreds of millions of years. To measure the total duration for which the quasar has been shining, one can use observables associated with the thermal state of the IGM around the quasar, like the He II proximity zone, as it has a longer response time \(t_{\rm eq}\sim 10^{6}\) yr (Worseck et al., 2021; Khrykin et al., 2017, 2021; Soltinsky et al., 2023; Chen et al., 2023).
### Influence of light curve variation
In this section, we discuss how the rapid fluctuation in the light curve in our simulation affects the resultant \(R_{\rm p}\), and compare it with the proximity zone sizes yielded by a light curve that remains constant over an extended period of time.
We excerpt a portion of the variable light curve from the simulation at \(z=6.162\) with a time span of 30 Myr, short enough that the density evolution is negligible. This excerpted light curve has a mean magnitude of \(M_{\rm UV,mean}=-24.72\) and a mean ionizing photon rate of \(\dot{N}_{\rm mean}=1.67\times 10^{56}\) s\({}^{-1}\). In contrast to this variable light curve, we construct another light curve with a constant flux of the same \(\dot{N}_{\rm mean}\) ('lightbulb' model). We evolve the same set of 48 sightlines with these two light curves for 30 Myr. In the left panel of Fig. 9, we display the mean \(R_{\rm p}\) at different \(t_{\rm evol}\) of the variable light curve (blue curve) and the lightbulb model (solid yellow curve). The blue shaded region and the yellow dashed lines indicate the 68% scatter for the two \(R_{\rm p}\) groups. For the lightbulb model, \(R_{\rm p}\) remains nearly unchanged after rapid growth during the first \(\sim 1\) Myr, with only a slight decline owing to the cooling of the Universe, which aligns with the results of previous studies (Davies et al., 2020; Eilers et al., 2021). In the middle panel, we show the \(R_{\rm p}\) distributions as a function of instantaneous magnitude \(M_{\rm UV}\) for \(10<t_{\rm evol}<30\) Myr, during which the lightbulb model reaches a stable stage and produces a similar \(R_{\rm p}\). The blue pixels represent the \(R_{\rm p}\) for the variable light curve, whose median, 68%, and 95% scatter regions are shown by the red solid, dashed, and dotted curves, respectively. The \(R_{\rm p}\) values generated by the lightbulb models are shown as dots, whose thick and thin errorbars correspond to the 68% and 95% scatter, respectively. With a similar mean proximity zone size \(\langle R_{\rm p}\rangle\sim 1.6\) pMpc across the entire magnitude range, the variable light curve yields a scatter (\(\sigma_{\rm R_{\rm p}}=0.46\) pMpc) 28% larger than that of the \(\dot{N}_{\rm mean}\) lightbulb (\(\sigma_{\rm R_{\rm p}}=0.36\) pMpc). The \(\sigma_{\rm R_{\rm p}}\) values for the variable light curve encompass contributions from both light curve variation and underlying density field fluctuation. On the other hand, the \(\sigma_{\rm R_{\rm p}}\) for the lightbulb model is almost totally attributed to the density differences. In addition to the lightbulb model fixed at \(M_{\rm UV,mean}\) (yellow dot), we also simulate lightbulbs with different magnitudes (black dots): \(M_{\rm UV}=-23.5\), \(-24\), \(-24.5\), \(-25.5\), \(-26\), \(-26.5\), \(-27\) in the middle panel. It can be seen from the error bars that the influence of the density fluctuation for a lightbulb is strongly correlated with \(M_{\rm UV}\), and the high luminosity magnifies the variance between directions, i.e., brighter \(M_{\rm UV}\) leads to an increase in \(\sigma_{\rm R_{\rm p}}\).
An important feature illustrated by the middle panel of Fig. 9 is that the median \(R_{\rm p}\) from the variable light curve at a specific magnitude coincides with the lightbulb model only around \(M_{\rm UV,mean}=-24.72\), while it tends to yield smaller \(R_{\rm p}\) compared to the lightbulb when \(M_{\rm UV}<M_{\rm UV,mean}\), and conversely, larger \(R_{\rm p}\) when \(M_{\rm UV}>M_{\rm UV,mean}\). This is more clearly demonstrated in the subplot, where we show the difference between the median \(R_{\rm p}\) produced by the lightbulb models and the variable light curve. The horizontal dotted line represents where the two median values are the same, and the vertical dotted line shows \(M_{\rm UV,mean}\). More specifically, in the right panel of Fig. 9 we compare the one-dimensional \(R_{\rm p}\) distributions within a narrow \(M_{\rm UV}\) bin for \(10\) Myr \(<t_{\rm evol}<30\) Myr. On the bright end (\(M_{\rm UV}\sim-26\)), the lightbulb model predicts a median \(R_{\rm p}\) 30% larger than that produced by the variable light curve (2.94 pMpc versus 2.27 pMpc). On the dim end (\(M_{\rm UV}\sim-23.5\)), by contrast, the lightbulb model yields a median \(R_{\rm p}\) 13% smaller than that of the variable light curve (1.03 pMpc versus 1.19 pMpc). However, these two models give similar scatter in \(R_{\rm p}\) for this quasar. The standard deviations of \(R_{\rm p}\) for both the variable light curve and the lightbulb model are \(\sim 0.6\) pMpc around \(M_{\rm UV}=-26\), and \(\sim 0.3\) pMpc around \(M_{\rm UV}=-23.5\).
By building a toy model of the fluctuating light curves, Davies
Figure 8: The averaged evolution of the proximity zone size \(R_{\rm p}\) (blue curve; left y-axis) compared with the quasar light curve \(\dot{N}\) (red curve; right y-axis) for 5 Myr following \(z_{\rm snap}=7.160\). The blue shaded region represents the 16-84th percentile scatter. The subplots zoom in on 80 kyr time spans around four peaks in the light curve (red dotted lines in subplots) and their corresponding \(R_{\rm p}\) peaks (blue dotted lines in subplots), which are denoted by the black shaded rectangles. In the subplots, we label the time lags between the light curve peaks and the \(R_{\rm p}\) peaks, which are \(\sim 10^{4}\) yr.
et al. (2020) noticed that the \(R_{\rm p}\) simulated based on variable light curves skewed towards smaller values as opposed to a lightbulb fixed at a relatively high luminosity. A similar bias on the bright end emerges in our simulation, while our computation herein further demonstrates that the discrepancy between the lightbulb model and the variable light curve is contingent upon the specific magnitude bin. Such a discrepancy occurs because \(R_{\rm p}\) is governed by the entire light curve within the most recent \(\sim 10^{4}\) yr, rather than the contemporaneous instantaneous luminosity. As depicted by the PDF in Fig. 4, the distribution of \(\dot{N}\) is essentially Gaussian centered around \(\dot{N}_{\rm mean}\), which implies that the \(\dot{N}\) value \(10^{4}\) yr preceding a bright or a dim point in the light curve is probably close to \(\dot{N}_{\rm mean}\), producing a \(R_{\rm p}\) close to that given by a lightbulb model fixed at \(\dot{N}_{\rm mean}\). Furthermore, the higher luminosities correspond to more significant variations since the variation amplitude generally equals \(M_{\rm UV,mean}-M_{\rm UV}\). Large variations result in more remarkable discrepancies (see Appendix C), which explains the more substantial shift at brighter magnitudes observed in the middle panel of Fig. 9.
Therefore, for an individual quasar whose light curve persistently fluctuates around a certain value, its proximity zone size displays a shallow evolution with instantaneous magnitude, and deviates from the lightbulb model in an \(M_{\rm UV}\)-dependent way. Such a deviation accounts for the difference between our predicted \(R_{\rm p}\) and the observational measurements at \(M_{\rm UV}<-26\) shown in Fig. 7: our simulated light curve generally has a lower luminosity, which makes the \(R_{\rm p}\) in this magnitude range smaller, while the observed quasars with \(M_{\rm UV}<-26\) probably have a larger overall luminosity and so generate large proximity zones.
### Implications of the observed \(R_{\rm p}\) distribution
Our simulation shows that a single quasar can vary significantly over its lifetime, with a scatter in luminosity spanning approximately two orders of magnitude. Defined according to its mean magnitude, our quasar is a relatively faint one (\(M_{\rm UV}^{\star}=-24.8\)) during the period \(z=7.5-6\). However, it still has a 16% chance of being caught in a relatively bright phase with \(M_{\rm UV}<-25.5\). In such a bright phase, the distribution of \(R_{\rm p}\) is shifted towards the shorter end compared to the case of constant luminosity (the right panel of Fig. 9). This has profound implications for interpreting the \(R_{\rm p}\) distribution at a given observed magnitude \(M_{\rm UV}\), \(P(R_{\rm p}|M_{\rm UV})\).
The distribution of \(P(R_{\rm p}|M_{\rm UV})\) can be formulated as the following conditional distribution:
\[P(R_{\rm p}|M_{\rm UV})=\frac{\int P(M_{\rm UV}^{\star})\,P(M_{\rm UV}|M_{\rm UV}^{\star})\,P(R_{\rm p}|M_{\rm UV},M_{\rm UV}^{\star})\,{\rm d}M_{\rm UV}^{\star}}{\int P(M_{\rm UV}^{\star})\,P(M_{\rm UV}|M_{\rm UV}^{\star})\,{\rm d}M_{\rm UV}^{\star}},\]
where \(P(M_{\rm UV}^{\star})\) is the probability function of quasars with a certain mean magnitude \(M_{\rm UV}^{\star}\), \(P(M_{\rm UV}|M_{\rm UV}^{\star})\) is the probability that a quasar of \(M_{\rm UV}^{\star}\) is observed with magnitude \(M_{\rm UV}\), and \(P(R_{\rm p}|M_{\rm UV},M_{\rm UV}^{\star})\)
Figure 10: The \(R_{\rm p}\) distribution with \(-26.5<M_{\rm UV}<-25.5\) produced by the variable light curves (red solid curve), and the lightbulb models with \(10<t_{\rm evol}<30\) Myr (red shaded area), compared with the observed \(R_{\rm p}\) for the quasars within the same magnitude bin measured in Eilers et al. (2017, 2020); Ishimoto et al. (2020).
Figure 9: Left panel: the evolution of the mean proximity zone sizes \(R_{\rm p}\) with the variable light curve (blue curves) or a lightbulb model fixed at \(M_{\rm UV,mean}=-24.72\) (yellow curves). The quasar is assumed to turn on at \(z=6.162\) and last for 30 Myr. The blue shaded region and the yellow dashed lines indicate the 68% scatter for the two \(R_{\rm p}\) groups. Middle panel: two-dimensional distribution of \(R_{\rm p}\) and instantaneous \(M_{\rm UV}\) for \(10<t_{\rm evol}<30\) Myr. The blue pixels represent the \(R_{\rm p}\) for the variable light curve, whose median/68%/95% confidence regions are shown by the red solid/dashed/dotted curves. The dots depict the \(R_{\rm p}\) generated by the lightbulb models fixing the magnitude at \(M_{\rm UV}=-23.5\), \(-24\), \(-24.5\), \(-25.5\), \(-26\), \(-26.5\), \(-27\) (black dots) and \(M_{\rm UV,mean}=-24.72\) (yellow dot). The thick/thin errorbars correspond to 68%/95% confidence intervals. The subplot displays the difference between the median \(R_{\rm p}\) produced by the lightbulb models and the variable light curve. The horizontal dotted line represents where the two median values are the same, and the vertical dotted line shows \(M_{\rm UV,mean}\). Right panel: one-dimensional \(R_{\rm p}\) distribution generated by the part of the variable light curve with \(-26.5<M_{\rm UV}<-25.5\) (red curve), \(-24<M_{\rm UV}<-23\) (blue curve), and by a lightbulb model with \(M_{\rm UV}=-26\) (red shaded area), \(M_{\rm UV}=-23.5\) (blue shaded area) for \(10\) Myr \(<t_{\rm evol}<30\) Myr.
is the probability that the quasar of \(M_{\rm UV}^{\star}\) at the observed magnitude \(M_{\rm UV}\) displays a proximity zone size of \(R_{\rm p}\).
To calculate such a distribution, certain assumptions need to be made. The first term \(P(M_{\rm UV}^{\star})\) is similar to the quasar luminosity function, but it is for the mean magnitude \(M_{\rm UV}^{\star}\) instead of the observed luminosity. Measuring such a \(M_{\rm UV}^{\star}\) directly is challenging. For simplicity, here we assume that \(P(M_{\rm UV}^{\star})\) is equal to the observed quasar luminosity function (QLF) measured by Matsuoka et al. (2018) for quasars at \(z=6\):
\[P(M_{\rm UV}^{\star})\propto\left[10^{-0.156\left(M_{\rm UV}^{\star}+25.30\right)}+10^{-0.716\left(M_{\rm UV}^{\star}+25.30\right)}\right]^{-1}. \tag{7}\]
To estimate \(P(M_{\rm UV}|M_{\rm UV}^{\star})\), we need to know the PDF of the light curve. Motivated by the luminosity PDF of our simulated quasar (upper panel of Fig. 7), we assume that the light curve has a Gaussian distribution centered at \(M_{\rm UV}^{\star}\):
\[P(M_{\rm UV}|M_{\rm UV}^{\star})=\frac{1}{\sigma\sqrt{2\pi}}\,\exp\left[-\frac{1}{2}\left(\frac{M_{\rm UV}-M_{\rm UV}^{\star}}{\sigma}\right)^{2}\right], \tag{8}\]
with a fixed scatter \(\sigma=0.7\) for all the light curves.
Finally, to model \(P(R_{\rm p}|M_{\rm UV},M_{\rm UV}^{\star})\), we make the following assumptions: (1) \(P(R_{\rm p}|M_{\rm UV},M_{\rm UV}^{\star})\) has the same shape as the \(R_{\rm p}\) distribution generated by the constant light curve with magnitude fixed at \(M_{\rm UV}\), which we label as \(P_{\rm lb,M_{\rm UV}}(R_{\rm p})\), but is shifted towards a different mean \(R_{\rm p}\) by an amount \(B\); (2) the value of \(B\) is determined only by \(M_{\rm UV}-M_{\rm UV}^{\star}\) and is independent of the specific values of \(M_{\rm UV}^{\star}\) and \(M_{\rm UV}\). Hence, the probability of \(R_{\rm p}\) for a given variable light curve is
\[P(R_{\rm p}|M_{\rm UV},M_{\rm UV}^{\star})=P_{\rm lb,M_{\rm UV}}\,(R_{\rm p}+B). \tag{9}\]
The first assumption is motivated by the right panel of Fig. 9, which shows that for an individual quasar, the \(R_{\rm p}\) distributions generated by the variable light curve and the lightbulb have similar shapes: they have roughly the same scatter but different mean \(R_{\rm p}\). To formulate \(B(M_{\rm UV}-M_{\rm UV}^{\star})\), we use our results in Section 4.2 as a guideline, i.e., we linearly interpolate the difference between the median \(R_{\rm p}\) of the lightbulb models and the variable light curve as a function of the magnitude difference \(M_{\rm UV}-M_{\rm UV}^{\star}\), which is depicted by the subplot in the middle panel of Fig. 9.
With the formulas and assumptions stated above, we use Markov Chain Monte Carlo (MCMC) to generate \(N=1000\) quasar samples to create the final distribution \(P(R_{\rm p}|M_{\rm UV})\). We present the \(R_{\rm p}\) distribution with \(-26.5<M_{\rm UV}<-25.5\) calculated by this model in Fig. 10 (red solid curve). As a comparison, we plot the combined \(R_{\rm p}\) distribution produced by the lightbulb models for \(10<t_{\rm evol}<30\) Myr fixed at \(M_{\rm UV}=-25.5,\ -26,\ -26.5\) (red shaded area). We consider three lightbulb models rather than only the one with \(M_{\rm UV}=-26\) to show the whole \(R_{\rm p}\) range reached by the lightbulb in this \(M_{\rm UV}\) bin, which spans from the minimum \(R_{\rm p}\), reached when \(M_{\rm UV}=-25.5\), to the maximum \(R_{\rm p}\), for \(M_{\rm UV}=-26.5\). These three lightbulb models are sampled based on the QLF (equation 7). We also show the \(R_{\rm p}\) values for the observed quasars with \(-26.5<M_{\rm UV}<-25.5\) measured by Eilers et al. (2017, 2020); Ishimoto et al. (2020) in Fig. 10 (black curve). It is evident that although a variable light curve produces a similar maximum value (\(R_{\rm p,max}\sim 5.5\) Mpc) as the lightbulb model, it can also result in much smaller proximity zones (\(R_{\rm p,min}\sim 0.2\) Mpc). Therefore, it readily explains the observations with small proximity zones. These extremely small \(R_{\rm p}\) values are generated by quasars with low average luminosity (i.e., large \(M_{\rm UV}^{\star}\)). The considerable difference between the observed instantaneous \(M_{\rm UV}\), which is \(\sim-26\) in this case, and \(M_{\rm UV}^{\star}\) moves the \(R_{\rm p}\) distribution away from the distribution without light curve variation, as seen in the right panel of Fig. 9. Additionally, due to the relative abundance of faint \(M_{\rm UV}^{\star}\) quasars over the bright ones, the overall \(R_{\rm p}\) distribution skews towards smaller values.
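To make the sampling procedure concrete, the sketch below outlines one way to Monte Carlo sample \(P(R_{\rm p}|M_{\rm UV})\) from equations (7) to (9), assuming Python with NumPy. The QLF weights follow equation (7) and the light-curve PDF follows equation (8); the lightbulb distribution \(P_{\rm lb,M_{\rm UV}}\) and the offset \(B(M_{\rm UV}-M_{\rm UV}^{\star})\) are placeholder stand-ins (a lognormal with the \(R_{\rm p}\propto L^{0.32}\) scaling and a linear offset), since in the paper both come from the radiative-transfer simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def qlf_weight(M_star):
    # Double power-law QLF of equation (7), used as P(M_UV*).
    return 1.0 / (10**(-0.156 * (M_star + 25.30)) + 10**(-0.716 * (M_star + 25.30)))

def sample_M_star(grid=np.arange(-28.0, -22.0, 0.05)):
    w = qlf_weight(grid)
    return rng.choice(grid, p=w / w.sum())

def sample_M_uv(M_star, sigma=0.7):
    # Gaussian light-curve PDF of equation (8), centred on M_UV*.
    return rng.normal(M_star, sigma)

def offset_B(dM):
    # Placeholder for the interpolated offset B(M_UV - M_UV*); the slope is illustrative only.
    return -0.3 * dM                  # pMpc per magnitude difference

def sample_Rp_lightbulb(M_uv):
    # Placeholder for P_lb,M_UV(R_p): a lognormal stand-in using R_p ~ L^0.32 around 1.4 pMpc.
    L_rel = 10**(-0.4 * (M_uv + 24.8))
    return rng.lognormal(np.log(1.4 * L_rel**0.32), 0.25)

def sample_Rp_given_Muv(n, M_lo=-26.5, M_hi=-25.5):
    """Monte Carlo draw of P(R_p | M_UV) within an observed magnitude bin."""
    out = []
    while len(out) < n:
        M_star = sample_M_star()
        M_uv = sample_M_uv(M_star)
        if M_lo < M_uv < M_hi:        # quasar caught in the observed bin
            Rp = sample_Rp_lightbulb(M_uv) - offset_B(M_uv - M_star)   # shift of equation (9)
            out.append(max(Rp, 0.0))
    return np.array(out)

print(np.percentile(sample_Rp_given_Muv(1000), [16, 50, 84]))
```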
Since the calculation for the variable light curve and the lightbulb models are based on the same groups of lines of sight, and because the lightbulb model accounts for all the scatter introduced by underlying density fluctuations, the wider scatter shown by the red curve in Fig. 10 can only be attributed to light curve variability. This underscores the necessity of considering light curve variability when investigating quasar proximity zones.
Note that Davies et al. (2020) made a similar comparison in their Fig. 16 between the observed \(R_{\rm p}\) distribution and predicted \(R_{\rm p}\) from different quasar light curve models. They concluded that the lightbulb model with a long episodic quasar lifetime (\(\geq 1\) Myr) leads to an \(R_{\rm p}\) distribution consistent with the observed \(R_{\rm p}\) distribution, while their toy quasar light curve with variation does not. On the contrary, our simulated quasar light curve results in an \(R_{\rm p}\) distribution that skews only slightly to the smaller end compared to the lightbulb model (red line versus transparent red shaded histograms in Fig. 10). We conduct a Kolmogorov-Smirnov (K-S) test and find the K-S statistic to be \(D=0.23\) and \(p\)-value \(=0.54\) when comparing the observed \(R_{\rm p}\) distribution and that from the lightbulb model. On the other hand, we find \(D=0.34\) and \(p\)-value \(=0.12\) between the observed \(R_{\rm p}\) distribution and the one from our variable light curve. Therefore, both the lightbulb model and our simulated variable light curve are compatible with the observed \(R_{\rm p}\) distribution. We reach a different conclusion from Davies et al. (2020) because (1) their toy quasar light curve is constructed to have large variability at very small time scales (\(10^{2}\) yr and \(10^{4}\) yr), while our simulated quasar light curve has low power at such small timescales (see Fig. 4); (2) we consider the combined \(R_{\rm p}\) distribution generated by a group of variable quasars with different mean magnitudes, rather than using an individual quasar; (3) we use an observational sample from Eilers et al. (2017, 2020); Ishimoto et al. (2020) while Davies et al. (2020) only have the data from Eilers et al. (2017).
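For reference, the two-sample comparison described here can be reproduced with scipy.stats.ks_2samp; the arrays below are hypothetical stand-ins, since the actual inputs are the observed \(R_{\rm p}\) values compiled from Eilers et al. (2017, 2020) and Ishimoto et al. (2020) and the model-generated distributions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rp_observed = np.array([0.9, 1.3, 1.8, 2.2, 2.9, 3.4, 4.1, 5.0])   # pMpc, hypothetical values
rp_model = rng.lognormal(np.log(2.3), 0.5, size=1000)              # stand-in for a model distribution

D, p = stats.ks_2samp(rp_observed, rp_model)
print(f"K-S statistic D = {D:.2f}, p-value = {p:.2f}")
```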
As discussed in this subsection, the distribution of \(R_{\rm p}\) for a given observed \(M_{\rm UV}\) bin is influenced by both the underlying quasar mean-luminosity function and quasar variability, which includes both the luminosity PDF and the light curve power spectrum. Measuring \(R_{\rm p}\) can therefore provide constraints on these critical properties of the first quasars. However, it is important to note that the current observed \(R_{\rm p}\) distribution may be incomplete, and the selection function is complex. As a result, comparisons between models and observed \(R_{\rm p}\) should be interpreted cautiously. Obtaining a large complete sample of \(R_{\rm p}\) for reionization-era quasars could significantly enhance our understanding of quasar variability. From a modeling perspective, future work will explore how different light curve power spectra affect the \(R_{\rm p}\) distribution.
## 5 Conclusions
In this work, we study the proximity zone around a high-redshift quasar through RT post-processing the lines of sight from a cosmological simulation with constrained initial conditions. This constrained realization creates a quasar host halo of \(M_{h}=10^{13}\ M_{\odot}\) at \(z=6\), more massive than most halos studied in previous simulations. The simulation also includes galaxy and black hole formation models, resulting in a variable quasar light curve, which is more realistic than the widely used lightbulb model. The simulated light curve ex
hibits extreme variability, with the changes in luminosity spanning up to two orders of magnitude around the average value.
By concatenating \(R_{\rm p}\) evolution segments from 21 snapshots covering \(6.0<z\leqslant 7.5\), we capture the variations in proximity zone sizes with a variable light curve for \(\sim 240\) Myr after the quasar's activation in a highly ionized IGM. The resultant \(R_{\rm p}\) ranges between \(0.5-5\) pMpc. We demonstrate that variations in the light curve contribute an additional scatter, which is separate from the scatter induced by density variations. The standard deviation in \(R_{\rm p}\) values caused by each of these effects are approximately \(\sigma(R_{\rm p})\sim 0.3\) pMpc.
Our simulation suggests that in a pre-ionized IGM, the evolution of \(R_{\rm p}\) traces the variations in the light curve closely with a short time delay of \(\sim 10^{4}\) yr. This time lag breaks the correspondence between the \(R_{\rm p}\) and the contemporaneous \(M_{\rm UV}\). The \(R_{\rm p}\) is heavily influenced by the magnitude about \(10^{4}\) yr previously, whose difference from the observed \(M_{\rm UV}\) is uncertain and could be significant. This indicates that \(R_{\rm p}\) can at best be used to infer the quasar's episodic lifetime, and does not inform us of the integrated quasar lifetime.
By analyzing the \(R_{\rm p}\) distribution for specific \(M_{\rm UV}\) values, we show that for an individual quasar with a fluctuating light curve, its proximity zone size increases weakly with brighter instantaneous magnitude, and deviates from the lightbulb model in an \(M_{\rm UV}\)-dependent way. Compared to the variable light curve, the lightbulb model underestimates \(R_{\rm p}\) by 13% at the dim end (\(M_{\rm UV}\sim-23.5\)), and overestimates the \(R_{\rm p}\) by 30% at the bright end (\(M_{\rm UV}\sim-26\)).
We computed the distribution of \(R_{\rm p}\) based on a set of quasars sampled from a QLF and found that light curve variability leads to a broad distribution of \(R_{\rm p}\) at a given observed magnitude. Notably, variable light curves contribute to a group of instantaneously bright quasars with extremely small proximity zones. These small \(R_{\rm p}\) values can hardly be explained if the quasar light curve stays constant. This shows that it is necessary to consider the details of light curve variability when investigating quasar proximity zones.
## Acknowledgements
The authors thank Hy Trac and Nianyi Chen for helpful discussions. HC thanks the support by the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference #DIS-2022-568580. SB acknowledges the funding support by NASA-CONNSC22K1897. TDM and RAAC acknowledge funding from NSF AI Institute: Physics of the Future, NSF PHY-2020295, NASA ATP NNX17AK56G, and NASA ATP 80NSSC18K101. TDM acknowledges additional support from NASA ATP 19-ATP19-0084, and NASA ATP 80NSSC20K0519.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2309.17100 | Turning Logs into Lumber: Preprocessing Tasks in Process Mining | Event logs are invaluable for conducting process mining projects, offering
insights into process improvement and data-driven decision-making. However,
data quality issues affect the correctness and trustworthiness of these
insights, making preprocessing tasks a necessity. Despite the recognized
importance, the execution of preprocessing tasks remains ad-hoc, lacking
support. This paper presents a systematic literature review that establishes a
comprehensive repository of preprocessing tasks and their usage in case
studies. We identify six high-level and 20 low-level preprocessing tasks in
case studies. Log filtering, transformation, and abstraction are commonly used,
while log enriching, integration, and reduction are less frequent. These
results can be considered a first step in contributing to more structured,
transparent event log preprocessing, enhancing process mining reliability. | Ying Liu, Vinicius Stein Dani, Iris Beerepoot, Xixi Lu | 2023-09-29T10:00:04Z | http://arxiv.org/abs/2309.17100v2 | # Turning Logs into Lumber: Preprocessing Tasks in Process Mining
###### Abstract
Event logs are invaluable for conducting process mining projects, offering insights into process improvement and data-driven decision-making. However, data quality issues affect the correctness and trustworthiness of these insights, making preprocessing tasks a necessity. Despite the recognized importance, the execution of preprocessing tasks remains ad-hoc, lacking support. This paper presents a systematic literature review that establishes a comprehensive repository of preprocessing tasks and their usage in case studies. We identify six high-level and 20 low-level preprocessing tasks in case studies. Log filtering, transformation, and abstraction are commonly used, while log enriching, integration, and reduction are less frequent. These results can be considered a first step in contributing to more structured, transparent event log preprocessing, enhancing process mining reliability.
Keywords: Log preprocessing, Process mining, Event log
## 1 Introduction
In the landscape of data-driven decision-making, event logs stand as invaluable assets, capturing the execution of activities of processes and their interactions within diverse operational systems. The potential insights that can be obtained from these logs are immense, spanning process improvement, anomaly detection, performance evaluation, and strategic planning [1]. However, the axiom "garbage in, garbage out" holds particularly true in this context [85]. The presence of data quality issues underscores the vital importance of preprocessing techniques. Without proper preprocessing, the very foundation of analysis is compromised.
The importance of data quality and preprocessing in the field of process mining has been acknowledged, as evidenced by the growing attention dedicated to these subjects [85, 97]. Despite the acknowledgment, the execution of log preprocessing seems to remain ad-hoc. Moreover, little support has been provided on which preprocessing tasks are possible and how to select them. Although a few process mining methodologies sketched potential preprocessing tasks, a comprehensive overview of these tasks has been notably absent. Furthermore, the way these preprocessing tasks are used in real-life has remained unclear.
Existing systematic literature reviews (SLRs) have attempted to tackle specific tasks of log preprocessing, such as event abstraction techniques [97] and data extraction [83].
However, a comprehensive review that covers diverse preprocessing tasks and their practical applications in real-world scenarios is lacking.
In this paper, we perform a systematic literature review to establish an initial, comprehensive overview of the preprocessing tasks and their utilization in process mining case studies. By undertaking this endeavor, we aim to create a repository of log preprocessing tasks that may provide guidance and support for researchers and practitioners.
We identified six high-level preprocessing tasks, and for four of these tasks, we observed 20 low-level preprocessing tasks described in the case studies. The results show that log filtering, transformation, and abstraction have been more frequently used in case studies, while log enriching, integration, and reduction (e.g., sampling) are much less frequently performed.
The remainder of this paper is organized as follows. In Section 2, we discuss related work. Next, we explain the methodology followed in Section 3 and present the results in Section 4. Finally, we conclude the paper in Section 5.
## 2 Related Work
In this section, we discuss the related work, based on which we synthesized an initial set of six high-level preprocessing tasks: (a) _log integration_, (b) _log transformation_, (c) _log reduction_, (d) _log abstraction_, (e) _log filtering_, and (f) _log enriching_, see Fig. 1.
Figure 1: Initial result of high-level log preprocessing tasks and techniques in the related work.
### Taxonomy of Log Preprocessing Tasks
Han et al. [36] propose four categories of data preprocessing techniques: data cleaning, data integration, data transformation, and data reduction. Data cleaning focuses on handling missing values, identifying noise or outliers, and repairing errors. Since these subtasks are not interesting (e.g., identifying missing variable values) or not directly applicable to process mining (e.g., identifying noise/outliers in a distribution), we omit this task and decide to focus on the latter three tasks. For each, we create a corresponding log preprocessing task: _log integration_, _log transformation_, and _log reduction_.
Van Eck et al. [26] listed four tasks in the preprocessing stage, which are specifically tailored towards event logs: creating views, aggregating events, filtering logs, and enriching logs. We exclude the task "creating views" because this task assumes that there is no event log yet, while we assume we have a raw event log as input. We match the task "aggregating events" to _log abstraction_, also known as _event abstraction_ that has been already surveyed [97]. The filtering of event logs ("filtering logs") is also considered a log preprocessing task within our scope, which we refer to as _log filtering_. Finally, the preprocessing task "enriching logs" is mapped to _log enriching_. As for _log enriching_ and _log integration_, we consider _log integration_ as creating a new event log by integrating one or more external data sources, while _log enriching_ focuses on using the information within the event log to derive additional attributes.
Fahland [29] indicated that there are three basic preprocessing operations on event logs: selection, projection, and aggregation. We consider "selection" and "projection" as part of the _log filtering_ task, while the aggregation operation is considered part of _log abstraction_.
Regarding _log filtering_, _log abstraction_, and _log reduction_, both _log filtering_ and _log abstraction_ can reduce the size of the logs, but we consider the following subtle differences in comparison to log reduction here. _Log filtering_ tends to focus on the quality issues of the original data. It obtains higher-quality logs by filtering out incorrect, incomplete, inconsistent, and irrelevant data. _Log abstraction_ focuses on the complexity and granularity of the original data. It groups the events through aggregation, defining event classes, and clustering to reduce the complexity of logs. _Log reduction_ is due to the data volume of the original data. It reduces the amount of data processed in a single analysis by random sampling, dividing, or cutting, but still makes the data representative.
### Literature Review in Event Log Preprocessing
To the best of our knowledge, there is only one literature review focusing on the log preprocessing tasks: Marin-Castro and Tello-Leal [56] reviewed 70 related papers that were published from the years 2005 to 2020 and explicitly mentioned event log preprocessing or cleaning. This literature review grouped preprocessing techniques into two types of _techniques_: transformation techniques and detection-visualization techniques. _Transformation techniques_ mark modifications made toward the original structure of the event log, while the events or traces that can lead to issues with data quality are identified, grouped, and isolated using _detection-visualization techniques_. In
this paper, we cover six high-level _preprocessing tasks_, instead of the techniques. We include log enriching, log integration, and log reduction, which have not been discussed.
Van Zelst et al. [97] conducted a review and presented a taxonomy of event abstraction techniques. While valuable and detailed insights are provided into the event abstraction techniques, no insights are provided into their usage in practice, and no overview is provided for other preprocessing tasks. Similarly, Stein Dani et al. [83] report that preprocessing, on a high level represented by filtering-related tasks, is still a manual effort in the event log preparation phase of a process mining project. However, they mainly focus on data extraction tasks and do not provide an overview of the preprocessing tasks, including automated ones, and their usage in real-life case studies.
Currently, there is no clear overview of log preprocessing tasks and how frequently are these preprocessing tasks being used in process mining projects. Using the six high-level tasks as our scope, we conduct an SLR in order to provide insights into the usage of log preprocessing techniques in process mining case studies.
## 3 Systematic Literature Review
To arrive at an initial selection of relevant papers, and inspired by Kitchenham and Petersen [44, 64], we applied the following search string on Scopus: ("process mining") AND ("case study" OR "case studies") within the article title, abstract, and keywords. As of December 20, 2022, we initially found 4565 papers. Fig. 2 shows an overview of the paper screening process we followed. Next, we applied selection criteria in order to narrow down the scope of the review. The following criteria were defined and applied directly via the search engine: (1) the paper was published in 2021 or 2022; (2) the paper was published in a peer-reviewed conference or journal; (3) the paper explicitly mentions "process mining" in the keywords; and (4) the paper is written in English. As we are particularly interested in current trends in case studies that use process mining as the core technique, these serve as our inclusion criteria; only papers meeting all of them were selected for further analysis.
After applying our exclusion and inclusion criteria, we obtained 355 papers. Because our focus is on log preprocessing tasks applied in real-world settings, we then read the abstracts of all these papers and filtered out the ones that did not mention collecting data from a real-world scenario. Thereafter, we obtained 159 papers to go through the full paper screening. These papers were downloaded and imported into the
Figure 2: Paper screening procedure.
software Nvivo1 for further analysis. During the full paper analysis stage, the papers that did not mention any data preprocessing steps were discarded and, finally, 86 papers were obtained as relevant papers to go through the coding stage of our work.
Footnote 1: [https://lumivero.com/products/nvivo/](https://lumivero.com/products/nvivo/)
The following codes were defined for the analysis: high-level category, low-level category, and data domain. Next, we discuss what each one of them entails. _High-level categories_ were defined based on related work and used to deductively categorize the papers. The six high-level categories are (1) log integration, (2) log reduction, (3) log abstraction, (4) log filtering, (5) log enriching, and (6) log transformation. Several _low-level categories_ within the high-level categories were inductively defined from the studied papers. Finally, we also coded the _data domain_ (e.g., healthcare, education, manufacturing), the analysis purpose, the PM task, and the year. Due to space limits, we do not discuss these results in this paper. The initial result of the categorization process is presented in Fig. 1.
## 4 Results
In this section, we present the results of coding 86 papers. The results are discussed for each high-level category. A complete overview of the results and the detailed coding can be found online, see the Google sheet file. We also include the overview listed in Table 1.
### Log filtering
Log filtering is the most commonly performed preprocessing task, with 55 out of 86 papers performing this preprocessing task. These 55 papers mentioned filtering different objects, such as noise, outliers, redundant, duplicated cases and events, missing values, useless values, blank values, irrelevant values, and so on. Using the objects mentioned in these papers, the category log filtering is subdivided into 9 detailed low-level categories.
**Filtering irrelevant data** We observed that 29 out of 55 papers mentioned filtering "irrelevant" data. After analyzing these papers, we define _irrelevant data_ as _those resources, activities, attributes, events, and traces that are not relevant or not important for the specific analysis to be conducted_.
Whether the data is relevant to the analysis task seems to be mostly determined by experts or analysts based on their domain knowledge and analysis requirements. For example, in [16], the analysis only focused on the students who participated in the class (resource), so the events generated by other resources were defined as irrelevant data and filtered. In [32], the authors intended to analyze the activities of Ph.D. students and improve their journeys. So after a discussion by analysts and stakeholders, a filter is applied to retain the traces of full-time students who completed their Ph.D. and who withdrew (case status). The term _useless data_ is also used in some of the papers to describe irrelevant data. For example, in [86], the authors mentioned _"filtering useless information such as links and marker symbols"_, since the links and marker symbols
\begin{table}
\begin{tabular}{l|l|l} \hline High-level category & Low-level category & References \\ \hline \multirow{8}{*}{Log filtering (55)} & Filtering irrelevant data (29) & [2; 6; 9; 14; 16; 17; 18; 21; 25; 32; 33; 35; 40; 41; 43; 53; 65; 66; 69; 72; 76; 77; 78; 79; 84; 86; 90; 91; 95] \\ \cline{2-3} & Filtering incomplete data (16) & [8; 20; 23; 25; 31; 32; 37; 43; 50; 63; 65; 67; 75; 76; 77; 90] \\ \cline{2-3} & Filtering infrequent data (13) & [7; 11; 19; 25; 38; 40; 42; 50; 67; 75; 88; 87; 89] \\ \cline{2-3} & Filtering duplicates (8) & [14; 22; 23; 25; 30; 67; 76; 78] \\ \cline{2-3} & Filtering outliers (5) & [13; 14; 18; 52; 74] \\ \cline{2-3} & Filtering incorrect data (4) & [31; 48; 63; 82] \\ \cline{2-3} & Filtering redundant data (2) & [19; 20] \\ \cline{2-3} & Filtering inconsistent data (1) & [17] \\ \cline{2-3} & Filtering noise (3) & [17; 51; 70] \\ \hline \multirow{8}{*}{Log transformation (38)} & Transforming format (25) & [6; 10; 12; 14; 16; 17; 68; 23; 25; 27; 34; 35; 39; 40; 43; 46; 57; 62; 63; 67; 69; 74; 92; 93; 94] \\ \cline{2-3} & Transforming values (12) & [20; 25; 30; 48; 50; 58; 59; 62; 65; 76; 78; 79] \\ \cline{2-3} & Reordering (5) & [9; 23; 25; 53; 63] \\ \cline{2-3} & Transition matrices and encoding (2) & [25; 96] \\ \hline Log abstraction (37) & - & [3; 4; 11; 13; 16; 19; 18; 21; 68; 24; 25; 27; 28; 33; 39; 45; 47; 48; 50; 51; 53; 54; 55; 57; 58; 59; 63; 66; 71; 73; 78; 86; 88; 87; 92; 95; 96] \\ \hline \multirow{8}{*}{Log enriching (16)} & Adding calculation metrics (9) & [22; 24; 37; 42; 45; 58; 61; 73; 80] \\ \cline{2-3} & Labelling (4) & [5; 41; 62; 87] \\ \cline{2-3} & Adding case id (2) & [74; 84] \\ \cline{2-3} & Adding noise (1) & [81] \\ \hline Log integration (14) & - & [15; 21; 68; 22; 27; 32; 38; 49; 59; 61; 67; 74; 78; 80] \\ \hline \multirow{8}{*}{Log reduction (11)} & Dividing into sub-logs (9) & [20; 28; 32; 37; 46; 51; 70; 80; 91] \\ \cline{2-3} & Sampling (2) & [30; 82] \\ \cline{1-1} \cline{2-3} & Cutting traces (1) & [30] \\ \hline \end{tabular}
\end{table}
Table 1: Category citation details of 86 papers.
(attributes) cannot make any contribution to the intended analysis and are regarded as useless data.
**Filtering incomplete data** In 16 out of 55 papers, the authors mentioned filtering incomplete data. Incomplete data can be divided into _incomplete events_ and _incomplete cases_. _Incomplete events_ usually refer to events having missing values or missing attributes. Incomplete events include those with a missing case id [50], missing timestamps [23; 32; 63], missing activities [20; 23], or missing values of other attributes relevant to the analysis [31].
The incompleteness of a case is usually described as cases that are not completed or do not represent the end-to-end process. It means that the cases lack some events, for example, "_remove any record that may create only one event per case as it will not depict the sequence of activities and hinder the performance analysis of the model_" [67] and "_removing cases that did not cover the whole steps_" [20].
**Filtering infrequent data** We use _infrequent data_ to refer to the infrequent case variant. In 13 papers, the authors mentioned that they performed the infrequent case variants filtering as a preprocessing task. Filtering infrequent data is done to "_prevent the PM tool from returning incomprehensible or inaccurate results_" [75], and "_to improve the quality of results, and to avoid low precision and highly complex results_" [89].
**Filtering inconsistent data** A simple example of inconsistent data is that the values are recorded in different formats, e.g., "2023-01-01" and "2023/01/01" as the attribute timestamps. This inconsistency in data format may be due to recording errors or caused by manual input. It may also be that different data sources have used different data formats. Inconsistent event labels make it difficult to assign clear semantics to the activities of a discovered process model [1], and may also bring about a dimensional explosion of the process model.
**Filtering incorrect data** Incorrect data is erroneous or unreliable data that violates the logic of reality. For example, in the real process, activity \(A\) should be executed earlier than activity \(B\), but in the log, the timestamp of \(A\) in a specific case is later than activity \(B\)[63].
**Filtering duplicates** Duplicates refer to repeated data. In process mining, the case ID needs to be a unique identifier, and the traces represented by different case IDs must be different, so as to ensure the accuracy of the data. However, in real life, duplicate data is usually generated due to system bugs or other reasons. For example, in [22], repeated events with the same Call-ID were excluded.
**Filtering redundant data** Only two papers mentioned redundant data [19; 20]. In [20], redundant events were included in data error: "_we conducted some data preprocessing, including handling data error (e.g., removing redundant events and eliminating multiple yield values)_", while there was no further definition and explanation in [19].
**Filtering outliers** In [13; 14; 52], the authors only mentioned "_removing outliers_" without any further explanation or definition. In [18], the authors mention "_we noticed the existence of outliers, i.e., cases that take too long, or incomplete_"; so overly long traces and incomplete data are considered outliers. In [74], "_if lecture activities in the short semester are included, it will be an outlier because it has activities that are far more
than short than activities in the semester in general_"; thus, traces that are too short are also considered outliers. It seems that process analysts use the distributions of a case or event-attribute to define outliers, e.g., the number of events per case, the case duration per case, etc.
**Filtering noise** Noise is an overused word. Data that is not conducive to the analysis task is often defined as noise. An interesting point is that among the 86 papers, more than one paper mentioned noise, but only one paper described what noise is and how to filter it, "_In the original log the noisy activities were conveniently named 'Noise', so they were removed using a filter on the activity name_" [51].
### Log transformation
In 38 of the 86 papers, the authors described that they performed a _log transformation_ task. The coding resulted in four data objects that are being transformed, which we use to further divide the high-level category.
**Transforming format** Among the format transformations, the transformation of the log format from CSV to XES was mentioned the most (14 out of 25 papers), such that the event logs can be used in the PM tool. This is because the log format after extraction is usually CSV, and PM tools require the log format to be imported as XES. The remaining format transformation is related to determining which columns are the key columns (such as case ID, activity name, and timestamps) after importing the log into PM tools.
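As an illustration of this most common transformation, the sketch below converts a CSV export into an XES log, assuming the pm4py library and illustrative column names (case_id, activity, timestamp); the exact column mapping depends on the source system.

```python
import pandas as pd
import pm4py

# Read the raw CSV export and declare which columns play the case/activity/timestamp roles.
df = pd.read_csv("events.csv")
df["timestamp"] = pd.to_datetime(df["timestamp"])
df = pm4py.format_dataframe(df, case_id="case_id",
                            activity_key="activity", timestamp_key="timestamp")

# Convert to an event log object and serialize it as XES for use in PM tools.
event_log = pm4py.convert_to_event_log(df)
pm4py.write_xes(event_log, "events.xes")
```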
**Transforming values** The difference between transforming values and transforming format is that transforming values means the change of one or more specific values in an event. For example, replacing infrequent values with the value 'other' to avoid dimension explosion, replacing missing values, replacing NaN values with 'zero', capturing data, and encrypting data.
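A minimal pandas sketch of two of these value transformations (replacing missing values and collapsing infrequent values into 'other') is shown below; the column names and the frequency threshold are illustrative.

```python
import pandas as pd

df = pd.DataFrame({
    "case_id":    [1, 1, 2, 2, 3, 3],
    "activity":   ["register", "check", "register", "pay", "register", "check"],
    "department": ["A", "B", "A", None, "C", "A"],
})

# Replace missing (NaN) values with a default placeholder.
df["department"] = df["department"].fillna("unknown")

# Replace infrequent values with 'other' to avoid a dimension explosion.
counts = df["department"].value_counts()
rare = counts[counts < 2].index
df["department"] = df["department"].replace(list(rare), "other")
print(df)
```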
**Reordering** Reordering is the process of sorting the log by a particular timestamp. When the original log is out of order, it is essential to reorder it so that the process model displays the activities' proper execution sequence.
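In practice this step usually amounts to a chronological sort within each case, as in the small sketch below with illustrative column names.

```python
import pandas as pd

df = pd.DataFrame({
    "case_id":   [1, 1, 1, 2, 2],
    "activity":  ["check", "register", "pay", "pay", "register"],
    "timestamp": ["2023-01-02", "2023-01-01", "2023-01-03", "2023-01-05", "2023-01-04"],
})
df["timestamp"] = pd.to_datetime(df["timestamp"])

# Sort events chronologically within each case so the discovered model
# reflects the proper execution sequence.
df = df.sort_values(["case_id", "timestamp"]).reset_index(drop=True)
print(df)
```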
**Transition matrices and encoding** In particular, transition matrices and trace encoding are used as a preprocessing for predictive process monitoring. Given that the trace encoding is a subfield itself and was not included in the search, we consider this category outside of our scope. We found two case studies mentioning this preprocessing task and coded them without further analysis.
### Log enriching
In 16 out of 86 papers, the log-enriching techniques were applied. Log enriching is split into four categories. Three of them are shown using an example in Fig. 3.
**Adding calculation metrics** In this low-level category, the calculation metrics are computed from existing attributes in the log. For example, in [22], call center processes of a company were examined. In the original event log, each call only had attributes
Start and Call Duration, but process analysis required the end time of the call. Therefore, the attribute End was obtained by adding Call Duration to Start.
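A sketch of this kind of enrichment in pandas is shown below; the column names mirror the example from [22], but the values are invented and the duration is assumed to be recorded in seconds.

```python
import pandas as pd

calls = pd.DataFrame({
    "Call-ID": ["c1", "c2"],
    "Start": pd.to_datetime(["2023-05-01 09:00:00", "2023-05-01 09:10:00"]),
    "Call Duration": [180, 65],          # assumed to be in seconds
})

# Derive the missing 'End' attribute from attributes already present in the log.
calls["End"] = calls["Start"] + pd.to_timedelta(calls["Call Duration"], unit="s")
print(calls)
```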
**Labelling** Labeling is the task of assigning a tag or a class to an event or a trace. In [87], "_the cases are labeled as either successful or failed, depending on how they have been executed and their outcome_", to further divide the log into two logs. In [62], for recording differences over time between the intended operation and the actual execution, a label was assigned to each event to indicate if the event was carried out on time or not.
**Adding case id** Case id is a unique attribute in event logs. The data collected in some case studies did not have the attribute of case id, then the case id was created artificially in the data preprocessing stage. For example, in [84], "the caseid is created by combining the three-digit client number (MANDT) with a ten-digit document number and a five-digit item number".
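Constructing such an identifier typically reduces to concatenating existing columns, as in the hypothetical sketch below (only the MANDT column name comes from the quoted example; the other names are invented).

```python
import pandas as pd

items = pd.DataFrame({
    "MANDT":   ["100", "100"],
    "doc_no":  ["0000012345", "0000012346"],
    "item_no": ["00010", "00020"],
})

# Combine client, document and item numbers into a unique case identifier.
items["case_id"] = items["MANDT"] + items["doc_no"] + items["item_no"]
print(items["case_id"])
```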
**Adding noise** Adding noise is not a typical preprocessing task, as just one publication described it. [81] evaluated privacy assurance of healthcare metadata. Noise-adding plugins in the tool ProM were used to make the original event logs more privacy-preserving [60].
### Log reduction
In 11 out of 86 papers, the authors used log reduction to do log preprocessing. Examples of the three log reduction tasks are shown in Fig. 4.
**Dividing into sub-logs** In the example presented in Fig. 4, the original log is divided into two logs by the date in timestamp. In [51, 28], IoT logs were collected in a smart house and the aim was to explore human habits. They firstly divided logs into smaller
Figure 3: Examples of three log enriching tasks versus log integration.
pieces by timestamps to analyse the time distribution of the activities (user habits) within a day [51].
The resource is another attribute commonly used for division. The authors of [70] divided the traces into subsets to model different user profiles. Dividing original logs according to specific attributes usually serves a more in-depth analysis [32].
In addition, in order to test the proposed algorithm or approach, the log was divided into training data and test data according to a certain proportion [20, 37].
**Sampling** The most notable characteristic of sampling is randomness. The reduction here operates on whole traces; that is to say, traces are the unit of processing. In the example shown in Fig. 4, there are four traces \([\langle A,B,C,D,E\rangle,\langle A\rangle,\langle A,C\rangle,\langle B\rangle]\). After randomly sampling 50% of the traces, the log \([\langle A\rangle,\langle A,C\rangle]\) in the lower right corner is obtained.
**Cutting traces** In the example in Fig. 4, compared to other traces, the trace \(\langle A,B,C,D,E\rangle\) is obviously longer and contains more events. Cutting off events at the end of the trace yields the processed log in the lower left corner. The purpose of this technique is to avoid bias from very long traces [30].
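Both reduction operations can be expressed compactly on a trace level in pandas; the log below re-creates the four example traces, and the sampling fraction and cut-off length are illustrative.

```python
import pandas as pd

log = pd.DataFrame({
    "case_id":  [1, 1, 1, 1, 1, 2, 3, 3, 4],
    "activity": ["A", "B", "C", "D", "E", "A", "A", "C", "B"],
})

# Sampling: randomly keep 50% of the traces (whole traces are the sampling unit).
kept_cases = log["case_id"].drop_duplicates().sample(frac=0.5, random_state=1)
sampled_log = log[log["case_id"].isin(kept_cases)]

# Cutting traces: keep at most the first 3 events of each trace to limit bias from long traces.
cut_log = log.groupby("case_id").head(3)
print(sampled_log, cut_log, sep="\n\n")
```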
### Log integration
Among the 86 papers, 14 papers used log integration to combine multiple data tables. No objects of interest are repetitively mentioned, nor have we observed obvious low-level tasks. Therefore, the log integration task has not been further divided.
Fig. 3 shows an example where a new event log is created by matching two data tables using the shared attribute "_student_id_". It is worth mentioning that some papers mention that additional data was added to the original event data without indicating the source, but we believe that the combination of these data is realized by log integration. According to [38], "_Besides the attributes shown in Table 4, we included the educational level of the nurses executing the activity, as well as their nursing experience/organisational role, the hospital shift and weekday on which the activities were performed, and the ward in which the shift took place_". It is reasonable
Figure 4: A simple example of log reduction.
to speculate that this additional information actually comes from a separate data table that stores information about all nurses.
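The student example from Fig. 3 boils down to a join on the shared key; a minimal pandas sketch with invented values is shown below.

```python
import pandas as pd

events = pd.DataFrame({
    "student_id": [42, 42, 57],
    "activity":   ["enroll", "submit thesis", "enroll"],
    "timestamp":  pd.to_datetime(["2023-09-01", "2024-06-15", "2023-09-02"]),
})
students = pd.DataFrame({
    "student_id": [42, 57],
    "program":    ["PhD", "MSc"],
})

# Integrate the external table into the event log via the shared attribute.
event_log = events.merge(students, on="student_id", how="left")
print(event_log)
```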
### Log abstraction
In 37 out of the 86 papers, the authors used preprocessing techniques in log abstraction, which is the most widely performed task after log filtering and log transformation among the six preprocessing tasks. In [97], a review and taxonomy of event abstraction were presented. Therefore, we will not focus on this category here.
### Discussion
The _log filtering_ task emerges as the most commonly performed preprocessing task, with over \(63\%\) of the case studies mentioning that some filtering is performed. However, it's worth noting that the specifics of the log filtering tasks appear to heavily rely on domain knowledge. Moreover, more than 30 papers use somewhat ambiguous terminologies such as 'irrelevant' or 'noise'. The _log transformation_ task ranks as the second most frequently employed, accounting for \(44\%\). Currently, the majority of subtasks in the log transformation focus on fixing format-related and data-quality issues. This highlights the importance of data quality in process mining and suggests that efforts to enhance data quality should continue to be a focal point in log preprocessing.
In contrast, log enriching (\(18\%\)), log integration (\(16\%\)), and log reduction (\(12\%\)) tasks are notably less commonly performed. One plausible explanation is the limited support for these tasks in both academic and commercial tools. Furthermore, the relatively uncommon use of log reduction can be attributed to the fact that many filtering techniques inherently reduce the log size.
## 5 Conclusion
In this paper, we conducted a systematic literature review, examining the use of log preprocessing tasks in process mining case studies and presented the results. We identified six high-level tasks that were synthesized from the related work discussion and 20 low-level tasks inducted from the reported case studies. The log filtering task emerges as the most frequently used preprocessing task, featured in over 63% of the case studies reviewed. The log transformation task follows closely behind, accounting for 44% of the cases. Conversely, log enriching, integration, and reduction tasks are less commonly performed, possibly due to limited tool support. Future research can delve into these preprocessing tasks, providing operational guidance. Standardization in reporting practices and greater support for less common preprocessing tasks are valuable for improving traceability and advancing the reliability of process mining results. |
2301.10065 | Anisotropy of Cosmic Rays and Chaotic Trajectories in the Heliosphere | As cosmic rays (CRs) propagate in the Galaxy, they can be affected by
magnetic structures that temporarily trap them and cause their trajectories to
display chaotic behavior, therefore modifying the simple diffusion scenario.
When CRs arrive at the Earth, they do so anisotropically. These chaotic effects
can be a fundamental contributor to this anisotropy. Accordingly, this requires
a comprehensive description of chaos in trapping conditions since assessing
their repercussions on the CR arrival directions is necessary. This study
utilizes a new method described in L\'opez-Barquero and Desiati (2021) to
characterize chaotic trajectories in bound systems. This method is based on the
Finite-Time Lyapunov Exponent (FTLE), a quantity that determines the levels of
chaos based on the trajectories' divergence rate. The FTLE is useful since it
adapts to trapping conditions in magnetic structures or even propagating media
changes. Here, we explore the effects that chaos and trapping can have on the
TeV CR anisotropy. Concretely, we apply this method to study the behavior of
CRs entering the heliosphere. Specifically, how the distinct heliospheric
structures and CR impinging directions from the ISM can affect chaos levels.
The heliosphere has an intrinsic directionality that affects CRs differently
depending on where they enter it. This feature causes preferential directions
from which particles tend to be more chaotic than others. This eventually
translates into changes in the arrival maps which are not uniformly
distributed. Instead, we expect sectors in the map to change separately from
others, creating a time variation that could be detected. Consequently, this
result points to the idea that time-variability in the maps is essential to
understanding the CR anisotropy's overall processes. | Vanessa López-Barquero, Paolo Desiati | 2022-12-09T20:08:45Z | http://arxiv.org/abs/2301.10065v1 | # Anisotropy of Cosmic Rays and Chaotic Trajectories in the Heliosphere
###### Abstract:
As cosmic rays (CRs) propagate in the Galaxy, they can be affected by magnetic structures that temporarily trap them and cause their trajectories to display chaotic behavior, therefore modifying the simple diffusion scenario. When CRs arrive at the Earth, they do so anisotropically. These chaotic effects can be a fundamental contributor to this anisotropy. Accordingly, this requires a comprehensive description of chaos in trapping conditions since it is necessary to assess their repercussions on the CR arrival directions. This study utilizes a new method described in Lopez-Barquero and Desiati (2021) to characterize chaotic trajectories in bound systems. This method is based on the Finite-Time Lyapunov Exponent (FTLE), a quantity that determines the levels of chaos based on the trajectories' divergence rate. The FTLE is useful since it adapts to trapping conditions in magnetic structures or even propagating media changes. Here, we explore the effects that chaos and trapping can have on the TeV CR anisotropy. Concretely, we apply this method to study the behavior of CRs entering the heliosphere. Specifically, how the distinct heliospheric structures and CR impinging directions from the ISM can affect chaos levels. The heliosphere has an intrinsic directionality that affects CRs differently depending on where they enter it. This feature causes preferential directions from which particles tend to be more chaotic than others. This eventually translates into changes in the arrival maps which are not uniformly distributed. Instead, we expect sectors in the map to change separately from others, creating a time variation that could be detected. Consequently, this result points to the idea that time-variability in the maps is essential to understanding the CR anisotropy's overall processes.
## 1 Introduction
Cosmic rays of Galactic origin are detected on the Earth with anisotropy in their arrival directions. This anisotropy (\(10^{-3}\) in relative intensity) has been measured by multiple experiments [1, 2]; nonetheless, a complete explanation for it still eludes us. This work explores the contributions that chaotic trajectories of trapped cosmic rays can make to it, specifically how coherent structures, e.g., the heliosphere, can significantly impact how particles propagate and ultimately the directions in which they are detected.
## 2 Chaotic Trajectories and Coherent Magnetic Structures
In order to assess the chaotic effects on particles' trajectories and how they are affected by being temporarily trapped in coherent structures, we develop a new method for characterizing chaos and construct a toy model that will replicate the trapping conditions in these magnetic structures.
This model is based on the Finite-Time Lyapunov exponent (FTLE):
\[\lambda(t,\Delta t)=\frac{1}{\Delta t}\ln\left[\frac{d(t+\Delta t)}{d(t)} \right]\,, \tag{1}\]
where \(\Delta t\) is the time interval for the calculation and \(d(t)\) is the distance between two particles at time \(t\). Thus, the FTLE can measure the level of chaos based on the trajectories' divergence rate. The usefulness of this quantity is that it can adapt to the temporarily trapped conditions that can emerge due to the interaction with coherent magnetic structures.
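As a minimal illustration (not taken from the original analysis), Eq. (1) can be evaluated numerically from two initially neighboring trajectories sampled at a fixed time step; the trajectory shapes and divergence rate below are placeholder values chosen only to make the example self-contained.

```python
import numpy as np

def ftle(traj_a: np.ndarray, traj_b: np.ndarray, t_idx: int, dt_steps: int, dt: float) -> float:
    """Finite-Time Lyapunov Exponent of Eq. (1) from two neighboring trajectories.

    traj_a, traj_b : arrays of shape (n_steps, 3) with particle positions sampled every dt.
    t_idx          : index of the starting time t.
    dt_steps       : number of samples spanning the finite interval Delta t.
    """
    d_t = np.linalg.norm(traj_a[t_idx] - traj_b[t_idx])
    d_t_plus = np.linalg.norm(traj_a[t_idx + dt_steps] - traj_b[t_idx + dt_steps])
    return np.log(d_t_plus / d_t) / (dt_steps * dt)

# toy usage: two trajectories whose separation grows exponentially at rate 0.5
t = np.linspace(0.0, 10.0, 1001)
base = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
neighbor = base + 1e-6 * np.exp(0.5 * t)[:, None]
print(ftle(base, neighbor, t_idx=0, dt_steps=1000, dt=t[1] - t[0]))   # ~0.5
```

Because the exponent is evaluated over a finite window, the same routine can be applied while a particle is temporarily trapped by choosing \(\Delta t\) shorter than the trapping time.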
In order to reproduce the trapping conditions in a magnetic field, we created a model that consists of an axial-symmetric magnetic bottle with magnetic time-perturbations added:
\[B_{y}=\frac{\Delta B}{B}\,\sin(k_{p}x-\omega_{p}t)\,e^{-\frac{1}{2}\left( \frac{\varphi_{p}}{\varphi_{p}}\right)^{2}}, \tag{2}\]
where \(k_{p}=\frac{2\pi}{L_{p}}\) and \(\omega_{p}=\frac{2\pi v_{p}}{L_{p}}\).
The specific characteristics of the model used in this work are based on heliospheric conditions. The magnetic bottle is based on the mirroring effect that particles experience as they bounce out of the flanks of the heliosphere. The time perturbations replicate the effects of magnetic field reversals induced by the 11-year solar cycles.
## 3 Discussion
Once we propagate particles in this system, we find a correlation between the finite-time Lyapunov exponent (FTLE), i.e., the chaotic behavior of the particles, and the escape time from the system. This correlation is given by a power law:
\[\lambda_{FTLE}=\beta\,t_{esc}^{-1.04\pm 0.03}. \tag{3}\]
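As a hedged numerical aside (not part of the original analysis), the exponent and prefactor of this power law can be recovered from simulated \((t_{esc},\lambda_{FTLE})\) pairs by a straight-line fit in log-log space; the data below are synthetic and serve only to illustrate the fitting step.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic (escape time, FTLE) pairs following lambda = beta * t_esc**alpha with scatter
beta_true, alpha_true = 2.0, -1.04
t_esc = rng.uniform(10.0, 1000.0, size=500)
lam = beta_true * t_esc**alpha_true * np.exp(0.05 * rng.standard_normal(500))

# the power law is a straight line in log-log space: log(lam) = log(beta) + alpha * log(t_esc)
alpha_fit, log_beta_fit = np.polyfit(np.log(t_esc), np.log(lam), deg=1)
print(f"alpha = {alpha_fit:.3f}, beta = {np.exp(log_beta_fit):.3f}")   # close to -1.04 and 2.0
```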
One remarkable feature of these systems is that the same power law persists even when perturbations are introduced; see Figure 1.
To derive information that will help us elucidate the observations, the Finite-Time Lyapunov exponents and escape times are plotted in arrival distribution maps. In these maps, regions with different chaotic behavior emerge, which can have an impact on the observations. For example, this can be a source of time-variability in the anisotropy maps.
|
2303.18240 | Where are we in the search for an Artificial Visual Cortex for Embodied
Intelligence? | We present the largest and most comprehensive empirical study of pre-trained
visual representations (PVRs) or visual 'foundation models' for Embodied AI.
First, we curate CortexBench, consisting of 17 different tasks spanning
locomotion, navigation, dexterous, and mobile manipulation. Next, we
systematically evaluate existing PVRs and find that none are universally
dominant. To study the effect of pre-training data size and diversity, we
combine over 4,000 hours of egocentric videos from 7 different sources (over
4.3M images) and ImageNet to train different-sized vision transformers using
Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from
prior work, we find that scaling dataset size and diversity does not improve
performance universally (but does so on average). Our largest model, named
VC-1, outperforms all prior PVRs on average but does not universally dominate
either. Next, we show that task- or domain-specific adaptation of VC-1 leads to
substantial gains, with VC-1 (adapted) achieving competitive or superior
performance than the best known results on all of the benchmarks in
CortexBench. Finally, we present real-world hardware experiments, in which VC-1
and VC-1 (adapted) outperform the strongest pre-existing PVR. Overall, this
paper presents no new techniques but a rigorous systematic evaluation, a broad
set of findings about PVRs (that in some cases, refute those made in narrow
domains in prior work), and open-sourced code and models (that required over
10,000 GPU-hours to train) for the benefit of the research community. | Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Yecheng Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Pieter Abbeel, Jitendra Malik, Dhruv Batra, Yixin Lin, Oleksandr Maksymets, Aravind Rajeswaran, Franziska Meier | 2023-03-31T17:56:33Z | http://arxiv.org/abs/2303.18240v2 | # Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?
###### Abstract
We present the largest and most comprehensive empirical study of pre-trained visual representations (PVRs) or visual 'foundation models' for Embodied AI. First, we curate CortexBench, consisting of 17 different tasks spanning locomotion, navigation, dexterous, and mobile manipulation. Next, we systematically evaluate existing PVRs and find that none are universally dominant.
To study the effect of pre-training data scale and diversity, we combine over \(4{,}000\) hours of egocentric videos from 7 different sources (over \(5.6\)M images) and ImageNet to train different-sized vision transformers using Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from prior work, we find that scaling dataset size and diversity does _not_ improve performance universally (but does so on average).
Our largest model, named **VC-1**, outperforms all prior PVRs on average but does not universally dominate either. Finally, we show that task- or domain-specific adaptation of **VC-1** leads to substantial gains, with **VC-1** (adapted) achieving competitive or superior performance than the best known results on all of the benchmarks in CortexBench. These models required over 10,000 GPU-hours to train and can be found on our website for the benefit of the research community.
## 1 Introduction
Eyesight is considered one of the greatest inventions of biological evolution (Lane, 2010). Out of the 38 known phyla in the animal kingdom, only 6 have evolved eyes yet they account for 95% of all species (Lane, 2010) - vision seems to confer an enormous advantage. Of course, the evolution of visual _sensing_ via eyes progresses in concordance with visual _perception_ - via a visual cortex, the region of the brain that (together with the motor cortex) enables an organism to convert sight into movement. In this work, we ask the same question Fukushima (Fukushima, 1975; 1980) did nearly 50 years ago - how do we design an _artificial visual cortex_, the module in a larger computational system that enables an artificial agent to convert camera input into actions?
In contemporary AI, this question has been operationalized as the design of pre-trained visual representations (PVRs) or
Figure 1: An artificial visual cortex for embodied intelligence must support a diverse range of sensorimotor skills, environments, and embodiments; we curate CortexBench to systematically measure progress towards this ambitious goal. Our strongest model, denoted **VC-1** (adapted) above, is competitive with or outperforms the _best prior results_ (success rates) on all benchmarks in CortexBench. Notice that this comparison is particularly unforgiving because the best prior results are benchmark-specific and not constrained to share any aspect of their design.
visual 'foundation models' for embodied AI (EAI).1 Indeed, recent work has shown that PVRs trained on large quantities of egocentric-videos and web-images can substantially improve performance and learning efficiency for navigation (Khandelwal et al., 2022; Yadav et al., 2022b; 2023) and manipulation tasks (Parisi et al., 2022; Nair et al., 2022; Radosavovic et al., 2022; Ma et al., 2022). Unfortunately, each study is fundamentally incommensurable, as each uses different self-supervised learning (SSL) algorithms on different pre-training datasets, designed for, and evaluated on different downstream EAI tasks. Naturally, one might ask: is there a universally-dominant configuration? Essentially, _does an artificial visual cortex already exist_?2
Footnote 1: We use embodied AI (EAI) as an umbrella term for all communities studying visuomotor control such as robot learning, vision-based reinforcement learning, egocentric computer vision, etc.
Footnote 2: To the degree of our ability to measure it.
To answer this question, we conduct the largest and most comprehensive empirical study to-date of visual foundation models for EAI. First, we curate CortexBench, a new benchmark for evaluating PVRs, consisting of 17 tasks spanning low-level locomotion (Tassa et al., 2018), table-top manipulation of rigid and articulated objects (Yu et al., 2020), dexterous manipulation (Rajeswaran et al., 2018), multi-finger coordinated manipulation (Wuthrich et al., 2020), indoor visual navigation (Savva et al., 2019), and mobile manipulation (Szot et al., 2021). The visual environments span from flat infinite planes to table-top settings to photorealistic 3D scans of real-world indoor spaces. The agent embodiments vary from stationary arms to dexterous hands to idealized cylindrical navigation agents to articulated mobile manipulators. The learning conditions vary from few-shot imitation learning to large-scale reinforcement learning. The exhaustiveness of this study enables us to draw conclusions with unprecedented scope and confidence.
Our first finding is a _negative result_. We discover that while existing PVRs generally outperform learning-from-scratch baselines, none is universally dominant. Instead, we find that PVRs tend to work best in the domains (locomotion, manipulation, navigation) they were originally designed for. We note that no claims of universality were made in prior work, so this finding is illustrative rather than refutative. Overall, serendipity did not come to pass - an artificial visual cortex does not already exist.3 However, curiously, the _kinds of PVRs_ that are locally-dominant in CortexBench differ significantly in the size and type of pre-training datasets: CLIP (Radford et al., 2021) was pre-trained on \(400\)M image-text pairs from the web; MVP (Radosavovic et al., 2022) on \(4.5\)M frames from web-images and many egocentric-video datasets; R3M (Nair et al., 2022) on \({\sim}5\)M frames from Ego4D - yet, each performs best on some subset of tasks in CortexBench. This leads to a natural question: _how does scaling model size, dataset size, or diversity affect performance on CortexBench?_ Can we use scaling as a means to learn a single PVR that works for all of the diverse tasks in CortexBench?
To study these questions, we combine over \(4{,}000\) hours of egocentric videos from 7 different sources containing humans manipulating objects and navigating indoor spaces encountered in daily life, together with ImageNet. From this union, we create 4 pre-training datasets of varying size and diversity, with the largest containing over \(5.6\)M images. We train vision transformers (ViT-B and ViT-L) (Dosovitskiy et al., 2020) on these 4 datasets using Masked Auto-Encoding (MAE) (He et al., 2021), and systematically analyze their performance on CortexBench. To benefit the EAI community, we will open-source these models, which required over 10,000 GPU hours to train.
We do find evidence supporting the scaling hypothesis, but the picture that emerges is more nuanced than what a superficial reading might suggest. Our largest model trained on all data, named **VC-1**, outperforms the best existing PVR by 1.2% on average. However, **VC-1** does _not_ universally dominate either - i.e., there are PVRs trained on smaller amounts of data that outperform it on specific tasks. A similar trend emerges for data diversity - more is better on average, but not universally. For instance, the best performance on the Mobile-Pick task from Habitat 2.0 (Szot et al., 2021) is achieved by pre-training on the subset of video data focused on manipulation; presumably because the mobility involved in the task is fairly limited. Thus, our second key finding is: _Naively scaling dataset size and diversity does not improve performance uniformly across benchmarks_.
Our findings reveal a challenge and opportunity for the community - the search for a PVR that is universally dominant (or 'foundational') for EAI calls for innovations in architecture, learning paradigm, data engineering, and more. As the final step in this paper, but as a first step towards this open problem, we study _adapting_**VC-1** with either task-specific training losses or datasets (via MAE (He et al., 2021)) to specialize **VC-1** for each domain. We find that adapting **VC-1** results in it becoming competitive with or outperforming the _best prior results on all of the benchmarks_ in CortexBench. We highlight that this comparison is particularly unforgiving, since best prior results are highly domain-specific and are not constrained to share any aspect of their design. To our knowledge, **VC-1** (adapted) is the first PVR that is competitive with (or outperforms) state-of-art results on such a diverse set of EAI tasks ( Figure 1).
We will release code for CortexBench to enable the EAI, robotics, and CV communities to benchmark their own models, and share our pre-trained models (including **VC-1**) that we believe can serve as a starting point for all visuomotor tasks of interest today.
## 2 Related Work
**Pre-trained visual representations (PVRs).** The last few years have seen increasing interest in the self-supervised learning (SSL) of visual representations (He et al., 2021; Caron et al., 2020; Baevski et al., 2022; Chen et al., 2020, 2021). These algorithms use contrastive (Chen et al., 2020, 2021), distillation-based (Caron et al., 2020; Baevski et al., 2022), or reconstructive (Bao et al., 2021; He et al., 2021) objectives for training. Recently, a flurry of works have proposed using the vision transformers (ViTs) (Dosovitskiy et al., 2021) with masked image modeling (He et al., 2021; Baevski et al., 2022; Yao et al., 2022), which among other benefits reduces the computation time required for pre-training. In this work, we use one such pre-training algorithm (MAE (He et al., 2021)) to explore scaling and adapting pre-trained visual representations (PVRs).
**PVRs for embodied AI.** Inspired by the advancements in self-supervised learning, recent work has incorporated visual representation learning into the training pipelines for EAI agents (Parisi et al., 2022; Nair et al., 2022; Radosavovic et al., 2022; Ma et al., 2022; Khandelwal et al., 2022; Yadav et al., 2022; 2023). Specifically, Parisi et al. (2022) evaluate several PVRs trained with supervised or self-supervised learning on a range of EAI tasks, demonstrating promising results under a few-shot imitation learning evaluation protocol. Nair et al. (2022); Radosavovic et al. (2022); Ma et al. (2022) introduce new methods for pre-training visual representations using egocentric video data, targeting robotic manipulation tasks. Similarly, Khandelwal et al. (2022); Yadav et al. (2022; 2023) use pre-trained visual representations to improve performance on multiple visual navigation tasks. Closely related, Radosavovic et al. (2022) demonstrate that MAE pre-training on internet-scale video and image data can produce effective visual representations for robotic manipulation tasks. In contrast, our work studies a larger range of embodied AI tasks (collected in CortexBench) to understand how PVRs can provide a general-purpose foundation for embodied agents and explores in-domain model adaptation for various tasks.
**Scaling model and dataset size.** Several works have shown that scaling model and dataset size improves performance on vision tasks like image classification (Zhai et al., 2022; Tian et al., 2021; Goyal et al., 2021). In EAI, Radosavovic et al. (2022) find that scaling model and data sizes improves downstream policy performance for robotic manipulation tasks. While such prior works have been confined to narrow domains like image classification and robotic manipulation, our work is the first to study if scaling can provide better models on a broad range of EAI tasks.
**Adapting PVRs.** When and how to adapt PVRs for downstream applications remains an open research question (Kumar et al., 2022; Wijmans et al., 2022; Kirichenko et al., 2022; Lee et al., 2022; Goyal et al., 2022). In the context of EAI, Parisi et al. (2022) and Hansen et al. (2022) show that naively fine-tuning PVRs with behavior cloning can reduce performance in simulation, and Radosavovic et al. (2022) observe minimal gains in real-world manipulation tasks. In large-scale RL settings, Yadav et al. (2022; 2023) show that end-to-end fine-tuning considerably improves performance for indoor visual navigation. By comparison, Pari et al. (2021) find simple \(k\)-nearest-neighbor adaptation works well for real-world visual imitation tasks. Our work neither aims nor expects to be the final word on this fertile topic.
## 3 Benchmarking Progress Towards an Artificial Visual Cortex for Embodied AI
This section describes CortexBench, a curated set of EAI tasks designed to evaluate the ability of pre-trained visual representations (PVRs) to support a wide variety of EAI applications. Specifically, CortexBench includes 17 tasks drawn from 7 existing EAI benchmarks as shown in Figure 1. For each task, we delineate a downstream policy learning paradigm (e.g., few-shot imitation learning) and evaluation protocol that follows community standards in each domain (Section 3.2). By fixing the tasks and downstream learning methods as shown in Figure 2, we are able to focus our evaluations on the contribution of PVRs, which allows us to measure progress towards the development of an artificial visual cortex for embodied intelligence. We use CortexBench to conduct the largest and most comprehensive empirical study to-date of PVRs from prior work (Section 4).
We recommend two metrics to evaluate PVR performance: **Mean Success** and **Mean Rank**. **Mean Success**: the average success rate across all benchmarks. **Mean Rank**: for each benchmark, we rank PVRs based on their success rate; then we average these rankings across all benchmarks.
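The sketch below (not code from the benchmark release) shows how these two metrics can be computed from a table of success rates; the numbers are placeholders rather than values from Table 2, and ties are broken arbitrarily.

```python
import numpy as np

# success rates (%) for 3 hypothetical models (rows) on 7 benchmarks (columns)
success = np.array([
    [50.0, 90.0, 65.0, 60.0, 50.0, 65.0, 55.0],
    [70.0, 95.0, 80.0, 70.0, 25.0, 30.0, 35.0],
    [45.0, 75.0, 55.0, 62.0, 57.0, 52.0, 50.0],
])

mean_success = success.mean(axis=1)

# rank 1 = best (highest success) on each benchmark, averaged over benchmarks
order = (-success).argsort(axis=0)       # model indices sorted best-to-worst per benchmark
ranks = order.argsort(axis=0) + 1        # rank of each model on each benchmark
mean_rank = ranks.mean(axis=1)

for i, (ms, mr) in enumerate(zip(mean_success, mean_rank)):
    print(f"model {i}: mean success {ms:.1f}, mean rank {mr:.2f}")
```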
**Adroit (AD)**(Rajeswaran et al., 2018) is a suite of dexterous manipulation tasks in which an agent must control a 28-DoF anthropomorphic hand to perform a variety of tasks. We study the two hardest tasks from Adroit: Relocate and Reorient-Pen. In these tasks, an agent must manipulate an object into a goal position and orientation, where the goal must be inferred from the scene.
**MetaWorld (MW)**(Yu et al., 2020) is a collection of tasks in which an agent commands a Sawyer robot arm to manipulate objects in a tabletop environment. We consider five tasks from MetaWorld: Assembly, Bin-Picking, Button-Press, Drawer-Open, and Hammer, which follows the evaluations performed in (Nair et al., 2022).
**DeepMind Control (DMC)**(Tassa et al., 2018) is a widely studied benchmark for image-based continuous control in which an agent performs low-level locomotion and object manipulation tasks. We consider five tasks from DMC: Finger-Spin, Reacher-Hard, Cheetah-Run, Walker-Stand, and Walker-Walk, which follows the work in (Parisi et al., 2022).
**TriFinger (TF)** is a robot, introduced in (Wuthrich et al., 2020), that is composed of a three-finger hand with 3-DoF per finger. We consider two TriFinger tasks: Reach-Cube and Push-Cube. The Push-Cube task was part of the Real Robot Challenge 2020 (Real Robot Challenge 2020). We also consider the easier Reach-Cube task, which (Dittadi et al., 2021) also studies. In these tasks, the agent must either touch the cube with one finger (Reach-Cube) or push the cube and move it to a goal location (Push-Cube).
**Habitat**(Savva et al., 2019) is a simulation platform that includes several visual navigation tasks in which agents explore highly photo-realistic unseen 3D environments. We consider two semantic navigation tasks in Habitat: image-goal navigation (ImageNav) (Zhu et al., 2017) and object-goal navigation (ObjectNav) (Batra et al., 2020). In both tasks, the agent starts at a random location in an unknown 3D environment and must find a goal location - specified with an image taken from the goal location in ImageNav or with the name of an object (e.g., 'chair') in ObjectNav. Evaluation is conducted on unseen environments, thus testing the generalization capabilities of the visual encoder and policy.
**Habitat 2.0**(Szot et al., 2021) includes a set of mobile manipulation tasks in which an agent controls a Fetch robot with a 7-DoF arm, mobile base (Gu et al., 2022), and suction gripper to rearrange objects in apartment scenes. We consider a challenging version of the Mobile-Pick (MP) task from Habitat 2.0, in which an agent must pick up a target object from a cluttered receptacle (e.g., a counter) while starting from a position in which the object is outside of the robot's reach (thus, requiring navigation). We relax the dense goal specification as described in Appendix A.6.
### Downstream Policy Learning
Given a frozen PVR, an agent needs to learn a policy for each task. The EAI community has developed a range of policy learning algorithms from few-shot imitation learning (IL) to large-scale reinforcement learning (RL). For each task in CortexBench, we conform to the community standard for achieving state-of-art performance in that domain.
**"MuJoCo Tasks"** On the tasks from the Adroit, MetaWorld, and DMC suites we train policies using behavior cloning on a small number of expert demonstrations (100 for Adroit and DMC and 25 for MetaWorld), which follows Parisi et al. (2022); Nair et al. (2022). Specifically, we train policies for 100 epochs and report the average rollout performance on the test set for the best intermediate policy during training. For all tasks, the policy is a 3-layer MLP. When using vision transformers (ViT) based PVRs, we use the [CLS] token as input to the policy, and with ResNets we use features from the final convolutional layer after global average pooling. These design choices follow prior work such as Radosavovic et al. (2022); Nair et al. (2022).
**"Trifinger Tasks"** For TriFinger, we train policies using behavior cloning on 100 demonstrations per task. Specifically, we train a policy network composed of a 3-layer MLP for 100 epochs for Reach-Cube and 1,000 epochs
Figure 2: Overview of CortexBench. We assemble relevant datasets and visual representation learning algorithms to produce candidate Visual Cortex models, which are then evaluated using either reinforcement or imitation learning on a set of highly diverse tasks.
for Push-Cube. We report the average score for the best checkpoint over the course of training. As in the "MuJoCo Tasks", the input to the policy is the [CLS] token for ViT-based PVRs and average pooled features from the last convolutional layer for ResNet-based models.
**"Habitat Tasks"** We train ObjectNav policies with behavior cloning on 77k human demonstrations (Yadav et al., 2022c) collected by Habitat-Web (Ramrakhya et al., 2022b), totaling 360M environment steps. For ImageNav and the Habitat 2.0 Mobile-Pick task, we use RL for 500M environment steps with DD-PPO (Wijmans et al., 2020) and VER (Wijmans et al., 2022). We use patch representations for ViT-based PVRs and grid-features from last convolutional layer for ResNet models, passed through a compression layer (Savva et al., 2019a) for a lower dimensional representation for use by the policy layers, which is a 2-layer LSTM for navigation and a 2-layer GRU for manipulation.
More details on tasks and training are in Appendix A.6.
## 4 Do we already have a foundation model?
First, we evaluate several existing pre-trained visual representations (PVRs) on CortexBench to study whether existing open-sourced visual backbones can consistently perform well across all tasks. For all evaluations we consider frozen visual representations to disentangle the effect of learned representations from downstream task learning. Specifically, we include the following models:
* CLIP (Radford et al., 2021) Contrastive image-language pre-training objective; Trains on 400M images-text pairs from the internet (WIT); ViT-B backbone.
* R3M (Nair et al., 2022) Time-Contrastive video-language alignment pre-training objective; Trains on 5M images from a subset of Ego4D; ResNet-50 backbone.
* MVP (Radosavovic et al., 2022). Pre-trains with MAE; Trains on 4.5M images from Egocentric videos and ImageNet; ViT-B and ViT-L backbones.
* VIP (Ma et al., 2022). Goal-conditioned value function pre-training objective; Trains on 5M images from a subset of Ego4D; ResNet-50 backbone.
These models cover a wide range of architectures, pre-training objectives, and pre-training datasets, constituting a solid set for comparisons. Additionally, we include randomly initialized ViTs with both frozen weights and fine-tuned weights to assess the necessity of pre-training and the limitations of pure end-to-end in-domain learning.
Table 2 shows the evaluation results aggregated by benchmark; no single model excels in all cases. Among all of the models evaluated, R3M performs best on Adroit, MetaWorld, and DMControl, while MVP (ViT-L) performs best on Trifinger, ImageNav, and Mobile Pick. CLIP, on the other hand, achieves the best results on ObjectNav.
The variance in performance of existing PVRs on CortexBench is further illustrated in Figure 3. Indeed, PVRs can be successful on some benchmarks but fail on others; for instance, while CLIP is the best model for ObjectNav (ranking first), its performance is poor on Adroit and MetaWorld (ranking fifth). This variance highlights that we do not yet have a single strong-performing artificial visual cortex for embodied AI.
## 5 Analyzing the Scaling Hypothesis for EAI
The previous section investigated models pre-trained on datasets of varying size and diversity. Interestingly, while the model pre-trained on the largest dataset (CLIP) performs well on one benchmark (ObjectNav) it does not perform well across all tasks. We now ask: how much does the relevance and diversity of the pre-training dataset and the model size matter? To study this, we fix the pre-training objective - MAE (He et al., 2021) - and then vary the composition of the pre-training dataset and the size of the visual backbone (ViT-B with 86M parameters and ViT-L with 307M parameters). We measure the corresponding changes in performance on CortexBench. MAE is selected for these experiments due to the strong performance on CortexBench of the MVP (Radosavovic et al., 2022) models (Table 2), which use the MAE pre-training objective.
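To make the pre-training objective concrete, the toy sketch below implements the core of masked auto-encoding: patchify an image, hide a random 75% of the patches, reconstruct them, and compute the loss only on the hidden patches. It uses small MLPs instead of the ViT-B/ViT-L encoders and zeroes out masked patches rather than dropping them, so it illustrates the objective only, not the actual MAE architecture or training recipe.

```python
import torch
import torch.nn as nn

def patchify(imgs: torch.Tensor, p: int = 16) -> torch.Tensor:
    # (B, 3, H, W) -> (B, num_patches, p*p*3)
    b, c, h, w = imgs.shape
    x = imgs.unfold(2, p, p).unfold(3, p, p)                     # (B, C, H/p, W/p, p, p)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)

class TinyMAE(nn.Module):
    """Toy stand-in for the MAE objective: encode visible patches, reconstruct the masked ones."""
    def __init__(self, patch_dim: int = 768, width: int = 256, mask_ratio: float = 0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Sequential(nn.Linear(patch_dim, width), nn.GELU(), nn.Linear(width, width))
        self.decoder = nn.Sequential(nn.Linear(width, width), nn.GELU(), nn.Linear(width, patch_dim))

    def forward(self, imgs: torch.Tensor) -> torch.Tensor:
        patches = patchify(imgs)                                  # (B, N, D)
        b, n, _ = patches.shape
        mask = torch.rand(b, n, device=imgs.device) < self.mask_ratio
        visible = patches * (~mask).unsqueeze(-1).float()         # zero masked patches (real MAE drops them)
        recon = self.decoder(self.encoder(visible))
        # reconstruction loss is computed only on the masked patches
        return ((recon - patches) ** 2).mean(dim=-1)[mask].mean()

mae = TinyMAE()
loss = mae(torch.randn(4, 3, 224, 224))
loss.backward()
```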
### Constructing a Pre-training Dataset for EAI
To evaluate the impact of dataset size and diversity on our benchmark tasks, which involve various navigation and manipulation challenges, we employ a combination of nine datasets. These datasets include Ego4D (Grauman et al., 2022), 100 Days of Hands (100DOH) (Shan et al., 2020), Something-Something v2 (SS-V2) (Goyal et al., 2017), and Epic Kitchens (Damen et al., 2018).
Figure 3: Rank distribution per model. For every model, we compute the ranks it achieved on each of the 7 benchmarks. We visualize them as vertical lines, where each rank number \(x\) receives a tick if that model achieved such rank \(x\). For instance, MVP (ViT-L) achieves ranks 1,1,1,2,3,3,4 across the 7 benchmarks. Significant variability exists in the performance of PVRs across benchmarks.
This subset consists of videos showcasing people manipulating objects and is comparable to the datasets used in MVP (Radosavovic et al., 2022). Additionally, we use two egocentric indoor navigation datasets: the Real Estate 10K dataset (Zhou et al., 2018) and the OpenHouse24 dataset (described in Appendix A.2.1). Finally, we include ImageNet (Deng et al., 2009) as a representative static internet image dataset.
We strategically select combinations of these datasets (listed in Table 3 and below) to answer the following questions:
* What is the impact of scaling dataset size and diversity?
* How does the inclusion of _less-relevant_ datasets influence the performance of PVRs on embodied AI tasks?
**Ego4D**(Grauman et al., 2022) is our base pre-training dataset and encompasses a wide range of egocentric videos consisting of _daily life activities_ such as home, leisure, transportation, and workplace activities.
**Ego4D+M** extends **Ego4D** with three object manipulation-centric datasets: 100DOH, SS-v2, and Epic Kitchens. This results in a dataset comprising 3.5 million frames that is primarily focused on manipulation scenarios.
**Ego4D+N** extends **Ego4D** with two egocentric indoor navigation datasets: OpenHouse24 and RealEstate10K. This results in a dataset with 3.5 million frames, which is similar in size to **Ego4D+M**, but is more diverse because it contains a larger proportion of navigation data than the manipulation-centric datasets **Ego4D** and **Ego4D+M**.3
Footnote 3: While **Ego4D** does contain navigation data (e.g., people moving from location to another), the dataset is heavily skewed towards object manipulation activities.
**Ego4D+MN** combines **Ego4D** with both the three object manipulation-centric datasets and the two indoor navigation datasets, resulting in a dataset with 4.3 million frames. While larger than **Ego4D+M** and **Ego4D+N**, it does not include any new types of data beyond the manipulation and navigation videos in the previous subsets. Thus, it is no more diverse than **Ego4D+N** (which includes both types of data).
**Ego4D+MNI** includes **Ego4D**, all of the manipulation-centric and indoor navigation datasets, and ImageNet for a total of 5.6M frames. This dataset allows us to explore the impact of static internet images on our benchmark tasks.
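As a small illustration, the subsets above can be assembled by concatenating per-source frame datasets. In the sketch below the sources are dummy placeholders (only their lengths matter), and the manipulation, navigation, and ImageNet frame counts are inferred by differencing the subset totals in Table 3.

```python
import torch
from torch.utils.data import ConcatDataset, Dataset

class DummyFrames(Dataset):
    """Placeholder for a decoded-frame dataset; a real source would return actual images."""
    def __init__(self, n: int):
        self.n = n
    def __len__(self) -> int:
        return self.n
    def __getitem__(self, idx: int) -> torch.Tensor:
        return torch.zeros(3, 224, 224)

ego4d    = DummyFrames(2_790_520)
manip    = DummyFrames(747_771)      # 100DOH + SS-v2 + Epic Kitchens (inferred from Table 3)
nav      = DummyFrames(802_529)      # OpenHouse24 + RealEstate10K (inferred from Table 3)
imagenet = DummyFrames(1_281_167)

pretrain_sets = {
    "Ego4D":     ConcatDataset([ego4d]),
    "Ego4D+M":   ConcatDataset([ego4d, manip]),
    "Ego4D+N":   ConcatDataset([ego4d, nav]),
    "Ego4D+MN":  ConcatDataset([ego4d, manip, nav]),
    "Ego4D+MNI": ConcatDataset([ego4d, manip, nav, imagenet]),
}
for name, ds in pretrain_sets.items():
    print(f"{name}: {len(ds):,} frames")
```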
### Scaling Hypothesis Findings
We now turn to analyzing the effect of increasing model size, dataset size, and dataset diversity. The full set of results is shown in Figure 4 and Table 4. The key takeaways are:
**Model Size.** We find that increasing model size positively impacts performance on CortexBench. Specifically, in Figure 4(a), we find that with all pre-training datasets, switching from ViT-B to ViT-L improves average performance on CortexBench. However, in Table 4, we find exceptions where this general trend does not hold. For instance, when pre-trained on **Ego4D+MNI**, the ViT-B model outperforms the ViT-L model on MetaWorld and Trifinger.
**Dataset Size and Diversity.** Figure 4(b) shows that, in general, increasing dataset size and diversity leads to improved performance. Models are ordered from right to left by increasing size and diversity of their pre-training dataset,
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline & & \multicolumn{6}{c}{Imitation Learning} & \multicolumn{3}{c}{Reinforcement Learning} & Mean \\ \cline{2-10} \# & Model & Adroit & MetaWorld & DMControl & Tri-Finger & ObjectNav & ImageNav & Mobile Pick & Rank & Success \\ \hline
1 & Best prior result (any setting) & 75 & 80 & 77 & - & 70.4 & 82.0 & - & \\
2 & Best prior result (Frozen PVR) & 75 & 80 & 77 & - & 54.4 & 61.8 & - & \\ \hline
3 & Random (ViT-B) Frozen & \(2.0\pm 2.0\) & \(0.5\pm 0.5\) & \(10.1\pm 0.6\) & \(57.8\pm 0.5\) & \(19.2\pm 0.9\) & \(42.1\pm 0.8\) & \(10.8\pm 1.4\) & 7.2 & 20.4 \\
4 & Random (ViT-L) Frozen & \(2.7\pm 1.8\) & \(0.5\pm 0.5\) & \(9.1\pm 0.2\) & \(57.2\pm 0.9\) & \(19.3\pm 0.9\) & \(45.2\pm 0.8\) & \(20.6\pm 1.8\) & 6.9 & 22.1 \\
5 & Random (ViT-B) Fine-tuned & \(44.0\pm 2.0\) & \(49.9\pm 7.3\) & \(43.5\pm 2.4\) & \(56.1\pm 1.3\) & \(28.5\pm 1.0\) & \(62.5\pm 0.7\) & \(47.6\pm 2.2\) & 5.3 & 47.4 \\ \hline
6 & MVP (ViT-B) & \(48.0\pm 3.3\) & \(91.2\pm 2.9\) & \(65.9\pm 2.4\) & \(59.7\pm 0.3\) & \(51.2\pm 1.1\) & \(64.7\pm 0.7\) & \(56.0\pm 2.2\) & 3.1 & 62.4 \\
7 & MVP (ViT-L) & \(53.3\pm 4.1\) & \(87.5\pm 3.4\) & \(69.2\pm 1.5\) & \(74.1\pm 0.3\) & \(55.0\pm 1.1\) & \(68.1\pm 0.7\) & \(65.4\pm 2.1\) & 2.1 & 67.5 \\
8 & CLIP (ViT-B) & \(47.3\pm 3.0\) & \(75.5\pm 3.4\) & \(55.5\pm 1.4\) & \(62.0\pm 0.5\) & \(56.6\pm 1.1\) & \(52.2\pm 0.8\) & \(49.8\pm 2.2\) & 3.9 & 57.0 \\
9 & VIP (RN-50) & \(54.0\pm 4.8\) & \(90.1\pm 2.2\) & \(72.5\pm 2.7\) & \(66.7\pm 0.2\) & \(26.4\pm 1.0\) & \(48.8\pm 0.8\) & \(7.2\pm 1.2\) & 4.0 & 52.3 \\
10 & R3M (RN-50) & \(73.3\pm 2.0\) & \(96.0\pm 1.1\) & \(81.1\pm 0.7\) & \(69.2\pm 0.8\) & \(22.7\pm 0.9\) & \(30.6\pm 0.7\) & \(33.2\pm 1.1\) & 3.4 & 58.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of different **frozen** pre-trained visual representations on a diverse suite of evaluation domains. Best prior results means that the results are the best reported in literature prior to this work. Overall, we find that no single PVR consistently performs the best across all benchmarks. However, we find that several of these pre-trained models often outperform a random training from scratch baseline. Best prior results sources (row 1): Adroit and MetaWorld approximated from (Nair et al., 2022), DMControl from (Parisi et al., 2022), ImageNav from (Yadav et al., 2022), ObjectNav from (Ramrakhya et al., 2023). Frozen PVR Sources (row 2): Adroit, MetaWorld, and DMControl are the same as SOTA, ImageNav from (Yadav et al., 2022), ObjectNav from (Deitke et al., 2022).
\begin{table}
\begin{tabular}{l c} \hline \hline Name & Frames Used \\ \hline
**Ego4D** & 2,790,520 \\
**Ego4D+M** (Manipulation) & 3,538,291 \\
**Ego4D+N** (Navigation) & 3,593,049 \\
**Ego4D+MN** (Manipulation, Navigation) & 4,340,820 \\
**Ego4D+MN** (Manipulation, Navigation, ImageNet) & 5,621,987 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Datasets assembled to study effects of pre-training dataset size, diversity, and relevance – the largest of which (**Ego4D+MNI**) has 5.6M frames. For a detailed breakdown of the composition of each dataset, see Table 6 in Appendix A.2
and we mostly see improvements for both ViT-B and ViT-L.
For instance, **Ego4D+M** slightly improves upon **Ego4D** by 0.6 and 0.9 points (62.2 \(\rightarrow\) 62.8 and 63.5 \(\rightarrow\) 64.4) in the case of ViT-B and ViT-L, respectively. The gains with **Ego4D+N** are larger and it outperforms **Ego4D** by 1.6 points using ViT-B (62.2 \(\rightarrow\) 63.8) and by 3.6 points for ViT-L (63.5 \(\rightarrow\) 67.1). It is interesting to note that **Ego4D+N** has a larger improvement over the base **Ego4D** dataset than **Ego4D+M**, even though **Ego4D+N** and **Ego4D+M** dataset are similar in size. In these results, we find that increasing diversity by adding indoor navigation data improves performance more than adding additional manipulation data to **Ego4D**.
Additionally, we find that pre-training on **Ego4D+MN** is roughly on par with pre-training on **Ego4D+N**. We see a 0.3 and 0.1 point difference (63.8 \(\rightarrow\) 64.1 and 67.1 \(\rightarrow\) 67.2) for ViT-B and ViT-L, respectively, even though **Ego4D+MN** has about 800K more training frames. Together with the results from above this demonstrates that increasing data diversity seems to matter more than simply increasing dataset size.
Next, we analyze the effect of including static internet image data. Specifically, we find that adding ImageNet positively impacts average performance on CortexBench. For example, models pre-trained on **Ego4D+MNI** outperform those pre-trained on **Ego4D+MN** by 1.9 points (64.1 \(\rightarrow\) 66.2) for ViT-B and 1.5 points (67.2 \(\rightarrow\) 68.7) for ViT-L. Interestingly, these results demonstrate that including static internet images can significantly boost performance on EAI tasks. This finding further highlights the importance of seeking data diversity to build better representations.
Finally, on average, our largest model (ViT-L) pre-trained on all datasets (**Ego4D+MNI**) achieves the best average rank across all benchmark tasks (Table 4 row 11), with a mean rank of 2.4. This performance is superior to the second-best model (**Ego4D+MN** ViT-L, Table 4 row 9) that has an average rank of 3.1. We call this model **VC-1**, and will open-source it.
However, upon further dis-aggregation, we find that while **VC-1** performs best on average, it is not the best for each benchmark. For example, the best model for Mobile Pick, a mobile manipulation task, is a ViT-L trained on **Ego4D+M**, and the best model for ImageNav, an indoor navigation task, is the ViT-L trained on **Ego4D+N**. These findings suggest that task-specific pre-training datasets could enhance the performance of models on individual tasks. However, it is important to note that this approach would lead to multiple pre-trained models, each tailored to a specific task, and not a unified visual foundation model.
### How does VC-1 compare to existing PVRs?
We now compare **VC-1** with existing PVRs from Section 4. On average, it ranks as the best model across all benchmarks (Figure 4(c)). We focus on R3M, MVP, and CLIP, since they achieved the highest success in at least one benchmark; we also compare to fine-tuning from scratch to demonstrate the impact of end-to-end fine-tuning. In terms of mean success, **VC-1** (Table 4 row 11) outperforms MVP (ViT-L) by +1.2 points (67.5 \(\rightarrow\) 68.7), R3M by +10.7 (58.0 \(\rightarrow\) 68.7), CLIP by +11.7 (57.0 \(\rightarrow\) 68.7), and end-to-end fine-tuning from scratch by +19.6 (49.1 \(\rightarrow\) 68.7).
Impressively, **VC-1** outperforms CLIP _on every benchmark_ (Figure 5), despite training on a 70X smaller dataset, emphasizing the importance of egocentric interaction datasets. **VC-1** also outperforms fine-tuning from scratch on every benchmark, indicating that PVRs trained with out-of-domain data can outperform end-to-end learning.
When compared to R3M, **VC-1** demonstrates superior performance on average and on 4 out of 7 benchmarks (Figure 5). It is outperformed by R3M on Adroit, MetaWorld and DMControl benchmarks. It is unclear whether this gap is caused by the different training objective, pre-training dataset, or backbone. This highlights the need for comparable evaluations on benchmarks like CortexBench.
The MVP model is the most similar in terms of results, architecture, and pre-training objective to **VC-1**, with the
Figure 4: Scaling experiments: Visualizing model performance averaged across all benchmarks in Table 4. Overall, we demonstrate modest but positive scaling trends in both (a) scaling model size, and (b) dataset diversity. c) Average ranking across all benchmarks. We compare existing PVRs (baselines) (Table 2) and scaling models (Table 4) by showcasing their ranking across all benchmarks, **VC-1**: **Ego4D+MNI** (ViT-L) achieves the highest average rank.
main difference being the addition of a _convolutional stem_ in MVP. **VC-1** outperforms MVP (ViT-L) by 1.2 points on mean success and performs better on four out of seven benchmarks, likely due to the use of a more diverse dataset.
Overall, **VC-1** is an effective model across a broad set of tasks and thus a reasonable starting point for novel EAI problems. However, it is not always the best performing model for a specific task. This leads us to theorize that there is a domain gap that might be bridged with dataset engineering or adaptation of the PVR.
## 6 Adapting VC-1
In prior sections, we focused on evaluating **VC-1** as a **frozen** PVR for EAI. We now study if _adapting_ **VC-1** can improve results in downstream tasks. We use a broad definition of adaptation (Bommasani et al., 2021), which, in the context of large pre-trained foundation models, can take several forms from simple prompting (Wei et al., 2022), to selectively updating some or all weights of the backbone (Kumar et al., 2022; Hansen et al., 2022; Yadav et al., 2023).
In the context of PVRs for EAI, adaptation can serve at least two purposes. The first is **task-specialization** in the feature extraction stage. Since **VC-1** was trained with MAE (He et al., 2021), it captures features that are generally useful for reconstructing images. Adaptation can specialize the visual backbone to extract features required for performing specific downstream tasks such as object rearrangement. Secondly, adaptation can also help **mitigate domain-gap** that might exist between pre-training and evaluation settings. In general, domain-gap can arise for several reasons such as poor coverage in pre-training data collection or deployment in novel conditions (e.g., on robots) not seen in the pre-training data (e.g., in human-centric video datasets). Domain gap is naturally instantiated in our setup, since **VC-1** was pre-trained on real-world, human video data while our downstream evaluation in CortexBench uses simulated EAI domains with different visual characteristics.
**End-to-end (E2E) fine-tuning** with a task-specific loss function can in-principle capture both of the aforementioned benefits of adaptation, and is widely used in computer vision literature (He et al., 2020; Caron et al., 2021; He et al., 2021; Baevski et al., 2022). To study E2E fine-tuning of **VC-1**, we use the same policy learning methods described in Section 3.2, except we allow the gradients to flow through the **VC-1** backbone and update the weights.
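Operationally, the difference between the frozen-PVR protocol and E2E fine-tuning is simply whether gradients flow into the visual backbone. The sketch below illustrates that toggle; the stand-in backbone, the discrete action space, and the smaller backbone learning rate are assumptions made for illustration, not settings reported here.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))   # stand-in for a ViT PVR
policy_head = nn.Linear(512, 6)

E2E_FINETUNE = True    # False reproduces the frozen-PVR protocol

for p in backbone.parameters():
    p.requires_grad_(E2E_FINETUNE)

param_groups = [{"params": policy_head.parameters(), "lr": 1e-3}]
if E2E_FINETUNE:
    # gradients also update the backbone; a smaller backbone lr is a common (assumed) choice
    param_groups.append({"params": backbone.parameters(), "lr": 1e-4})
opt = torch.optim.Adam(param_groups)

imgs, actions = torch.randn(8, 3, 64, 64), torch.randint(0, 6, (8,))
feats = backbone(imgs) if E2E_FINETUNE else backbone(imgs).detach()
loss = nn.functional.cross_entropy(policy_head(feats), actions)
opt.zero_grad(); loss.backward(); opt.step()
```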
In Table 5, we find an interesting mixed result. In domains that involve large-scale IL or RL (ObjectNav, ImageNav, and Mobile Pick), we use the strategy proposed in Yadav et al. (2023) and observe that adapting **VC-1** with E2E fine-tuning significantly improves performance as compared to using a frozen **VC-1** backbone. Specifically, we see an improvement in ObjectNav success rate (SR) of +7.4 (\(60.3\to 67.7\)), ImageNav SR of +11.3 (\(70.3\to 81.6\)), and
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline \# & Benchmark & Adroit & Meta-World & DMControl & Trifinger & ObjectNav & ImageNav & Mobile Pick & Mean Rank & Mean Success \\ \hline
1 & Best prior result (any setting) & 75 & 80 & 77 & & 70.4 & 82.0 & & & \\
2 & Rand (VT-B) fine-tuned & 44.0 & 49.9 & 34.2 & 55.0 & 28.5 & 65.0 & 47.6 & & \\
3 & Best result Table 2 (Frozen PVR) & 73.3 & 96.0 & 81.1 & 74.1 & 56.6 & 68.1 & 65.4 & & \\ \hline
4 & Ego4D (VIT-B) & 48.7 \(\pm\) 1.3 & 86.1 \(\pm\) 2.1 & 64.1 \(\pm\) 2.3 & 68.3 \(\pm\) 1.1 & 46.8 \(\pm\) 1.1 & 64.0 \(\pm\) 0.7 & 57.4 \(\pm\) 2.2 & 8.6 & 62.2 \\
5 & Ego4D (VIT-L) & 50.0 \(\pm\) 1.2 & 92.9 \(\pm\) 2.4 & 60.8 \(\pm\) 3.3 & 69.7 \(\pm\) 0.5 & 47.6 \(\pm\) 1.1 & 55.8 \(\pm\) 0.8 & 67.6 \(\pm\) 2.1 & 5.9 & 63.5 \\
6 & Ego4D+N (VIT-B) & 50.0 \(\pm\) 2.4 & 86.4 \(\pm\) 2.9 & 59.5 \(\pm\) 2.4 & 67.8 \(\pm\) 1.3 & 54.7 \(\pm\) 1.1 & 68.7 \(\pm\) 0.7 & 59.4 \(\pm\) 2.2 & 7.2 & 63.8 \\
7 & Ego4D+N (VIT-L) & 54.0 \(\pm\) 1.2 & 89.1 \(\pm\) 2.6 & 66.4 \(\pm\) 1.7 & 66.9 \(\pm\) 0.4 & 57.4 \(\pm\) 1.1 & 70.5 \(\pm\) 0.7 & 65.2 \(\pm\) 2.1 & 3.5 & 67.1 \\
8 & Ego4D+M (VIT-B) & 51.3 \(\pm\) 2.4 & 83.5 \(\pm\) 2.6 & 64.3 \(\pm\) 1.8 & 69.1 \(\pm\) 0.4 & 47.3 \(\pm\) 1.1 & 65.8 \(\pm\) 0.7 & 59.8 \(\pm\) 2.2 & 7.0 & 63.0 \\
9 & Ego4D+M (VIT-L) & 52.0 \(\pm\) 1.3 & 88.3 \(\pm\) 3.2 & 64.7 \(\pm\) 2.4 & 64.7 \(\pm\) 0.9 & 47.3 \(\pm\) 1.1 & 65.5 \(\pm\) 0.7 & 68.6 \(\pm\) 2.1 & 6.0 & 64.4 \\
10 & Ego4D+N(VIT-B) & 48.7 \(\pm\) 2.4 & 83.5 \(\pm\) 5.2 & 64.2 \(\pm\) 1.9 & 70.3 \(\pm\) 0.5 & 52.8 \(\pm\) 1.1 & 66.9 \(\pm\) 0.7 & 58.6 \(\pm\) 2.2 & 6.9 & 64.1 \\
11 & Ego4D+N(VIT-L) & 52.7 \(\pm\) 4.2 & 86.7 \(\pm\) 3.9 & 69.7 \(\pm\) 3.3 & 72.4 \(\pm\) 0.5 & 58.4 \(\pm\) 1.1 & 69.1 \(\pm\) 0.7 & 61.2 \(\pm\) 2.2 & 3.1 & 67.2 \\
12 & Ego4D+N(VIT-B) & 54.0 \(\pm\) 4.0 & 89.6 \(\pm\) 3.9 & 63.8 \(\pm\) 2.7 & 72.2 \(\pm\) 0.6 & 55.4 \(\pm\) 1.1 & 67.9 \(\pm\) 0.7 & 60.6 \(\pm\) 2.2 & 4.4 & 66.2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Average success per benchmark of scaling hypothesis models. We highlight the best model from the set of models trained to evaluate the scaling hypothesis in bold. We find that on average the **VC-1** Ego4D+**MNI** (VIT-L) model performs best, but is not the best for each benchmark. Our best model outperforms in-domain from scratch learning on all benchmarks.
Figure 5: Comparison of **VC-1** with existing PVRs. **VC-1** matches or exceeds existing PVRs on all benchmarks except R3M on AD, MW, and DMC, indicating an opportunity for model adaptation.
Mobile Pick SR of +10.8 (63.2 \(\rightarrow\) 74.0). Overall, these results suggest that E2E fine-tuning of **VC-1** can achieve the benefits of both task-specialization and domain adaptation. Additional qualitative analysis is provided in Appendix A.4.
However, in few-shot IL domains (Adroit, MetaWorld, DMC, and Tri-Finger), we find E2E fine-tuning does not result in performance improvement. In fact, in most few-shot IL domains, it leads to a significant drop in performance, a finding that is consistent with prior work (Parisi et al., 2022; Hansen et al., 2022b). We hypothesize that the poor performance of E2E fine-tuning in few-shot IL domains is caused by overfitting, due to fine-tuning a large model with 307M parameters on a small dataset (\(\leq 50K\) frames).
**MAE adaptation to mitigate domain-gap.** As an alternative to E2E fine-tuning, we explore adapting **VC-1** with self-supervised learning (SSL). Specifically, in MAE adaptation we continue training the backbone network with the MAE (He et al., 2021) pre-training objective on task-specific data. Then, we freeze these adapted representations and use them to learn task-specific policies. We note that in MAE adaptation, the backbone is adapted using the same data that is used for training the policy (e.g., frames from expert demonstrations), and no additional in-domain datasets are used. While this adaptation strategy cannot address task-specialization, it may serve to mitigate domain gap.
For MAE adaptation, we initialize with **VC-1** weights, and then train with MAE for 100 epochs. In domains where expert demonstrations are available (i.e., Adroit, MetaWorld, DMControl, Tri-Finger, and ObjectNav), we use the RGB frames from these demonstrations for adaptation. In the remaining two benchmarks (ImageNav and Mobile Pick) we sample frames from training environments to create adaptation datasets. Finally, to isolate the importance of initializing with **VC-1** weights, we train in-domain MAE baselines by starting from a random initialization and then following the same approach used for MAE adaptation.
In Table 5, we observe that MAE adaptation substantially improves performance in few-shot learning domains. Specifically, on Adroit performance improves by +12.7 (59.3 \(\rightarrow\) 72.0), MetaWorld by +7.2 (88.8 \(\rightarrow\) 96.0), DMC by +14.0 (66.9 \(\rightarrow\) 80.9), Trifinger by +7.4 (72.7 \(\rightarrow\) 80.1). Interestingly, in DMC and Trifinger, the in-domain MAE baseline (Table 5 row 3) performs surprisingly well, highlighting the importance of in-domain data for representation learning.
Finally, in large-scale IL or RL domains (ObjectNav, ImageNav, and Mobile Pick), we find MAE adaptation results in small reductions in performance from **VC-1** (Table 5 row 4 vs. 6). In these domains, where substantial amounts of data are available for task-specific training (large-scale IL or RL), we find that E2E fine-tuning is the superior approach for adaptation. In aggregate, these results suggest that MAE adaptation should be explored particularly in few-shot domains or when E2E fine-tuning leads to poor performance.
Overall, we find _adapting_**VC-1** results in competitive performance on all benchmarks. On MetaWorld, DMControl, and Tri-Finger **VC-1** with MAE adaptation (Table 5 row 6) is comparable with the best known results (SoTA) and the best results from previous sections (Table 5 rows 1 and 2). Similarly, on ImageNav and Mobile Pick, **VC-1** with E2E fine-tuning (Table 5 row 5) matches or exceeds the best results. Together, these results demonstrate that **adaptation** is a powerful paradigm for using PVRs for EAI.
## 7 Discussion
This work introduced CortexBench, which comprises 17 different embodied AI (EAI) tasks spanning locomotion, indoor navigation, and dexterous and mobile manipulation. Enabled by CortexBench, we performed the most comprehensive study to date of visual foundation models for EAI. Specifically, we evaluated state-of-the-art open-sourced foundation models and found that we do not yet have a strong backbone for all tasks. However, models trained via masked auto-encoders (MAEs) are the most promising. Furthermore, our study finds that naively scaling model size and pre-training data diversity does not improve performance universally across all tasks, but does so on average. Finally, we find that adapting our largest pre-trained model (**VC-1**) results in performance that is competitive with or outperforms the best known results on all benchmarks in CortexBench.
One of our primary contentions is that in order for the research community to make progress on foundation models for EAI, we need to develop strong benchmarks - for a PVR to be foundational, it must be broadly applicable. Furthermore, as a community we should converge on best practices and a rigorous reproducible experimental methodology; we hope CortexBench will help the community make progress towards that.
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline \# & Method & Adroit & MetaWorld & DMControl & Tri-Finger & ObjectNav & ImageNav & Mobile Pick \\ \hline
1 & Best prior result (any setting) & 75 & 80 & 77 & - & 70.4 & 82.0 & - \\
2 & Best result from our experiments & 73.3 & 96.0 & 81.1 & 74.1 & 60.3 & 70.5 & 68.6 \\
3 & In-domain MAE baseline & 47.3 & 83.4 & 77.6 & 80.4 & 39.9 & 47.6 & 51.6 \\ \hline
4 & **VC-1** & 59.3 & 88.8 & 66.9 & 71.7 & 60.3 & 70.3 & 63.2 \\
5 & **VC-1** E2E fine-tuning & 15.9 & 22.7 & 6.7 & 70.9 & 67.7 & 81.6 & 74.0 \\
6 & **VC-1** MAE adaptation & 72.0 & 96.0 & 80.9 & 80.6 & 57.4 & 67.0 & 62.4 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Adapting **VC-1** with end-to-end fine-tuning or self-supervised learning (MAE) on in-domain data leads to substantial gains.
## Acknowledgements
The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
|
2309.12050 | Production rates of hidden-charm pentaquark molecules in $Λ_b$
decays | The partial decay widths and production mechanism of the three pentaquark
states, $P_{\psi}^{N}(4312)$, $P_{\psi}^{N}(4440)$, and $P_{\psi}^{N}(4457)$,
discovered by the LHCb Collaboration in 2019, are still under debate. In this
work, we employ the contact-range effective field theory approach to construct
the $\bar{D}^{(*)}\Sigma_{c}^{(*)}$, $\bar{D}^{*}\Lambda_c$,
$\bar{D}\Lambda_c$, $J/\psi p$, and $\eta_c p$ coupled-channel interactions to
dynamically generate the multiplet of hidden-charm pentaquark molecules by
reproducing the masses and widths of $P_{\psi}^{N}(4312)$,
$P_{\psi}^{N}(4440)$, and $P_{\psi}^{N}(4457)$. Assuming that the pentaquark
molecules are produced in the $\Lambda_b$ decay via the triangle diagrams,
where $\Lambda_{b}$ firstly decays into $D_{s}^{(\ast)}\Lambda_{c}$, then
$D_{s}^{(\ast)}$ scatters into $\bar{D}^{(\ast)}K$, and finally the molecules
are dynamically generated by the $\bar{D}^{(\ast)}\Lambda_{c}$ interactions, we
calculate the branching fractions of the decays $\Lambda_b \to {P_{\psi}^{N}}K$
using the effective Lagrangian approach. With the partial decay widths of these
pentaquark molecules, we further estimate the branching fraction of the decays
$ \Lambda_b \to ( P_{\psi}^{N} \to J/\psi p )K $ and $ \Lambda_b \to (
P_{\psi}^{N}\to \bar{D}^* \Lambda_c )K $. Our results show that the pentaquark
states $P_{\psi}^{N}(4312)$, $P_{\psi}^{N}(4440)$, and $P_{\psi}^{N}(4457)$ as
hadronic molecules can be produced in the $\Lambda_b$ decay, and on the other
hand their heavy quark spin symmetry partners are invisible in the $J/\psi p$
invariant mass distribution because of the small production rates. Our studies
show that is possible to observe some of the pentaquark states in the
$\Lambda_b\to \bar{D}^*\Lambda_c K$ decays. | Ya-Wen Pan, Ming-Zhu Liu, Li-Sheng Geng | 2023-09-21T13:17:39Z | http://arxiv.org/abs/2309.12050v2 | # Production rates of hidden-charm pentaquark molecules in \(\Lambda_{b}\) decays
###### Abstract
The partial decay widths and production mechanism of the three pentaquark states, \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\), discovered by the LHCb Collaboration in 2019, are still under debate. In this work, we employ the contact-range effective field theory approach to construct the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\), \(\bar{D}^{*}\Lambda_{c}\), \(\bar{D}\Lambda_{c}\), \(J/\psi p\), and \(\eta_{c}p\) coupled-channel interactions to dynamically generate the multiplet of hidden-charm pentaquark molecules by reproducing the masses and widths of \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\). Assuming that the pentaquark molecules are produced in the \(\Lambda_{b}\) decay via the triangle diagrams, where \(\Lambda_{b}\) firstly decays into \(D_{s}^{(\ast)}\Lambda_{c}\), then \(D_{s}^{(\ast)}\) scatters into \(\bar{D}^{(\ast)}K\), and finally the molecules are dynamically generated by the \(\bar{D}^{(\ast)}\Lambda_{c}\) interactions, we calculate the branching fractions of the decays \(\Lambda_{b}\to P_{\psi}^{N}K\) using the effective Lagrangian approach. With the partial decay widths of these pentaquark molecules, we further estimate the branching fraction of the decays \(\Lambda_{b}\to(P_{\psi}^{N}\to J/\psi p)K\) and \(\Lambda_{b}\to(P_{\psi}^{N}\to\bar{D}^{*}\Lambda_{c})K\). Our results show that the pentaquark states \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\) as hadronic molecules can be produced in the \(\Lambda_{b}\) decay, and on the other hand, their heavy quark spin symmetry partners are invisible in the \(J/\psi p\) invariant mass distribution because of the small production rates. Our studies show that it is possible to observe some of the pentaquark states in the \(\Lambda_{b}\to\bar{D}^{*}\Lambda_{c}K\) decays.
## I Introduction
In 2015, two pentaquark states \(P_{\psi}^{N}(4380)\) and \(P_{\psi}^{N}(4450)\) were observed by the LHCb Collaboration in the \(J/\psi p\) invariant mass distributions of the \(\Lambda_{b}\to J/\psi pK\) decay [1]. Four years later, they updated the data sample and found that the original \(P_{\psi}^{N}(4450)\) state splits into two states, \(P_{\psi}^{N}(4440)\) and \(P_{\psi}^{N}(4457)\), and a new state \(P_{\psi}^{N}(4312)\) emerges below the \(\bar{D}\Sigma_{c}\) threshold [2]. Recently, the LHCb Collaboration found evidence for the hidden-charm pentaquark state \(P_{\psi}^{N}(4337)\) in the \(B_{s}\) meson decay [3], as well as for the hidden-charm pentaquark state with strangeness \(P_{\psi s}^{\Lambda}(4459)\) in the \(\Xi_{b}\) decay [4], the existence of which needs to be confirmed because at present the significance of the observation is only about \(3\sigma\). Very recently, the LHCb Collaboration reported another pentaquark state \(P_{\psi s}^{\Lambda}(4338)\) in the \(B\) decay with a high significance [5]. In this work, we only focus on the three pentaquark states \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\), which have been extensively studied in a series of theoretical works. We note that although the \(\bar{D}^{(*)}\Sigma_{c}\) molecular interpretations for these pentaquark states are the most popular [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23], there exist other explanations, e.g., hadro-charmonia [24], compact pentaquark states [25; 26; 27; 28; 29; 30; 31], virtual states [32], triangle singularities [33], and cusp effects [34].
From the perspective of masses, the three pentaquark states can be nicely arranged into the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\) multiplet. However, their widths obtained in the hadronic molecule picture always deviate a bit from the experimental data. In Ref. [35], we found that their partial decay widths into three-body final states \(\bar{D}^{(*)}\Lambda_{c}\pi\) are only of the order of a few hundred keV, which indicates that the two-body decay modes are dominant. The chiral unitary study found that the partial decay widths of \(P_{\psi}^{N}\to J/\psi p(\eta_{c}p)\) account for the largest portion of their total decay widths [9], while the study based on the triangle diagrams shows that the three \(P_{\psi}^{N}\) states mainly decay into \(\bar{D}^{(*)}\Lambda_{c}\)[36]. In Refs. [11; 12; 37], the authors argued that the one-pion exchange is responsible for the \(\bar{D}^{(*)}\Sigma_{c}\to\bar{D}^{(*)}\Lambda_{c}\) interactions, and therefore dominantly contributes to the widths of the pentaquark states. From these studies, we conclude that these three molecules mainly decay via two modes: hidden-charm \(J/\psi p(\eta_{c}p)\) and open-charm \(\bar{D}^{(*)}\Lambda_{c}\). Considering the upper limit of the branching fraction \(\mathcal{B}(P_{\psi}^{N}\to J/\psi p)<2\%\) measured in the photoproduction processes [38; 39], the partial decays \(P_{\psi}^{N}\to\bar{D}^{(*)}\Lambda_{c}\) are expected to play a dominant role. However, such small upper limits cannot be easily reconciled with the current LHCb data [40]. In this work, we employ the contact-range effective field theory (EFT) approach to revisit the partial decay widths of the hidden-charm pentaquark molecules by studying their two- and three-body decays.
Up to now, the hidden-charm pentaquark states have only been observed in the exclusive \(b\) decays in proton-proton collisions. The production of pentaquark states in other processes has been proposed. In Refs. [41; 42; 43; 44; 45], the authors claimed that the hidden-charm pentaquark states can be produced in the \(J/\psi\) photoproduction off the proton. This process could distinguish whether these pentaquark states are genuine states or anomalous triangle singularities. Moreover, it is suggested that the hidden-charm pentaquark states can be produced in the \(e^{+}e^{-}\) collisions [46] and antiproton-deuteron collisions [47]. Based on Monte Carlo simulations, the inclusive production rates of these pentaquark states are estimated in proton-proton collisions [48; 49] and electron-proton collisions [50], which are helpful for future experimental searches for the pentaquark states. In this work, based on the LHCb data, we primarily focus on the production mechanism of the pentaquark states in the \(\Lambda_{b}\) decays.
The production mechanism of the pentaquark states in the \(\Lambda_{b}\) decays can be classified into two categories. In Mechanism I, the mother particle \(M\) weakly decays into three particles \(A\), \(B\) and \(C\), and the hadronic molecule under study can be dynamically generated via the rescattering of any two particles of \(A\), \(B\) and \(C\). This mechanism has already been applied to study the production rates of \(X(3872)\) as a \(\bar{D}D^{*}\) molecule via the weak decays \(B\to\bar{D}D^{*}K\) [51; 52]. For the pentaquark states, it was proposed that the weak decays of \(\Lambda_{b}\to\bar{D}^{(*)}\Sigma_{c}K\) and \(\Lambda_{b}\to J/\psi pK\) can dynamically generate the hidden-charm pentaquark molecules via the \(\bar{D}^{(*)}\Sigma_{c}\) rescattering [53; 16] and \(J/\psi p\) rescattering [54], respectively, which can well describe the experimental invariant mass distribution of \(J/\psi p\), while their absolute production rates are not quantitatively estimated. In particular, as pointed out in Ref. [34], the branching fractions \(\mathcal{B}(\Lambda_{b}\to\bar{D}^{(*)}\Sigma_{c}K)\) are so tiny that the pentaquark molecules can hardly be produced via the weak decays \(\Lambda_{b}\to\bar{D}^{(*)}\Sigma_{c}K\). Therefore, whether these pentaquark molecules can be produced via Mechanism I remains unsettled.
In Mechanism II, the mother particle \(M\) weakly decays into two states \(A\) and \(B\), then \(A\) scatters (or decays) into \(C\) and \(D\), and finally the final-state interaction of \(B\) and \(C\) dynamically generates the molecules of interest [55; 56; 57; 58; 59]. A typical example is that the \(X(3872)\) as a \(\bar{D}D^{*}\) molecule can be generated through the weak decays \(B\to\bar{D}^{(*)}D_{s}^{(*)}\) followed by \(D_{s}^{(*)}\) scattering into \(D^{(*)}K\) [59]. In Ref. [60], Wu et al. proposed that \(\Lambda_{b}\) weakly decays into \(\Sigma_{c}\) and \(D_{s}^{(*)}\), then \(D_{s}^{(*)}\) scatter into \(\bar{D}^{(*)}\) and \(K\), and the pentaquark molecules are finally generated via the \(\bar{D}^{(*)}\Sigma_{c}\) interactions. We note that the \(\Lambda_{b}\) decaying into \(\Sigma_{c}^{(*)}\) is highly suppressed due to the fact that the transition of the light quark pair between a symmetric and an antisymmetric spin-flavor configuration is forbidden [61; 62], which indicates that the production of pentaquark molecules is difficult (if not impossible) via the weak decays of \(\Lambda_{b}\to D_{s}^{(*)}\Sigma_{c}\)1. In Ref. [34], the authors select the color-favored weak decays \(\Lambda_{b}\to D_{s}^{(*)}\Lambda_{c}\) to produce the pentaquark molecules as well as to analyse their mass distributions, but did not explicitly calculate their production rates. Following Refs. [58; 59], we take the effective Lagrangian approach to calculate the production rates of the pentaquark molecules in \(\Lambda_{b}\) decays with no free parameters, and try to answer the questions whether the three pentaquark states \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\) as hadronic molecules can be produced in the \(\Lambda_{b}\) decays, as well as why their heavy quark spin symmetry (HQSS) partners have not been observed in the same decays.
Footnote 1: In Ref. [60], the \(\Lambda_{b}\to\Sigma_{c}\) transition is assumed to be proportional to the \(\Lambda_{b}\to\Lambda_{c}\) transition, characterized by an unknown parameter \(R\). By reproducing the experimental production rates of the pentaquark molecules, \(R\) is found to be about 0.1.
This work is organized as follows. We first calculate the two-body partial decay widths of the pentaquark molecules obtained by the contact range EFT, and the amplitudes of their production mechanism in \(\Lambda_{b}\) decays via the triangle diagrams using the effective Lagrangian approach in section II. The results and discussions on the widths of the pentaquark molecules and the branching fractions of the decays \(P_{\psi}^{N}\to J/\psi p\) and \(P_{\psi}^{N}\to\bar{D}^{(*)}\Lambda_{c}\), as well as the branching fractions of the weak decays \(\Lambda_{b}\to P_{\psi}^{N}K\), \(\Lambda_{b}\to(P_{\psi}^{N}\to J/\psi p)K\), and \(\Lambda_{b}\to(P_{\psi}^{N}\to\bar{D}^{*}\Lambda_{c})K\) are provided in section III, followed by a short summary in the last section.
## II Theoretical framework
In this work, we employ the triangle diagrams to describe the production of pentaquark molecules. We suppose that the color-favored weak decays \(\Lambda_{b}\to\Lambda_{c}D_{s}^{(*)-}\) are responsible for the short-range interactions because the branching fractions \({\cal B}(\Lambda_{b}\to\Lambda_{c}D_{s}^{(*)-})\) are large among the nonleptonic decays of \(\Lambda_{b}\). Then the \(D_{s}^{(*)-}\) mesons scatter into \(\bar{D}^{(*)}\) and \(K\) mesons, and the pentaquark molecules with spin \(1/2\) and \(3/2\) are dynamically generated via the \(\bar{D}^{(*)}\Lambda_{c}\) interactions as shown in Fig. 1 and Fig. 2, respectively, where \(P_{\psi}^{1/2}\) and \(P_{\psi}^{3/2}\) denote the pentaquark molecules of spin \(1/2\) and \(3/2\), respectively. As shown in a number of previous studies [9; 11; 12; 16; 18; 21; 63; 64; 65; 66] and also explicitly shown later, there exists a complete multiplet of hidden-charm pentaquark molecules dominantly generated by the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\) interactions. We denote the seven pentaquark molecules as \(P_{\psi 1}^{N}\), \(P_{\psi 2}^{N}\),..., \(P_{\psi 7}^{N}\), following the order of Scenario A of Table 1 in Ref. [63]. It should be noted that this order specifies the spin of the pentaquark molecules, i.e., \(P_{\psi 3}^{N}\) and \(P_{\psi 4}^{N}\) represent the pentaquark molecules of spin \(1/2\) and \(3/2\), respectively. In this work, we study two scenarios A and B corresponding to different spin assignments of these pentaquark molecules. In Scenario A, \(P_{\psi 3}^{N}\) and \(P_{\psi 4}^{N}\) represent \(P_{\psi}^{N}(4440)\) and \(P_{\psi}^{N}(4457)\), while they represent \(P_{\psi}^{N}(4457)\) and \(P_{\psi}^{N}(4440)\) in Scenario B. \(P_{\psi 1}^{N}\) represents \(P_{\psi}^{N}(4312)\) in both Scenario A and Scenario B. Considering only \(S\)-wave \(\bar{D}^{(*)}\Lambda_{c}\) interactions, the production of the pentaquark molecule of spin \(5/2\) is not allowed by the mechanisms shown in either Fig. 1 or Fig. 2, which indicates that \(P_{\psi 7}^{N}\) cannot be produced in our model. Therefore, we only focus on the production of the remaining six pentaquark molecules in the \(\Lambda_{b}\) decays in this work.
### Effective Lagrangians
In this work, we adopt the effective Lagrangian approach to calculate the triangle diagrams of Figs. 1 and 2. In the following, we spell out the relevant Lagrangians.
First, we focus on the weak decays of \(\Lambda_{b}\to\Lambda_{c}D_{s}^{(*)-}\). At quark level, the decays of \(\Lambda_{b}\to\Lambda_{c}D_{s}^{(*)-}\) can occur via the external \(W\)-emission mechanism shown in Fig. 3, which is usually the largest in terms of the topological classification of weak decays [67; 68; 69]. As shown in Ref. [59], the color favored weak decays \(B\to D_{s}^{(*)}D^{(*)}\) are significant to produce the \(\bar{D}^{*}D^{(*)}\) molecules in \(B\) decays, which share similar topologies to the weak decays \(\Lambda_{b}\to\Lambda_{c}D_{s}^{(*)-}\) at quark level.
The effective Hamiltonian describing the weak decays of \(\Lambda_{b}\to\Lambda_{c}D_{s}^{(*)-}\) has the following form
\[\mathcal{H}_{eff}=\frac{G_{F}}{\sqrt{2}}V_{cb}V_{cs}[c_{1}(\mu)\mathcal{O}_{1} (\mu)+c_{2}(\mu)\mathcal{O}_{2}(\mu)]+h.c. \tag{1}\]
where \(G_{F}\) is the Fermi constant, \(V_{cb}\) and \(V_{cs}\) are the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, \(c_{1,2}(\mu)\) are the Wilson coefficients, and \(\mathcal{O}_{1}(\mu)\) and \(\mathcal{O}_{2}(\mu)\) are the four-fermion operators \((\bar{s}c)_{V-A}(\bar{c}b)_{V-A}\) and \((\bar{c}c)_{V-A}(\bar{s}b)_{V-A}\), with \((\bar{q}q)_{V-A}\) standing for \(\bar{q}\gamma_{\mu}(1-\gamma_{5})q\) [70; 71; 72]. The Wilson coefficients \(c_{1,2}(\mu)\) encode the short-distance quantum chromodynamics (QCD) running from \(\mu=M_{W}\) to \(\mu=m_{c}\).
In the naive factorisation approach [73], the amplitudes of \(\Lambda_{b}\to\Lambda_{c}D_{s}^{(*)-}\) can be expressed as the products of two current hadronic matrix elements
\[\mathcal{A}\left(\Lambda_{b}\to\Lambda_{c}D_{s}^{-}\right)\] \[=\frac{G_{F}}{\sqrt{2}}V_{cb}V_{cs}a_{1}\left<D_{s}^{-}|(s\bar{c })|0\right>\left<\Lambda_{c}|(c\bar{b})|\Lambda_{b}\right> \tag{2}\] \[\mathcal{A}\left(\Lambda_{b}\to\Lambda_{c}D_{s}^{*-}\right)\] \[=\frac{G_{F}}{\sqrt{2}}V_{cb}V_{cs}a_{1}\left<D_{s}^{*-}|(s\bar{ c})|0\right>\left<\Lambda_{c}|(c\bar{b})|\Lambda_{b}\right> \tag{3}\]
where the effective Wilson coefficient \(a_{1}\) is expressed as \(a_{1}=c_{1}(\mu)+c_{2}(\mu)/N_{c}\) with \(N_{c}=3\) the number of colors [72; 73].
The matrix elements between a pseudoscalar meson or vector meson and the vacuum have the following form:
\[\left<D_{s}^{-}|(s\bar{c})|0\right>=i\,f_{D_{s}^{-}}p_{D_{s}^{-}}^{\mu}\,, \tag{4}\] \[\left<D_{s}^{*-}|(s\bar{c})|0\right>=m_{D_{s}^{*-}}f_{D_{s}^{*-}}\epsilon_{\mu}^{*}\,. \tag{5}\]
where \(f_{D_{s}^{-}}\) and \(f_{D_{s}^{*-}}\) are the decay constants for \(D_{s}^{-}\) and \(D_{s}^{*-}\), respectively, and \(\epsilon_{\mu}^{*}\) denotes the polarization vector of \(D_{s}^{*-}\).
The \(\Lambda_{b}\to\Lambda_{c}\) transition form factors are parameterized as follows [62]
\[\langle B(p^{\prime})|V_{\mu}-A_{\mu}|B(p)\rangle \tag{6}\] \[=\bar{u}(p^{\prime})[f_{1}^{V}(q^{2})\gamma_{\mu}-f_{2}^{V}(q^{2} )\frac{i\sigma_{\mu\nu}q^{\nu}}{m}+f_{3}^{V}(q^{2})\frac{q^{\mu}}{m}\] \[-(f_{1}^{A}(q^{2})\gamma_{\mu}-f_{2}^{A}(q^{2})\frac{i\sigma_{ \mu\nu}q^{\nu}}{m}+f_{3}^{A}(q^{2})\frac{q^{\mu}}{m})\gamma^{5}]u(p)\]
where \(\sigma^{\mu\nu}=\frac{i}{2}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})\) and \(q=p-p^{\prime}\). As a result, the weak decays \(\Lambda_{b}\to\Lambda_{c}D_{s}^{(*)}\) can be characterised by the following Lagrangian [70]:
\[\mathcal{L}_{\Lambda_{b}\Lambda_{c}D_{s}} = i\bar{\Lambda}_{c}(A+B\gamma_{5})\Lambda_{b}D_{s}\,, \tag{7}\] \[\mathcal{L}_{\Lambda_{b}\Lambda_{c}D_{s}^{*}} = \bar{\Lambda}_{c}(A_{1}\gamma_{\mu}\gamma_{5}+A_{2}\frac{p_{2\mu}}{m}\gamma_{5}+B_{1}\gamma_{\mu}+B_{2}\frac{p_{2\mu}}{m})\Lambda_{b}D_{s}^{*\mu}\,.\]
where \(A_{1}\), \(A_{2}\), \(B_{1}\), \(B_{2}\), \(A\), and \(B\) are:
\[A = \lambda f_{D_{s}}[(m-m_{2})f_{1}^{V}+\frac{m_{1}^{2}}{m}f_{3}^{V}]\,,\] \[B = \lambda f_{D_{s}}[(m+m_{2})f_{1}^{A}-\frac{m_{1}^{2}}{m}f_{3}^{A}]\,,\] \[A_{1} = -\lambda f_{D_{s}^{*}}m_{1}[f_{1}^{A}-f_{2}^{A}\frac{m-m_{2}}{m}]\,, \tag{8}\] \[B_{1} = \lambda f_{D_{s}^{*}}m_{1}[f_{1}^{V}+f_{2}^{V}\frac{m+m_{2}}{m}]\,,\] \[A_{2} = 2\lambda f_{D_{s}^{*}}m_{1}f_{2}^{A}\,,\] \[B_{2} = -2\lambda f_{D_{s}^{*}}m_{1}f_{2}^{V}\,.\]
with \(\lambda=\frac{G_{F}}{\sqrt{2}}V_{cb}V_{cs}a_{1}\) and \(m,m_{1},m_{2}\) referring to the masses of \(\Lambda_{b}\), \(D_{s}^{(*)}\), and \(\Lambda_{c}\), respectively. The form factors can be expressed in a double-pole form:
\[f_{i}^{V/A}(q^{2})=\frac{F_{i}^{V/A}(0)}{1-a\,\varphi+b\,\varphi^{2}} \tag{9}\]
with \(\varphi=q^{2}/m^{2}\). The values of \(F(0)\), \(a\), and \(b\) in the \(\Lambda_{b}\rightarrow\Lambda_{c}\) transition form factors are taken from Ref. [74] and shown in Table 1.
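For orientation, the double-pole parameterization of Eq. (9) can be evaluated with a few lines of code. The sketch below is only illustrative: the \(F(0)\) value is taken from Table 1, while the slope parameters `a_ill` and `b_ill` are placeholders and not the values of Ref. [74].

```python
# Minimal sketch of the double-pole form factor of Eq. (9).
# F(0) is taken from Table 1 (F_1^V); a and b are illustrative placeholders only.

M_LAMBDA_B = 5.6196  # GeV, Lambda_b mass (cf. Table 2)

def form_factor(q2, f0, a, b, m=M_LAMBDA_B):
    """Double-pole parameterization f(q^2) = F(0) / (1 - a*phi + b*phi^2), phi = q^2/m^2."""
    phi = q2 / m**2
    return f0 / (1.0 - a * phi + b * phi**2)

if __name__ == "__main__":
    f0_f1V = 0.549            # F_1^V(0) from Table 1
    a_ill, b_ill = 1.0, 0.3   # placeholder slope parameters (NOT from Ref. [74])
    q2 = 1.96835**2           # q^2 = m_{D_s}^2 for Lambda_b -> Lambda_c D_s^-
    print(form_factor(q2, f0_f1V, a_ill, b_ill))
```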
In this work, we take \(G_{F}=1.166\times 10^{-5}\) GeV\({}^{-2}\), \(V_{cb}=0.041\), \(V_{cs}=0.987\), \(f_{D_{s}^{-}}=250\) MeV, and \(f_{D_{s}^{*-}}=272\) MeV as in Refs. [75; 76; 77; 62]. We note that the value of \(a_{1}\) as a function of the energy scale \(\mu\) differs from process to process [78; 79]. Therefore, we take the branching fraction \(\mathcal{B}(\Lambda_{b}\rightarrow\Lambda_{c}D_{s}^{-})=(1.10\pm 0.10)\%\) to determine the effective Wilson coefficient to be \(a_{1}=0.883\). We note that in Ref. [59], the effective Wilson coefficient \(a_{1}\) is determined to be 0.79 and 0.81 by reproducing the branching fractions of the decays \(B^{+}\rightarrow\bar{D}^{0}D_{s}^{+}\) and \(B^{+}\rightarrow\bar{D}^{0}D_{s}^{*+}\), respectively. These values are consistent with the value obtained from the weak decay \(\Lambda_{b}\rightarrow\Lambda_{c}D_{s}^{-}\), showing that the naive factorisation approach works well for the external \(W\)-emission mechanism. Due to the lack of experimental data for the branching fraction \(\mathcal{B}(\Lambda_{b}\rightarrow\Lambda_{c}D_{s}^{*-})\), we assume that the effective Wilson coefficient in Eq.(3) is the same as that in Eq.(2). With the effective Wilson coefficient \(a_{1}\) obtained in this way, we predict the branching fraction \(\mathcal{B}(\Lambda_{b}\rightarrow\Lambda_{c}D_{s}^{*-})=(2.47\pm 0.26)\%\), consistent with the results of Refs. [80; 62]. As a matter of fact, the experimental branching fraction \(\mathcal{B}(\Lambda_{b}\rightarrow\Lambda_{c}D_{s}^{-})\) helps reduce the uncertainty in the weak vertices.
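As a small numerical illustration, the overall weak-vertex factor \(\lambda=\frac{G_{F}}{\sqrt{2}}V_{cb}V_{cs}a_{1}\) entering Eq. (8) follows directly from the numbers quoted above; the sketch below simply evaluates this product.

```python
import math

# Numerical illustration of lambda = (G_F/sqrt(2)) * V_cb * V_cs * a_1,
# using the inputs quoted in the text.
G_F  = 1.166e-5   # GeV^-2
V_cb = 0.041
V_cs = 0.987
a_1  = 0.883      # fixed by B(Lambda_b -> Lambda_c D_s^-) = 1.10%

lam = G_F / math.sqrt(2.0) * V_cb * V_cs * a_1
print(f"lambda = {lam:.3e} GeV^-2")   # ~ 2.9e-7 GeV^-2
```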
The Lagrangians describing the \(D_{s}^{(*)}\) mesons scattering into \(\bar{D}^{(*)}\) and \(K\) mesons are:
\[\mathcal{L}_{KD_{s}D^{*}} = ig_{KD_{s}D^{*}}D^{*\mu}[\bar{D}_{s}\partial_{\mu}K-(\partial_{\mu}\bar{D}_{s})K]+H.c.\] \[\mathcal{L}_{KD_{s}^{*}D^{*}} = -g_{KD_{s}^{*}D^{*}}\epsilon^{\mu\nu\alpha\beta}[\partial_{\mu}\bar{D}_{\nu}^{*}\partial_{\alpha}D_{s\beta}^{*}\bar{K} \tag{10}\] \[+\partial_{\mu}D_{\nu}^{*}\partial_{\alpha}\bar{D}_{s\beta}^{*}K]+H.c.\]
where \(g_{KD_{s}D^{*}}\) and \(g_{KD_{s}^{*}D^{*}}\) are the kaon couplings to \(D_{s}D^{*}\) and \(D_{s}^{*}D^{*}\), respectively. Unfortunately, there exists no experimental data to determine the values of these couplings. The coupling \(g_{D_{s}D^{*}K}\) is estimated to be \(16.6\) and \(10\) assuming SU(3)-flavor symmetry [81] and SU(4)-flavor symmetry [82], respectively, while the QCD sum rule yields \(5\)[83; 84]. In view of this large variance, we adopt the couplings estimated by SU(4) symmetry, which are in between those estimated utilizing SU(3) symmetry and by the QCD sum rule, i.e., \(g_{D_{s}D^{*}K}=g_{D_{s}^{*}DK}=10\) and \(g_{D_{s}^{*}D^{*}K}=7.0\) GeV\({}^{-1}\)[82].
The effective Lagrangians describing the interactions between pentaquark molecules and their constituents \(\bar{D}^{(*)}\Lambda_{c}\) are written as
\[\mathcal{L}_{P_{\psi}^{1/2}\Lambda_{c}\bar{D}} = g_{P_{\psi}^{1/2}\Lambda_{c}\bar{D}}P_{\psi}^{1/2}\Lambda_{c} \bar{D}\,,\] \[\mathcal{L}_{P_{\psi}^{1/2}\Lambda_{c}\bar{D}^{*}} = g_{P_{\psi}^{1/2}\Lambda_{c}\bar{D}^{*}}\bar{\Lambda}_{c}\gamma_ {5}(g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{m_{P_{\psi}^{1/2}}^{2}})\gamma^{\nu}P_{ \psi}^{1/2}D^{*\mu}\,,\] \[\mathcal{L}_{P_{\psi}^{3/2}\Lambda_{c}\bar{D}^{*}} = g_{P_{\psi}^{3/2}\Lambda_{c}\bar{D}^{*}}\bar{\Lambda}_{c}P_{\psi \mu}^{3/2}D^{*\mu}\,. \tag{11}\]
where \(g_{P_{\psi}^{1/2}\Lambda_{c}\bar{D}}\), \(g_{P_{\psi}^{1/2}\Lambda_{c}\bar{D}^{*}}\), and \(g_{P_{\psi}^{3/2}\Lambda_{c}\bar{D}^{*}}\) are the couplings of the \(P_{\psi}^{1/2}\) and \(P_{\psi}^{3/2}\) pentaquark molecules to their constituents \(\bar{D}^{(*)}\Lambda_{c}\). One should note that although these pentaquark molecules are dominantly generated by the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\) interactions [63], the \(\bar{D}^{(*)}\Lambda_{c}\) and \(J/\psi(\eta_{c})p\) coupled channels also play a relevant role [9; 11; 12; 37; 65; 85]. Below, we estimate the couplings of the pentaquark molecules to their constituents \(\bar{D}^{(*)}\Lambda_{c}\) and \(J/\psi(\eta_{c})p\) by the contact range EFT approach, which is widely applied to study the dynamical generation of hadronic molecules [86; 22].
### Contact-range EFT approach
In this subsection, we introduce the contact-range EFT approach. The scattering amplitude \(T\) is responsible for the dynamical generation of the pentaquark molecules via the Lippmann-Schwinger equation
\[T(\sqrt{s})=(1-VG(\sqrt{s}))^{-1}V, \tag{12}\]
where \(V\) is the coupled-channel potential determined by the contact-range EFT approach (see Appendix A), and \(G(\sqrt{s})\) is the two-body propagator. In this work, we consider the following coupled channels \(\bar{D}^{*}\Sigma_{c}^{*}-\bar{D}^{*}\Sigma_{c}-\bar{D}\Sigma_{c}-\bar{D}^{*} \Lambda_{c}-\bar{D}\Lambda_{c}-J/\psi p-\eta_{c}p\) with \(J^{P}=1/2^{-}\) and \(\bar{D}^{*}\Sigma_{c}^{*}-\bar{D}^{*}\Sigma_{c}-\bar{D}^{*}\Lambda_{c}-J/\psi p\) with \(J^{P}=3/2^{-}\). Since the mass splitting between \(\bar{D}^{*}\Sigma_{c}^{*}\) and \(\eta_{c}p\) is about 600 MeV, we take a relativistic propagator:
\[G(\sqrt{s})=2m_{1}\int\frac{d^{3}q}{(2\pi)^{3}}\frac{\omega_ {1}+\omega_{2}}{2\,\omega_{1}\omega_{2}}\frac{F(q^{2},k)}{(\sqrt{s})^{2}-( \omega_{1}+\omega_{2})^{2}+i\varepsilon}\,, \tag{13}\]
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \(F_{1}^{V}\) & \(F_{2}^{V}\) & \(F_{3}^{V}\) & \(F_{1}^{A}\) & \(F_{2}^{A}\) & \(F_{3}^{A}\) \\ \hline \(F(0)\) & 0.549 & 0.110 & \(-0.023\) & 0.542 & 0.018 & \(-0.123\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Values of \(F(0)\) in the \(\Lambda_{b}\rightarrow\Lambda_{c}\) transition form factors, taken from Ref. [74].
where \(\sqrt{s}\) is the total energy in the center-of-mass (c.m.) frame of \(m_{1}\) and \(m_{2}\), \(\omega_{i}=\sqrt{m_{i}^{2}+q^{2}}\) is the energy of particle \(i\), and the c.m. momentum \(k\) is
\[k(\sqrt{s})=\frac{\sqrt{\left[s-(m_{1}+m_{2})^{2}\right]\left[s-(m_{1}-m_{2})^{2}\right]}}{2\sqrt{s}}\,. \tag{14}\]
A regulator of Gaussian form \(F(q^{2},k)=e^{-2q^{2}/\Lambda^{2}}/e^{-2k^{2}/\Lambda^{2}}\) is used to regulate the loop function. We note that the loop function can also be regularized by other methods such as the momentum cut off scheme and dimensional regularization scheme [87; 88; 89; 90; 91].
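To make the numerical procedure behind Eqs. (12)-(14) explicit, the following sketch evaluates the regularized two-body propagator \(G(\sqrt{s})\) by direct momentum integration and builds the coupled-channel \(T\) matrix by matrix inversion. It is only a schematic illustration: the channel masses in the example and the contact potential value are placeholders, not the fitted low-energy constants of this work.

```python
import numpy as np
from scipy import integrate

LAMBDA = 1.5  # GeV, Gaussian cutoff used in this work

def cm_momentum(sqrt_s, m1, m2):
    """Center-of-mass momentum k(sqrt_s) of Eq. (14)."""
    s = sqrt_s**2
    return np.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2)) / (2.0 * sqrt_s)

def loop_function(sqrt_s, m1, m2, cutoff=LAMBDA):
    """Relativistic two-body propagator G(sqrt_s) of Eq. (13), regulated by
    F(q^2, k) = exp(-2 q^2/L^2) / exp(-2 k^2/L^2).  Only the real part below
    threshold is kept in this sketch (no +i*eps treatment above threshold)."""
    k = cm_momentum(sqrt_s + 0j, m1, m2)
    def integrand(q):
        w1 = np.sqrt(m1**2 + q**2)
        w2 = np.sqrt(m2**2 + q**2)
        reg = np.exp(-2 * q**2 / cutoff**2) / np.exp(-2 * k**2 / cutoff**2)
        return (q**2 / (2 * np.pi**2) * (w1 + w2) / (2 * w1 * w2)
                * reg / (sqrt_s**2 - (w1 + w2)**2)).real
    val, _ = integrate.quad(integrand, 0.0, 10.0 * cutoff, limit=200)
    return 2.0 * m1 * val

def t_matrix(sqrt_s, V, masses):
    """Coupled-channel T = (1 - V G)^(-1) V of Eq. (12);
    V is an n x n potential matrix, masses a list of (m_baryon, m_meson)."""
    G = np.diag([loop_function(sqrt_s, m1, m2) for (m1, m2) in masses])
    n = len(masses)
    return np.linalg.solve(np.eye(n) - V @ G, V)

# Illustrative single-channel call just below the Dbar Sigma_c threshold
# (the potential value is a placeholder, not a fitted constant).
masses = [(2.4540, 1.8648)]          # (Sigma_c, Dbar) in GeV, cf. Table 2
V = np.array([[-30.0]])              # placeholder contact potential
print(t_matrix(4.30, V, masses))
```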
The dynamically generated pentaquark molecules correspond to poles on the unphysical sheet, which is defined as [92; 93],
\[G_{II}(\sqrt{s})=G_{I}(\sqrt{s})+i\frac{2m_{1}}{4\pi}\frac{k(\sqrt{s})}{\sqrt{ s}}, \tag{15}\]
where \(m_{1}\) stands for the mass of the baryon.
With the contact-range potentials given in the Appendix, we search for poles in the vicinity of the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\) channels, and then determine the couplings between the pentaquark molecules and their constituents from the residues of the corresponding poles,
\[g_{i}g_{j}=\lim_{\sqrt{s}\to\sqrt{s_{0}}}\bigl{(}\sqrt{s}-\sqrt{s_{0}}\bigr{)} \,T_{ij}(\sqrt{s}), \tag{16}\]
where \(g_{i}\) denotes the coupling of channel \(i\) to the dynamically generated molecules and \(\sqrt{s_{0}}\) is the pole position.
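Numerically, the limit in Eq. (16) can be approximated by sampling the \(T\) matrix at complex energies close to the previously located pole. The sketch below assumes a callable returning the coupled-channel \(T\) matrix at complex \(\sqrt{s}\); the one-point approximation is crude but illustrates the extraction.

```python
import numpy as np

def couplings_from_residue(t_matrix_fn, sqrt_s0, eps=1e-4):
    """Estimate g_i g_j = lim_{sqrt(s)->sqrt(s0)} (sqrt(s) - sqrt(s0)) T_ij(sqrt(s))
    (Eq. (16)) by evaluating T slightly away from the pole sqrt_s0 (complex, GeV)."""
    shift = eps * (1.0 + 1.0j)
    residue = shift * t_matrix_fn(sqrt_s0 + shift)   # approximates g_i g_j matrix
    return np.sqrt(np.diag(residue))                 # diagonal couplings, up to a phase

# usage (schematic): g = couplings_from_residue(lambda z: my_T(z), 4.3106 - 0.0035j)
```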
Using the couplings \(g_{i}\) obtained above, one can estimate the partial decay widths of the pentaquark molecules [94]
\[\Gamma_{i}=g_{i}^{2}\frac{1}{2\pi}\frac{m_{i}}{m_{P_{\psi}^{N}}}p_{i} \tag{17}\]
where \(m_{i}\) stands for the mass of the baryon of channel \(i\), \(m_{P_{\psi}^{N}}\) is the mass of the pentaquark molecule (the real part of the pole position), and \(p_{i}\) is the momentum of the baryon (meson) of channel \(i\) in the \(P_{\psi}^{N}\) rest frame.
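Eq. (17) translates directly into code; the short sketch below evaluates it for the \(J/\psi p\) channel of \(P_{\psi 1}^{N}\) in Scenario A, using the coupling \(g_{P_{\psi}^{N}J/\psi N}=0.251\) and the pole mass listed later in Table 5. With these inputs it reproduces the \(\Gamma(J/\psi N)\simeq 1.43\) MeV entry of Table 7.

```python
import math

def two_body_width(g, m_baryon, m_meson, m_pentaquark):
    """Partial width of Eq. (17): Gamma_i = g_i^2/(2*pi) * (m_i/m_P) * p_i,
    with p_i the c.m. momentum of channel i in the P rest frame (GeV units)."""
    s = m_pentaquark**2
    p = math.sqrt((s - (m_baryon + m_meson)**2)
                  * (s - (m_baryon - m_meson)**2)) / (2 * m_pentaquark)
    return g**2 / (2 * math.pi) * (m_baryon / m_pentaquark) * p

# P_psi1 -> J/psi p in Scenario A: g = 0.251, M = 4310.6 MeV (Table 5)
print(two_body_width(0.251, 0.93827, 3.0969, 4.3106) * 1e3, "MeV")  # ~1.43 MeV
```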
### Decay Amplitudes
With the above effective Lagragians, we obtain the following decay amplitudes for \(\Lambda_{b}\to P_{\psi}^{1/2}K\) of Fig. 1
\[\mathcal{M}_{1}^{a}= i^{3}\int\frac{d^{4}q}{(2\pi)^{4}}[g_{P_{\psi}^{1/2}\Lambda_{c}\bar{D}^{*}}\bar{u}(p_{2})\gamma^{\nu}\gamma_{5}(g_{\mu\nu}-\frac{p_{2\mu}p_{2\nu}}{m_{P_{\psi}^{1/2}}^{2}})(\not{q}_{2}+m_{2})\,i(A+B\gamma_{5})u(k_{0})]\] \[[-g_{KD^{*}D_{s}}(q_{1}+p_{1})_{\alpha}](-g^{\mu\alpha}+\frac{q^{\mu}q^{\alpha}}{m_{E}^{2}})\frac{1}{q_{1}^{2}-m_{1}^{2}}\frac{1}{q_{2}^{2}-m_{2}^{2}}\frac{1}{q^{2}-m_{E}^{2}}\,,\] \[\mathcal{M}_{1}^{b}= i^{3}\int\frac{d^{4}q}{(2\pi)^{4}}[g_{P_{\psi}^{1/2}\Lambda_{c}\bar{D}^{*}}\bar{u}(p_{2})\gamma^{\nu}\gamma_{5}(g_{\mu\nu}-\frac{p_{2\mu}p_{2\nu}}{m_{P_{\psi}^{1/2}}^{2}})(\not{q}_{2}+m_{2})\] \[(A_{1}\gamma_{\alpha}\gamma_{5}+A_{2}\frac{q_{2\alpha}}{m}\gamma_{5}+B_{1}\gamma_{\alpha}+B_{2}\frac{q_{2\alpha}}{m})u(k_{0})][-g_{KD^{*}D_{s}^{*}}\varepsilon_{\rho\lambda\eta\tau}q^{\rho}q_{1}^{\eta}] \tag{18}\] \[(-g^{\mu\lambda}+\frac{q^{\mu}q^{\lambda}}{m_{E}^{2}})(-g^{\alpha\tau}+\frac{q_{1}^{\alpha}q_{1}^{\tau}}{m_{1}^{2}})\frac{1}{q_{1}^{2}-m_{1}^{2}}\frac{1}{q_{2}^{2}-m_{2}^{2}}\frac{1}{q^{2}-m_{E}^{2}}\,,\] \[\mathcal{M}_{1}^{c}= i^{3}\int\frac{d^{4}q}{(2\pi)^{4}}[g_{P_{\psi}^{1/2}\Lambda_{c}\bar{D}}\bar{u}(p_{2})(\not{q}_{2}+m_{2})(A_{1}\gamma_{\alpha}\gamma_{5}+A_{2}\frac{q_{2\alpha}}{m}\gamma_{5}+B_{1}\gamma_{\alpha}+B_{2}\frac{q_{2\alpha}}{m})u(k_{0})]\] \[[-g_{KDD_{s}^{*}}(-q+p_{1})_{\tau}](-g^{\alpha\tau}+\frac{q_{1}^{\alpha}q_{1}^{\tau}}{m_{1}^{2}})\frac{1}{q_{1}^{2}-m_{1}^{2}}\frac{1}{q_{2}^{2}-m_{2}^{2}}\frac{1}{q^{2}-m_{E}^{2}}\,.\]
where \(k_{0}\), \(q_{1}\), \(q_{2}\), \(q\), \(p_{1}\), and \(p_{2}\) refer to the momenta of \(\Lambda_{b}\), \(D_{s}^{(*)}\), \(\Lambda_{c}\), \(\bar{D}^{(*)}\), \(K\), and \(P_{\psi}^{1/2}\), respectively, and \(\bar{u}(p_{2})\) and \(u(k_{0})\) represent the spinors of \(P_{\psi}^{1/2}\) and \(\Lambda_{b}\). Similarly, we write the decay amplitudes for \(\Lambda_{b}\to P_{\psi}^{3/2}K\) of Fig. 2 as follows
\[\begin{split}\mathcal{M}_{3}^{a}=& i^{3}\int\frac{d^{4}q}{(2\pi)^{4}}[g_{P_{\psi}^{3/2}\Lambda_{c}\bar{D}^{*}}\bar{u}_{\mu}(p_{2})](\not{q}_{2}+m_{2})[i(A+B\gamma_{5})u(k_{0})]\\ &[-g_{KD^{*}D_{s}}(q_{1}+p_{1})_{\nu}](-g^{\mu\nu}+\frac{q^{\mu}q^{\nu}}{m_{E}^{2}})\frac{1}{q_{1}^{2}-m_{1}^{2}}\frac{1}{q_{2}^{2}-m_{2}^{2}}\frac{1}{q^{2}-m_{E}^{2}}\,,\\ \mathcal{M}_{3}^{b}=& i^{3}\int\frac{d^{4}q}{(2\pi)^{4}}[g_{P_{\psi}^{3/2}\Lambda_{c}\bar{D}^{*}}\bar{u}_{\mu}(p_{2})(\not{q}_{2}+m_{2})(-i)(A_{1}\gamma_{\alpha}\gamma_{5}+A_{2}\frac{q_{2\alpha}}{m}\gamma_{5}+B_{1}\gamma_{\alpha}+B_{2}\frac{q_{2\alpha}}{m})u(k_{0})]\\ &[-g_{KD^{*}D_{s}^{*}}\varepsilon_{\rho\lambda\eta\tau}q^{\rho}q_{1}^{\eta}](-g^{\mu\lambda}+\frac{q^{\mu}q^{\lambda}}{m_{E}^{2}})(-g^{\alpha\tau}+\frac{q_{1}^{\alpha}q_{1}^{\tau}}{m_{1}^{2}})\frac{1}{q_{1}^{2}-m_{1}^{2}}\frac{1}{q_{2}^{2}-m_{2}^{2}}\,\frac{1}{q^{2}-m_{E}^{2}}\,.\end{split} \tag{19}\]
With the amplitudes for the decays of \(\Lambda_{b}\to P_{\psi}^{1/2}K\) and \(\Lambda_{b}\to P_{\psi}^{3/2}K\) given above, one can compute the corresponding partial decay widths
\[\Gamma=\frac{1}{2J+1}\frac{1}{8\pi}\frac{|\vec{p}|}{m_{\Lambda_{b}}^{2}}| \overline{M}|^{2} \tag{20}\]
where \(J\) is the total angular momentum of the initial \(\Lambda_{b}\) baryon and \(|\vec{p}|\) is the momentum of either final state in the rest frame of the \(\Lambda_{b}\) baryon.
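Once the loop integrals in Eqs. (18) and (19) have been evaluated, the conversion of the spin-averaged squared amplitude into a width and a branching fraction is straightforward. The sketch below transcribes Eq. (20); the \(\Lambda_{b}\) lifetime used to convert the width into a branching fraction is an approximate external input, not a number quoted in the text.

```python
import math

HBAR_GEV_S   = 6.582e-25    # GeV*s
TAU_LAMBDA_B = 1.47e-12     # s, approximate Lambda_b lifetime (external input)
M_LAMBDA_B   = 5.6196       # GeV

def lam_b_width(amp2_avg, m_P, m_K, J=0.5, m_parent=M_LAMBDA_B):
    """Two-body width of Eq. (20): Gamma = 1/(2J+1) * 1/(8*pi) * |p|/m^2 * |Mbar|^2."""
    s = m_parent**2
    p = math.sqrt((s - (m_P + m_K)**2) * (s - (m_P - m_K)**2)) / (2 * m_parent)
    return 1.0 / (2 * J + 1) / (8 * math.pi) * p / m_parent**2 * amp2_avg

def branching_fraction(width_gev):
    """B = Gamma * tau / hbar, for a width in GeV."""
    return width_gev * TAU_LAMBDA_B / HBAR_GEV_S
```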
Regarding the three-body decays of the pentaquark molecules, we have systematically investigated two decay modes: tree diagrams and triangle diagrams, and found that the former can almost saturate their total three-body decay widths [35]. In this work, with the new couplings between the pentaquark molecules and \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\), we update the widths of the three-body decays \(P_{\psi}^{N}\to\bar{D}^{(*)}\Lambda_{c}\pi\), where these hidden-charm pentaquark molecules decay into \(\bar{D}^{(*)}\Lambda_{c}\pi\) via the off-shell \(\Sigma_{c}^{(*)}\) baryons decaying into \(\Lambda_{c}\pi\). The details for the calculations can be found in our previous work [35].
## III Results and discussions
Because the three-body partial decay widths of the pentaquark states \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\) as hadronic molecules are less than \(1\) MeV, we can neglect their three-body decays and assume that their two-body decays saturate their total widths. Therefore, we suppose that these three pentaquark molecules are dynamically generated via the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\), \(\bar{D}^{(*)}\Lambda_{c}\), \(\eta_{c}N\) and \(J/\psi N\) coupled-channel potentials. In the heavy quark limit, the contact potentials of this coupled-channel system are parameterized by seven parameters as shown in Eq.(A6) and Eq.(A7). In this work, we set the potential \(V_{J/\psi(\eta_{c})N\to J/\psi(\eta_{c})N}=0\), resulting in six unknown parameters. The unknown couplings of the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\to\bar{D}^{(*)}\Sigma_{c}^{(*)}\) potentials are well described by the light meson saturation approach [13], which is widely applied to study heavy hadronic molecules [95; 96; 18; 97]. Therefore, we expect that the light meson saturation approach (see Appendix) is also valid for the \(\bar{D}^{(*)}\Lambda_{c}\to\bar{D}^{(*)}\Lambda_{c}\) interaction. With the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\to\bar{D}^{(*)}\Sigma_{c}^{(*)}\) potentials determined in Ref. [63], we obtain the \(\bar{D}^{(*)}\Lambda_{c}\to\bar{D}^{(*)}\Lambda_{c}\) potentials using the light meson saturation approach, and then search for poles near the \(\bar{D}^{(*)}\Lambda_{c}\) threshold, but find none, indicating that there exist no genuine states generated by the \(\bar{D}^{(*)}\Lambda_{c}\) interactions, consistent with Refs. [98; 12; 9]. Very recently, Duan et al. argued that there exist enhancements at the \(\bar{D}^{(*)}\Lambda_{c}\) thresholds induced by the triangle and box singularities [99]. Therefore, even taking into account the \(\bar{D}^{(*)}\Lambda_{c}\), \(\eta_{c}N\), and \(J/\psi N\) channels, the number of hidden-charm pentaquark molecules does not change. These channels mainly affect the imaginary part of the pole positions, i.e., the widths of the pentaquark states.
### Widths of hidden-charm pentaquark molecules
In Table 2, we tabulate the masses and quantum numbers of relevant particles. For the cutoff in the Gaussian regulator, we choose the value of \(\Lambda=1.5\) GeV [35]. To quantify the agreement with the experimental data, we use the \(\chi^{2}\) defined as
\[\chi^{2}=\sum_{i=1}^{3}\frac{(M_{exp}^{i}-M_{fit}^{i})^{2}}{{d_{M}^{i}}^{2}}+ \sum_{i=1}^{3}\frac{(\Gamma_{exp}^{i}-\Gamma_{fit}^{i})^{2}}{{d_{\Gamma}^{i}}^{2}} \tag{21}\]
where \(M_{exp}^{i}(\Gamma_{exp}^{i})\) and \(M_{fit}^{i}(\Gamma_{fit}^{i})\) are the masses (widths) measured by the LHCb Collaboration and those obtained in the contact-range EFT approach, \(d_{M}^{i}\) and \(d_{\Gamma}^{i}\) are the uncertainties of the experimental masses and widths, and the superscripts
\begin{table}
\begin{tabular}{c c c|c c c} \hline \hline Hadron & \(I(J^{P})\) & M (MeV) & Hadron & \(I(J^{P})\) & M (MeV) \\ \hline \(p\) & \(\frac{1}{2}(1/2^{+})\) & \(938.27\) & \(n\) & \(\frac{1}{2}(1/2^{+})\) & \(939.57\) \\ \(\Sigma_{c}^{++}\) & \(1(1/2^{+})\) & \(2453.97\) & \(\Sigma_{c}^{+}\) & \(1(1/2^{+})\) & \(2452.65\) \\ \(\Sigma_{c}^{*++}\) & \(1(3/2^{+})\) & \(2518.41\) & \(\Sigma_{c}^{*+}\) & \(1(3/2^{+})\) & \(2517.4\) \\ \(\Sigma_{c}^{0}\) & \(1(1/2^{+})\) & \(2453.75\) & \(\Sigma_{c}^{*0}\) & \(1(3/2^{+})\) & \(2518.48\) \\ \(\Lambda_{c}^{+}\) & \(0(1/2^{+})\) & \(2286.46\) & \(\Lambda_{b}\) & \(0(1/2^{+})\) & \(5619.60\) \\ \(\pi^{\pm}\) & \(1(0^{-})\) & \(139.57\) & \(\pi^{0}\) & \(1(0^{-})\) & \(134.98\) \\ \(K^{\pm}\) & \(1/2(0^{-})\) & \(493.677\) & \(K^{0}\) & \(1/2(0^{-})\) & \(497.611\) \\ \(\bar{D}^{0}\) & \(\frac{1}{2}(0^{-})\) & \(1864.84\) & \(D^{-}\) & \(\frac{1}{2}(0^{-})\) & \(1869.66\) \\ \(\bar{D}^{*0}\) & \(\frac{1}{2}(1^{-})\) & \(2006.85\) & \(D^{*-}\) & \(\frac{1}{2}(1^{-})\) & \(2010.26\) \\ \(D_{s}^{\pm}\) & \(0(0^{-})\) & \(1968.35\) & \(D_{s}^{*\pm}\) & \(0(1^{-})\) & \(2112.2\) \\ \(J/\psi\) & \(0(1^{-})\) & \(3096.90\) & \(\eta_{c}\) & \(0(0^{-})\) & \(2983.90\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Masses and quantum numbers of hadrons relevant to this work [75].
\(i=1\), \(i=2\), and \(i=3\) denote \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\), respectively. The masses and decay widths of the three states are [2]
\[M_{P_{\psi}^{N}(4312)} = 4311.9\pm 0.7^{+6.8}_{-0.6}\ \text{MeV}\,,\] \[\Gamma_{P_{\psi}^{N}(4312)} = 9.8\pm 2.7^{+3.7}_{-4.5}\ \text{MeV}\,,\] \[M_{P_{\psi}^{N}(4440)} = 4440.3\pm 1.3^{+4.1}_{-4.7}\ \text{MeV}\,,\] \[\Gamma_{P_{\psi}^{N}(4440)} = 20.6\pm 4.9^{+8.7}_{-10.1}\ \text{MeV}\,, \tag{22}\] \[M_{P_{\psi}^{N}(4457)} = 4457.3\pm 0.6^{+4.1}_{-1.7}\ \text{MeV}\,,\] \[\Gamma_{P_{\psi}^{N}(4457)} = 6.4\pm 2.0^{+5.7}_{-1.9}\ \text{MeV}\,.\]
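A direct numerical transcription of Eq. (21) with these central values is sketched below; the symmetrized uncertainties and the "fitted" masses and widths (taken here from the Scenario A poles quoted later) are illustrative inputs only.

```python
import numpy as np

# Experimental masses and widths (MeV), Eq. (22), with roughly symmetrized errors.
M_EXP = np.array([4311.9, 4440.3, 4457.3])
D_M   = np.array([3.8, 4.6, 3.0])        # illustrative symmetrization
G_EXP = np.array([9.8, 20.6, 6.4])
D_G   = np.array([4.9, 10.6, 4.3])       # illustrative symmetrization

def chi2(m_fit, g_fit):
    """chi^2 of Eq. (21) for given fitted masses and widths (MeV)."""
    return np.sum(((M_EXP - m_fit) / D_M)**2) + np.sum(((G_EXP - g_fit) / D_G)**2)

# e.g. the Scenario A poles of Table 5 (widths = twice the imaginary parts):
print(chi2(np.array([4310.6, 4440.6, 4458.4]), np.array([7.0, 17.2, 1.4])))
```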
Given the fact that the light meson saturation is valid for the \(\bar{D}^{(*)}\Lambda_{c}\to\bar{D}^{(*)}\Lambda_{c}\) and \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\to\bar{D}^{(*)}\Sigma_{c}^{(*)}\) potentials2, we adopt the ratio \(C_{a}^{\prime}/C_{a}=0.216\) obtained in the light meson saturation approach. As a result, there only remain five parameters. With the above preparations we determine the values of the parameters \(C_{a}\), \(C_{b}\), \(C_{b}^{\prime}\), \(g_{1}\) and \(g_{2}\) as well as the \(\chi^{2}\) for Scenario A and Scenario B and show them in Table 3. In the following, we compare the fitted parameters (\(C_{a}\), \(C_{b}\), and \(C_{b}^{\prime}\)) with those obtained in the light meson saturation approach (see the Appendix for details). With the light meson saturation, we obtain the ratio \(C_{b}/C_{a}=0.12\), while the ratio determined in the EFT approach (by fitting to the data) is \(C_{b}/C_{a}=-0.12\) and \(C_{b}/C_{a}=0.10\) for Scenario A and Scenario B, respectively. It seems that the light meson saturation approach prefers Scenario B, consistent with the single-channel analysis [13]. Moreover, the light meson saturation approach yields \(C_{b}^{\prime}/C_{b}=0.61\), while the value determined in the EFT approach is \(C_{b}^{\prime}/C_{b}=0.43\) for Scenario A but \(C_{b}^{\prime}/C_{b}=0.95\) for Scenario B. One can see that they are quite different for either Scenario A or Scenario B. From the perspective of light meson saturation, only the \(\rho\) meson exchange is considered to saturate the \(\bar{D}^{(*)}\Sigma_{c}\to\bar{D}^{(*)}\Lambda_{c}\) interactions. However, the one-pion exchange plays a significant role in the \(\bar{D}^{(*)}\Sigma_{c}\to\bar{D}^{(*)}\Lambda_{c}\) potentials [11; 12; 37]. Since the one-pion exchange is not considered, the \(C_{b}^{\prime}\) obtained in the light meson saturation approach is not consistent with the \(C_{b}^{\prime}\) determined in the EFT approach for either Scenario A or Scenario B.
Footnote 2: As indicated in Ref. [13], the ratio of \(C_{a}\) to \(C_{b}\) estimated by the light meson saturation approach is consistent with that obtained by the contact-range EFT approach.
In Table 5, we present the pole positions of the hidden-charm pentaquark molecules and the couplings to their constituents. From the obtained pole positions of \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\), it is obvious that Scenario A, yielding results consistent with the experimental data, is better than Scenario B, which is quite different from the single-channel study [63]. Our study shows that the coupled-channel effects can help distinguish the two possible scenarios. In a similar approach but without the \(\bar{D}^{(*)}\Lambda_{c}\) channels, Scenario A is still slightly better than Scenario B [35]. We note in passing that the chiral unitary model [9] also prefers Scenario A. We further note that the coefficients in the contact-range potentials of Eq.(A6) and Eq.(A7) are derived assuming the HQSS, while the HQSS breaking is not taken into account. In Ref. [11], it was shown that the tensor term of the one-pion exchange potentials plays a crucial role in describing the widths of the pentaquark molecules, while the \(D\)-wave potentials are neglected in this work. Therefore, we cannot conclude which scenario is more favorable at this stage.
Up to now, the spins of \(P_{\psi}^{N}(4440)\) and \(P_{\psi}^{N}(4457)\) are still undetermined experimentally, which motivated many theoretical discussions on how to determine their spins [100; 23; 21]. One crucial issue is that the strengths of the \(\bar{D}^{*}\Sigma_{c}\to\bar{D}^{*}\Sigma_{c}\) potentials of \(J^{P}=1/2^{-}\) and \(J^{P}=3/2^{-}\) are undetermined. We can see that the \(J^{P}=1/2^{-}\)\(\bar{D}^{*}\Sigma_{c}\to\bar{D}^{*}\Sigma_{c}\) potential is stronger than the \(J^{P}=3/2^{-}\)\(\bar{D}^{*}\Sigma_{c}\to\bar{D}^{*}\Sigma_{c}\) potential in Scenario A, while their order reverses in Scenario B. In Refs. [34; 37], Burns et al. proposed another case, named as Scenario C, which actually corresponds to a special case of Scenario B, where the \(J^{P}=1/2^{-}\)\(\bar{D}^{*}\Sigma_{c}\to\bar{D}^{*}\Sigma_{c}\) potential is not strong enough to form a bound state, and therefore \(P_{\psi}^{N}(4457)\) is interpreted as a kinematic effect rather than a genuine state. From their values of \(C_{a}\) and \(C_{b}\)[34], the ratio \(C_{b}/C_{a}\) is determined to be around 0.5, which implies the emergence of a large spin-spin interaction, inconsistent with the principle of EFTs. It is no surprise that such a large spin-spin interaction breaks the completeness of the multiplet picture of hidden-charm pentaquark molecules [11; 12; 16; 63; 64; 11; 18; 21; 65; 66]. Therefore, we strongly recommend that lattice QCD simulations study the potentials of \(J^{P}=1/2^{-}\)\(\bar{D}^{*}\Sigma_{c}\) and \(J^{P}=3/2^{-}\)\(\bar{D}^{*}\Sigma_{c}\) to
\begin{table}
\begin{tabular}{c c c c c c c} Scenario & \multicolumn{6}{c}{A} \\ \hline Molecule & \(P_{\psi 1}^{N}\) & \(P_{\psi 2}^{N}\) & \(P_{\psi 3}^{N}\) & \(P_{\psi 4}^{N}\) & \(P_{\psi 5}^{N}\) & \(P_{\psi 6}^{N}\) \\ Two-body decay & 7.00 & 5.40 & 17.20 & 1.40 & 19.80 & 15.40 \\ Three-body decay & 0.20 & 1.47 & 0.03 & 0.33 & 3.83 & 6.85 \\ Total decay & 7.20 & 6.87 & 17.23 & 1.73 & 23.63 & 22.25 \\ \hline Scenario & \multicolumn{6}{c}{B} \\ \hline Molecule & \(P_{\psi 1}^{N}\) & \(P_{\psi 2}^{N}\) & \(P_{\psi 3}^{N}\) & \(P_{\psi 4}^{N}\) & \(P_{\psi 5}^{N}\) & \(P_{\psi 6}^{N}\) \\ Two-body decay & 8.00 & 12.40 & 2.20 & 9.00 & 15.00 & 7.40 \\ Three-body decay & 0.16 & 1.00 & 0.01 & 2.44 & 13.58 & 9.58 \\ Total decay & 8.16 & 13.40 & 2.21 & 11.44 & 28.58 & 16.98 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Two-body decay widths, three-body decay widths, and total decay widths (in units of MeV) of hidden-charm pentaquark molecules in Scenario A and Scenario B.
address this issue.
The imaginary parts of the pole positions in Table 5 specify the two-body partial decay widths of the pentaquark molecules, which can also be calculated via the triangle diagrams using the effective Lagrangian approach [65; 102]. From the results of Table 5, we can calculate the two-body decay widths of these pentaquark molecules and tabulate them in Table 4. Moreover, with the newly obtained couplings \(g_{P^{N}_{\psi}\bar{D}^{(*)}\Sigma^{(*)}_{c}}\), we update their three-body decay widths as shown in Table 4. Comparing with the results in Ref. [35], we find that the new results vary a bit because the pole positions affect the phase space of the three-body decays and the values of the couplings \(g_{P^{N}_{\psi}\bar{D}^{(*)}\Sigma^{(*)}_{c}}\). Assuming that the two-body and three-body decays are the dominant decay channels for the pentaquark molecules, we can obtain their total decay widths by summing the two decay modes. Our results indicate that the widths of \(P^{N}_{\psi 5}\) and \(P^{N}_{\psi 6}\) as the \(\bar{D}^{*}\Sigma^{*}_{c}\) molecules are larger than those of \(P^{N}_{\psi}(4312)\), \(P^{N}_{\psi}(4440)\), and \(P^{N}_{\psi}(4457)\) reported by the LHCb Collaboration, and their three-body decay widths account for a large proportion of their total widths. In addition, we can see that the three-body decay widths of \(P^{N}_{\psi}(4312)\), \(P^{N}_{\psi}(4440)\), and \(P^{N}_{\psi}(4457)\) account for only a small proportion of their total widths, which confirms our assumption that their total widths are almost saturated by the two-body decays.
### Production rates of hidden-charm pentaquark molecules
From the values of the couplings given in Table 5, one can see that the \(\bar{D}^{(*)}\Sigma^{(*)}_{c}\) channel plays a dominant role in generating these pentaquark molecules. Yet their production in the \(\Lambda_{b}\) decay cannot proceed via the \(\bar{D}^{(*)}\Sigma^{(*)}_{c}\) interactions as discussed above. It is important to investigate the production of hidden-charm pentaquark molecules in the \(\Lambda_{b}\) decays via the
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Scenario & \multicolumn{6}{c}{A} \\ \hline Name & \(P^{N}_{\psi 1}\) & \(P^{N}_{\psi 2}\) & \(P^{N}_{\psi 3}\) & \(P^{N}_{\psi 4}\) & \(P^{N}_{\psi 5}\) & \(P^{N}_{\psi 6}\) \\ Molecule & \(\bar{D}\Sigma_{c}\) & \(\bar{D}\Sigma^{*}_{c}\) & \(\bar{D}^{*}\Sigma_{c}\) & \(\bar{D}^{*}\Sigma_{c}\) & \(\bar{D}^{*}\Sigma^{*}_{c}\) & \(\bar{D}^{*}\Sigma^{*}_{c}\) \\ \(J^{P}\) & \(\frac{1}{2}^{-}\) & \(\frac{3}{2}^{-}\) & \(\frac{1}{2}^{-}\) & \(\frac{3}{2}^{-}\) & \(\frac{1}{2}^{-}\) & \(\frac{3}{2}^{-}\) \\ Pole (MeV) & 4310.6+3.5\(i\) & 4372.8+2.7\(i\) & 4440.6+8.6\(i\) & 4458.4+0.7\(i\) & 4500.0+9.9\(i\) & 4513.2+7.7\(i\) \\ \(g_{P^{N}_{\psi}\Sigma^{*}_{c}\bar{D}^{*}}\) & - & - & - & - & 2.686 & 2.194 \\ \(g_{P^{N}_{\psi}\Sigma_{c}\bar{D}^{*}}\) & - & - & 2.554 & 1.082 & 0.141 & 0.218 \\ \(g_{P^{N}_{\psi}\Sigma^{*}_{c}\bar{D}}\) & - & 2.133 & - & 0.179 & - & 0.237 \\ \(g_{P^{N}_{\psi}\Sigma_{c}\bar{D}}\) & 2.089 & - & 0.254 & - & 0.139 & - \\ \(g_{P^{N}_{\psi}\Lambda_{c}\bar{D}^{*}}\) & 0.234 & 0.074 & 0.177 & 0.050 & 0.110 & 0.241 \\ \(g_{P^{N}_{\psi}\Lambda_{c}\bar{D}}\) & 0.014 & - & 0.158 & - & 0.207 & - \\ \(g_{P^{N}_{\psi}J/\psi N}\) & 0.251 & 0.454 & 0.584 & 0.103 & 0.434 & 0.532 \\ \(g_{P^{N}_{\psi}\eta_{c}N}\) & 0.420 & - & 0.261 & - & 0.527 & - \\ \hline Scenario & \multicolumn{6}{c}{B} \\ \hline Name & \(P^{N}_{\psi 1}\) & \(P^{N}_{\psi 2}\) & \(P^{N}_{\psi 3}\) & \(P^{N}_{\psi 4}\) & \(P^{N}_{\psi 5}\) & \(P^{N}_{\psi 6}\) \\ Molecule & \(\bar{D}\Sigma_{c}\) & \(\bar{D}\Sigma^{*}_{c}\) & \(\bar{D}^{*}\Sigma_{c}\) & \(\bar{D}^{*}\Sigma_{c}\) & \(\bar{D}^{*}\Sigma^{*}_{c}\) & \(\bar{D}^{*}\Sigma^{*}_{c}\) \\ \(J^{P}\) & \(\frac{1}{2}^{-}\) & \(\frac{3}{2}^{-}\) & \(\frac{1}{2}^{-}\) & \(\frac{3}{2}^{-}\) & \(\frac{1}{2}^{-}\) & \(\frac{3}{2}^{-}\) \\ Pole (MeV) & 4309.9+4\(i\) & 4365.8+6.2\(i\) & 4458.4+44.5\(i\) & 4441.4+1.1\(i\) & 4521.6+7.5\(i\) & 4522.5+3.7\(i\) \\ \(g_{P^{N}_{\psi}\Sigma^{*}_{c}\bar{D}^{*}}\) & - & - & - & - & 1.841 & 1.621 \\ \(g_{P^{N}_{\psi}\Sigma_{c}\bar{D}^{*}}\) & - & - & 1.679 & 2.462 & 0.107 & 0.143 \\ \(g_{P^{N}_{\psi}\Sigma^{*}_{c}\bar{D}}\) & - & 2.451 & - & 0.099 & - & 0.171 \\ \(g_{P^{N}_{\psi}\Sigma_{c}\bar{D}}\) & 2.072 & - & 0.161 & - & 0.131 & - \\ \(g_{P^{N}_{\psi}\Lambda_{c}\bar{D}^{*}}\) & 0.392 & 0.090 & 0.247 & 0.159 & 0.232 & 0.223 \\ \(g_{P^{N}_{\psi}\Lambda_{c}\bar{D}}\) & 0.020 & - & 0.191 & - & 0.281 & - \\ \(g_{P^{N}_{\psi}J/\psi N}\) & 0.263 & 0.704 & 0.277 & 0.168 & 0.314 & 0.312 \\ \(g_{P^{N}_{\psi}\eta_{c}N}\) & 0.413 & - & 0.164 & - & 0.328 & - \\ \hline \hline \end{tabular}
\end{table}
Table 5: Pole positions (in units of MeV) of six hidden-charm pentaquark molecules and the couplings to their constituents in Scenario A and Scenario B.
\(\bar{D}^{(*)}\Lambda_{c}\) interactions although the couplings of the pentaquark states to \(\bar{D}^{(*)}\Lambda_{c}\) are small. With the couplings \(g_{P^{N}_{\psi}\bar{D}^{(*)}\Lambda_{c}}\) given in Table 5, we employ the effective Lagrangian approach to calculate the decays of \(\Lambda_{b}\to P^{N}_{\psi}K\) illustrated in Fig. 1 and Fig. 2.
In Table 6, we present the branching fractions of \(\Lambda_{b}\to P^{N}_{\psi}K\) in Scenario A and Scenario B. Our results show that the branching fractions of the three pentaquark states discovered by the LHCb Collaboration are: \(\mathcal{B}(\Lambda_{b}\to P^{N}_{\psi}(4312)K)=35.18\times 10^{-6}\), \(\mathcal{B}(\Lambda_{b}\to P^{N}_{\psi}(4440)K)=15.30\times 10^{-6}\), and \(\mathcal{B}(\Lambda_{b}\to P^{N}_{\psi}(4457)K)=0.48\times 10^{-6}\) in Scenario A and \(\mathcal{B}(\Lambda_{b}\to P^{N}_{\psi}(4312)K)=98.88\times 10^{-6}\), \(\mathcal{B}(\Lambda_{b}\to P^{N}_{\psi}(4440)K)=5.21\times 10^{-6}\), and \(\mathcal{B}(\Lambda_{b}\to P^{N}_{\psi}(4457)K)=27.23\times 10^{-6}\) in Scenario B. From the order of magnitude of the obtained branching fractions, we can conclude that the pentaquark molecules can be produced via the triangle diagrams shown in Fig. 1 and Fig. 2. The branching fractions of the decay \(\Lambda_{b}\to P^{N}_{\psi}(4312)K\) are larger than those of \(\Lambda_{b}\to P^{N}_{\psi}(4440)K\) and \(\Lambda_{b}\to P^{N}_{\psi}(4457)K\) in both cases, and the branching fractions involving \(P^{N}_{\psi}(4440)\) and \(P^{N}_{\psi}(4457)\) for \(J^{P}=1/2^{-}\) are always larger than those for \(J^{P}=3/2^{-}\). Such results reflect that the branching fractions of the decays \(\Lambda_{b}\to P^{N}_{\psi}K\) are related to the couplings \(g_{P^{N}_{\psi}\bar{D}^{(*)}\Lambda_{c}}\), especially the coupling \(g_{P^{N}_{\psi}\bar{D}^{*}\Lambda_{c}}\), which shows that the \(\bar{D}^{*}\Lambda_{c}\) interactions play an important role in producing the pentaquark molecules in the \(\Lambda_{b}\) decays. Similarly, we predict the branching fractions of \(\Lambda_{b}\) decaying into \(P^{N}_{\psi 2}\), \(P^{N}_{\psi 5}\) and \(P^{N}_{\psi 6}\) plus a kaon as shown in Table 6, the order of magnitude of which is similar to those involving \(P^{N}_{\psi}(4440)\) and \(P^{N}_{\psi}(4457)\).
Up to now, there exist no available experimental data for the branching fractions of the decays \(\Lambda_{b}\to P^{N}_{\psi}K\). The LHCb Collaboration measured the relevant ratios of branching fractions for the three pentaquark states: \(R_{P^{N}_{\psi}(4312)}=(0.30\pm 0.07^{+0.34}_{-0.09})\%\), \(R_{P^{N}_{\psi}(4440)}=(1.11\pm 0.33^{+0.22}_{-0.11})\%\), and \(R_{P^{N}_{\psi}(4457)}=(0.53\pm 0.16^{+0.15}_{-0.13})\%\), where \(R\) is defined as
\[R_{P^{N}_{\psi}}=\frac{\mathcal{B}(\Lambda_{b}^{0}\to P^{N}_{\psi}K^{-}) \cdot\mathcal{B}(P^{N}_{\psi}\to J/\psi p)}{\mathcal{B}(\Lambda_{b}^{0}\to J/ \psi pK^{-})} \tag{23}\]
According to RPP [75], the branching fraction of \(\Lambda_{b}^{0}\to J/\psi pK^{-}\) is \(\mathcal{B}(\Lambda_{b}^{0}\to J/\psi pK^{-})=3.2^{+0.6}_{-0.5}\times 10^{-4}\), and then we obtain the product of the branching fractions of the decays \(\Lambda_{b}\to P^{N}_{\psi}K\) and \(P^{N}_{\psi}\to J/\psi p\)
\[\begin{array}{ll}\mathcal{B}(\Lambda_{b}^{0}\to P^{N}_{\psi}(4312)^{+}K^{-})\cdot\mathcal{B}(P^{N}_{\psi}(4312)^{+}\to J/\psi p)&=0.96^{+1.13}_{-0.39}\times 10^{-6}\,,\\ \mathcal{B}(\Lambda_{b}^{0}\to P^{N}_{\psi}(4440)^{+}K^{-})\cdot\mathcal{B}(P^{N}_{\psi}(4440)^{+}\to J/\psi p)&=3.55^{+1.43}_{-1.24}\times 10^{-6}\,,\\ \mathcal{B}(\Lambda_{b}^{0}\to P^{N}_{\psi}(4457)^{+}K^{-})\cdot\mathcal{B}(P^{N}_{\psi}(4457)^{+}\to J/\psi p)&=1.70^{+0.77}_{-0.71}\times 10^{-6}\,.\end{array} \tag{24}\]
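These products follow directly from the measured ratios \(R_{P_{\psi}^{N}}\) of Eq. (23) and \(\mathcal{B}(\Lambda_{b}^{0}\to J/\psi pK^{-})\); a central-value cross-check is sketched below.

```python
# Central-value cross-check of Eq. (24):
# B(Lambda_b -> P K) * B(P -> J/psi p) = R * B(Lambda_b -> J/psi p K)
B_JPSI_PK = 3.2e-4
R = {"Pc(4312)": 0.30e-2, "Pc(4440)": 1.11e-2, "Pc(4457)": 0.53e-2}

for name, r in R.items():
    print(name, f"{r * B_JPSI_PK:.2e}")   # 0.96e-6, 3.55e-6, 1.70e-6
```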
To compare with the experimental data, we have to obtain the branching fractions of \(P^{N}_{\psi}\) decaying into \(J/\psi p\), which are not yet determined experimentally. We note that the GlueX and JLab Collaborations investigated the production rates of pentaquark states in the photoproduction process and only gave the upper limits of \(\mathcal{B}(P^{N}_{\psi}\to J/\psi p)<2.0\%\)[38; 39], which indicates that the branching fractions \(\mathcal{B}(\Lambda_{b}^{0}\to P^{N+}_{\psi}K^{-})\) are of the order of \(10^{-4}\), approaching the values of \(\mathcal{B}(\Lambda_{b}^{0}\to J/\psi pK^{-})\). Such large values highlight the inconsistency between the LHCb results and the GlueX/JLab results. Therefore, more precise experimental data are needed to settle this issue.
Using Eq.(17), we calculate the two-body partial decay widths of hidden-charm pentaquark molecules, and then estimate the branching fractions of the decays \(P^{N}_{\psi}\to J/\psi p\). The results are shown in Table 7, where the three-body decay widths of the pentaquark molecules are not included. One can see that the partial decay widths of \(P^{N}_{\psi}\to\bar{D}^{(*)}\Lambda_{c}\) are less than those of \(P^{N}_{\psi}\to J/\psi p\) in Scenario A, but their order reverses in Scenario B. In Ref. [103], the estimated branching fractions of the decays \(P^{N}_{\psi}\to\bar{D}^{(*)}\Lambda_{c}\) are much smaller than those of the decays \(P^{N}_{\psi}\to J/\psi p\), consistent with Scenario
\begin{table}
\begin{tabular}{c c c c c c c} Scenario & \multicolumn{6}{c}{A} \\ \hline Molecule & \(P^{N}_{\psi 1}\) & \(P^{N}_{\psi 2}\) & \(P^{N}_{\psi 3}\) & \(P^{N}_{\psi 4}\) & \(P^{N}_{\psi 5}\) & \(P^{N}_{\psi 6}\) \\ \(\mathcal{B}(\Lambda_{b}\to P^{N}_{\psi}K)\) & 35.18 & 1.49 & 15.30 & 0.48 & 6.37 & 9.01 \\ \hline \multicolumn{6}{c}{B} \\ \hline Molecule & \(P^{N}_{\psi 1}\) & \(P^{N}_{\psi 2}\) & \(P^{N}_{\psi 3}\) & \(P^{N}_{\psi 4}\) & \(P^{N}_{\psi 5}\) & \(P^{N}_{\psi 6}\) \\ \(\mathcal{B}(\Lambda_{b}\to P^{N}_{\psi}K)\) & 98.88 & 2.27 & 27.23 & 5.21 & 21.69 & 7.43 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Branching fractions (\(10^{-6}\)) of \(\Lambda_{b}\) decaying into a \(K\) meson and a hidden-charm pentaquark molecule in Scenario A and Scenario B.
A. In terms of the meson exchange theory, the branching fractions of the decays \(P_{\psi}^{N}\to\bar{D}^{(*)}\Lambda_{c}\) are larger than those of the decays \(P_{\psi}^{N}\to J/\psi p\), where the heavy meson (\(\bar{D}^{(*)}\)) exchange and the light meson (\(\pi(\rho)\)) exchange are responsible for the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\to J/\psi p\) and \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\to\bar{D}^{(*)}\Lambda_{c}\) interactions, respectively [65]. It is obvious that the transitions \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\to J/\psi p\) are heavily suppressed, resulting in the small partial decay widths of \(P_{\psi}^{N}\to J/\psi p\) in the same theoretical framework [65]. We note that the meson exchange theory has been tested for light mesons exchanges, but remains to be verified for heavy meson exchanges, especially when both heavy and light mesons can be exchanged. The meson exchange theory dictates that charmed mesons are responsible for the very short range interaction, but they can not adequately describe such short-range interactions because one gluon exchange may play a role. In Ref. [104], the authors found that the strength of the short range potential provided by the one gluon exchange is much stronger than that provided by the heavy meson exchange. In the present work, the hidden-charm meson-baryon potentials are provided by the contact-range EFT constrained by HQSS with the low-energy constants determined by fitting to data, which are plausible but the underlying mechanism needs to be clarified.
With the obtained branching fractions \({\cal B}(\Lambda_{b}\to P_{\psi}^{N}K)\) in Table 6 and \({\cal B}(P_{\psi}^{N}\to J/\psi p)\) in Table 7, we further calculate the branching fractions \({\cal B}[\Lambda_{b}\to(P_{\psi}^{N}\to J/\psi p)K]\) for Scenario A and Scenario B as shown in Table 8. Our results show that the branching fractions for \(P_{\psi}^{N}(4312)\) and
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Scenario & \multicolumn{6}{c}{A} \\ \hline Molecule & \(P_{\psi 1}^{N}\) & \(P_{\psi 2}^{N}\) & \(P_{\psi 3}^{N}\) & \(P_{\psi 4}^{N}\) & \(P_{\psi 5}^{N}\) & \(P_{\psi 6}^{N}\) \\ \cline{2-7} Ours & 7.11 & 1.44 & 8.21 & 0.09 & 1.77 & 4.82 \\ ChUA [103] & 1.82 & 8.62 & 0.13 & 0.83 & 0.04 & 2.36 \\ Exp & 0.96 & - & 3.55 & 1.70 & - & - \\ \hline Scenario & \multicolumn{6}{c}{B} \\ \hline Molecule & \(P_{\psi 1}^{N}\) & \(P_{\psi 2}^{N}\) & \(P_{\psi 3}^{N}\) & \(P_{\psi 4}^{N}\) & \(P_{\psi 5}^{N}\) & \(P_{\psi 6}^{N}\) \\ Ours & 18.24 & 2.22 & 6.06 & 1.79 & 3.83 & 2.76 \\ ChUA [103] & - & - & - & - & - & - \\ Exp & 0.96 & - & 1.70 & 3.55 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 8: Branching fractions (\(10^{-6}\)) of the decays \(\Lambda_{b}\to(P_{\psi}^{N}\to J/\psi p)K\) in Scenario A and Scenario B.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Scenario & \multicolumn{6}{c}{A} \\ \hline Molecule & \(P_{\psi 1}^{N}\) & \(P_{\psi 2}^{N}\) & \(P_{\psi 3}^{N}\) & \(P_{\psi 4}^{N}\) & \(P_{\psi 5}^{N}\) & \(P_{\psi 6}^{N}\) \\ \cline{2-7} \(\Gamma_{2}(\Sigma_{c}\bar{D}^{*})\) & - & - & - & (2.52 \%) & (8.87 \%) \\ \cline{2-7} \(\Gamma_{3}(\Sigma_{c}^{*}\bar{D})\) & - & - & - & (73.23 \%) & - & (16.87 \%) \\ \cline{2-7} \(\Gamma_{4}(\Sigma_{c}\bar{D})\) & - & - & 2.87 & 1.04 & - \\ \cline{2-7} \(\Gamma_{5}(\Lambda_{c}\bar{D}^{*})\) & 0.83 & 0.19 & 1.44 & 0.12 & 0.65 & 3.24 \\ \cline{2-7} \(\Gamma_{5}(\Lambda_{c}\bar{D}^{*})\) & (11.71 \%) & (3.48 \%) & (8.33 \%) & (7.82 \%) & (3.32 \%) & (20.81 \%) \\ \cline{2-7} \(\Gamma_{6}(\Lambda_{c}\bar{D})\) & 0.01 & 1.60 & 2.98 & & \\ \cline{2-7} \(\Gamma_{7}(J/\psi N)\) & (0.14 \%) & - & (9.22 \%) & - & (15.14 \%) & - \\ \cline{2-7} \(\Gamma_{7}(J/\psi N)\) & 1.43 & 5.16 & 9.29 & 0.29 & 5.46 & 8.31 \\ \cline{2-7} \(\Gamma_{8}(\eta_{c}N)\) & (20.22 \%) & (96.52 \%) & (53.64 \%) & (18.95 \%) & (27.73 \%) & (53.45 \%) \\ \cline{2-7} \(\Gamma_{8}(\eta_{c}N)\) & 4.81 & & 2.12 & & 9.06 & \\ \cline{2-7} \(\Gamma_{93}(\bar{\gamma}\)\) & - & (12.23 \%) & - & (45.98 \%) & - \\ \hline Scenario & \multicolumn{6}{c}{B} \\ \hline Molecule & \(P_{\psi 1}^{N}\) & \(P_{\psi 2}^{N}\) & \(P_{\psi 3}^{N}\) & \(P_{\psi 4}^{N}\) & \(P_{\psi 5}^{N}\) & \(P_{\psi 6}^{N}\) \\ \cline{2-7} \(\Gamma_{2}(\Sigma_{c}\bar{D}^{*})\) & - & - & - & - & 0.36 & 0.64 \\ \cline{2-7} \(\Gamma_{3}(\Sigma_{c}^{*}\bar{D})\) & - & - & - & (13.63 \%) & - & (18.19 \%) \\ \cline{2-7} \(\Gamma_{4}(\Sigma_{c}\bar{D})\) & - & - & (12.88 \%) & - & (5.92 \%) & - \\ \cline{2-7} \(\Gamma_{5}(\Lambda_{c}\bar{D}^{*})\) & 2.27 & 0.26 & 2.97 & 1.17 & & 3.05 (18.48 \%) & 2.82 \\ \cline{2-7} \(\Gamma_{6}(\Lambda_{c}\bar{D})\) & (26.71 \%) & (2.10 \%) & (30.92 \%) & (52.05 \%) & (52.05 \%) & 3.05 (36.37 \%) \\ \cline{2-7} \(\Gamma_{6}(\Lambda_{c}\bar{D})\) & 0.02 & 2.40 & & 5.65 & & - \\ \cline{2-7} \(\Gamma_{7}(J/\psi N)\) & (0.23 \%) & - & (25.03 \%) & - & (34.18 \%) & - \\ \cline{2-7} \(\Gamma_{7}(J/\psi N)\) & 1.57 & 12.28 & 2.13 & 0.77 & 2.92 & 2.88 \\ \cline{2-7} \(\Gamma_{7}(J/\psi N)\) & (18.45 \%) & (97.90 \%) & (22.26 \%) & (34.33 \%) & (17.67 \%) & (37.14 \%) \\ \cline{2-7} \(\Gamma_{8}(\eta_{c}N)\) & 4.65 & - & 0.85 & & 3.57 & & \\ \cline{2-7} \(\Gamma_{8}(\eta_{c}N)\) & (54.61 \%) & - & (8.86 \%) & - & (21.58 \%) & - \\ \hline \hline \end{tabular}
\end{table}
Table 7: Two-body partial decay widths (in units of MeV) of hidden-charm pentaquark molecules as well as their branching fractions in Scenario A and Scenario B.
\(P_{\psi}^{N}(4440)\) are of the same order as their experimental counterparts, but the branching fraction for \(P_{\psi}^{N}(4457)\) is smaller by one order of magnitude. For Scenario B, the branching fractions for \(P_{\psi}^{N}(4440)\) and \(P_{\psi}^{N}(4457)\) are of the same order as their experimental counterparts, but the branching fraction for \(P_{\psi}^{N}(4312)\) is larger by one order of magnitude. We can see that our model can not simultaneously describe the branching fractions of these three pentaquark states. In Ref. [103], the ChUA estimated the couplings \(g_{P_{\psi}^{N}\bar{D}^{(*)}\Lambda_{c}}\) and the branching fractions \(\mathcal{B}(P_{\psi}^{N}\to J/\psi p)\), which actually corresponds to Scenario A of our results. Using the values estimated by ChUA we recalculate the branching fractions \(\mathcal{B}[\Lambda_{b}\to(P_{\psi}^{N}\to J/\psi p)K]\) as shown in Table 8. The branching fractions for \(P_{\psi}^{N}(4312)\) and \(P_{\psi}^{N}(4457)\) are of the same order as their experimental counterparts, but the branching fraction for \(P_{\psi}^{N}(4440)\) is smaller by one order of magnitude. Obviously, the branching fractions for \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\) in our model are related to the couplings of the pentaquark molecules to \(\bar{D}^{(*)}\Lambda_{c}\) and \(J/\psi p\). Nevertheless, the production mechanism of these three pentaquark states via the triangle diagrams shown in Fig. 1 and Fig. 2 is capable of qualitatively reproducing the experimental data, which further corroborates the hadronic molecular picture of these pentaquark states.
In Table 8, we show the branching fractions for the HQSS partners of \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\), where only the two-body decay modes contribute to the branching fractions of the decays \(P_{\psi}^{N}\to J/\psi p\). As shown in Table 4, the three-body decay widths of \(P_{\psi 2}^{N}\), \(P_{\psi 5}^{N}\), and \(P_{\psi 6}^{N}\) are up to several MeV. If the three-body decay widths are taken into account, the branching fractions of \(P_{\psi 2}^{N}\), \(P_{\psi 5}^{N}\), and \(P_{\psi 6}^{N}\) decaying into \(J/\psi p\) become \(\mathcal{B}(P_{\psi 2}^{N}\to J/\psi p)=75\%\), \(\mathcal{B}(P_{\psi 5}^{N}\to J/\psi p)=23\%\), and \(\mathcal{B}(P_{\psi 6}^{N}\to J/\psi p)=37\%\) in Scenario A and \(\mathcal{B}(P_{\psi 2}^{N}\to J/\psi p)=91\%\), \(\mathcal{B}(P_{\psi 5}^{N}\to J/\psi p)=10\%\), and \(\mathcal{B}(P_{\psi 6}^{N}\to J/\psi p)=17\%\) in Scenario B. As a result, the corresponding branching fractions of the decays \(\Lambda_{b}\to(P_{\psi}^{N}\to J/\psi p)K\) reduce to \(\mathcal{B}[\Lambda_{b}\to(P_{\psi 2}^{N}\to J/\psi p)K]=1.11\times 10^{-6}\), \(\mathcal{B}[\Lambda_{b}\to(P_{\psi 5}^{N}\to J/\psi p)K]=1.47\times 10^{-6}\) and \(\mathcal{B}[\Lambda_{b}\to(P_{\psi 6}^{N}\to J/\psi p)K]=3.37\times 10^{-6}\) in Scenario A and \(\mathcal{B}[\Lambda_{b}\to(P_{\psi 2}^{N}\to J/\psi p)K]=2.08\times 10^{-6}\), \(\mathcal{B}[\Lambda_{b}\to(P_{\psi 5}^{N}\to J/\psi p)K]=2.22\times 10^{-6}\) and \(\mathcal{B}[\Lambda_{b}\to(P_{\psi 6}^{N}\to J/\psi p)K]=1.26\times 10^{-6}\) in Scenario B. We can see that the branching fractions of the pentaquark states \(P_{\psi 2}^{N}\), \(P_{\psi 5}^{N}\), and \(P_{\psi 6}^{N}\) as hadronic molecules are smaller than those of \(P_{\psi}^{N}(4312)\) and the sum of those of \(P_{\psi}^{N}(4440)\) and \(P_{\psi}^{N}(4457)\) in Scenario A and Scenario B, which is consistent with the fact that these three HQSS partners have not been seen in the LHCb data sample of 2019.
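The numbers quoted above are simply the products of the Table 6 entries and the (three-body-corrected) branching fractions \(\mathcal{B}(P_{\psi}^{N}\to J/\psi p)\); for instance, in Scenario A:

```python
# Scenario A example: B[Lambda_b -> (P_psi2 -> J/psi p) K]
#   = B(Lambda_b -> P_psi2 K) (Table 6) * B(P_psi2 -> J/psi p) (three-body widths included)
b_prod  = 1.49e-6   # B(Lambda_b -> P_psi2 K), Table 6, Scenario A
b_jpsip = 0.75      # B(P_psi2 -> J/psi p) quoted above
print(f"{b_prod * b_jpsip:.2e}")   # ~1.1e-6, as quoted in the text
```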
These hidden-charm pentaquark molecules can be seen in the \(J/\psi p\) invariant mass distribution, and one can also expect to see them in the \(\bar{D}^{*}\Lambda_{c}\) invariant mass distribution. Therefore, with the same approach we calculate the branching fractions of the decays \(\Lambda_{b}\to(P_{\psi}^{N}\to\bar{D}^{*}\Lambda_{c})K\) and the results are shown in Table 9. We can see that the branching fractions of the pentaquark molecules in the decays \(\Lambda_{b}\to(P_{\psi}^{N}\to J/\psi p)K\) and \(\Lambda_{b}\to(P_{\psi}^{N}\to\bar{D}^{*}\Lambda_{c})K\) are similar except for \(P_{\psi 2}^{N}\). The branching fraction \(\mathcal{B}[\Lambda_{b}\to(P_{\psi 2}^{N}\to\bar{D}^{*}\Lambda_{c})K]\) is smaller than the branching fraction \(\mathcal{B}[\Lambda_{b}\to(P_{\psi 2}^{N}\to J/\psi p)K]\) by two orders of magnitude. We encourage experimental searches for these pentaquark states in the \(\bar{D}^{*}\Lambda_{c}\) invariant mass distributions of the \(\Lambda_{b}\) decays.
## IV Summary and outlook
The three pentaquark states \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\) can be nicely arranged into a complete multiplet of \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\) hadronic molecules, while their partial decay widths and production rates in the \(\Lambda_{b}\) decay remain undetermined. In this work, we employed the contact-range effective field theory approach to dynamically generate the pentaquark molecules via the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\), \(\bar{D}^{(*)}\Lambda_{c}\), \(J/\psi p\), and \(\eta_{c}p\) coupled-channel interactions, where the six relevant unknown parameters were determined by fitting to the experimental data. With the obtained pole positions, we estimated the couplings of the pentaquark molecules to their constituents \(J/\psi p\) and \(\bar{D}^{*}\Lambda_{c}\), and then calculated the production rates of these molecules in the \(\Lambda_{b}\) decays via the triangle diagrams, where the \(\Lambda_{b}\) baryon weakly decays into \(\Lambda_{c}D_{s}^{(*)}\), then the \(D_{s}^{(*)}\) mesons scatter into \(\bar{D}^{(*)}K\), and finally the pentaquark molecules are dynamically generated by the \(\bar{D}^{*}\Lambda_{c}\) interactions. In this work, with no extra parameters (except those contained in the contact range EFT approach and determined by their masses and widths) we took the effective Lagrangian approach to calculate the triangle diagrams and their production rates in the \(\Lambda_{b}\) decays.
Our results showed that the masses of the three pentaquark states are well described either in Scenario A or Scenario B, which confirmed our previous conclusion that we cannot determine the favored scenario in terms of their masses alone. However, we found that Scenario A is more favored than Scenario B once their widths are taken into account. Moreover, our results showed that their couplings to \(\bar{D}^{(*)}\Lambda_{c}\) are smaller than those to \(J/\psi p\) in Scenario A, but larger in Scenario B. For the branching fractions of the decays \(\Lambda_{b}\to P_{\psi}^{N}K\), that of \(P_{\psi}^{N}(4312)\) is the largest, and those of \(P_{\psi}^{N}(4440)\) and \(P_{\psi}^{N}(4457)\) with \(J=1/2\) are always larger than those with \(J=3/2\), in both Scenario A and Scenario B. In addition, we predicted the following branching fractions: \({\cal B}[\Lambda_{b}\to P_{\psi 2}^{N}K]=(1\sim 2)\times 10^{-6}\), \({\cal B}[\Lambda_{b}\to P_{\psi 5}^{N}K]=(6\sim 22)\times 10^{-6}\) and \({\cal B}[\Lambda_{b}\to P_{\psi 6}^{N}K]=(7\sim 9)\times 10^{-6}\).
With the couplings between the molecules and their constituents determined, we estimated the branching fractions \({\cal B}(P_{\psi}^{N}\to J/\psi p)\), and then obtained the branching fraction \({\cal B}[\Lambda_{b}\to(P_{\psi}^{N}\to J/\psi p)K]\). Our results showed that such branching fractions for \(P_{\psi}^{N}(4312)\) and \(P_{\psi}^{N}(4440)\) are consistent with the experimental data, while that for \(P_{\psi}^{N}(4457)\) is larger than the experimental data in Scenario A. For Scenario B, the branching fractions for \(P_{\psi}^{N}(4440)\) and \(P_{\psi}^{N}(4457)\) are consistent with the experimental data, while that for \(P_{\psi}^{N}(4312)\) is larger than the experimental data. Given the complicated nature of these decays and the various physical processes involved, we deem the agreements with the existing data acceptable. Therefore, we conclude that the three pentaquark states as hidden-charm meson-baryon molecules can be dynamically generated via the \(\bar{D}^{(*)}\Lambda_{c}\) interactions in the \(\Lambda_{b}\) decay, which further corroborates the molecular interpretations of the pentaquark states. Moreover, the branching fractions of the HQSS partners of \(P_{\psi}^{N}(4312)\), \(P_{\psi}^{N}(4440)\), and \(P_{\psi}^{N}(4457)\) are estimated to be of the order of \(10^{-6}\), smaller than that of \(P_{\psi}^{N}(4312)\) and the sum of those of \(P_{\psi}^{N}(4440)\) and \(P_{\psi}^{N}(4457)\). Therefore, we can attribute the non-observation of the other HQSS partners in the decay \(\Lambda_{b}\to(P_{\psi}^{N}\to J/\psi p)K\) to their small production rates. As a byproduct, we further predicted the production rates of the pentaquark molecules in the decays \(\Lambda_{b}\to(P_{\psi}^{N}\to\bar{D}^{*}\Lambda_{c})K\).
## Appendix A Contact-range potentials
To systematically generate the complete multiplet of hidden-charm pentaquark molecules, we take into account the \(\bar{D}^{(*)}\Lambda_{c}\) channels in the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\) coupled-channel systems, where the HQSS plays an important role. First, we express the spin wave function of the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\) pairs in terms of the spins of the heavy quarks \(s_{1h}\) and \(s_{2h}\) and those of the light quark(s) (often referred to as brown muck [105; 106]) \(s_{1l}\) and \(s_{2l}\), where 1 and 2 denote \(\bar{D}^{(*)}\) and \(\Sigma_{c}^{(*)}\), respectively, via the following spin coupling formula,
\[|s_{1l},s_{1h},j_{1};s_{2l},s_{2h},j_{2};J\rangle=\] \[\sqrt{(2j_{1}+1)(2j_{2}+1)(2s_{L}+1)(2s_{H}+1)}\left(\begin{matrix} s_{1l}&s_{2l}&s_{L}\\ s_{1h}&s_{2h}&s_{H}\\ j_{1}&j_{2}&J\end{matrix}\right)\] \[|s_{1l},s_{2l},s_{L};s_{1h},s_{2h},s_{H};J\rangle. \tag{11}\]
The total light quark spin \(s_{L}\) and heavy quark spin \(s_{H}\) are given by \(s_{L}=s_{1l}\otimes s_{2l}\) and \(s_{H}=s_{1h}\otimes s_{2h}\), respectively.
More explicitly, for the \(\bar{D}^{(*)}\Sigma_{c}\) states, the decompositions read
\[|\Sigma_{c}\bar{D}(1/2^{-})\rangle=\frac{1}{2}0_{H}\otimes 1/2_{L} +\frac{1}{2\sqrt{3}}1_{H}\otimes 1/2_{L}+\sqrt{\frac{2}{3}}1_{H}\otimes 3/2_{L}\,,\] \[|\Sigma_{c}^{*}\bar{D}(3/2^{-})\rangle=-\frac{1}{2}0_{H}\otimes 3 /2_{L}+\frac{1}{\sqrt{3}}1_{H}\otimes 1/2_{L}+\frac{\sqrt{\frac{5}{3}}}{2}1_{H} \otimes 3/2_{L}\,,\] \[|\Sigma_{c}\bar{D}^{*}(1/2^{-})\rangle=\frac{1}{2\sqrt{3}}0_{H} \otimes 1/2_{L}+\frac{5}{6}1_{H}\otimes 1/2_{L}-\frac{\sqrt{2}}{3}1_{H} \otimes 3/2_{L}\,,\] \[|\Sigma_{c}\bar{D}^{*}(3/2^{-})\rangle=\frac{1}{\sqrt{3}}0_{H} \otimes 3/2_{L}-\frac{1}{3}1_{H}\otimes 1/2_{L}+\frac{\sqrt{5}}{3}1_{H} \otimes 3/2_{L}\,, \tag{12}\] \[|\Sigma_{c}^{*}\bar{D}^{*}(1/2^{-})\rangle=\sqrt{\frac{2}{3}}0_{H }\otimes 1/2_{L}-\frac{\sqrt{2}}{3}1_{H}\otimes 1/2_{L}-\frac{1}{3}1_{H} \otimes 3/2_{L}\,,\] \[|\Sigma_{c}^{*}\bar{D}^{*}(3/2^{-})\rangle=\frac{\sqrt{\frac{5}{3} }}{2}0_{H}\otimes 3/2_{L}+\frac{\sqrt{5}}{3}1_{H}\otimes 1/2_{L}-\frac{1}{6}1_{H} \otimes 3/2_{L}\,.\]
The total light quark spin \(1/2_{L}\) of the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\) system is given by the coupling of the light quark spins, \(1/2_{1l}\otimes 1_{2l}\). Since the light quark spin of \(\Lambda_{c}\) is \(0\), the total light quark spin \(1/2^{\prime}_{L}\) of the \(\bar{D}^{(*)}\Lambda_{c}\) system is given by \(1/2_{1l}\otimes 0_{2l}\). The decompositions of the \(\bar{D}^{(*)}\Lambda_{c}\) states are written as
\[|\bar{D}\Lambda_{c}(J^{P}=1/2^{-})\rangle=-\frac{1}{2}0_{H}\otimes 1 /2^{\prime}_{L}+\frac{\sqrt{3}}{2}1_{H}\otimes 1/2^{\prime}_{L}\,,\] \[|\bar{D}^{*}\Lambda_{c}(J^{P}=1/2^{-})\rangle=\frac{\sqrt{3}}{2}0_{ H}\otimes 1/2^{\prime}_{L}+\frac{1}{2}1_{H}\otimes 1/2^{\prime}_{L}\,,\] \[|\bar{D}^{*}\Lambda_{c}(J^{P}=3/2^{-})\rangle=1_{H}\otimes 1/2^{ \prime}_{L}\,. \tag{13}\]
In the heavy quark limit, the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\to\bar{D}^{(*)}\Sigma_{c}^{(*)}\) interactions are independent of the spin of the heavy quark, and therefore the potentials can be parameterized by two coupling constants describing the interactions between light-quark configurations of total spin 1/2 and 3/2, respectively, i.e., \(F_{1/2}=\langle 1/2_{L}|V|1/2_{L}\rangle\) and \(F_{3/2}=\langle 3/2_{L}|V|3/2_{L}\rangle\):
\[V_{\Sigma_{c}\bar{D}}(1/2^{-}) = \frac{1}{3}F_{1/2}+\frac{2}{3}F_{3/2}\,, \tag{10}\] \[V_{\Sigma_{c}^{*}\bar{D}}(3/2^{-}) = \frac{1}{3}F_{1/2}+\frac{2}{3}F_{3/2}\,,\] \[V_{\Sigma_{c}\bar{D}^{*}}(1/2^{-}) = \frac{7}{9}F_{1/2}+\frac{2}{9}F_{3/2}\,,\] \[V_{\Sigma_{c}\bar{D}^{*}}(3/2^{-}) = \frac{1}{9}F_{1/2}+\frac{8}{9}F_{3/2}\,,\] \[V_{\Sigma_{c}^{*}\bar{D}^{*}}(1/2^{-}) = \frac{8}{9}F_{1/2}+\frac{1}{9}F_{3/2}\,,\] \[V_{\Sigma_{c}^{*}\bar{D}^{*}}(3/2^{-}) = \frac{5}{9}F_{1/2}+\frac{4}{9}F_{3/2}\,.\]
which can be rewritten as a combination of \(C_{a}\) and \(C_{b}\), i.e., \(F_{1/2}=C_{a}-2C_{b}\) and \(F_{3/2}=C_{a}+C_{b}\)[63].
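As an independent bookkeeping check (ours, not part of the original derivation), the short Python/numpy sketch below encodes the numerical coefficients of Eq. (12), verifies that each recoupling matrix is orthonormal, and confirms that grouping the squared amplitudes by the light-quark spin \(s_{L}\) reproduces the \(F_{1/2}\) and \(F_{3/2}\) coefficients of the contact-range potentials listed above. The state labels are our own shorthand.

```python
import numpy as np

s = np.sqrt
# J^P = 1/2^- states in the basis (0_H x 1/2_L, 1_H x 1/2_L, 1_H x 3/2_L)
half = {
    "Sigma_c Dbar (1/2-)":   np.array([1/2,         1/(2*s(3)),  s(2/3)]),
    "Sigma_c Dbar* (1/2-)":  np.array([1/(2*s(3)),  5/6,        -s(2)/3]),
    "Sigma_c* Dbar* (1/2-)": np.array([s(2/3),     -s(2)/3,     -1/3]),
}
# J^P = 3/2^- states in the basis (0_H x 3/2_L, 1_H x 1/2_L, 1_H x 3/2_L)
three_half = {
    "Sigma_c* Dbar (3/2-)":  np.array([-1/2,        1/s(3),      s(5/3)/2]),
    "Sigma_c Dbar* (3/2-)":  np.array([1/s(3),     -1/3,         s(5)/3]),
    "Sigma_c* Dbar* (3/2-)": np.array([s(5/3)/2,    s(5)/3,     -1/6]),
}

# mask marks which basis entries carry light-quark spin s_L = 1/2
for sector, mask in ((half, np.array([1.0, 1.0, 0.0])),
                     (three_half, np.array([0.0, 1.0, 0.0]))):
    M = np.vstack(list(sector.values()))
    assert np.allclose(M @ M.T, np.eye(3))        # orthonormal decompositions
    for name, row in sector.items():
        c12 = float(np.sum(row**2 * mask))        # coefficient of F_{1/2}
        c32 = float(np.sum(row**2 * (1 - mask)))  # coefficient of F_{3/2}
        print(f"{name}: {c12:.4f} F_1/2 + {c32:.4f} F_3/2")
```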
In the heavy quark limit, the \(\bar{D}^{(*)}\Lambda_{c}\to\bar{D}^{(*)}\Lambda_{c}\) interactions are parameterised by one coupling constant, i.e., \(F^{\prime}_{1/2L}=\langle 1/2^{\prime}_{L}|V|1/2^{\prime}_{L}\rangle\):
\[V_{\bar{D}\Lambda_{c}}(1/2^{-})=V_{\bar{D}^{*}\Lambda_{c}}(1/2^{-})=V_{\bar{D}^{*}\Lambda_{c}}(3/2^{-})=F^{\prime}_{1/2}\,.\]

To estimate the unknown couplings we resort to light-meson saturation, in which \(\sigma\), \(\omega\), and \(\rho\) exchanges saturate the \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\rightarrow\bar{D}^{(*)}\Sigma_{c}^{(*)}\) potentials, while \(\sigma\), \(\omega\) exchanges and \(\rho\) exchange are allowed for the \(\bar{D}^{(*)}\Lambda_{c}\rightarrow\bar{D}^{(*)}\Lambda_{c}\) potentials and \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\rightarrow\bar{D}^{(*)}\Lambda_{c}\) potentials due to isospin symmetry. This gives us [13]
\[C_{a}^{\text{sat}(\sigma)}(\Lambda\sim m_{\sigma}) \propto -\frac{g_{\sigma 1}g_{\sigma 2}}{m_{\sigma}^{2}}\,, \tag{31}\] \[C_{a}^{\text{sat}(V)}(\Lambda\sim m_{\rho}) \propto \frac{g_{V1}g_{V2}}{m_{V}^{2}}\left(1+\vec{\tau}_{1}\cdot\vec{T}_ {2}\right),\] (32) \[C_{b}^{\text{sat}(V)}(\Lambda\sim m_{\rho}) \propto \frac{f_{V1}f_{V2}}{6M^{2}}(1+\vec{\tau}_{1}\cdot\vec{T}_{2})\,,\] (33) \[C_{a}^{\text{sat}(\sigma)\,\prime}(\Lambda\sim m_{\sigma}) \propto -\frac{g_{\sigma 1}g_{\sigma 3}}{m_{\sigma}^{2}}\,,\] (34) \[C_{a}^{\text{sat}(V)\,\prime}(\Lambda\sim m_{\omega}) \propto \frac{g_{V1}g_{V3}}{m_{V}^{2}}\,,\] (35) \[C_{b}^{\text{sat}(V)\prime}(\Lambda\sim m_{\rho}) \propto \frac{f_{V1}f_{V3}}{6M^{2}}(\vec{\tau}_{1}\cdot\vec{t}_{2})\,. \tag{36}\]
where \(V=\rho,\omega\) and we have made the simplification that \(m_{\rho}=m_{\omega}=m_{V}\). The proportionality constant is unknown and depends on the details of the renormalization process. In this work, we assume that these proportionality constants are the same. The \(g_{\sigma_{1}}\), \(g_{\sigma_{2}}\), and \(g_{\sigma_{3}}\) denote the couplings of the \(\bar{D}^{(*)}\) mesons, \(\Sigma_{c}^{(*)}\) baryons, and \(\Lambda_{c}\) baryon to the sigma meson, and \(g_{v1}\), \(g_{v2}\), and \(g_{v3}\) (\(f_{v1}\), \(f_{v2}\), and \(f_{v3}\) ) denote the electric-type (magnetic-type) couplings between the \(\bar{D}^{(*)}\) mesons, \(\Sigma_{c}^{(*)}\) baryons, and \(\Lambda_{c}\) baryon and a light vector meson. \(M\) is a mass scale to render \(f_{v}\) dimensionless. Following Refs. [13; 108], we tabulate the values of these couplings in Table 1. The \(\vec{\tau}_{1}\cdot\vec{T}_{2}\) and \(\vec{\tau}_{1}\cdot\vec{t}_{2}\) are the isospin factors of \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\rightarrow\bar{D}^{(*)}\Sigma_{c}^{(*)}\) potentials and \(\bar{D}^{(*)}\Sigma_{c}^{(*)}\rightarrow\bar{D}^{(*)}\Lambda_{c}\) potentials, which are \(\vec{\tau}_{1}\cdot\vec{T}_{2}=-2\) and \(\vec{\tau}_{1}\cdot\vec{t}_{2}=-\sqrt{3}\) for the total isospin \(I=1/2\).
###### Acknowledgements.
We are grateful to Eulogio Oset, Fu-Sheng Yu, Chu-Wen Xiao, Jun-Xu Lu, and Qi Wu for useful discussions. This work is supported in part by the National Natural Science Foundation of China under Grants No.11975041 and No.11961141004. Ming-Zhu Liu acknowledges support from the National Natural Science Foundation of China under Grant No.12105007.
|
2309.06025 | Classification of separable hypersurfaces with constant sectional
curvature | In this paper, we give a full classification of the separable hypersurfaces
of constant sectional curvature in the Euclidean $n$-space $\mathbb{R}^n$. In
dimension $n=3$, this classification was solved by Hasanis and L\'opez
[Manuscripta Math. 166, 403-417 (2021)]. When $n>3$, we prove that the
separable hypersurfaces of null sectional curvature are three particular
families of such hypersurfaces. Finally, we prove that hyperspheres are the
only separable hypersurfaces with nonzero constant sectional curvature. | Muhittin Evren Aydin, Rafael Lopez, Gabriel-Eduard Vilcu | 2023-09-12T07:52:11Z | http://arxiv.org/abs/2309.06025v1 | # Classification of separable hypersurfaces with constant sectional curvature
###### Abstract.
In this paper, we give a full classification of the separable hypersurfaces of constant sectional curvature in the Euclidean \(n\)-space \(\mathbb{R}^{n}\). In dimension \(n=3\), this classification was solved by Hasanis and Lopez [Manuscripta Math. 166, 403-417 (2021)]. When \(n>3\), we prove that the separable hypersurfaces of null sectional curvature are three particular families of such hypersurfaces. Finally, we prove that hyperspheres are the only separable hypersurfaces with nonzero constant sectional curvature.
Key words and phrases: separable hypersurface, translation hypersurface, homothetical hypersurface, sectional curvature, Gaussian curvature. 2020 Mathematics Subject Classification: 53A07.
## 1. Introduction
The study of submanifolds of constant sectional curvature in different ambient spaces is one of the important topics in submanifold theory, originating in the investigation of surfaces with constant Gaussian curvature in the Euclidean 3-space (for more details see [13]). When the ambient is a space form, explicit examples of such submanifolds can be constructed using a very useful tool developed in [9] under the name of _Ribaucour transformation_. Moreover, in this setting of an ambient space form, the geometry of hypersurfaces having constant sectional curvature is well understood (see [23]). On the other hand, if the ambient is not a space form, obtaining classification results for submanifolds of constant sectional curvature with arbitrary codimensions is a very challenging question. However, even in this context, some classification results can be obtained using a technique known as _Tsinghua principle_, originally discovered by Li, Vrancken and Wang at Tsinghua University in 2013. Using this interesting principle, the nonexistence of locally conformally flat real hypersurfaces in the complex quadric with dimension \(\geq 3\) was demonstrated in [42]. We would like to emphasize that many interesting results concerning the sectional curvature of hypersurfaces in various ambient spaces were obtained in the last decades (see, e.g., [2, 7, 8, 22, 29, 35]). Notice that the latest result was established in [15], where the authors obtained the complete classification of hypersurfaces of constant isotropic curvature in a real space form, under some topological assumptions: completeness, connectedness, orientability.

Recall that a hypersurface of the Euclidean \(n\)-space \(\mathbb{R}^{n}\) is called _separable_ if it can be given implicitly by an equation of the form \(f_{1}(x_{1})+...+f_{n}(x_{n})=0\), where each \(f_{i}\) is a smooth function of a single variable. Two familiar subfamilies are the _translation hypersurfaces_, given as graphs \(x_{n}=f_{1}(x_{1})+...+f_{n-1}(x_{n-1})\), and the _homothetical hypersurfaces_, given as graphs \(x_{n}=\prod_{i=1}^{n-1}f_{i}(x_{i})\).
It is clear that any translation hypersurface is a particular type of separable hypersurface. Moreover, taking logarithms in the equation of a homothetical hypersurface, namely \(x_{n}=\prod_{i=1}^{n-1}f_{i}(x_{i})\), we obtain immediately that this equation reduces to \(\log x_{n}=\sum_{i=1}^{n-1}\log f_{i}(x_{i})\), and this shows us that the hypersurface is separable. It is quite interesting that the family of separable hypersurfaces includes not only translation and homothetical hypersurfaces as particular subfamilies, but also some more general sets of graph hypersurfaces, namely quasi-sum and quasi-product hypersurfaces, which are of particular interest in production theory [12, 19, 28]. We would like to point out that the classification of the separable surfaces with constant Gaussian curvature is done in [17] (see [14] for a particular case), while the classification of the separable hypersurfaces with zero Gauss-Kronecker curvature is done in [6]. On the other hand, the classification of the separable surfaces with non-zero constant mean curvature is done in [16], while in Lorentz-Minkowski space, separable minimal surfaces were classified in [21]. Moreover, other classification results for the quasi-sum and quasi-product production models via the main curvature invariants of the corresponding hypersurfaces were obtained in [1, 3, 12, 19, 28].
Motivated by the previously mentioned articles, we investigate the problem of finding the separable hypersurfaces of constant sectional curvature in the Euclidean \(n\)-space. In the case \(n=3\), since the problem is equivalent to find such surfaces of constant Gaussian curvature and was completely solved in [17], we are interested in the dimension \(n>3\). In Sect. 3 we first classify the separable hypersurfaces of null sectional curvature, obtaining that there are three types of such hypersurfaces, namely: the hyperplanes, a particular type of Cobb-Douglas hypersurface, and the product \(\Gamma\times\mathbb{R}^{n-2}\), where \(\Gamma\) is a curve with non-null curvature included in a coordinate 2-plane of \(\mathbb{R}^{n}\). The case of non-zero constant sectional curvature is studied in Sect. 4. We prove that hyperspheres are the only separable hypersurfaces of nonzero constant sectional curvature in the Euclidean \(n\)-space, provided that \(n>3\).
## 2. Preliminaries
In this section we summarize the differential-geometrical properties of the hypersurfaces in the Euclidean ambient space, cf. [4, 11].
Let \((\mathbb{R}^{n},\langle\cdot,\cdot\rangle)\) be the Euclidean \(n\)-space and \(\tilde{\nabla}\) the Levi-Civita connection on \(\mathbb{R}^{n}\). We denote by \(\mathbb{S}^{n-1}=\{\mathbf{x}\in\mathbb{R}^{n}:\langle\mathbf{x},\mathbf{x}\rangle=1\}\) the unit hypersphere of \(\mathbb{R}^{n}\). Let \(M^{n-1}\) be an orientable hypersurface of \(\mathbb{R}^{n}\) and denote by \(\nu\) the _Gauss map_ of \(M^{n-1}\), i.e. \(\nu:M^{n-1}\to\mathbb{S}^{n-1}\) is such that \(\nu(p)\) is a unit normal vector \(N(p)\) to \(M^{n-1}\) at \(p\in M^{n-1}\).
Let \(T_{p}M^{n-1}\) be the tangent space of \(M^{n-1}\) at \(p\in M^{n-1}\). Then, the differential \(d\nu\) is called the _shape operator_ of \(M^{n-1}\) where \(d\nu_{p}\) is an endomorphism on \(T_{p}M^{n-1}\), i.e. \(d\nu_{p}:T_{p}M^{n-1}\to T_{p}M^{n-1}\) is a linear map. An important intrinsic invariant called _Gauss-Kronecker curvature_ at \(p\in M^{n-1}\) is defined as \(\det(d\nu_{p})\).
We define the _second fundamental form_ II of \(M^{n-1}\) as a symmetric and bilinear map given by
\[\text{II}(X_{p},Y_{p})=\langle-d\nu(X_{p}),Y_{p}\rangle,\quad X_{p},Y_{p}\in T _{p}M^{n-1}.\]
Setting \(h(X_{p},Y_{p})=\text{II}(X_{p},Y_{p})N(p)\), the _formula of Gauss_ is now
\[\tilde{\nabla}_{X}Y=\nabla_{X}Y+h(X,Y),\]
where \(\nabla\) is the induced Levi-Civita connection on \(M^{n-1}\). Let \(\Pi\) be a plane section of \(T_{p}M^{n-1}\) spanned by an orthonormal basis \(\{X_{p},Y_{p}\}\). Then, we define the _sectional curvature_ of \(\Pi\) as
\[K(X_{p},Y_{p})=\langle R(X,Y)Y,X\rangle(p),\]
where \(R\) is the Riemannian curvature tensor of \(M^{n-1}\) given by
\[R(X,Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z,\]
for the smooth vector fields \(X,Y,Z\) tangent to \(M^{n-1}\).
We call \(M^{n-1}\) a _flat hypersurface_ if \(R\) is identically \(0\). It is clear from the definition of sectional curvature \(K\) that \(R\) determines \(K\). On the other hand, it is known that \(K\) determines \(R\) (see [33, p. 78]). Actually, if \(K(\Pi)=0\) for every plane section \(\Pi\) of \(T_{p}M^{n-1}\), then it follows that \(R(X_{p},Y_{p})Z_{p}=0\) for every \(X_{p},Y_{p},Z_{p}\in T_{p}M^{n-1}\) (see [33, Proposition 41]).
By the _equation of Gauss_, we have
\[K(X_{p},Y_{p})=\text{II}(X_{p},X_{p})\text{II}(Y_{p},Y_{p})-\text{II}(X_{p},Y_{p})^{2}.\]
Assume now that \(M^{n-1}\) is a hypersurface given in implicit form. Explicitly, if \(F(x_{1},...,x_{n})\) is a smooth real valued function on \(\mathbb{R}^{n}\) and if \(\text{grad}F\) denotes the gradient of \(F\) in \(\mathbb{R}^{n}\) then we have
\[M^{n-1}=\{(x_{1},...,x_{n})\in\mathbb{R}^{n}:F(x_{1},...,x_{n})=0,\text{grad}F \neq 0\}.\]
The unit normal vector field is
\[N=\frac{\text{grad}F}{\|\text{grad}F\|},\]
where \(\|\text{grad}F\|\) is the Euclidean norm of \(\text{grad}F\). In addition, the sectional curvature is
\[K(X,Y)=\frac{1}{\|\text{grad}F\|^{2}}\left(H^{F}(X,X)H^{F}(Y,Y)-(H^{F}(X,Y))^{2 }\right),\]
where \(H^{F}\) is the _Hessian_ of \(F\) in \(M^{n-1}\) defined by
\[H^{F}(X,Y)=\langle\nabla_{X}\,\text{grad}F,Y\rangle.\]
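To illustrate the formula on a concrete example (our own sanity check, not part of the paper), the following sympy sketch evaluates it for the hypersphere \(x_{1}^{2}+...+x_{4}^{2}=R^{2}\) in \(\mathbb{R}^{4}\) with a pair of orthonormal tangent vectors, recovering the expected sectional curvature \(1/R^{2}\).

```python
import sympy as sp

R = sp.symbols('R', positive=True)
x = sp.symbols('x1:5')                          # coordinates of R^4
F = sum(xi**2 for xi in x) - R**2               # hypersphere of radius R
grad = sp.Matrix([sp.diff(F, xi) for xi in x])
hess = sp.hessian(F, x)

p = {x[0]: R, x[1]: 0, x[2]: 0, x[3]: 0}        # a point on the hypersphere
X = sp.Matrix([0, 1, 0, 0])                     # orthonormal tangent vectors at p
Y = sp.Matrix([0, 0, 1, 0])

grad_p, hess_p = grad.subs(p), hess.subs(p)
H = lambda a, b: (a.T * hess_p * b)[0]          # Hessian H^F(X, Y)
K = (H(X, X) * H(Y, Y) - H(X, Y)**2) / grad_p.dot(grad_p)
assert sp.simplify(K - 1 / R**2) == 0           # sectional curvature 1/R^2
```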
At the end of this section, we recall the useful result from [6, Lemma 1] (see also [17, Lemma 1]).
**Lemma 2.1**.: _[_6, 17_]_ _Let \(Q(u_{1},...,u_{n})\) be a smooth function in a domain \(\Omega\subset\mathbb{R}^{n}\) and \(\Pi\) a hyperplane of the form \(u_{1}+...+u_{n}=0\). If \(Q=0\) on the intersection \(\Omega\cap\Pi\), then_
\[\frac{\partial Q}{\partial u_{1}}=...=\frac{\partial Q}{\partial u_{n}}.\]
## 3. Flat separable hypersurfaces
In this section, we classify the flat separable hypersurfaces \(M^{n-1}\) of \(\mathbb{R}^{n}\). But in view of the fact that \(M^{n-1}\) is flat if and only if the sectional curvature function \(K\) is identically zero, it follows that the problem we want to study is equivalent to finding all separable hypersurfaces of null sectional curvature.
To get the classification, we will first deduce a statement related to the sectional curvature of \(M^{n-1}\). Let \((x_{1},...,x_{n})\) be the canonical coordinates of \(\mathbb{R}^{n}\) and \(\{\partial/\partial x_{1},...,\partial/\partial x_{n}\}\) the coordinate vector fields. Consider a separable hypersurface \(M^{n-1}\) defined by the implicit equation
\[f_{1}(x_{1})+...+f_{n}(x_{n})=0,\quad x_{k}\in I_{k}\subset\mathbb{R},\quad k =1,...,n. \tag{3}\]
Write \(f_{k}^{\prime}=df_{k}/dx_{k}\), \(f_{k}^{\prime\prime}=d^{2}f_{k}/dx_{k}^{2}\), and so on. Then, the unit normal vector field is
\[N=\frac{1}{\sqrt{\sum_{k=1}^{n}{f_{k}^{\prime}}^{2}}}(f_{1}^{\prime},...,f_{n }^{\prime}).\]
By the regularity, we may assume that at least one of \(f_{1},...,f_{n}\) is not constant. Without loss of generality, we may assume \(f_{n}^{\prime}(x_{n})\neq 0\), for each \(x_{n}\in I_{n}\). We keep this assumption throughout the paper. Then, a basis of the tangent space \(T_{p}M^{n-1}\) at some \(p\in M^{n-1}\) is \(\{X_{1}(p),...,X_{n-1}(p)\}\), where
\[X_{i}=\frac{\partial}{\partial x_{i}}-\left(\frac{f_{i}^{\prime}}{f_{n}^{ \prime}}\right)\frac{\partial}{\partial x_{n}},\quad i=1,...,n-1.\]
The covariant differentiation of \(N\) is
\[\tilde{\nabla}_{X_{i}}N=\frac{1}{\sqrt{\sum_{k=1}^{n}{f_{k}^{\prime}}^{2}}} \left((0,...,f_{i}^{\prime\prime},...0)-\frac{f_{i}^{\prime}}{f_{n}^{\prime}} (0,0,...,f_{n}^{\prime\prime})\right)+\text{normal component},\]
and so, for every \(i,j\in\{1,...,n-1\},i\neq j\),
\[\langle\tilde{\nabla}_{X_{i}}N,X_{i}\rangle=\frac{f_{i}^{\prime\prime}+f_{i}^ {{}^{\prime}2}f_{n}^{\prime\prime}/f_{n}^{{}^{\prime}2}}{\sqrt{\sum_{k=1}^{n} {f_{k}^{\prime}}^{2}}},\quad\langle\tilde{\nabla}_{X_{i}}N,X_{j}\rangle=\frac{ f_{i}^{\prime}f_{j}^{\prime}f_{n}^{\prime\prime}/f_{n}^{{}^{\prime}2}}{ \sqrt{\sum_{k=1}^{n}{f_{k}^{\prime}}^{2}}}.\]
In addition, for every \(i,j\in\{1,...,n-1\},i\neq j\),
\[\langle X_{i},X_{i}\rangle=1+\left(\frac{f_{i}^{\prime}}{f_{n}^{\prime}} \right)^{2},\quad\langle X_{i},X_{j}\rangle=\frac{f_{i}^{\prime}f_{j}^{\prime }}{f_{n}^{{}^{\prime}2}}.\]
Now, let \(K(X_{i},X_{j})\) be the curvature of the plane section spanned by \(\{X_{i},X_{j}\}\), for \(i,j\in\{1,...,n-1\}\) and \(i<j\). A direct calculation yields
\[K(X_{i},X_{j})=\frac{f_{i}^{\prime 2}f_{j}^{\prime\prime}f_{n}^{\prime\prime}+f_{j}^{\prime 2}f_{i}^{\prime\prime}f_{n}^{\prime\prime}+f_{n}^{\prime 2}f_{i}^{\prime\prime}f_{j}^{\prime\prime}}{\left(\sum_{k=1}^{n}f_{k}^{\prime 2}\right)\left(f_{i}^{\prime 2}+f_{j}^{\prime 2}+f_{n}^{\prime 2}\right)},\quad i,j\in\{1,...,n-1\},\quad i<j. \tag{4}\]
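Equation (4) can also be verified symbolically. In the sketch below (ours; the symbols \(a_{k}\) and \(b_{k}\) stand for \(f_{k}^{\prime}\) and \(f_{k}^{\prime\prime}\), and \(n=4\) is taken for concreteness), the Gauss-equation quotient built from the second fundamental form and the induced metric in the non-orthonormal basis \(\{X_{i},X_{j}\}\) simplifies exactly to the right-hand side of Eq. (4).

```python
import sympy as sp

fp = sp.symbols('a1:5', positive=True)     # stand-ins for f_1', ..., f_4'
fpp = sp.symbols('b1:5')                   # stand-ins for f_1'', ..., f_4''
i, j, n = 0, 1, 3                          # the indices i, j and n
g2 = sum(a**2 for a in fp)                 # |grad F|^2

# second fundamental form and induced metric in the basis {X_i, X_j}
II_ii = (fpp[i] + fp[i]**2 * fpp[n] / fp[n]**2) / sp.sqrt(g2)
II_jj = (fpp[j] + fp[j]**2 * fpp[n] / fp[n]**2) / sp.sqrt(g2)
II_ij = fp[i] * fp[j] * fpp[n] / fp[n]**2 / sp.sqrt(g2)
g_ii = 1 + (fp[i] / fp[n])**2
g_jj = 1 + (fp[j] / fp[n])**2
g_ij = fp[i] * fp[j] / fp[n]**2

K = (II_ii * II_jj - II_ij**2) / (g_ii * g_jj - g_ij**2)
K_eq4 = (fp[i]**2 * fpp[j] * fpp[n] + fp[j]**2 * fpp[i] * fpp[n]
         + fp[n]**2 * fpp[i] * fpp[j]) / (g2 * (fp[i]**2 + fp[j]**2 + fp[n]**2))
assert sp.simplify(K - K_eq4) == 0
```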
The following result completely classifies the separable hypersurfaces \(M^{n-1}\) of null sectional curvature under the condition \(n>3\).
**Theorem 3.1**.: _A separable hypersurface \(M^{n-1}\) in \(\mathbb{R}^{n}\)\((n>3)\) of null sectional curvature is congruent to one of the following three hypersurfaces:_
1. _a hyperplane,_
2. \(\Gamma\times\mathbb{R}^{n-2}\)_, where_ \(\Gamma\) _is a curve with non-null curvature included in a coordinate_ \(2-\)_plane of_ \(\mathbb{R}^{n}\)_,_
3. \(x_{n}=A\sqrt{x_{1}...x_{n-1}}\)_, where_ \(A\) _is some positive constant._
Proof.: By the assumption, we have
\[f_{i}^{{}^{\prime}2}f_{j}^{\prime\prime}f_{n}^{\prime\prime}+f_{j}^{{}^{\prime} 2}f_{i}^{\prime\prime}f_{n}^{\prime\prime}+f_{n}^{{}^{\prime}2}f_{i}^{\prime \prime}f_{j}^{\prime\prime}=0,\quad\text{for every $i,j\in\{1,...,n-1\},i<j.$} \tag{5}\]
A trivial solution to Equation (5) is obtained when each of \(f_{1},...,f_{n}\) is an affine function. Obviously, this implies that \(M^{n-1}\) is a hyperplane and we obtain item (i) of Theorem 3.1. In addition, as can be seen in Equation (5), independently of the values of \(i\) and \(j\), the terms \(f_{n}^{\prime 2}\) and \(f_{n}^{\prime\prime}\) appear in each equation. So, we separate the investigation into two cases:
**Case 1.** Assume that \(f_{n}\) is an affine function, say \(f_{n}(x_{n})=\lambda_{n}x_{n}+\mu_{n}\), \(\lambda_{n},\mu_{n}\in\mathbb{R}\). Here, due to the assumption above, we have \(\lambda_{n}\neq 0\). Equation (5) is now
\[\lambda_{n}^{2}f_{i}^{\prime\prime}f_{j}^{\prime\prime}=0,\quad\text{for every $i,j\in\{1,...,n-1\},i<j,$}\]
yielding that \(f_{i}\) or \(f_{j}\) is an affine function. Without loss of generality, we may take \(f_{j}(x_{j})=\lambda_{j}x_{j}+\mu_{j}\), \(\lambda_{j},\mu_{j}\in\mathbb{R}\). Since \(\lambda_{n}\neq 0\), we write
\[x_{n}=\frac{-1}{\lambda_{n}}(f_{1}(x_{1})+\lambda_{2}x_{2}+...+\lambda_{n-1}x_ {n-1}+\alpha),\quad\alpha=\mu_{2}+...+\mu_{n},\]
which is indeed a cylindrical hypersurface with parametrization
\[(x_{1},...,x_{n-1})\mapsto\mathbf{x}(x_{1},...,x_{n-1})\] \[=(x_{1},...,\tfrac{-1}{\lambda_{n}}(f_{1}(x_{1})+\alpha))+x_{2}(0,1,...,\tfrac{-\lambda_{2}}{\lambda_{n}})+...+x_{n-1}(0,...,1,\tfrac{-\lambda_ {n-1}}{\lambda_{n}}).\]
So, we have proved the item (ii) of Theorem 3.1. In addition, according to the result of Seo (see [36, Theorem 1.2]), this is also a translation hypersurface with null Gauss-Kronecker curvature.
**Case 2.** Assume that \(f_{n}^{\prime\prime}\neq 0\) for every \(x_{n}\in I_{n}\). It is important to point out that the roles of \(f_{i}\) and \(f_{j}\) are symmetric, that is, if one case is valid for \(f_{i}\) then so is \(f_{j}\). Hence, it is sufficient to discuss the cases relating to \(f_{i}\).
If \(f_{i}\) is an affine function, then again we arrive at item (ii) of Theorem 3.1. Suppose now that \(f_{i}^{\prime\prime}\neq 0\) for every \(x_{i}\in I_{i}\). By the symmetry, the same assumption also holds for \(f_{j}\). Therefore, in the rest of Case 2, we will assume that
\[\prod_{k=1}^{n}f_{k}^{\prime\prime}\neq 0,\quad\text{for every }x_{k}\in I_{k}.\]
From now on, we will use notations and arguments analogous to those given in [6, 17] and [32, p. 71]. For this, set \(u_{k}=f_{k}(x_{k})\) (\(k=1,...,n\)) such that \(u_{1}+...+u_{n}=0\). Next, we introduce
\[X_{k}(u_{k})=f_{k}^{{}^{\prime}}(x_{k})^{2},\quad k=1,...,n\]
or, by the chain formula,
\[X_{k}^{\prime}(u_{k})=2f_{k}^{\prime\prime}(x_{k}),\quad k=1,...,n.\]
With these new notations, Equation (5) is now
\[X_{i}X_{j}^{\prime}X_{n}^{\prime}+X_{i}^{\prime}X_{j}X_{n}^{\prime}+X_{i}^{ \prime}X_{j}^{\prime}X_{n}=0,\quad\text{for every }i,j\in\{1,...,n-1\},\quad i<j, \tag{6}\]
and for every \((u_{1},...,u_{n})\) satisfying \(u_{1}+...+u_{n}=0\). We may rewrite (6) as
\[\frac{X_{i}}{X_{i}^{\prime}}+\frac{X_{j}}{X_{j}^{\prime}}+\frac{X_{n}}{X_{n}^ {\prime}}=0,\quad\text{for every }i,j\in\{1,...,n-1\},\quad i<j. \tag{7}\]
Considering Lemma 2.1 and then differentiating Equation (7) with respect to \(u_{i},u_{j},u_{n}\), we may deduce
\[\left(\frac{X_{i}}{X_{i}^{\prime}}\right)^{\prime}=\left(\frac{X_{j}}{X_{j}^ {\prime}}\right)^{\prime}=\left(\frac{X_{n}}{X_{n}^{\prime}}\right)^{\prime}= \alpha,\quad\text{for every }i,j\in\{1,...,n-1\},i<j,\quad\alpha\in\mathbb{R}. \tag{8}\]
We have to distinguish two subcases:
**Subcase 2.1.** \(\alpha=0\). This implies the existence of nonzero constants \(\lambda_{1},...,\lambda_{n}\) such that
\[X_{k}^{\prime}=\frac{2}{\lambda_{k}}X_{k},\quad\text{for every }k=1,...,n.\]
In terms of the previous notations, we have
\[f_{k}^{\prime\prime}=\frac{1}{\lambda_{k}}f_{k}^{\prime 2},\quad\text{for every }k=1,...,n, \tag{9}\]
where, due to Equations (5) or (7),
\[\lambda_{i}+\lambda_{j}+\lambda_{n}=0,\quad\text{for every }i,j\in\{1,...,n-1\},i<j. \tag{10}\]
By Equation (10), we have the following system:
\[\begin{array}{l}\lambda_{1}+\lambda_{2}+\lambda_{n}=0,\\ \lambda_{1}+\lambda_{3}+\lambda_{n}=0,\\...\\ \lambda_{n-1}+\lambda_{n-2}+\lambda_{n}=0.\end{array} \tag{11}\]
Simplifying in terms of \(\lambda_{n}\),
\[(n-2)(\lambda_{1}+...+\lambda_{n-1})+\frac{(n-1)(n-2)}{2}\lambda_{n}=0,\]
or equivalently,
\[\lambda_{n}=\frac{-2}{n-1}(\lambda_{1}+...+\lambda_{n-1}). \tag{12}\]
By the system (11) we may conclude \(\lambda_{i}=\lambda_{j}\), for every \(i,j\in\{1,...,n-1\}\) and \(i<j\). Set \(\lambda_{i}=\lambda\) for every \(i=1,...,n-1\). Hence, Equation (12) implies \(\lambda_{n}=-2\lambda\).
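For the reader's convenience, this conclusion can be checked with a few lines of sympy (ours, with \(n=5\) chosen for illustration): every solution of the system (11) satisfies \(\lambda_{1}=...=\lambda_{n-1}\) and \(\lambda_{n}=-2\lambda_{1}\).

```python
import sympy as sp

lam = sp.symbols('l1:6')                       # lambda_1, ..., lambda_5 (n = 5)
eqs = [lam[i] + lam[j] + lam[-1]               # Eq. (11): lambda_i + lambda_j + lambda_n = 0
       for i in range(4) for j in range(i + 1, 4)]
(sol,) = sp.linsolve(eqs, *lam)
assert all(sp.simplify(sol[k] - sol[0]) == 0 for k in range(4))   # lambda_1 = ... = lambda_4
assert sp.simplify(sol[4] + 2 * sol[0]) == 0                      # lambda_5 = -2*lambda_1
```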
The solutions to Equation (9) are
\[\begin{array}{l}f_{i}(x_{i})=-\lambda\log(x_{i}+\mu_{i})+\beta_{i},\quad\mu_ {i},\beta_{i}\in\mathbb{R},\\ f_{n}(x_{n})=2\lambda\log(x_{n}+\mu_{n})+\beta_{n},\quad\mu_{n},\beta_{n}\in \mathbb{R},\end{array}\]
for every \(i=1,...,n-1\). Hence, Equation (3) is now
\[x_{n}+\mu_{n}=A\sqrt{(x_{1}+\mu_{1})...(x_{n-1}+\mu_{n-1})},\]
where
\[A=e^{-(\beta_{1}+...+\beta_{n})/2\lambda}.\]
Up to suitable translations of \(x_{1},...,x_{n}\), the statement of the item (iii) of Theorem 3.1 is proved.
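As a quick sanity check (ours, not from the paper), one can verify directly that the solution of item (iii) annihilates the left-hand side of Equation (5); the translations \(\mu_{k}\) and the constants \(\beta_{k}\) are set to zero below, since they do not affect the derivatives involved.

```python
import sympy as sp

lam = sp.symbols('lambda', nonzero=True)
xi, xj, xn = sp.symbols('x_i x_j x_n', positive=True)
fi, fj, fn = -lam*sp.log(xi), -lam*sp.log(xj), 2*lam*sp.log(xn)   # item (iii)

d = lambda h, v, k=1: sp.diff(h, v, k)
lhs5 = (d(fi, xi)**2 * d(fj, xj, 2) * d(fn, xn, 2)
        + d(fj, xj)**2 * d(fi, xi, 2) * d(fn, xn, 2)
        + d(fn, xn)**2 * d(fi, xi, 2) * d(fj, xj, 2))             # Eq. (5)
assert sp.simplify(lhs5) == 0
```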
**Subcase 2.2.**\(\alpha\neq 0\). So, we can take \(1/(2\alpha)\) in Equation (8) instead of \(\alpha\). The integration in Equation (8) gives
\[\frac{X_{k}}{X_{k}^{\prime}}=\frac{u_{k}+\mu_{k}}{2\alpha},\quad k=1,...,n, \tag{13}\]
where \(\mu_{1},...,\mu_{n}\in\mathbb{R}\) and, due to Equation (7),
\[u_{i}+u_{j}+u_{n}+\mu_{i}+\mu_{j}+\mu_{n}=0,\quad\text{for every $i,j\in\{1,...,n-1\}$},i<j.\]
Hence, we have
\[\begin{array}{l}u_{1}+u_{2}+u_{n}+\mu_{1}+\mu_{2}+\mu_{n}=0,\\ u_{1}+u_{3}+u_{n}+\mu_{1}+\mu_{3}+\mu_{n}=0,\\...\\ u_{n-1}+u_{n-2}+u_{n}+\mu_{n-1}+\mu_{n-2}+\mu_{n}=0.\end{array}\]
Summing the above relations, we get
\[(n-2)(u_{1}+...+u_{n-1}+\mu_{1}+...+\mu_{n-1})+\frac{(n-2)(n-1)}{2}(u_{n}+\mu_ {n})=0.\]
Since \(u_{1}+...+u_{n}=0\), the above equation writes as
\[\mu_{1}+...+\mu_{n}+\frac{n-3}{2}(u_{n}+\mu_{n})=0. \tag{14}\]
Now, taking into account that \(u_{n}=f_{n}(x_{n})\) is not constant, it follows that \(n\) must be \(3\), which is not our case.
**Remark 3.2**.: We point out that the hypersurface appearing in Theorem 3.1, item (iii), is a particular type of Cobb-Douglas hypersurface. Recall that a hypersurface \(M^{n-1}\) of \(\mathbb{R}^{n}\) is said to be a _Cobb-Douglas hypersurface_ if \(M^{n-1}\) is the graph of the function \(F\) defined by \(F(x_{1},...,x_{n-1})=A\prod_{i=1}^{n-1}x_{i}^{\alpha_{i}}\), where \(A,\alpha_{1},...,\alpha_{n-1}\) are positive constants (see, e.g., [39]). Hence it is clear that the hypersurface appearing in Theorem 3.1, item (iii), is nothing but a Cobb-Douglas hypersurface with \(\alpha_{1}=...=\alpha_{n-1}=\frac{1}{2}\).
**Remark 3.3**.: From Theorem 3.1 we can also obtain, by particularizing the type of separable hypersurface, the classifications of translation, factorable, quasi-sum and quasi-product hypersurfaces in \(\mathbb{R}^{n}\) with vanishing sectional curvature, recovering in particular some known results both in differential geometry and microeconomics. For example, if \(M^{n-1}\) is a quasi-product hypersurface, then Theorem 3.1 reduces to [19, Theorem 3.3], while if \(M^{n-1}\) is a factorable hypersurface, then Theorem 3.1 reduces to [1, Corollary 4.2 (viii)]. We also note that if \(M^{n-1}\) is a quasi-sum production hypersurface, then Theorem 3.1 leads to a generalization of the classification established in [38, Theorem 1.1 (iv\({}_{3}\))] under an additional hypothesis of interest in production theory, namely the so-called _proportional marginal rate of substitution_ property (for more details see [38]).
## 4. The case of non-zero constant sectional curvature
In this section, we will assume that \(K(\Pi)\) is a nonzero constant, say \(K(\Pi)=K_{0}/4\), \(K_{0}\neq 0\), for every plane section \(\Pi\) of \(T_{p}M^{n-1}\) at some point \(p\in M^{n-1}\). Hence, Equation (4) writes as
\[\frac{K_{0}}{4}=\frac{f_{i}^{{}^{\prime}2}f_{j}^{\prime\prime}f_{n}^{\prime \prime}+f_{j}^{{}^{\prime}2}f_{i}^{\prime\prime}f_{n}^{\prime\prime}+f_{n}^{{}^ {\prime}2}f_{i}^{\prime\prime}f_{j}^{\prime\prime}}{\left(\sum_{k=1}^{n}f_{k}^ {{}^{\prime}2}\right)\left(f_{i}^{{}^{\prime}2}+f_{j}^{{}^{\prime}2}+f_{n}^{{} ^{\prime}2}\right)},\quad\text{ for every }i,j\in\{1,...,n-1\},i<j. \tag{15}\]
Obviously, none of \(f_{1}^{\prime},...,f_{n}^{\prime}\) is \(0\) because otherwise we would have the contradiction \(K_{0}=0\). So, in terms of the notations that we have used in the previous section, we may rewrite Equation (15) as
\[K_{0}(X_{i}+X_{j}+X_{n})\sum_{k=1}^{n}X_{k}=X_{i}X_{j}^{\prime}X_{n}^{\prime}+ X_{j}X_{i}^{\prime}X_{n}^{\prime}+X_{n}X_{i}^{\prime}X_{j}^{\prime}, \tag{16}\]
for every \(i,j\in\{1,...,n-1\},i<j\). Here we recall that \(X_{k}\neq 0\) for every \(k=1,...,n\).
In order to give the main result of this section, we need to prove some lemmas which we will use later. But, as a preliminary step, we want to distinguish the situation \(X^{\prime}_{k}(u_{k})=4\lambda\), for every \(k=1,...,n\) and \(\lambda\in\mathbb{R}\). Obviously, the case \(\lambda=0\), in which \(M^{n-1}\) is a hyperplane, is not of interest here. Otherwise, i.e. \(\lambda\neq 0\), we have
\[f_{k}(x_{k})=\lambda(x_{k}+\mu_{k})^{2}+\beta_{k},\quad\text{for every $k=1,...,n$},\]
or, by Equation (3),
\[\sum_{k=1}^{n}(x_{k}+\mu_{k})^{2}=-\frac{1}{\lambda}\sum_{k=1}^{n}\beta_{k}.\]
Since \(M^{n-1}\) is a hypersurface, the right-hand side is a positive real number and so it is a hypersphere, which is known to be a hypersurface of constant sectional curvature.
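The hypersphere case can also be checked against Equation (15) directly. The sympy sketch below (ours, with \(n=4\), \(\lambda=1\) and \(\mu_{k}=0\)) confirms that on the hypersphere \(\sum_{k}x_{k}^{2}=R^{2}\) the right-hand side of Eq. (15) equals \(1/R^{2}\), i.e. \(K_{0}/4=1/R^{2}\), as expected for a hypersphere of radius \(R\).

```python
import sympy as sp

R = sp.symbols('R', positive=True)
x = sp.symbols('x1:5', positive=True)          # n = 4 for concreteness
f = [xk**2 for xk in x]                        # f_k = x_k^2 (lambda = 1, beta absorbed)
fp = [sp.diff(fk, xk) for fk, xk in zip(f, x)]
fpp = [sp.diff(fk, xk, 2) for fk, xk in zip(f, x)]

i, j, n = 0, 1, 3
rhs15 = (fp[i]**2 * fpp[j] * fpp[n] + fp[j]**2 * fpp[i] * fpp[n]
         + fp[n]**2 * fpp[i] * fpp[j]) / (sum(a**2 for a in fp)
                                          * (fp[i]**2 + fp[j]**2 + fp[n]**2))
on_sphere = {x[3]**2: R**2 - x[0]**2 - x[1]**2 - x[2]**2}   # sum_k x_k^2 = R^2
assert sp.simplify(rhs15.subs(on_sphere) - 1 / R**2) == 0   # K_0/4 = 1/R^2
```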
From now on, we will discard the case that \(M^{n-1}\) is a hypersphere. Another investigation for the derivatives \(X^{\prime}_{1},...,X^{\prime}_{n}\) is as follows.
**Lemma 4.1**.: None of \(X^{\prime}_{1},...,X^{\prime}_{n}\) is \(0\) provided that \(K_{0}\) is different from \(0\).
Proof.: The proof is by contradiction. Without loss of generality, we assume \(X^{\prime}_{1}=0\), or equivalently, \(X_{1}=\lambda_{1}\), \(\lambda_{1}\in\mathbb{R}\), \(\lambda_{1}\neq 0\). For \((i,j)=(1,2)\), Equation (16) writes as
\[K_{0}(\lambda_{1}+X_{2}+X_{n})\left(\lambda_{1}+\sum_{k=2}^{n}X_{k}\right)- \lambda_{1}X^{\prime}_{2}X^{\prime}_{n}=0. \tag{17}\]
Considering Lemma 2.1 and then differentiating Equation (17) with respect to \(u_{2},...,u_{n}\), we get
\[\begin{array}{l}K_{0}X^{\prime}_{2}\left(2(\lambda_{1}+X_{2}+X_{n})+\sum_{k=3 }^{n-1}X_{k}\right)-\lambda_{1}X^{\prime\prime}_{2}X^{\prime}_{n}=K_{0}X^{ \prime}_{3}(\lambda_{1}+X_{2}+X_{n}),\\ K_{0}X^{\prime}_{3}(\lambda_{1}+X_{2}+X_{n})=K_{0}X^{\prime}_{4}(\lambda_{1}+X _{2}+X_{n})\\ \cdots\\ K_{0}X^{\prime}_{n-2}(\lambda_{1}+X_{2}+X_{n})=K_{0}X^{\prime}_{n-1}(\lambda_ {1}+X_{2}+X_{n}),\\ K_{0}X^{\prime}_{n-1}(\lambda_{1}+X_{2}+X_{n})=K_{0}X^{\prime}_{n}\left(2( \lambda_{1}+X_{2}+X_{n})+\sum_{k=3}^{n-1}X_{k}\right)\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\lambda_{ 1}X^{\prime}_{2}X^{\prime\prime}_{n},\end{array} \tag{18}\]
where, due to \(\lambda_{1}+X_{2}+X_{n}\neq 0\) for every \((u_{2},u_{n})\), we may conclude
\[X^{\prime}_{3}=X^{\prime}_{4}=...=X^{\prime}_{n-1}.\]
Performing an analogous argument, in the case \((i,j)=(1,3)\), to Equation (16) we conclude
\[X^{\prime}_{2}=X^{\prime}_{4}=...=X^{\prime}_{n-1},\]
and so
\[X^{\prime}_{2}=X^{\prime}_{3}=...=X^{\prime}_{n-1}=\lambda,\quad\lambda\in \mathbb{R},\lambda\neq 0.\]
Now, from the first equality in Equation (18), we derive that
\[\lambda K_{0}\left(\lambda_{1}+\sum_{k=2}^{n}X_{k}\right)=0,\]
where none of the terms can be \(0\), a contradiction.
**Lemma 4.2**.: \(X^{\prime}_{i}-X^{\prime}_{j}\) is different from \(0\), for every \(i,j\in\{1,...,n-1\}\), \(i<j\).
Proof.: On the contrary, assume that \(X^{\prime}_{i}=X^{\prime}_{j}\) for some \(i,j\in\{1,...,n-1\}\), \(i<j\). Hence, it follows \(X^{\prime}_{i}=X^{\prime}_{j}=\lambda\), for a nonzero constant \(\lambda\). Equation (16) is now
\[K_{0}(X_{i}+X_{j}+X_{n})\sum_{k=1}^{n}X_{k}-\lambda^{2}X_{n}-\lambda(X_{i}+X_{ j})X^{\prime}_{n}=0, \tag{19}\]
for some \(i,j\in\{1,...,n-1\}\). Using Lemma 2.1 in Equation (19) and then differentiating with respect to \(u_{l}\) and \(u_{m}\) (with \(l\neq m\)), we get
\[K_{0}(X_{i}+X_{j}+X_{n})(X^{\prime}_{l}-X^{\prime}_{m})=0,\quad\text{ for every }l,m\in\{1,...,n-1\}-\{i,j\},\quad l\neq m.\]
Here, we may conclude
\[X^{\prime}_{i}=X^{\prime}_{j}=\lambda,\quad X^{\prime}_{l}=\mu,\text{ for every }l\in\{1,...,n-1\}-\{i,j\},\]
where \(\mu\) is a nonzero constant. After applying Lemma 2.1 in Equation (19), differentiating with respect to \(u_{i}\) and \(u_{l}\), we deduce
\[\lambda K_{0}\sum_{k=1}^{n}X_{k}+K_{0}(\lambda-\mu)(X_{i}+X_{j}+X_{n})-\lambda ^{2}X^{\prime}_{n}=0.\]
By using the same argument in the last equation, we derive \(\lambda=\mu\).
On the other hand, we apply Lemma 2.1 in Equation (19), differentiating with respect to \(u_{i}\) and \(u_{n}\), and we find
\[\begin{array}{l}\lambda K_{0}\left(X_{i}+X_{j}+X_{n}+\sum_{k=1}^{n}X_{k} \right)-\lambda^{2}X^{\prime}_{n}\\ =K_{0}\left(X_{i}+X_{j}+X_{n}+\sum_{k=1}^{n}X_{k}\right)X^{\prime}_{n}-\lambda ^{2}X^{\prime}_{n}-\lambda(X_{i}+X_{j})X^{\prime\prime}_{n},\end{array}\]
or equivalently,
\[K_{0}\left(X_{i}+X_{j}+X_{n}+\sum_{k=1}^{n}X_{k}\right)(X^{\prime}_{n}-\lambda )-\lambda(X_{i}+X_{j})X^{\prime\prime}_{n}=0,\]
Here, if \(X^{\prime\prime}_{n}=0\), then it must be \(X^{\prime}_{n}=\lambda\). But this case has been already discarded, being the case of a hypersphere. So, we have \((X^{\prime}_{n}-\lambda)X^{\prime\prime}_{n}\neq 0\), yielding
\[K_{0}\left(X_{i}+X_{j}+X_{n}+\sum_{k=1}^{n}X_{k}\right)-\lambda(X_{i}+X_{j}) \Omega=0, \tag{20}\]
where \(\Omega=X^{\prime\prime}_{n}/(X^{\prime}_{n}-\lambda)\). Again, we apply Lemma 2.1 to this equation where we differentiate with respect to \(u_{i}\) and \(u_{l}\), obtaining \(\Omega=K_{0}/\lambda\). Replacing in Equation (20),
\[X_{n}+\sum_{k=1}^{n}X_{k}=0.\]
Here, by Lemma 2.1 we may conclude \(X^{\prime}_{n}=\lambda/2\) or \(X^{\prime\prime}_{n}=0\), a contradiction.
Next, Equation (16) writes as
\[A_{ij}+B_{ij}X_{n}+C_{ij}X_{n}^{\prime}=K_{0}X_{n}^{2},\quad\text{ for every }i,j\in\{1,...,n-1\},i<j, \tag{21}\]
where
\[\begin{array}{l}A_{ij}(u_{1},...,u_{n-1})=-K_{0}(X_{i}+X_{j})\sum_{k=1}^{n-1} X_{k},\\ B_{ij}(u_{1},...,u_{n-1})=-K_{0}\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k}\right)+X _{i}^{\prime}X_{j}^{\prime},\\ C_{ij}(u_{i},u_{j})=X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j}.\end{array}\]
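Equation (21) is nothing more than Equation (16) reorganized as a polynomial identity in \(X_{n}\) and \(X_{n}^{\prime}\). The sympy sketch below (ours; the symbols \(Y_{k}\) stand for \(X_{k}^{\prime}\), and \(n=5\) is illustrative) confirms that the two equations agree term by term.

```python
import sympy as sp

K0 = sp.symbols('K_0')
X = sp.symbols('X1:6')                             # X_1, ..., X_5   (n = 5)
Xp = sp.symbols('Y1:6')                            # stand-ins for X_1', ..., X_5'
i, j = 0, 1
S = sum(X[:-1])                                    # sum_{k=1}^{n-1} X_k

lhs16 = K0 * (X[i] + X[j] + X[-1]) * sum(X)        # Eq. (16)
rhs16 = X[i]*Xp[j]*Xp[-1] + X[j]*Xp[i]*Xp[-1] + X[-1]*Xp[i]*Xp[j]

A = -K0 * (X[i] + X[j]) * S
B = -K0 * (X[i] + X[j] + S) + Xp[i] * Xp[j]
C = X[i] * Xp[j] + Xp[i] * X[j]
eq21 = A + B * X[-1] + C * Xp[-1] - K0 * X[-1]**2  # Eq. (21), written as "= 0"
assert sp.expand(eq21 - (rhs16 - lhs16)) == 0
```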
Our third lemma claims that the coefficient function \(C_{ij}\) is nowhere \(0\).
**Lemma 4.3**.: The function \(C_{ij}\) given in Equation (21) is always different from \(0\).
Proof.: By contradiction, suppose that
\[X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j}=0,\quad\text{ for some }i,j\in\{1,...,n-1\}, \quad i<j.\]
This implies the existence of a nonzero constant \(\lambda\) such that
\[X_{i}^{\prime}=\lambda X_{i},\quad X_{j}^{\prime}=-\lambda X_{j},\quad\text{ for some }i,j\in\{1,...,n-1\},\quad i<j.\]
Hence, Equation (16) is
\[K_{0}(X_{i}+X_{j}+X_{n})\sum_{k=1}^{n}X_{k}+\lambda^{2}X_{i}X_{j}X_{n}=0. \tag{22}\]
Applying Lemma 2.1 to this equation where we differentiate with respect to \(u_{i}\) and \(u_{j}\),
\[\begin{array}{l}K_{0}\lambda\left(X_{i}+X_{j}+X_{n}+\sum_{k=1}^{n}X_{k} \right)X_{i}+\lambda^{3}X_{i}X_{j}X_{n}\\ =-K_{0}\lambda\left(X_{i}+X_{j}+X_{n}+\sum_{k=1}^{n}X_{k}\right)X_{j}-\lambda^ {3}X_{i}X_{j}X_{n},\end{array}\]
or equivalently,
\[K_{0}\left(X_{i}+X_{j}+X_{n}+\sum_{k=1}^{n}X_{k}\right)\left(X_{i}+X_{j}\right) +2\lambda^{2}X_{i}X_{j}X_{n}=0, \tag{23}\]
for some \(i,j\in\{1,...,n-1\}\), \(i<j\). From Equations (22) and (23),
\[\left(X_{i}+X_{j}+X_{n}+\sum_{k=1}^{n}X_{k}\right)\left(X_{i}+X_{j}\right)-2 (X_{i}+X_{j}+X_{n})\sum_{k=1}^{n}X_{k}=0. \tag{24}\]
By Lemma 2.1, where we differentiate with respect to \(u_{i}\) and \(u_{j}\), we may conclude
\[-X_{i}^{\prime}\left(X_{n}+\sum_{i\neq k\neq j}^{n}X_{k}\right)=-X_{j}^{\prime }\left(X_{n}+\sum_{i\neq k\neq j}^{n}X_{k}\right)\]
or equivalently,
\[\left(X_{n}+\sum_{i\neq k\neq j}^{n}X_{k}\right)(X_{i}+X_{j})=0,\quad\text{ for some }i,j\in\{1,...,n-1\},\quad i<j.\]
This is impossible in view of Lemma 4.2.
We now go back to Equation (21). We use Lemma 2.1 in Equation (21), where we differentiate with respect to \(u_{i}\) and \(u_{j}\), obtaining
\[A_{ij,u_{i}}-A_{ij,u_{j}}+(B_{ij,u_{i}}-B_{ij,u_{j}})X_{n}+(C_{ij,u_{i}}-C_{ij,u_ {j}})X_{n}^{\prime}=0, \tag{25}\]
for every \(i,j\in\{1,...,n-1\}\), \(i<j\). Here \(A_{ij,u_{i}}=\partial A_{ij}/\partial u_{i}\) and so on. A direct calculation leads to
\[\begin{array}{l}D_{ij}:=A_{ij,u_{i}}-A_{ij,u_{j}}=-K_{0}(X_{i}^{\prime}-X_{j }^{\prime})\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k}\right),\\ E_{ij}:=B_{ij,u_{i}}-B_{ij,u_{j}}=-2K_{0}(X_{i}^{\prime}-X_{j}^{\prime})+X_{i} ^{\prime\prime}X_{j}^{\prime}-X_{i}^{\prime}X_{j}^{\prime\prime},\\ F_{ij}:=C_{ij,u_{i}}-C_{ij,u_{j}}=X_{i}^{\prime\prime}X_{j}-X_{i}X_{j}^{ \prime\prime}.\end{array}\]
Hence, Equation (25) is now
\[D_{ij}+E_{ij}X_{n}+F_{ij}X_{n}^{\prime}=0,\quad\text{ for every }i,j\in\{1,...,n-1\},\quad i<j. \tag{26}\]
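The coefficients \(D_{ij}\), \(E_{ij}\) and \(F_{ij}\) are simply the antisymmetrized \(u_{i}\)/\(u_{j}\) derivatives of \(A_{ij}\), \(B_{ij}\) and \(C_{ij}\). The sympy sketch below (ours, with \(n=5\)) verifies the three identities entering Equation (26).

```python
import sympy as sp

K0 = sp.symbols('K_0')
u = sp.symbols('u1:6')                                    # u_1, ..., u_5  (n = 5)
X = [sp.Function(f'X{k+1}')(u[k]) for k in range(5)]      # X_k = X_k(u_k)
i, j = 0, 1
S = sum(X[:-1])
Xi1, Xj1 = sp.diff(X[i], u[i]), sp.diff(X[j], u[j])
Xi2, Xj2 = sp.diff(X[i], u[i], 2), sp.diff(X[j], u[j], 2)

A = -K0 * (X[i] + X[j]) * S
B = -K0 * (X[i] + X[j] + S) + Xi1 * Xj1
C = X[i] * Xj1 + Xi1 * X[j]
anti = lambda Q: sp.diff(Q, u[i]) - sp.diff(Q, u[j])      # Q_{,u_i} - Q_{,u_j}

D = -K0 * (Xi1 - Xj1) * (X[i] + X[j] + S)
E = -2*K0 * (Xi1 - Xj1) + Xi2 * Xj1 - Xi1 * Xj2
F = Xi2 * X[j] - X[i] * Xj2
assert sp.simplify(anti(A) - D) == 0
assert sp.simplify(anti(B) - E) == 0
assert sp.simplify(anti(C) - F) == 0
```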
As previously mentioned, we will present two lemmas more. Our next lemma claims that none of \(D_{ij}\), \(E_{ij}\) and \(F_{ij}\) is identically \(0\).
**Lemma 4.4**.: The functions \(D_{ij}\), \(E_{ij}\) and \(F_{ij}\) given in Equation (26) are always different from \(0\).
Proof.: The proof is by contradiction and we do it into separated cases.
**Case 1.**\(D_{ij}=0\), for some \(i,j\in\{1,...,n-1\}\), \(i<j\). Then
\[K_{0}(X_{i}^{\prime}-X_{j}^{\prime})\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k} \right)=0,\]
which is possible only if \(X_{i}^{\prime}-X_{j}^{\prime}=0\), for some \(i,j\in\{1,...,n-1\}\), \(i<j\). However, this contradicts Lemma 4.2.
**Case 2.**\(F_{ij}=0\), for some \(i,j\in\{1,...,n-1\}\), \(i<j\). Then, \(X_{i}^{\prime\prime}X_{j}-X_{i}X_{j}^{\prime\prime}=0\), yielding a constant \(\lambda\) such that
\[\frac{X_{i}^{\prime\prime}}{X_{i}}=\lambda=\frac{X_{j}^{\prime\prime}}{X_{j}},\quad\text{for some }i,j\in\{1,...,n-1\}.\]
If \(\lambda=0\), then Equation (26) implies
\[-K_{0}(X_{i}^{\prime}-X_{j}^{\prime})\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k} \right)-2K_{0}(X_{i}^{\prime}-X_{j}^{\prime})X_{n}=0,\]
or equivalently,
\[X_{i}+X_{j}+X_{n}+\sum_{k=1}^{n}X_{k}=0,\quad\text{for some }i,j\in\{1,...,n-1\}.\]
Applying Lemma 2.1, the contradiction \(X_{i}^{\prime}-X_{j}^{\prime}=0\) can be derived. Assume now that \(\lambda\neq 0\). Then, Equation (26) writes as
\[\begin{array}{l}G_{ij}:=-K_{0}(X_{i}^{\prime}-X_{j}^{\prime})\left(X_{i}+X_{j} +\sum_{k=1}^{n-1}X_{k}\right)\\ +\left(-2K_{0}(X_{i}^{\prime}-X_{j}^{\prime})+\lambda(X_{i}X_{j}^{\prime}-X_{ i}^{\prime}X_{j})\right)X_{n}=0,\end{array} \tag{27}\]
for some \(i,j\in\{1,...,n-1\}\). Using Lemma 2.1, the partial derivatives of \(G_{ij}\) with respect to \(u_{i}\) and \(u_{j}\) satisfy for some \(i,j\in\{1,...,n-1\}\):
\[\begin{array}{l}H_{ij}:=G_{ij,u_{i}}-G_{ij,u_{j}}\\ =-\lambda K_{0}(X_{i}+X_{j})\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k}\right)\\ -2K_{0}(X_{i}^{\prime}-X_{j}^{\prime})^{2}+\left(-2\lambda K_{0}(X_{i}+X_{j})+ 2\lambda X_{i}^{\prime}X_{j}^{\prime}-2\lambda^{2}X_{i}X_{j}\right)X_{n}=0. \end{array}\]
Again proceeding the same argument,
\[\begin{array}{l}H_{ij,u_{i}}-H_{ij,u_{j}}=-\lambda K_{0}(X_{i}^{\prime}-X_{ j}^{\prime})\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k}\right)\\ -6\lambda K_{0}(X_{i}+X_{j})(X_{i}^{\prime}-X_{j}^{\prime})\\ +(-2\lambda K_{0}(X_{i}^{\prime}-X_{j}^{\prime})+4\lambda^{2}(X_{i}X_{j}^{ \prime}-X_{i}^{\prime}X_{j}))X_{n}=0.\end{array} \tag{28}\]
Multiplying Equation (27) by \(-\lambda\) and adding Equation (28), we get
\[-2K_{0}(X_{i}+X_{j})(X_{i}^{\prime}-X_{j}^{\prime})+\lambda(X_{i}X_{j}^{\prime }-X_{i}^{\prime}X_{j})X_{n}=0,\quad\text{for some $i,j\in\{1,...,n-1\}$.}\]
Here the coefficient of \(X_{n}\) is different from \(0\) because otherwise we would have \(X_{i}+X_{j}=0\) or \(X_{i}^{\prime}-X_{j}^{\prime}=0\), this being impossible due to Lemma 4.2. So, we conclude
\[X_{n}=2K_{0}\frac{(X_{i}+X_{j})(X_{i}^{\prime}-X_{j}^{\prime})}{\lambda(X_{i}X _{j}^{\prime}-X_{i}^{\prime}X_{j})}\quad\text{for some $i,j\in\{1,...,n-1\}$.}\]
Substituting into Equation (27),
\[X_{n}+\sum_{i\neq k\neq j}^{n}X_{k}=0.\]
Considering Lemmas 2.1 and 4.2, the last equation yields a contradiction.
**Case 3.**\(E_{ij}=0\), for some \(i,j\in\{1,...,n-1\}\), \(i<j\). Then,
\[2K_{0}(X_{i}^{\prime}-X_{j}^{\prime})-X_{i}^{\prime\prime}X_{j}^{\prime}+X_{i} ^{\prime}X_{j}^{\prime\prime}=0,\quad\text{ for some $i,j\in\{1,...,n-1\}$,}\quad i<j.\]
Dividing by \(X_{i}^{\prime}X_{j}^{\prime}\), one arrives at the existence of a constant \(\lambda\) such that
\[\frac{X_{i}^{\prime\prime}+2K_{0}}{X_{i}^{\prime}}=\lambda=\frac{X_{j}^{\prime \prime}+2K_{0}}{X_{j}^{\prime}},\quad\text{ for some $i,j\in\{1,...,n-1\},i<j$,}\]
or equivalently
\[X_{i}^{\prime\prime}=\lambda X_{i}^{\prime}-2K_{0},\quad X_{j}^{\prime\prime}= \lambda X_{j}^{\prime}-2K_{0},\quad\text{ for some $i,j\in\{1,...,n-1\}$.}\]
Replacing in Equation (26),
\[\begin{array}{l}L_{ij}:=-K_{0}(X_{i}^{\prime}-X_{j}^{\prime})\left(X_{i}+X_{ j}+\sum_{k=1}^{n-1}X_{k}\right)\\ +(\lambda(X_{i}^{\prime}X_{j}-X_{i}X_{j}^{\prime})+2K_{0}(X_{i}-X_{j}))X_{n}^{ \prime}=0.\end{array} \tag{29}\]
Due to Lemma 2.1, the partial derivatives of \(L_{ij}\) satisfy for some \(i,j\in\{1,...,n-1\}\),
\[\begin{array}{l}L_{ij,u_{i}}-L_{ij,u_{j}}=-\lambda K_{0}(X_{i}^{\prime}+X_{j}^{ \prime})\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k}\right)\\ +4K_{0}^{2}\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k}\right)-2K_{0}(X_{i}^{\prime }-X_{j}^{\prime})^{2}\\ +(\lambda^{2}(X_{i}^{\prime}X_{j}+X_{i}X_{j}^{\prime})-2\lambda K_{0}(X_{i}+X_ {j})-2\lambda X_{i}^{\prime}X_{j}^{\prime}+2K_{0}(X_{i}^{\prime}+X_{j}^{\prime }))X_{n}^{\prime}=0.\end{array}\]
The same argument yields
\[\begin{array}{l}-\lambda^{2}K_{0}(X_{i}^{\prime}-X_{j}^{\prime})\left(X_{i}+ X_{j}+\sum_{k=1}^{n-1}X_{k}\right)\\ -6\lambda K_{0}(X_{i}^{\prime}+X_{j}^{\prime})(X_{i}^{\prime}-X_{j}^{\prime}) +24K_{0}^{2}(X_{i}^{\prime}-X_{j}^{\prime})\\ +(\lambda^{3}(X_{i}^{\prime}X_{j}-X_{i}X_{j}^{\prime})+2\lambda^{2}K_{0}(X_{i} -X_{j})-4\lambda K_{0}(X_{i}^{\prime}-X_{j}^{\prime}))X_{n}^{\prime}=0,\end{array} \tag{30}\]
for some \(i,j\in\{1,...,n-1\}\). Here, due to Lemma 4.2, \(\lambda\neq 0\). Multiplying Equation (29) by \(-\lambda^{2}\) and adding Equation (30), one obtains
\[M_{ij}:=-12K_{0}+3\lambda(X_{i}^{\prime}+X_{j}^{\prime})+2\lambda X_{n}^{ \prime}=0,\quad\text{ for some }i,j\in\{1,...,n-1\},\quad i<j,\]
Using Lemma 2.1, we have \(M_{ij,u_{i}}-M_{ij,u_{j}}=0\), yielding the following contradiction
\[3\lambda(X_{i}^{\prime\prime}-X_{j}^{\prime\prime})=3\lambda^{2}(X_{i}^{ \prime}-X_{j}^{\prime})=0,\quad\text{ for some }i,j\in\{1,...,n-1\},\quad i<j.\]
Now, by Lemma 4.4, Equation (26) is in a form to which Lemma 2.1 can be applied. Differentiating Equation (26) with respect to \(u_{i}\) and \(u_{j}\),
\[D_{ij,u_{i}}-D_{ij,u_{j}}+(E_{ij,u_{i}}-E_{ij,u_{j}})X_{n}+(F_{ij,u_{i}}-F_{ij,u_{j}})X_{n}^{\prime}=0, \tag{31}\]
for every \(i,j\in\{1,...,n-1\}\), \(i<j\). The following is the last lemma before presenting the proof of the theorem.
**Lemma 4.5**.: The functions \(D_{ij}\), \(E_{ij}\) and \(F_{ij}\) satisfy
\[\frac{D_{ij,u_{i}}-D_{ij,u_{j}}}{D_{ij}}=\frac{E_{ij,u_{i}}-E_{ij,u_{j}}}{E_{ij }}=\frac{F_{ij,u_{i}}-F_{ij,u_{j}}}{F_{ij}},\]
or equivalently,
\[\left(\frac{D_{ij}}{E_{ij}}\right)_{u_{i}}=\left(\frac{D_{ij}}{E_{ij}}\right) _{u_{j}},\ \left(\frac{D_{ij}}{F_{ij}}\right)_{u_{i}}=\left(\frac{D_{ij}}{F_{ij}}\right)_ {u_{j}},\ \left(\frac{E_{ij}}{F_{ij}}\right)_{u_{i}}=\left(\frac{E_{ij}}{F_{ij}} \right)_{u_{j}}, \tag{32}\]
for every \(i,j\in\{1,...,n-1\}\), \(i<j\).
Proof.: Equation (21) is explicitly rewritten as
\[\begin{array}{l}\left(X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j}\right)X_{n}^{ \prime}=K_{0}(X_{i}+X_{j})\sum_{k=1}^{n-1}X_{k}+\\ \left(K_{0}\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k}\right)-X_{i}^{\prime}X_{j}^{ \prime}\right)X_{n}+K_{0}X_{n}^{2},\end{array} \tag{33}\]
for every \(i,j\in\{1,...,n-1\}\), \(i<j\). We will eliminate the term \(X_{n}^{\prime}\) from Equations (26) and (33). We first multiply Equations (26) and (33) by \(-(X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j})\) and by \(F_{ij}\), respectively. We then add the resulting equalities, deducing
\[P_{ij}+Q_{ij}X_{n}+R_{ij}X_{n}^{2}=0,\quad\text{ for every }i,j\in\{1,...,n-1\}, \quad i<j, \tag{34}\]
where
\[\begin{array}{l}P_{ij}=K_{0}F_{ij}(X_{i}+X_{j})\sum_{k=1}^{n-1}X_{k}+D_{ij}(X_{i }X_{j}^{\prime}+X_{i}^{\prime}X_{j})\\ Q_{ij}=F_{ij}\left(K_{0}\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k}\right)-X_{i}^{ \prime}X_{j}^{\prime}\right)+E_{ij}(X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j})\\ R_{ij}=K_{0}F_{ij},\end{array}\]
for every \(i,j\in\{1,...,n-1\}\), \(i<j\). By Lemma 2.1, Equation (34) yields
\[P_{ij,u_{i}}-P_{ij,u_{j}}+(Q_{ij,u_{i}}-Q_{ij,u_{j}})X_{n}+(R_{ij,u_{i}}-R_{ij, u_{j}})X_{n}^{2}=0, \tag{35}\]
for every \(i,j\in\{1,...,n-1\}\), \(i<j\). Here the coefficients are
\[\begin{array}{l}P_{ij,u_{i}}-P_{ij,u_{j}}=\\ K_{0}(F_{ij,u_{i}}-F_{ij,u_{j}})(X_{i}+X_{j})\sum_{k=1}^{n-1}X_{k}+(D_{ij,u_{i}}-D_{ij,u_{j}})(X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j}),\\ Q_{ij,u_{i}}-Q_{ij,u_{j}}=\\ (F_{ij,u_{i}}-F_{ij,u_{j}})\left(K_{0}\left(X_{i}+X_{j}+\sum_{k=1}^{n-1}X_{k}\right)-X_{i}^{\prime}X_{j}^{\prime}\right)+(E_{ij,u_{i}}-E_{ij,u_{j}})(X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j}),\\ R_{ij,u_{i}}-R_{ij,u_{j}}=K_{0}(F_{ij,u_{i}}-F_{ij,u_{j}}).\end{array}\]
Now we have two separated cases according as \(R_{ij,u_{i}}-R_{ij,u_{j}}\) is or is not \(0\).
**Case 1.** \(R_{ij,u_{i}}-R_{ij,u_{j}}\neq 0\). This assumption is equivalent to \(F_{ij,u_{i}}-F_{ij,u_{j}}\neq 0\). Notice that Equations (34) and (35) have a common pair of roots. Hence, we get
\[\frac{P_{ij,u_{i}}-P_{ij,u_{j}}}{P_{ij}}=\frac{Q_{ij,u_{i}}-Q_{ij,u_{j}}}{Q_{ ij}}=\frac{R_{ij,u_{i}}-R_{ij,u_{j}}}{R_{ij}}.\]
Equivalently, we have
\[\begin{array}{l}P_{ij}(R_{ij,u_{i}}-R_{ij,u_{j}})=R_{ij}(P_{ij,u_{i}}-P_{ij,u_{j}})\\ Q_{ij}(R_{ij,u_{i}}-R_{ij,u_{j}})=R_{ij}(Q_{ij,u_{i}}-Q_{ij,u_{j}})\end{array} \tag{36}\]
and a direct calculation in Equation (36) implies
\[\begin{array}{l}K_{0}D_{ij}(X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j})(F_{ij,u _{i}}-F_{ij,u_{j}})=K_{0}F_{ij}(X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j})(D_{ij,u_{i}}-D_{ij,u_{j}})\\ K_{0}E_{ij}(X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j})(F_{ij,u_{i}}-F_{ij,u_{j}})= K_{0}F_{ij}(X_{i}X_{j}^{\prime}+X_{i}^{\prime}X_{j})(E_{ij,u_{i}}-E_{ij,u_{j}}). \end{array}\]
In view of Lemmas 4.3 and 4.4, as well as our assumption, none of the terms in the above equation is \(0\), proving the result of the lemma.
**Case 2.**\(R_{ij,u_{i}}-R_{ij,u_{j}}=0\). Equation (35) is now
\[P_{ij,u_{i}}-P_{ij,u_{j}}+(Q_{ij,u_{i}}-Q_{ij,u_{j}})X_{n}=0.\]
Here \(Q_{ij,u_{i}}-Q_{ij,u_{j}}=0\) (or \(\neq 0\)) if and only if \(P_{ij,u_{i}}-P_{ij,u_{j}}=0\) (or \(\neq 0\)). There is nothing to prove in the case \(Q_{ij,u_{i}}-Q_{ij,u_{j}}=0\). Assume now that \(Q_{ij,u_{i}}-Q_{ij,u_{j}}\neq 0\). Then we have
\[X_{n}=-\frac{P_{ij,u_{i}}-P_{ij,u_{j}}}{Q_{ij,u_{i}}-Q_{ij,u_{j}}}.\]
Substituting into Equation (34), we see that the value of \(X_{n}\) is unique. Hence,
\[X_{n}=-\frac{Q_{ij}}{2R_{ij}}\]
and by Lemma 2.1 we get
\[\left(\frac{Q_{ij}}{R_{ij}}\right)_{u_{i}}=\left(\frac{Q_{ij}}{R_{ij}}\right)_{u_ {j}}.\]
Equivalently, we have
\[\frac{(Q_{ij,u_{i}}-Q_{ij,u_{j}})R_{ij}-Q_{ij}(R_{ij,u_{i}}-R_{ij,u_{j}})}{R_{ ij}^{2}}=0,\]
yielding \(Q_{ij,u_{i}}-Q_{ij,u_{j}}=0\). This is a contradiction and completes the proof.

Consequently, there exist two functions \(S_{ij},T_{ij}\) of a single variable such that
\[S_{ij}(u_{1}+...+u_{n-1})=\frac{D_{ij}(u_{1},...,u_{n-1})}{F_{ij}(u_{1},...,u_{ n-1})},\quad T_{ij}(u_{1}+...+u_{n-1})=\frac{E_{ij}(u_{1},...,u_{n-1})}{F_{ij}(u_{1},...,u_{n-1})}.\]
After all these preparatory results, we are ready to present the following result.
**Theorem 4.6**.: _Hyperspheres are the only separable hypersurfaces in \(\mathbb{R}^{n}\)\((n>3)\) having nonzero constant sectional curvature._
Proof.: By contradiction, assume that \(M^{n-1}\) is not a hypersphere but has constant sectional curvature \(K_{0}/4\), \(K_{0}\neq 0\). By Lemma 4.4, \(R_{ij}=K_{0}F_{ij}\) is always different from \(0\) for every \(i,j\in\{1,...,n-1\}\), \(i<j\). Hence, Equation (34) is
\[\frac{P_{ij}}{R_{ij}}+\frac{Q_{ij}}{R_{ij}}X_{n}+X_{n}^{2}=0,\quad\text{ for every }i,j\in\{1,...,n-1\},\quad i<j. \tag{37}\]
Denote by \(X_{n,1}\) and \(X_{n,2}\) the roots of this equation. So,
\[X_{n,1}+X_{n,2}=-\frac{Q_{ij}}{R_{ij}},\quad X_{n,1}X_{n,2}=\frac{P_{ij}}{R_{ ij}},\]
where the statements in the right-hand sides depend on the variables \(u_{1},...,u_{n-1}\), while those in the left-hand sides depend on the variable \(u_{n}\). Now, we can set
\[\tilde{S}_{ij}(u_{1}+...+u_{n-1})=\frac{P_{ij}(u_{1},...,u_{n-1})}{R_{ij}(u_{ 1},...,u_{n-1})},\quad\tilde{T}_{ij}(u_{1}+...+u_{n-1})=\frac{Q_{ij}(u_{1},...,u_{n-1})}{R_{ij}(u_{1},...,u_{n-1})}.\]
Obviously, we have \(\tilde{T}_{ij,u_{i}}-\tilde{T}_{ij,u_{j}}=0\), for every \(i,j\in\{1,...,n-1\}\), \(i<j.\) On the other hand, Equation (37) becomes
\[\tilde{S}_{ij}+\tilde{T}_{ij}X_{n}+X_{n}^{2}=0,\quad\text{ for every }i,j\in\{1,...,n-1\},\quad i<j.\]
By Lemma 2.1, differentiating with respect to \(u_{i}\) and \(u_{n}\), we get
\[(2X_{n}+\tilde{T}_{ij})X_{n}^{\prime}-\tilde{T}_{ij,u_{i}}X_{n}-\tilde{S}_{ij,u_{i}}=0,\quad\text{ for every }i,j\in\{1,...,n-1\},\quad i<j. \tag{38}\]
We will eliminate the term \(X^{\prime}_{n}\) from Equations (21) and (38). We first multiply Equations (21) and (38) by \(-(2X_{n}+\tilde{T}_{ij})\) and by \(C_{ij}\), respectively. We then add the resulting equalities, obtaining a polynomial equation on \(X_{n}\) of degree 3
\[-\frac{A_{ij}\tilde{T}_{ij}+C_{ij}\tilde{S}_{ij,u_{i}}}{2K_{0}}-\frac{2A_{ij}+B_{ij}\tilde{T}_{ij}+C_{ij}\tilde{T}_{ij,u_{i}}}{2K_{0}}X_{n}+\frac{K_{0}\tilde{T}_{ij}-2B_{ij}}{2K_{0}}X_{n}^{2}+X_{n}^{3}=0,\]
for every \(i,j\in\{1,...,n-1\}\), \(i<j\). Here we denote the roots by \(X_{n,1}\), \(X_{n,2}\) and \(X_{n,3}\). Hence, the sum of the roots is the negative of the coefficient of \(X_{n}^{2}\), namely
\[X_{n,1}+X_{n,2}+X_{n,3}=\frac{B_{ij}}{K_{0}}-\frac{\tilde{T}_{ij}}{2},\quad \text{ for every }i,j\in\{1,...,n-1\},\quad i<j,\]
where the statement in the right-hand side depend on the variables \(u_{1},...,u_{n-1}\), while the others depend on the variable \(u_{n}\). Using Lemma 2.1, where we differentiate with respect to \(u_{i}\) and \(u_{j}\), we obtain
\[\frac{1}{K_{0}}(B_{ij,u_{i}}-B_{ij,u_{j}})-\frac{1}{2}(\tilde{T}_{ij,u_{i}}- \tilde{T}_{ij,u_{j}})=0,\quad\text{ for every }i,j\in\{1,...,n-1\},\quad i<j,\]
or equivalently,
\[\frac{1}{K_{0}}(B_{ij,u_{i}}-B_{ij,u_{j}})=0,\quad\text{ for every }i,j\in\{1,...,n-1\},\quad i<j.\]
Since \(B_{ij,u_{i}}-B_{ij,u_{j}}=E_{ij}\), by Lemma 4.4 we arrive to a contradiction.
\(\square\)
**Acknowledgment.** Rafael Lopez is partially supported by MINECO/MICINN/FEDER grant no. PID2020-117868GB-I00, and by the "Maria de Maeztu" Excellence Unit IMAG, reference CEX2020-001105- M, funded by MCINN/AEI/ 10.13039/501100011033/ CEX2020-001105-M. Gabriel-Eduard Vilcu was supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI - UEFISCDI, project number PN-III-P4-ID-PCE-2020-0025, within PNCDI III.
|
2309.10647 | Optimum optical designs for diffraction-limited terahertz spectroscopy
and imaging systems using off-axis parabolic mirrors | Off-axis parabolic mirrors (OAPMs) are widely used in the THz and mm-wave
communities for spectroscopy and imaging applications, as a result of their
broadband, low-loss operation and high numerical apertures. However, the
aspherical shape of an OAPM creates significant geometric aberrations that make
achieving diffraction-limited performance a challenge, and which lowers the
peak electric field strength in the focal plane. Here we quantify the impact of
geometric aberrations on the performance of the most widely-used spectrometer
designs, by using ray tracing and physical optics calculations to investigate
whether diffraction-limited performance can be achieved in both the sample and
the detector plane. We identify simple rules, based on marginal ray
propagation, that allow spectrometers to be designed that are more robust to
misalignment errors, and which have minimal aberrations for THz beams. For a
given source this allows the design of optical paths that give the smallest THz
beam focal spot, with the highest THz electric field strength possible. This is
desirable for improved THz imaging, for better signal-to-noise ratios in linear
THz spectroscopy and optical-pump THz-probe spectroscopy, and to achieve higher
electric field strengths in non-linear THz spectroscopy | Nishtha Chopra, James Lloyd-Hughes | 2023-09-19T14:28:24Z | http://arxiv.org/abs/2309.10647v2 | Optimum optical designs for diffraction-limited terahertz spectroscopy and imaging systems using off-axis parabolic mirrors
###### Abstract
Off-axis parabolic mirrors (OAPMs) are widely used in the THz and mm-wave communities for spectroscopy and imaging applications, as a result of their broadband, low-loss operation and high numerical apertures. However, the aspherical shape of an OAPM creates significant geometric aberrations that make achieving diffraction-limited performance a challenge, and which lowers the peak electric field strength in the focal plane. Here we quantify the impact of geometric aberrations on the performance of the most widely-used spectrometer designs, by using ray tracing and physical optics calculations to investigate whether diffraction-limited performance can be achieved in both the sample and the detector plane. We identify simple rules, based on marginal ray propagation, that allow spectrometers to be designed that are more robust to misalignment errors, and which have minimal aberrations for THz beams. For a given source this allows the design of optical paths that give the smallest THz beam focal spot, with the highest THz electric field strength possible. This is desirable for improved THz imaging, for better signal-to-noise ratios in linear THz spectroscopy and optical-pump THz-probe spectroscopy, and to achieve higher electric field strengths in non-linear THz spectroscopy.
Keywords: Terahertz, off-axis parabolic mirror, ray tracing
## 1 Introduction
Within the research community, different optical setups containing off-axis parabolic mirrors (OAPMs) are widely utilized to characterise broadband THz radiation sources [1; 2; 3], as well as to perform linear THz spectroscopy in the transmission or reflection geometry [4; 5], non-linear THz spectroscopy [6], optical-pump THz-probe spectroscopy (OPTPS) [7] and near-field THz microscopy [8]. The optimum performance for a THz spectrometer will be achieved if an aberration-free image of the THz source can be formed at each focal plane. For linear THz spectroscopy, a minimal spot size is often desired: to investigate samples that are small in transverse extent [9]; to couple more effectively to sub-wavelength structures and waveguides [10]; or to image spatially inhomogeneous materials, such as large-area graphene [11]. Further, the signal detected in the detector plane (_e.g._ via electro-optic sampling or photoconductive detection) in THz time-domain spectroscopy is proportional to the electric field of the THz beam (rather than the area- and time-integrated power, as in a bolometer or pyroelectric), and is hence larger when the THz beam has a smaller area. In OPTPS, the THz probe beam must have a smaller spatial extent than the optical pump beam, in order to probe a uniform carrier density: it is therefore also desirable to have a smaller THz beam so this condition can be readily achieved for lower power optical pump lasers [12]. Finally, intense THz pulses with high electric field strengths (typically \(>100\,\mathrm{kV/cm}\)) can be used to study the non-linear dynamical motion of vibrational modes or free charges [6]. For non-linear THz spectroscopy, it is thus important to efficiently focus one or more THz beams down to as close to the diffraction limit as possible, to obtain the highest electric field strength.
The majority of sources of broadband THz radiation (such as photoconductive antennae, optical rectification in non-linear crystals and spintronic emitters) produce THz beams that propagate in free space in the fundamental (TEM00) transverse Gaussian mode. The Gaussian nature of terahertz (THz) beams has been corroborated through a combination of theoretical and experimental techniques, wherein scalar and vectorial diffraction theory has been
utilized to analyze the far-field behaviour of the THz pulses [13]. Similarly, experimental validation was achieved by imaging the spatial and temporal distribution of pulses, in a standard THz-TDS setup [14]. In free space, the divergence angle of this Gaussian mode is \(\theta=\lambda/\pi w_{0}\) in the paraxial approximation, where \(w_{0}\) is the radius of the initial beam waist. For example, \(\theta=18^{\circ}\) for light at \(1\,\mathrm{THz}\) (\(\lambda=300\,\mathrm{\mu m}\)) for \(w_{0}=300\,\mathrm{\mu m}\), and the divergence angle increases for longer wavelengths or smaller initial beam sizes. Hence, THz radiation tends to diverge rapidly, and large numerical aperture optics are required, in particular, to collect low-frequency components with longer wavelengths [15]. While polymer lenses are widely adopted in many THz systems due to their light weight and ease of fabrication [4], their finite Fresnel losses and absorption reduces optical throughput, and they can introduce optical aberrations.
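As a minimal numerical check of the divergence figures quoted above (the helper function and the additional 0.5 THz case are illustrative choices, not part of the original analysis):

```python
import math

def gaussian_divergence_deg(wavelength_m, waist_m):
    """Paraxial far-field half-angle of a TEM00 Gaussian beam, theta = lambda / (pi * w0)."""
    return math.degrees(wavelength_m / (math.pi * waist_m))

# 1 THz (lambda = 300 um) radiated from a w0 = 300 um waist
print(gaussian_divergence_deg(300e-6, 300e-6))   # ~18.2 deg, the ~18 deg quoted above
# Longer wavelengths diverge faster: 0.5 THz (lambda = 600 um) from the same waist
print(gaussian_divergence_deg(600e-6, 300e-6))   # ~36.5 deg
```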
To overcome these challenges, high-reflectivity metal-coated off-axis parabolic mirrors (OAPMs) are widely used [15]. An OAPM is a segment of a parent parabolic surface and can be defined by the off-axis angle, reflected effective focal length \(f\) and diameter \(D\), as illustrated in Fig. 1. These mirrors offer broadband performance while eliminating Fresnel losses, and can collimate a beam diverging from an on-axis point source without aberrations, or focus a collimated beam to a diffraction-limited spot. In the geometric optics (ray optics) picture, an OAPM can perfectly collimate rays of light that originate from a point source only if the point source is on the optical axis and in the focal plane, at a distance \(f\) from the mirror (blue rays in Fig. 1). However, real THz sources have a finite spatial extent, set by the size of the incident laser beam or the emitter's active area in laser-based THz systems, and the propagation of light from off-axis points in the object plane must also be considered (green rays in Fig. 1).
Different optical analysis techniques have been deployed to model OAPM systems, such as deriving analytical expressions for Hermite-Gaussian beam propagation after reflection from an OAPM in the paraxial approximation [16] or by using optical ray tracing methods [17; 18]. In the former case, after a fundamental Gaussian beam was incident onto an OAPM, the reflected electric field was found to have a skewed and broadened profile corresponding to the presence of additional higher order (TEM\({}_{30}\) and TEM\({}_{12}\)) Hermite-Gaussian modes [16]. Importantly, by adding a second OAPM placed at the correct distance and with the right orientation, it was shown that the distortions introduced by the second mirror can cancel out the distortions introduced by the first mirror, resulting again in a clean fundamental Gaussian beam (desirable in order to achieve a diffraction-limited spot). Alternatively, analytical ray tracing was used to show that if a collimated beam of light is incident onto an OAPM at an angle away from its optical axis, the focal spot is distorted [17].
While analytical approaches are useful for specific cases where the underlying assumptions are valid, they become cumbersome for realistic optical systems containing many OAPMs. An alternative approach is to use optics modelling software to simulate THz beam propagation, such as performed using ray-tracing to model OAPMs [18], or using point spread functions to simulate bi-conic curved mirrors that produce a line focus [19]. In modelling packages such as ZEMAX, the optical performance of beams with finite spatial extent can be modelled using ray tracing and point sources that are placed off-axis, or via physical optics propagation, which takes coherent propagation effects (diffraction and interference) into account. Bruckner _et al._ used the ray tracing approach in the ZEMAX software to show that for an off-axis point source, the beam after a single right-angled OAPM has an astigmatic wavefront [18]. Further, and similar to the conclusion derived in Murphy's work using analytical theory [16], Bruckner _et al._ demonstrated that a judiciously-oriented second OAPM can cancel out the astigmatic
Figure 1: Geometry of a right-angle OAPM.
wavefront, leading to diffraction-limited performance for off-axis field points. It was shown that the second OAPM, which was placed in a 4f geometry, has to be oriented to send the reflected beam anti-parallel to the original beam direction (i.e. along \(-z\) if the THz source radiates along \(z\)) in order to substantially cancel out aberrations for off-axis field points. This was in contrast to the alternative orientation for the second OAPM, reflecting the THz beam along \(+z\), in which aberrations were evident.
Based on these considerations, Laurita _et al._ subsequently performed an experimental comparison of the two geometries and showed that the beam waist at the sample focus of a typical THz-TDS system was smaller, at around 7 mm compared to 11 mm, for the aberration-corrected arrangement than the alternative orientation [20]. However, diffraction-limited performance was not achieved and the performance of the different OAPM arrangements in the detector plane was not considered. Recently, the present authors experimentally investigated a linear array of photoconductive THz emitters, where each pixel acted as a THz source at a different distance from the optical axis [3], and reported close to diffraction-limited performance obtained using an arrangement of OAPMs that corrected for aberrations.
Given the above challenges for optical systems containing OAPMs, it is timely and pertinent to consider how best to minimise geometric aberrations, such that diffraction-limited performance can be achieved at both the sample position and the detector position in THz spectrometers. In this article, we provide a comprehensive analysis of how the design of a spectrometer impacts its performance, with particular emphasis on the spatial and temporal spread in the sample and detector focal planes. In Section 2 we describe the modelling approach taken: we used the ZEMAX optical design software to model beam propagation (using ray tracing and physical optics) for the two optical systems most commonly used in THz spectroscopy, as well as an uncommon but optimum design, starting from a variety of on- and off-axis points. Further, we introduce a notation that captures the relative orientation of different OAPMs, based on marginal ray propagation. We then, in Section 3, examine and evaluate the performance of the different spectrometer designs by using spatial plots of beam propagation, and optical path differences in the time-domain. In Section 4 we conclude our analysis by ranking the three competing optical designs in terms of their robustness to misalignment, and their ability to form accurate images of the THz source in the sample plane and the detector plane. Finally, we provide rules based on marginal ray propagation to allow the optimum arrangement of OAPMs to be deduced without the need for optical modelling, which we check for a two OAPM system including a reflective mirror.
## 2 Modelling approach and spectrometer designs
In this Section, we define the ray-tracing and physical optics methodologies used and describe the key nomenclature used in the optical design field. Illustrated in Fig. 1 is a right-angled mirror viewed in the \(y-z\) plane, also termed the tangential plane. The \(x-z\) plane is referred to as the sagittal plane. The ray that propagates from the focus in the emitter plane along \(z\), along the optical axis of the OAPM, is referred to as the _chief ray_: it changes direction by \(90^{\circ}\) after hitting the geometric centre of the OAPM, to travel along \(y\). _Marginal rays_ are incident on the edges of the OAPM, and span the maximum aperture as seen from the geometric centre. The asymmetric shape of the OAPM is particularly evident when considering the two marginal rays that hit the closest side (at point \(a\)) or the further side (point \(b\)) of the OAPM, as the two marginal rays travel at different angles to the chief ray. Here, all OAPMs were assumed to be right-angled, with \(f=76.2\) mm and \(D=50.8\) mm, representative of the mirrors most often used. For this choice, the numerical aperture was 0.31, and the angle of the marginal rays to the chief ray was \(\alpha\simeq\tan^{-1}(D/2f)=18.4^{\circ}\). This is similar to the divergence calculated above for a 1 THz Gaussian beam, suggesting that the majority of the THz radiation produced by the emitter would be collected by this OAPM, and that the OAPM would be filled effectively such that smaller diffraction-limited beam foci can be achieved.
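The marginal-ray angle and numerical aperture quoted for this mirror follow directly from \(f\) and \(D\); a short sketch of the arithmetic (the variable names are ours):

```python
import math

# Right-angled OAPM used throughout: reflected effective focal length f and diameter D
f_mm, D_mm = 76.2, 50.8

alpha = math.atan(D_mm / (2.0 * f_mm))   # half-angle subtended by the mirror edge from the focus
print(f"marginal-ray angle : {math.degrees(alpha):.1f} deg")  # ~18.4 deg
print(f"numerical aperture : {math.sin(alpha):.2f}")          # ~0.32, i.e. the ~0.31 quoted above
```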
Typical THz spectroscopy and imaging systems require at least two pairs of two OAPMs: the first pair collects the THz radiation from the source, and focuses it onto a sample, while the second pair collects the transmitted or reflected THz beam and focuses it onto a detector. There are a substantial number of degrees of freedom possible for a complete spectrometer design, as each individual OAPM has six positional degrees of freedom (three translational and three rotational). Here we examined the most common spectrometer designs, which consist of four right-angled OAPMs, with their optical axes all within the same plane (here denoted \(y-z\)). For each OAPM added to an optical setup there are two options for its orientation that keep the optical axis in the same plane: these change the beam direction by \(\pm 90^{\circ}\) relative to its previous direction in the \(y-z\) plane.
To describe three different geometries reported in the literature we define the nomenclature U-shape, step-shape and S-shape to refer to the spectrometers illustrated in Figs. 2(a)-(c), based on the pattern formed when considering the chief ray's beam path (yellow lines and arrows). The U-shape and step-shape design are the most widely used in THz time-domain spectrometers, while the S-shape was suggested to minimise aberrations in the sample plane [20]. Laurita _et al._ compared the performance of the U-shape and S-shape (referred to as the "conventional" and "modified" designs in their work), but did not consider the step-shape or the performance in the detector plane.
In the U-shape and step-shape designs [Figs. 2(a)-(b)], the first two OAPMs are oriented identically, but the designs differ in how the second pair of OAPMs is oriented. For the U-shape, the second pair of OAPMs is oriented to return the THz beam to the same \(y\) co-ordinate, leading to a relatively compact design, while the step- and S-shapes have a larger size in \(y\).
We stress here that the U-shape does not have the correct layout to collect all the _on-axis_ rays after the second OAPM: across the sample plane, the marginal rays highlighted in red are _not_ collected by the third OAPM [Figs. 2(a)], and hence the throughput of the system is not perfect. This problem is not seen for the step-shape or S-shape, where the third OAPMs are oriented correctly, and all the marginal rays after the sample plane are collected.
Based on a consideration of the marginal rays and the symmetry properties of an OAPM system, we define here a nomenclature that we find useful to describe the optical arrangement of pairs of OAPMs, inspired by the Glazer and Aleksandrov notation schemes widely used to describe the tilt patterns of perovskite octahedra. As we show in the following section, the orientation of each OAPM within a pair and the relative orientation of each pair of OAPMs are both critical to achieve minimal aberrations. Therefore, we introduce the notation \((x_{i},x_{j})\) to denote the orientation of a pair of OAPMs, where \(x=a,b\) denotes whether the marginal ray reflects from the "near"-side point \(a\) or the "far"-side point \(b\) in Fig. 1, and the subscript denotes the focal lengths \(f_{i}\) and \(f_{j}\). If the focal lengths of all mirrors are identical, as in this work, we drop the subscript notation. With this scheme, the U-shape design can be written \((a,b)(a,b)\) as pictured by the cyan ray in Fig. 2(a), as the marginal ray that reflects from point \(a\) on the first OAPM then reflects from \(b\), \(a\) and \(b\) points on the subsequent mirrors. The step-shape can be written \((a,b)(b,a)\) and the S-shape is \((a,a)(a,a)\). Retaining the brackets is useful as a guide to show where the foci of the optical system are, namely before and after each bracket.
To assess the performance of each setup, we introduced an offset in the position of the point source in the emitter (object) plane, either in the \(y\)-direction, as illustrated in Fig. 1, in the \(x\) direction, or in both. As a consequence of
Figure 2: Ray tracing models of three different THz-TDS geometries for an on-axis point source centred at \((0,0)\) in the emitter plane (EP), propagating to the sample plane (SP) and detector plane (DP). (a) U-geometry, or \((a,b)(a,b)\) using the notation defined in the text. The red highlighted marginal rays are not captured as they propagate. (b) Step-geometry, or \((a,b)(b,a)\) arrangement. The second OAPM pair is oriented the same as the first. (c) S-geometry, or \((a,a)(a,a)\) setup. (d) For the U- and step-shapes, propagation from an off-axis \((0,2\,\mathrm{mm})\) source, shown as green rays, leads to a tangential plane focus behind the SP focus for on-axis propagation, thus creating aberrations. (e) For the sample plane of the S-geometry, the off-axis rays from \((0,2\,\mathrm{mm})\) converge in the SP with minimal aberration.
the point source being off-axis by a distance \(\delta=2\,\)mm, rays propagate at a slight angle of \(1.5^{\circ}\) after the first OAPM, in comparison to the on-axis rays. This tilt can be observed in Fig. 2(d), where the off-axis rays are represented in green. The second OAPM, which forms an \((a,b)\) orientation pair, focused rays to a point at a distance \(-\delta\) in \(y\) from the optical axis, where the negative sign signifies that the image formed after the two OAPMs was inverted, and at a distance along the \(z\) axis behind the SP. In the sagittal plane (not shown here) the focus forms before the SP. Notably, for the alternative orientation of the second OAPM, \((a,a)\), pictured in Fig. 2(e), the off-axis rays are refocussed at the SP.
Geometric optics provides a rudimentary understanding of aberrations, but it has limitations as it does not consider diffraction or interference effects. In contrast, the physical optics module of ZEMAX can rigorously model the coherent propagation of light through the OAPMs, taking into account diffraction and interference. The beam is numerically defined as an array of sampled points with a complex amplitude in the plane normal to the chief ray. Here we performed physical optics calculations at 1 THz, with a Gaussian initial beam profile in the emitter plane, with beam waist \(w_{0}=0.4\,\)mm in \(x\) and \(y\) directions, initial power 1 W and hence peak irradiance \(I_{0}=4\,\)Wmm\({}^{-2}\). While the power and irradiance here were arbitrary, the beam waist is typical of some pulsed THz sources, such as large area photoconductive emitters [1] and optical rectification from amplified laser pulses. With this initial beam waist the beam diverges to fill the first OAPM without substantial loss at the edges, owing to the mirror's finite diameter. In order to achieve the best numerical accuracy, the physical optics module automatically chooses between propagating light from surface to surface along \(z\) via different algorithms (either Fresnel diffraction, or an angular spectrum propagation approach). Further, it adjusts the spatial grid to ensure the correct sampling as beams change diameter. Finally, the physical optics approach uses Gaussian "pilot" beams, propagated through the optical system, in order to aid the numerical algorithms: here separate \(x\) and \(y\) pilot beams were enabled to help better match the asymmetric shape of the OAPMs.
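For reference, the quoted peak irradiance is consistent with the standard Gaussian-beam relation \(I_{0}=2P/(\pi w_{0}^{2})\); a minimal check (this relation is assumed here, it is not stated explicitly above):

```python
import math

def gaussian_peak_irradiance(power_W, waist_mm):
    """Peak on-axis irradiance of a TEM00 Gaussian beam at its waist, I0 = 2 P / (pi w0^2)."""
    return 2.0 * power_W / (math.pi * waist_mm ** 2)

# Initial beam used in the physical optics runs: 1 W total power, w0 = 0.4 mm
print(gaussian_peak_irradiance(1.0, 0.4))   # ~3.98 W/mm^2, i.e. the ~4 W/mm^2 quoted
```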
## 3 Results
To better investigate the impact of OAPM orientation on the capabilities of THz systems, we now consider the spatial and temporal performance of the different designs in more detail, using both ray optics and physical optics calculations. To comprehensively assess image quality, we conducted a comparative analysis of the spatial distribution and the temporal evolution of the THz field across all geometries, at a frequency of 1 THz (wavelength \(\lambda=300\,\mu\)m). Note that the only impact of the wavelength is that it changes the diffraction-limited spot size. We set the THz beams to radiate from different positions in the emitter plane, in order to assess the impact of transverse misalignments and the imaging quality. Throughout this work we present results with an offset \(\delta=2\,\)mm as representative of the typical level of misalignment in coarsely-positioned THz setups, and also as a size representative of the mm scale over which far-field THz imaging can be performed. It should be noted that while results obtained with \(\delta=2\,\)mm are informative, larger \(\delta\) produces more significant aberrations.
### Spatial performance
The physical optics models produced beams that propagated through the spectrometers with close to Gaussian profiles near the focii. Pictured in Figure 3(a)-(c) are the beam profiles (normalised irradiance maps relative to the beam centres) at different positions along the propagation direction near the SP focus for the \((a,a)\) orientation, with a THz beam radiating from \((0,2\,\)mm) in the emitter plane. A circular beam profile was obtained before (panel a) and after (c) the focus, with a small spot in the focal plane (panel b). In contrast, before and after the focus the \((a,b)\) geometry results in a highly asymmetric beam profile (panels d and f) and a larger spot size in the focal plane than for the \((a,a)\) design. These results from numerical physical optics reinforce the conclusions drawn from Gaussian beam theory [16] and ray tracing [18] that the \(4f\)\((a,a)\) geometry can cancel out the geometric aberrations of each OAPM, and obtain good imaging performance even for off-axis THz beams.
The aberrations are not cancelled out in the \((a,b)\) orientation, with the consequence that the beam becomes asymmetric and exhibits an astigmatic difference \(d\): the beam is narrowest in \(x\) and \(y\) at different positions along \(z\), separated by a distance \(d\). This can be further seen in Fig. 3(g), where the beamwidths in \(x\) and \(y\) were calculated using the second order moment of the irradiance according to the ISO 11146-1 standard. For the \((a,a)\) geometry the sagittal (\(x\), blue circles) and tangential (\(y\), blue squares) beamwidths were similar, and were smallest (around 0.3 mm) at the SP (\(z=0\)). The sagittal (red circles) and tangential (red squares) beamwidths for the \((a,b)\) geometry had minima separated by a distance \(d=10\,\)mm. The astigmatic difference, \(d\), grows for larger THz beam offsets \(\delta\) in the emitter plane.
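The second-moment beamwidths plotted in Fig. 3(g) can be reproduced from any irradiance map with a few lines of code; the sketch below is a generic implementation of the ISO 11146-1 moment definition (returning \(\sigma\) rather than the \(4\sigma\) width) and is not ZEMAX's own routine:

```python
import numpy as np

def second_moment_widths(irradiance, x, y):
    """Centroid and second-moment (sigma) widths of a 2-D irradiance map,
    following the ISO 11146-1 moment definitions (the standard's width is d = 4*sigma)."""
    X, Y = np.meshgrid(x, y)
    p = irradiance / irradiance.sum()
    xc, yc = (p * X).sum(), (p * Y).sum()
    sx = np.sqrt((p * (X - xc) ** 2).sum())
    sy = np.sqrt((p * (Y - yc) ** 2).sum())
    return xc, yc, sx, sy

# Self-test on a synthetic Gaussian: for I ~ exp(-2 r^2 / w^2), sigma = w / 2
x = y = np.linspace(-2.0, 2.0, 401)      # mm
X, Y = np.meshgrid(x, y)
w = 0.4                                  # mm
I = np.exp(-2.0 * (X ** 2 + Y ** 2) / w ** 2)
print(second_moment_widths(I, x, y))     # sx = sy ~ 0.2 mm
```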
We used ray tracing and spot diagrams to assess the image quality for a wide variety of initial beam offsets, as ray optics calculations are substantially faster than physical optics simulations, and capture many of the same
features. In ray tracing, whether or not an optical system can achieve diffraction-limited performance can be assessed by examining "spot diagrams," which show the spatial distribution of rays as they intersect the focal plane, and comparing it to the diffraction-limited beam size, for instance, given by an Airy disk or from physical optics. Alternatively, because ray tracing keeps track of the phase of the electric field, the optical path difference (OPD) of different rays can be determined relative to the chief ray. The system can be assumed to be diffraction-limited if the phase change for all rays is sufficiently small: an OPD of less than a quarter wavelength is a common convention used in the optical modelling community to define diffraction-limited performance.
We report spot diagrams in Fig. 4 for each geometry (three rows) and at both the sample plane (left-hand column) and in the detector plane (right-hand column). In each case, the sequence of spot diagrams shows how the beam propagates through the focus in steps of 1 mm along the beam propagation direction. Different colours represent the spot diagrams obtained for on-axis and off-axis point sources. The spot diagrams are made up of a large number of spots, not all of which are separable on the scale shown here (10 mm by 10 mm for each spot diagram), where 1 mm corresponds to one of the smaller grey squares. It is evident for all geometries that propagation from an on-axis point source (blue points) is well-behaved, in that the rays traverse the system and reach a point-like focus both in the SP and the DP.
Considering the spot diagrams for the SP for the U-shape and the step-shape (top-left and middle-left black rectangles), which are identical owing to sharing the same initial \((a,b)\) geometry, it is clear that a small offset of 2 mm in \(x\) or \(y\) is sufficient to substantially distort the shape of the beam through the focus in a way that is entirely consistent with the physical optics results. The offset in \(y\) (green) creates a beam that is extended along \(y\) at a defocus of -2 mm (before the focus), and stretched along \(x\) after the focus. The rays cover a range about 2 times wider than the diffraction-limited size at 1 THz, as evident from a comparison with the black circles (Airy disks), which have radius \(\phi_{r}=1.22\lambda f_{\theta}\) for f-number \(f_{\theta}\). The system does not have diffraction-limited performance at 1 THz or higher frequencies, as the Airy disk reduces in diameter at higher frequencies.
In contrast, for the S-shaped design, with \((a,a)\) OAPMs, the performance at the sample position (bottom-left) is excellent for all the off-axis points pictured. The reason for the better performance can be intuitively understood as follows: at the second OAPM the marginal rays hit a matched surface, such that the distortion in the wavefront introduced by the first OAPM can be "inverted" by the second OAPM, as the beam propagates to the focus. Considering the \((a,a)\) configuration further, we found it continued to achieve diffraction-limited performance for offsets as large as \(\pm 4\) mm, although for even larger offsets (_e.g._ 8 mm) the beam tilt in the tangential plane in the collimated section became too substantial and a large fraction of marginal and central rays were lost. This limits the maximum area for useful THz imaging (using these OAPM diameters) to around \(\pm 5\) mm from the optical axis.
We continue to discuss the spot diagrams in Fig. 4 by considering the right-hand column, which illustrates the spot diagrams close to the detector plane for the same point source offsets and defocus positions as discussed above. From these diagrams, it is clear that the U-shape, \((a,b)(a,b)\), and step-shape, \((a,b)(b,a)\), designs have
Figure 3: (a)-(c) Beam profiles of \((a,a)\) geometry from physical optics calculations for a \((0,2\) mm) offset source at the emitter, at different \(z\). (a) \(z=-5\) mm relative to the focus formed (at \(f=76.2\) mm from the second OAPM), (b) at \(z=0\) (the focus) and (c) \(z=5\) mm. Note that the \(x\) and \(y\) co-ordinates are shown relative to the beam centres (the beam is actually centred at \(y=-2\) mm relative to the optical axis). (d)-(f) are the same as (a)-(c), but for positions close to the focus formed for the \((a,b)\) geometry. (g) Illustrates the beamwidths \(\sigma\) in \(x\) (sagittal, filled circles) and \(y\) (tangential, open squares) calculated from the second moment of the profiles at different positions \(z\) along the beam (relative to the SP, at \(z=0\)), for both \((a,a)\) (blue lines) and \((a,b)\) (red) oriented OAPMs.
substantially different imaging performances at the detector plane. For the U-shape, the aberrations evident at the SP are further compounded by the second OAPM pair, such that the off-axis spots are even larger in the DP. In contrast, for the step-shape geometry, the DP performance is diffraction-limited and comparable to that of the S-shape.
Returning to the diffraction-based results, these allow the irradiance of the beam to be readily calculated, as reported in Fig. 5 for Gaussian beams originating from \((0,2\,\mathrm{mm})\) in the emitter plane, away from the optical axis. Near the sample position, shown in Fig. 5(a), two maxima are formed before and after the nominal focal plane (shown as \(z=0\)) as a result of the astigmatic difference. These correspond to positions close to the minimum beam waists in \(x\) and \(y\) (Fig. 3(g)). In contrast, the \((a,a)\) geometry has a smaller beam waist and negligible astigmatic difference, resulting in a 5 times greater peak irradiance for the \((a,a)\) geometry than the \((a,b)\) geometry. Similarly to the spot diagrams near the detector plane (Fig. 4), the irradiance near the detector plane (Fig. 5(b)) demonstrates that the \((a,a)(a,a)\) and \((a,b)(b,a)\) designs perform far better than the \((a,b)(a,b)\) geometry.
Figure 4: Spatial profile of the THz beam near the sample plane (left column) and detector plane (right column), for the three different designs (rows). Within each black rectangle, spot diagrams are shown for a particular plane and design, as a function of the offset of the source in the emitter plane and the “defocus” - the difference along \(z\) from the ideal focus. The colours and far right-hand labels indicate the \((x,y)\) offsets of the point source in the emitter plane (in mm). Every plot has an Airy-disk (black line) that provides a visual indicator of the diffraction-limited beam size at \(1\,\mathrm{THz}\) (\(\lambda=300\mu\mathrm{m}\)) for an entrance pupil diameter of \(50.8\,\mathrm{mm}\). The U-geometry has significant aberrations for a \(2\,\mathrm{mm}\) offset, both at the SP and at the DP: rays fall outside the Airy disk limit, and the image is distorted. The performance of the step-geometry is poor at the SP but good at the DP, while the S-shape rays fall within the Airy disk for both SP and DP, and it exhibits the least aberrations.
### Temporal performance
Spot diagrams (from ray optic calculations) and intensity profiles (from physical optics simulations) offer a purely geometric perspective on aberrations, and provide insights into their spatial characteristics. However, for a more comprehensive understanding, it is advantageous to investigate the optical path difference (OPD) of the system relative to the ideal wavefront surface, using ray optics. The ideal wavefront is taken to be a sphere with radius \(f\), centred at the chief ray's position in the image plane. The OPD is then calculated for the different rays and is shown graphically by plotting it as a function of positions \(P_{x}\) and \(P_{y}\), which correspond to the ray's position in the exit pupil (the last optic) in the sagittal and tangential planes, respectively. By convention \(P_{x}\) and \(P_{y}\) are given normalised by the radius of the last optic. In Figure 6 we present the OPD for the sample plane versus \(P_{x}\) (top row) and \(P_{y}\) (bottom row), for various point source positions in the emitter plane (left to right), and for each design. While the OPD is often shown as the path difference divided by the wavelength, we opted here to show the OPD as a time delay to allow a better understanding of the scale of the OPD changes for the THz time-domain spectroscopy community.
For the on-axis case, \((0,0)\), the horizontal lines in the top-left and bottom-left panels in Fig. 6 signify the absence of an optical path difference (OPD) for all designs, i.e. zero phase difference, representing a perfect spherical wavefront converging to a point image. The off-axis points (representing translational misalignment or finite beam size) result in the incident wavefront converging at the focal point with a slight tilt caused by either the delayed or advanced arrival of these rays. For the \((0,2\,\mathrm{mm})\) field point a parabola-like OPD can be seen in \(P_{x}\) and \(P_{y}\) for the step and U geometries, with large OPD values \(>500\,\mathrm{fs}\) for the marginal rays (rays at large positive or negative values of \(P_{x}\) or \(P_{y}\)). In contrast, for S-geometry, the OPD change is small.
For THz time-domain spectroscopy applications, it is important to assess how the different path lengths impact the THz pulse duration at the sample position. One could average over the OPD curve by using an assumed profile for the THz beam at the exit pupil (_e.g._ a Gaussian) to weight the amplitude for each ray. Rather than perform that exercise, which would be wavelength and beam specific, we instead highlight the typical pulse duration of the fs laser pulses (\(\sim 100\,\mathrm{fs}\)) used in THz time-domain spectroscopy by the horizontal dashed lines. An OPD larger than this limit will introduce a substantial phase shift, and hence will broaden the duration of a THz pulse, simultaneously lowering the peak amplitude of the THz electric field. It is clear from Fig. 6 that for off-axis points (e.g. from poor emitter alignment) the temporal duration of the THz pulse at the sample plane is likely to be longer in duration for the step- and U-geometries, as a result of geometric aberrations. Further, it can be seen that precise alignment in the \(y\) direction is more critical (the OPDs are smaller for offsets in \(x\)).
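To make the connection between OPD, time delay and phase concrete, a small sketch follows (the 150 µm OPD used below is an illustrative value, not read from the figures):

```python
c_um_per_fs = 0.2998                      # speed of light in um/fs

def opd_to_delay_fs(opd_um):
    return opd_um / c_um_per_fs

def opd_in_wavelengths(opd_um, freq_THz):
    return opd_um / (299.8 / freq_THz)    # wavelength in um at freq_THz

print(opd_to_delay_fs(150.0))             # ~500 fs, well beyond a ~100 fs laser pulse
print(opd_in_wavelengths(150.0, 1.0))     # 0.5 wavelengths of phase error at 1 THz
# Quarter-wave (diffraction-limited) criterion at 1 THz expressed as a delay:
print(opd_to_delay_fs(299.8 / 4.0))       # ~250 fs
```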
The OPD for the complete 4 OAPM optical setups is illustrated in Fig. 7, which provides a clear depiction of why the aberrations in the DP differ for the three geometries. The OPD for off-axis point sources is largest for the U-geometry, reaching as large as \(1\,\mathrm{ps}\) (green curves), suggesting that the U-shape will temporally broaden the THz pulse at the detection position as a result of the substantial geometric aberrations. In comparison, the step-shape (orange curves) and S-geometry (blue curves) have a phase difference near zero, and within the \(\pm 100\,\mathrm{fs}\) limit that we suggest is important for typical THz time-domain spectrometer systems.
While the excellent performance of the S-geometry can be assumed to result from the cancellation of distortions within each pair in the \((a,a)(a,a)\) orientation, the similar performance of the step-shape at the detector position, and the poor performance of the U-shape need further explanation. The relative performance can be explained by considering the OPD introduced by each OAPM pair, as well as keeping track of the relative orientation of each pair using the marginal ray notation introduced above: \((a,b)(b,a)\) for the step-shape and \((a,b)(a,b)\) for the
Figure 5: Peak irradiance extracted from physical optics calculations for an initial Gaussian beam centred at \((0,2\,\mathrm{mm})\) in the emitter plane, shown (a) near the sample focal plane and (b) near the detector focal plane.
U-shape. For the first OAPM pair, the \((a,b)\) orientation creates a distorted wavefront for an off-axis field point, for example, the roughly parabolic OPDs in \(P_{x}\) and \(P_{y}\) seen in Fig. 6 for a \((0,2\,\mathrm{mm})\) point source in the emitter plane. In the step-shape, the second OAPM pair has \((b,a)\) orientation, and is in the same orientation in the lab coordinate system as the first pair. However, the OPD that the \((b,a)\) pair produces is not the same as for the first \((a,b)\) pair, because the image is inverted from the EP to the SP: e.g. the \((0,2\,\mathrm{mm})\) point in the emitter plane maps to \((0,-2)\,\mathrm{mm}\) in the sample plane. Critically, the OPD flips in sign for inverted field points in the \(x-y\) plane [18]. Thus the OPD (or wavefront distortion) created by the second, \((b,a)\) pair in the step-geometry acts to cancel out the OPD of the first pair. In contrast, if the second pair has \((a,b)\) orientation, as in the U-shape, the OPD of the second pair has the same sign, and the wavefront distortions add rather than subtract.
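This sign bookkeeping can be summarised as a toy tally over the marginal-ray notation; the sketch below simply assigns a +1/-1/0 distortion per pair and reproduces the qualitative SP/DP conclusions for the three designs studied here. It is only an illustration of the argument, not a substitute for ray tracing.

```python
# Toy distortion tally per OAPM pair, written in the marginal-ray notation:
#   (a,a) or (b,b) -> cancels within the pair, (a,b) -> +1, (b,a) -> -1 (inverts a prior (a,b))
CONTRIB = {("a", "a"): 0, ("b", "b"): 0, ("a", "b"): +1, ("b", "a"): -1}

def focus_distortion(pairs):
    """Accumulated distortion at each successive focus (SP, DP, ...) of a train of OAPM pairs."""
    d, history = 0, []
    for pair in pairs:
        d += CONTRIB[pair]
        history.append(d)
    return history

designs = {
    "U-shape    (a,b)(a,b)": [("a", "b"), ("a", "b")],
    "step-shape (a,b)(b,a)": [("a", "b"), ("b", "a")],
    "S-shape    (a,a)(a,a)": [("a", "a"), ("a", "a")],
}
for name, pairs in designs.items():
    sp, dp = focus_distortion(pairs)
    print(f"{name}: SP {sp:+d}, DP {dp:+d}")   # U: +1/+2, step: +1/0, S: 0/0
```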
## 4 Discussion
From the results presented above we can conclude that the S-shape geometry should be adopted in linear, non-linear and OPTP spectroscopy applications, as it is the most robust to the misalignment of the THz source in the \(x-y\) plane, and can best cancel out aberrations introduced by the OAPMs in order to achieve a diffraction-limited performance and the highest irradiance in both the SP and DP. For linear THz spectroscopy, how critical diffraction-limited performance is at the intermediate focal plane (the sample position) depends on the application: if raster scanned images are required, or if samples with small \(x-y\) extent are studied, then it is clearly advantageous to also work with the S-shape \((a,a)(a,a)\) geometry. However, if only spectroscopic measurements are desired, and the highest peak electric field/smallest beam area at the sample position is not critical, then the step-shape can be alternatively used as it has reasonable performance in the DP.
While the magnitude of the geometric aberrations in the SP and DP are properties of the spectrometer design and the EP offset, whether or not diffraction-limited performance can be achieved depends on the wavelength and it is therefore useful to consider the frequency-dependent performance of each design. For example, diffraction-limited performance is not achieved at \(1\,\mathrm{THz}\) in the SP for the U- and step-geometries (Fig. 4) for a \(2\,\mathrm{mm}\) offset, because geometric aberrations cause rays to fall over a region about twice the diameter of the Airy disk. However at \(500\,\mathrm{GHz}\), where the Airy disk doubles in radius relative to that at \(1\,\mathrm{THz}\), the majority of rays would fall within the Airy disk and diffraction-limited performance would be achieved at the SP. For higher THz frequencies, the Airy disk reduces in diameter and the relative contribution of aberrations becomes more significant.
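The frequency at which a given geometric blur stops being hidden inside the Airy disk can be estimated directly; a short sketch (the 0.6 mm blur radius below is an assumed, representative value for the 2 mm offset case, not a number taken from the figures):

```python
f_mm, D_mm = 76.2, 50.8
N = f_mm / D_mm                              # working f-number, ~1.5

def airy_radius_mm(freq_THz):
    """First-minimum (Airy) radius r = 1.22 * lambda * N for the final focusing OAPM."""
    wavelength_mm = 0.2998 / freq_THz
    return 1.22 * wavelength_mm * N

geometric_blur_mm = 0.6                      # assumed aberration blur radius (illustrative)
for f_THz in (0.5, 1.0, 2.0):
    r = airy_radius_mm(f_THz)
    verdict = "diffraction-limited" if geometric_blur_mm <= r else "aberration-limited"
    print(f"{f_THz:3.1f} THz: Airy radius {r:.2f} mm -> {verdict}")
```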
Based on a consideration of the spot diagrams and OPDs presented above, we can now suggest rules to aid the design of OAPM systems for THz spectroscopy and imaging:
Figure 6: Wavefront distortion illustrated by the optical path difference for the first two OAPMs. The plot presents the tangential (\(P_{y}\)) and sagittal plane (\(P_{x}\)) optical path differences at the exit pupil. The dashed red line denotes the typical fs laser pulse duration (\(100\,\mathrm{fs}\)) used in THz-TDS setups. The OPD is not defined for exit pupil coordinates where there are no rays on the final OAPM (because of beam tilt), for example along \(P_{y}\) in the S-geometry for \((0,2\,\mathrm{mm})\).
1. geometric aberrations may be expected to impact the optical performance of typical OAPM systems at 1 THz and higher frequencies;
2. an OAPM pair should have \((a,a)\) orientation to minimise aberrations at the focus;
3. if an \((a,b)\) orientation is imposed by some other experimental constraint, then it will have aberrations in its image plane;
4. these aberrations can be inverted by a \((b,a)\) pair.
These rules allow the optimum OAPM orientation to be deduced for more complex spectrometer designs, without needing to ray-trace the geometry. For example, if an additional optic such as a mirror is required to change the THz beam direction, or to couple an optical beam along the optical axes of the OAPMs (using a THz mirror that passes visible light and reflects THz radiation), then it is desirable to know the optimum orientation for the OAPM(s) after the THz mirror. We now consider the setup pictured in Fig. 8, where a mirror is placed after the first OAPM, and the THz beam propagates along \(z\). To form a focus, two orientations of the second OAPM are possible (top and bottom left diagrams). To identify the correct orientation of the final OAPM, such that aberrations are minimised, marginal ray notation and the rules above are useful. According to these rules the \((a,a)\) configuration (bottom left) should be used as it will have less significant geometric aberrations than the \((a,b)\) configuration (top left). Indeed the spot diagrams in the DP (right hand panels) validate this assertion. Similar considerations can be applied to correctly orient OAPMs in reflection geometry systems, where the THz beam reflected from a sample is collected, and THz beamsplitters are used.
For completeness we briefly discuss the THz detection process. In THz detection with photoconductive antennae, the typical antenna gap is around 10-20 \(\mu\)m, which is strongly sub-wavelength. The effective width of the antenna is larger than its gap, and varies with geometry (e.g. bow-tie, dipole, log-spiral). Hence commercial photoconductive antennae use high numerical aperture silicon lenses to focus THz beams down to sizes smaller than the free-space limit. Here the numerical model does not treat metallic antennae or silicon lenses, and our numerical results are hence closest to the experimental case of electro-optic sampling, where no lens or antenna is used. More advanced electromagnetism calculations would be required to include antenna effects. However, the conclusions drawn here are independent of the detection process: for example the electric field input into the detector will be less distorted for the \((a,a)(a,a)\) geometry than the \((a,b)(a,b)\) geometry.
Finally, it is worth discussing the impact of geometric aberrations on polarisation. There have been studies that reported asymmetric field and polarization distributions for OAPMs with different f-numbers used in the U- and step-shape [21]. Conversely, abrupt changes in THz polarization states were experimentally observed by Takai _et al._[22], and were ascribed to the inherent geometry of the OAPMs. In the present study, the polarization performance of different OAPM geometries was not considered. We note that in a recent experimental study using
Figure 7: Wavefront distortion illustrated by the optical path difference for the entire optical path to the detection plane. The S- and step-geometry (blue and orange respectively) demonstrate a superior performance for off-axis points, showing minimal path difference and the least temporal broadening. On the other hand, for the U-geometry, we observe that translational misalignments rapidly cause wavefront aberrations leading to large OPDs.
multi-pixel THz emitters producing radial and azimuthal polarization, the polarization state can be experimentally corrected [23].
## 5 Conclusion
The study presented here contributes to a better understanding of the propagation and behaviour of THz pulses in multi-OAPM geometries. We demonstrated that geometric aberrations can limit the optical performance both in the sample and detector plane: they increase the spot size and decrease the amplitude of the THz field (they lower the irradiance). Depending on the specific orientation of each OAPM in the optical system, the wavefront distortions, and aberrations at each subsequent focus, can be reduced or enhanced. Results from both ray tracing and physical optics, including diffraction and interference effects, were in agreement. We introduced marginal ray notation to capture the orientation of each OAPM in a system. The S-shape cancels out the geometric aberrations by the correct orientation of the second OAPM for each pair throughout the optical path, while the step-shape achieves good performance for off-axis rays at the detection plane, by the second pair of OAPMs cancelling out the distortion produced by the first pair. The use of the U-shape design should be discouraged as it adds, rather than subtracts, the distortion from each OAPM pair. Our modelling approach and design rules can be readily applied to the design of more complex THz imaging and spectroscopy setups based on OAPMs.
## Declarations
**Ethical Approval.** N/A. **Competing interests.** The authors declare that they have no competing interests.
**Authors' contributions.** N.C. performed the optical modelling, prepared the figures, and drafted and edited the paper. J.L. discussed the optical modelling, helped prepare the figures, and drafted and edited the paper.
**Funding.** The authors would like to acknowledge funding from the EPSRC (UK) (Grant No. EP/V047914/1).
**Availability of data and materials.** ZEMAX modelling files and data are available from the authors on reasonable request.
|
2310.03980 | Assessment and Application of Wavelet-based Optical Flow Velocimetry
(wOFV) to Wall-Bounded Turbulent Flows | The performance of a wavelet-based optical flow velocimetry (wOFV) algorithm
to extract high accuracy and high resolution velocity fields from particle
images in wall-bounded turbulent flows is assessed. wOFV is first evaluated
using synthetic particle images generated from a channel flow DNS of a
turbulent boundary layer. The sensitivity of wOFV to the regularization
parameter (lambda) is quantified and results are compared to PIV. Results on
synthetic particle images indicated different sensitivity to
under-regularization or over-regularization depending on which region of the
boundary layer is analyzed. Synthetic data revealed that wOFV can modestly
outperform PIV in vector accuracy across a broad lambda range. wOFV showed
clear advantages over PIV in resolving the viscous sublayer and obtaining
highly accurate estimates of the wall shear stress. wOFV was also applied to
experimental data of a developing turbulent boundary layer. Overall, wOFV
revealed good agreement with both PIV and PIV + PTV. However, wOFV was able to
successfully resolve the wall shear stress and correctly normalize the boundary
layer streamwise velocity to wall units where PIV and PIV + PTV showed larger
deviations. Analysis of the turbulent velocity fluctuations revealed spurious
results for PIV in close proximity to the wall, leading to significantly
exaggerated and non-physical turbulence intensity. PIV + PTV showed a minor
improvement in this aspect. wOFV did not exhibit this same effect, revealing
that it is more accurate in capturing small-scale turbulent motion in the
vicinity of boundaries. The enhanced vector resolution of wOFV enabled improved
estimation of instantaneous derivative quantities and intricate flow structure
both closer to the wall. These aspects show that, within a reasonable lambda
range, wOFV can improve resolving the turbulent motion occurring in the
vicinity of physical boundaries. | Alexander Nicolas, Florian Zentgraf, Mark Linne, Andreas Dreizler, Brian Peterson | 2023-02-28T11:52:02Z | http://arxiv.org/abs/2310.03980v1 | **Assessment and Application of Wavelet-based Optical Flow Velocimetry (wOFV) to Wall-Bounded Turbulent Flows**
###### Abstract
The performance of a wavelet-based optical flow velocimetry (wOFV) algorithm to extract high accuracy and high resolution velocity fields from tracer particle images in wall-bounded turbulent flows is assessed. wOFV is first evaluated using synthetic particle images generated from a channel flow DNS of a turbulent boundary layer. The sensitivity of wOFV to the regularization parameter (\(\lambda\)) is quantified and results are compared to cross-correlation-based PIV. Results on synthetic particle images indicated different sensitivity to under-regularization or over-regularization depending on which region of the boundary layer is being analyzed. Nonetheless, tests on synthetic data revealed that wOFV can modestly outperform PIV in vector accuracy across a broad \(\lambda\) range. wOFV showed clear advantages over PIV in resolving the viscous sublayer and obtaining highly accurate estimates of the wall shear stress and thus normalizing boundary layer variables. wOFV was also applied to experimental data of a developing turbulent boundary layer. Overall, wOFV revealed good agreement with both PIV and a combined PIV \(+\) PTV method. However, wOFV was able to successfully resolve the wall shear stress and correctly normalize the boundary layer streamwise velocity to wall units where PIV and PIV \(+\) PTV showed larger deviations. Analysis of the turbulent velocity fluctuations revealed spurious results for PIV in close proximity to the wall, leading to significantly exaggerated and non-physical turbulence intensity in the viscous sublayer region. PIV \(+\) PTV showed only a minor improvement in this aspect. wOFV did not exhibit this same effect, revealing that it is more accurate in capturing small-scale turbulent motion in the vicinity of boundaries. The enhanced vector resolution of wOFV enabled improved estimation of instantaneous derivative quantities and intricate flow structure both closer to the wall and more accurately than the other velocimetry methods. These aspects show that, within a reasonable \(\lambda\) range that can be verified using physical principles, wOFV can provide improvements in diagnostics capability in resolving turbulent motion occurring in the vicinity of physical boundaries.
## 1 Introduction
Fluid flow dynamics and the interaction with walls are of prime importance in a variety of engineering applications. The dynamics of the 'boundary layer' region are of major interest and have been the subject of extensive research since the fundamental work of (Prandtl, 1904). Detailed knowledge of the momentum transport processes within turbulent boundary layers underpins the success of many industrial, aerodynamic and medical designs and their relevant applications. Obtaining accurate velocity measurements across the extent of the boundary layer flow is key to developing a sound understanding of the complex multiscale phenomena present in wall-bounded turbulence. The structure of the turbulent boundary layer is commonly delineated based on regions primarily dominated by either viscous stresses (the viscous sublayer), turbulent Reynolds stresses (the logarithmic region) or influenced by both (the buffer layer). To evaluate and facilitate comparison of these different regions between theoretical, numerical and experimental results, the boundary layer mean streamwise velocity \(\langle U_{1}\rangle\) and wall-normal distance coordinate \(x_{2}\) are typically normalised to so-called "wall units":
\[u^{+}=\frac{\langle U_{1}\rangle}{u_{\tau}} \tag{1}\]
\[y^{+}=\frac{x_{2}}{v} \tag{2}\]
where \(\nu\) is kinematic viscosity, a physical property of the fluid. The key variable involved in the nondimensionalization is the friction velocity \(u_{\tau}\), defined as:
\[u_{\tau}=\sqrt{\frac{\tau_{w}}{\rho}} \tag{3}\]
where \(\rho\) is the fluid density. \(\tau_{w}\) is the mean wall shear stress:
\[\tau_{w}=\mu\left.\frac{\partial\langle U_{1}\rangle}{\partial x_{2}}\right|_{x_{2}=0}=\mu\gamma \tag{4}\]
where \(\mu\) is the dynamic viscosity (\(\mu=\nu\rho\)). For flows with constant physical properties, it can be seen from Eqns. 1-4 that the normalising variable \(u_{\tau}\) is ultimately defined by the wall shear stress \(\tau_{w}\) and therefore by the estimate of the velocity gradient at the wall \(\gamma\). The accurate determination of \(\gamma\) is thus crucial for accurate scaling and subsequent evaluation of boundary layer quantities. This sharp gradient due to the no-slip condition at the wall is challenging to resolve experimentally due to the need to sample flow motion down to the wall with sufficient accuracy with minimal disturbance to the flow itself.
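The chain from a measured near-wall profile to wall units (Eqns. 1-4) is short; a minimal sketch with illustrative fluid properties and a synthetic linear viscous-sublayer profile (none of these numbers are taken from the datasets analysed later):

```python
import numpy as np

nu  = 1.5e-5                      # kinematic viscosity [m^2/s] (illustrative, air-like)
rho = 1.2                         # density [kg/m^3]
mu  = nu * rho                    # dynamic viscosity [Pa s]

# Wall-normal positions x2 [m] and mean streamwise velocity <U1> [m/s] near the wall
x2 = np.array([0.0, 50e-6, 100e-6, 150e-6])
U1 = np.array([0.0, 0.05, 0.10, 0.15])       # synthetic linear viscous-sublayer profile

gamma = np.polyfit(x2, U1, 1)[0]             # wall velocity gradient d<U1>/dx2
tau_w = mu * gamma                           # wall shear stress, Eq. 4
u_tau = np.sqrt(tau_w / rho)                 # friction velocity, Eq. 3

y_plus = x2 * u_tau / nu                     # Eq. 2
u_plus = U1 / u_tau                          # Eq. 1
print(gamma, tau_w, u_tau)                   # 1000 1/s, 0.018 Pa, ~0.122 m/s
print(y_plus, u_plus)                        # u+ = y+ inside the viscous sublayer
```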
Non-intrusive flow measurement techniques such as digital particle image velocimetry (PIV) have become well established for boundary layer investigations (Adrian et al., 2000; Willert, 2015; De Silva et al., 2014; Dennis and Nickels, 2011; Gao et al., 2013; Herpin et al., 2008; Lehew et al., 2013; Schroder et al., 2011). To determine the velocity, each PIV image is subdivided into interrogation windows (IWs), which are cross-correlated between image frames. PIV has become a mature diagnostic technique that is robust, efficient and well-understood in terms of its sources of error and theoretical underpinnings. However, a fundamental limitation still exists in that the spatial resolution of PIV is directly related to the smallest size of the IW (Kahler et al., 2012). Since the velocity vector represents a spatially-averaged velocity of particles within each IW, the estimated velocity is a low-pass filtered version of the true fluid flow, which is problematic if turbulent fluctuations and velocity gradients are present within the IW itself. Particularly in the case of wall-bounded flows, which always feature strong velocity gradients near the wall, obtaining accurate and reliable velocity measurements in the vicinity of the wall can present challenges for cross-correlation-based PIV. The low-pass filtering effect increases uncertainty in regions of high velocity gradients due to an increased spread and biasing of the correlation peak (Scarano and Riethmuller, 2000; Kahler et al., 2012; Raffel et al., 2018).
Another velocimetry technique is particle tracking velocimetry (PTV), which attempts to detect and subsequently match individual tracer particles between frames to determine their velocity. This method is sometimes used as a subsequent step following an initial PIV result in hybrid PIV + PTV algorithms (Keane et al., 1995; Stitou and Riethmuller, 2001). Use of PIV + PTV can significantly improve the achievable spatial resolution over PIV (Kahler et al., 2012), without requiring low seeding densities and has been employed to study boundary layer flows (Renaud et al., 2018; Ding et al., 2019; Kahler et al., 2012). However, PTV vector fields often contain higher noise levels in the signal, and sufficient filtering or direct spatial/ensemble averaging is required to mitigate this noise.
A promising alternative to these traditional velocimetry techniques is a method originating from the field of computer vision, known as optical flow (Horn and Schunck, 1981). Optical flow is a method often referred to in literature as dense motion estimation, i.e., a velocity vector is calculated for every pixel in a digital image. Application of optical flow velocimetry (OFV) techniques have previously demonstrated increased accuracy and resolution over conventional correlation-based methods (Yuan et al., 2007; Ruhnau et al., 2007; Corpetti et al., 2006; Heas et al., 2012; Derian et al., 2013; Kadri-Harouna et al., 2013; Schmidt and Sutton, 2020; B. E. Schmidt et al., 2021). Such studies have primarily focused on synthetic and experimental test cases involving analytical flows, isotropic turbulence and free shear flows.
The impressive spatial resolution and improved velocity vector accuracy associated with OFV makes it an attractive tool to resolve velocities close to the wall, enabling reliable calculation of \(\gamma\). At the same time, OFV can improve estimation of small-scale turbulent fluctuations near the wall as well as computation of derivative quantities which yield insight on near-wall vortical structures that are believed to play an important role in the organization of turbulence within the boundary layer (Robinson, 1991; Herpin et al., 2012; Adrian et al., 2000). Despite its capabilities, only a few applications of OFV to wall-bounded flows exist in the literature (Kapulla et al., 2011; Kahler et al., 2016; Stanislas et al., 2005; Ruhnau and Schnorr, 2006; Stark, 2013; Cai et al., 2019; Gevelber et al., 2022). Such studies primarily use wall-bounded environments as test cases for other aspects of the specific OFV algorithms and limit investigations to velocity profiles. A thorough evaluation of OFV to resolve a turbulent boundary layer is limited. Furthermore, analysis of derived quantities such as the wall shear stress as well as evaluation of the accuracy and effect on resolution of the inner-scaled turbulent boundary layer quantities is absent in the literature.
Variational OFV techniques involve selection of a scalar regularization parameter that is typically determined empirically. Regularization imparts a degree of spatial regularity to the estimated flow field that
suppresses non-physical noise and provides closure to the optical flow problem. Correct selection of the regularization parameter \(\lambda\) is key in obtaining accurate velocity fields that accurately resolve fine-scale motion without excessive damping or smoothing of velocity gradients. This is especially important in estimating \(\gamma\), where the discontinuity in motion at the wall can be particularly susceptible to the smoothing effect inherent in regularization (Weickert & Schnorr, 2001; Kalmonu, 2018; Zach et al., 2007; Black & Anandan, 1996; Aubert, 1999). To the best of the authors' knowledge, other works exploring or discussing this parameter in the context of fluid velocimetry are limited to those of (Corpetti et al., 2002; Kapulla et al., 2011; Stark, 2013; Schmidt & Sutton, 2020; Cai et al., 2018; Heas et al., 2013). None of these, however, have investigated the sensitivity of \(\lambda\) specifically in relation to near-wall measurements in wall-bounded flows.
The present work assesses the performance of an advanced wavelet-based optical flow velocimetry (wOFV) method to obtain highly resolved and accurate measurements of velocity and derived quantities such as the wall shear stress in turbulent wall-bounded flows. The influence of regularization on velocity results and normalized boundary layer quantities is investigated to understand the effect of this parameter. The first part of the manuscript provides an overview of optical flow and a brief outline of the wavelet-based implementation. This is followed by a detailed assessment and sensitivity study of the regularization parameter on wOFV results in comparison to correlation-based PIV using synthetic particle images generated from DNS of a turbulent channel flow. The final part of this work applies wOFV to an experimental PIV dataset featuring a developing turbulent boundary layer. Results are compared to correlation-based PIV processing to demonstrate the advantages of wOFV as an alternative technique in the study of turbulent wall-bounded flows.
## 2 Optical Flow
### Principles
Optical flow describes the apparent displacement of brightness intensity patterns in an image sequence (Horn & Schunck, 1981). The basic assumption in optical flow techniques is the conservation of a quantity in the image plane, typically brightness intensity along a point trajectory. This is expressed as the optical flow constraint equation (OFCE):
\[\frac{dI(\mathbf{x},t)}{dt}=\frac{\partial I(\mathbf{x},t)}{\partial t}+\mathbf{U}(\mathbf{x},t)\cdot\nabla I(\mathbf{x},t)=0 \tag{5}\]
where \(I(\mathbf{x},t)\) is the brightness intensity at pixel locations \(\mathbf{x}=(x_{1},x_{2})^{T}\) in the image domain \(\Omega\) and \(\mathbf{U}(\mathbf{x},t)=\big{(}U_{1}(\mathbf{x},t),U_{2}(\mathbf{x},t)\big{)}^{T}\)is the two-dimensional displacement. Equation 5 is recognisable as a transport equation of a passive scalar in a divergence-free flow (Liu & Shen, 2008). Assuming a constant velocity and a unit time interval between the image pair, Eq. 5 can be integrated to the displaced frame difference (DFD):
\[I_{0}(\mathbf{x})-I_{1}\big{(}\mathbf{x}+\mathbf{U}(\mathbf{x})\big{)}=0 \tag{6}\]
Equation 5 or 6 is known as the _data term_ in OF literature. It establishes the relationship between a measurement in the image plane \(I(\mathbf{x},t)\) and the variable to be calculated \(\mathbf{U}(\mathbf{x})\). The data term is incorporated into a penalty function to be minimised, commonly a quadratic penalty as employed in the present study:
\[J_{D}=\int\limits_{\Omega}[I_{0}(\mathbf{x})-I_{1}(\mathbf{x}+\mathbf{U}(\mathbf{x}))]^{2}\,d\Omega \tag{7}\]
The data term alone, however, is ill-posed, as it relates a two-dimensional velocity to only one observed variable, the image intensity. This results in an ambiguous situation where only the motion component along the local brightness gradient (i.e., perpendicular to iso-intensity contours) can be determined, known in the literature as the aperture problem (Beauchemin & Barron, 1995). Different methods of resolving the aperture problem exist (Barron et al., 1994). In the seminal work of Horn & Schunck (1981), a variational approach was proposed to assimilate the data term together with an additional smoothness constraint known as the _regularization term_ \(J_{R}\), weighted by a scalar parameter \(\lambda\), into a minimization problem to solve for the image plane per-pixel displacement:
\[\widehat{\mathbf{U}}=\arg\min_{\mathbf{U}}J_{D}(I_{0},I_{1},\mathbf{U})+\lambda J_{R}(\mathbf{U}) \tag{8}\]
where the caret (\(\mathbf{\widehat{\ }}\)) denotes the final estimated quantity. The regularization term, which is solely a function of the velocity field, provides additional information about the velocity field to compensate for regions where motion
cannot be determined from the data term alone, such as tangentially along image contours, as well as regions of constant, uniform image features where spatiotemporal image gradients vanish. \(J_{R}\) affects the spatial coherence of neighbouring velocity vectors and enforces a degree of regularity or visually perceived "smoothness" to the velocity field. Regularization also functions as a type of outlier rejection process during the minimization (Heitz et al., 2010) and reduces the susceptibility of the estimated vector field to noise and imaging imperfections. In the context of fluid velocimetry, regularization terms involving higher-order derivatives of the velocity field are preferred to better preserve velocity gradients and thus improve estimation of derived quantities such as vorticity and strain-rate. The present work uses the Laplacian regularization in a quadratic penalty:
\[J_{R}=\int\limits_{\Omega}|\nabla^{2}U_{1}|^{2}+|\nabla^{2}U_{2}|^{2}\,d\Omega \tag{9}\]
with the continuous wavelet operator approximation described in (Kadri-Harouna et al., 2013). Laplacian regularization imparts a physically sound smoothing in a similar manner to viscosity in divergence-free two-dimensional flows (Schmidt and Sutton, 2021). Using the Laplacian regularization provides nearly identical accuracy but with significantly less computing time than other high-order schemes such as the second-order divergence curl (Suter, 1994) or viscosity-based regularization (Schmidt and Sutton, 2021).
Variational optical flow techniques seek a per-pixel vector field transformation \(\boldsymbol{\bar{U}}\) that maps one image onto the subsequent one such that the transformation best: (1) conserves pixel brightness intensity and (2) enforces the regularity defined by \(J_{R}\). The parameter \(\lambda\) in Eq. 8 establishes the relative importance of \(J_{D}\) versus \(J_{R}\) during the minimization process and determines the extent to which \(J_{R}\) can pull the estimated velocity field \(\boldsymbol{\bar{U}}\) away from the constraint of brightness conservation in \(J_{D}\). Lower \(\lambda\) values place a stronger emphasis on reducing \(J_{D}\) during minimization, thus attempting to better match pixel intensities between \(I_{0}\) and \(I_{1}\) to reduce the DFD even if the intensity variations do not correspond to the true motion. This creates non-physical velocity fluctuations at fine scales visible as noise in \(\boldsymbol{\bar{U}}\). Increasing \(\lambda\) dampens the small-scale motion; however, a higher \(\lambda\) weighting can lead to excessive smoothing of the velocity field. Sensitivity analysis of this parameter is a key aspect in understanding the applicability of OFV as an alternative diagnostic technique for studying wall-bounded flows.
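To make the role of \(\lambda\) concrete, the following minimal Python sketch evaluates the functional of Eq. 8 (with the data term of Eq. 7 and the Laplacian regularization of Eq. 9) for a candidate displacement field. The function name and the use of SciPy routines are illustrative choices and are not part of the wOFV implementation; in wOFV the minimization is carried out over wavelet coefficients rather than over the velocity field directly (Sect. 2.2).

```python
import numpy as np
from scipy.ndimage import map_coordinates, laplace

def objective(I0, I1, U1, U2, lam):
    """Evaluate J = J_D + lambda * J_R (Eqs. 7-9) for a candidate per-pixel
    displacement field (U1, U2), given an image pair (I0, I1)."""
    ny, nx = I0.shape
    rows, cols = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # Warp I1 by the candidate displacement and form the displaced frame
    # difference of Eq. 6 (U1 displaces columns/x1, U2 displaces rows/x2).
    I1_warped = map_coordinates(I1, [rows + U2, cols + U1], order=3, mode="nearest")
    J_D = np.sum((I0 - I1_warped) ** 2)                 # data term, Eq. 7
    # Laplacian regularization of Eq. 9, approximated with a discrete Laplacian.
    J_R = np.sum(laplace(U1) ** 2 + laplace(U2) ** 2)
    return J_D + lam * J_R
```

Larger \(\lambda\) values in this functional penalize curvature of the displacement field more strongly, which is the smoothing effect discussed above.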
### Wavelet-based Optical Flow
The current wOFV implementation was proposed by Derian (2013), building on the original wavelet-based optical flow methods of Wu et al. (2000) and Chen et al. (2002) for computer vision applications. Improvements in the form of symmetric boundary conditions (Schmidt and Sutton, 2020) and efficient implementation of high-order and physically-sound regularization terms (Kadri-Harouna et al., 2013; Schmidt and Sutton, 2021) have furthered the robustness and accuracy of this technique. For brevity, only an overview of the wavelet-based optical flow method is presented. Details of the wOFV algorithm can be found in (Derian et al., 2013; Kadri-Harouna et al., 2013; Schmidt and Sutton, 2019; Schmidt and Sutton, 2020).
The principle of wOFV, in contrast to other OFV techniques, is to perform the minimization in Eq. 8, not over the physical velocity field \(\boldsymbol{U(x)}\), but over the wavelet coefficients \(\boldsymbol{\theta}=(\theta_{1},\theta_{2})^{\mathsf{T}}\) from its Discrete Wavelet Transform (DWT) \(\boldsymbol{\theta}=\boldsymbol{\Psi^{-1}(x)U(x)}\), where \(\boldsymbol{\Psi^{-1}(x)}\) denotes the wavelet transform decomposition operator. The minimization problem is then expressed as:
\[\boldsymbol{\bar{\theta}}=\arg\,\min_{\boldsymbol{\theta}}J_{D}(I_{0},I_{1}, \boldsymbol{\theta})+\,\lambda J_{R}(\boldsymbol{\theta}) \tag{10}\]
Broadly speaking, a wavelet transform extracts the frequency content of a signal (or image in 2D) at different scales of resolution (Mallat, 2009). The wavelet transformed velocity field coefficients \(\boldsymbol{\theta}\) are optimized sequentially in a multi-resolution strategy. The coarsest-scale wavelet coefficients are estimated first, before coefficients associated with progressively finer scales are estimated until the pixel scale is reached. Previous coarse-scale velocity estimates are included in every level of estimation; therefore, earlier spurious vectors from coarser-scale velocity estimates are corrected for as finer-scale motion is determined. Once the full minimization is complete, the velocity field in physical space is recovered by application of the DWT reconstruction operator \(\boldsymbol{\Psi(x)}\) to the output wavelet coefficients \(\boldsymbol{\Psi(x)}\boldsymbol{\bar{\theta}}=\boldsymbol{\bar{U}}\).
To cope with large displacements, traditional OFV methods commonly use multiresolution coarse-to-fine warping strategies (Heitz et al., 2010), which extend the achievable dynamic range. This approach, however, can lead to propagation of errors during the multi-scale estimation process with no possibility of posterior correction. Conversely, the multiresolution framework inherent in wavelet decompositions provides a natural scheme that is well-suited to represent the multi-scale nature of turbulence (Deriaz and Perrier, 2009; Farge et al., 1996). Decomposition of the velocity field across the wavelet basis functions also allows for accurate implementation of high-order derivatives (Beylkin, 1992) used in the calculation of \(J_{R}\). Previous studies
demonstrated wOFV to be among some of the more accurate existing modern OFV methods, see (Cai et al., 2018; Kadri-Harouna et al., 2013; Derian et al., 2013; Schmidt & Sutton, 2019).
In this work, the wOFV implementation uses the odd-length biorthogonal nearly-coiflet wavelet basis (BNC 17/11) introduced by (Winger & Venetsanopoulos, 2001). This basis has a _nearly_ maximum number of vanishing moments possible for a given biorthogonal wavelet filter size. Maximising the number of vanishing moments increases velocity estimation accuracy up to a degree (Derian et al., 2013). This wavelet family is notable for the improved retention of fine details in its wavelet transform partial reconstructions which are implicit in the multi-scale estimation process in wOFV methods. Since the basis is biorthogonal, it is implemented using the non-expansive symmetric boundary condition described in (Schmidt & Sutton, 2020) which eliminates boundary artefacts resulting from a lack of periodicity in the imaged motion.
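As an illustration of the multi-scale representation over which the coefficients \(\boldsymbol{\theta}\) are defined, the short Python sketch below decomposes and reconstructs one velocity component with PyWavelets. The BNC 17/11 basis is not distributed with PyWavelets, so the standard 'bior4.4' basis is used here purely as a stand-in; the sketch only shows the decomposition and reconstruction operators, not the minimization itself.

```python
import numpy as np
import pywt

# Placeholder velocity component on a 1024 x 1024 grid.
U1 = np.random.rand(1024, 1024)

# Forward transform Psi^{-1}: velocity field -> wavelet coefficients theta.
# 'bior4.4' is a stand-in for the BNC 17/11 basis used in the paper.
coeffs = pywt.wavedec2(U1, wavelet="bior4.4", mode="symmetric", level=6)

# coeffs[0] holds the coarsest-scale approximation, which wOFV estimates first;
# coeffs[1:] hold detail coefficients at progressively finer scales.
coarse_only = [coeffs[0]] + [tuple(np.zeros_like(d) for d in detail)
                             for detail in coeffs[1:]]

# Inverse transform Psi reconstructs a velocity field from the coefficients;
# here only the coarsest scale is retained, mimicking an early estimation level.
U1_coarse = pywt.waverec2(coarse_only, wavelet="bior4.4", mode="symmetric")
```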
## 3 Description of Synthetic Test Case
In order to quantitatively assess wOFV performance and \(\lambda\) parameter sensitivity in wall-bounded flows, it is first necessary to compare estimated velocity fields from wOFV to a known ground truth velocity available from synthetic data. Synthetic data provides a useful test platform where parameters can be easily and independently controlled in an idealized image environment. In this work, the synthetic data are derived from direct numerical simulation (DNS) of a turbulent channel flow (Graham et al., 2016) hosted online at the Johns Hopkins Turbulence Database (JHTDB) (Li et al., 2008). Details of the synthetic data are described below.
### DNS dataset
Table 1 describes the simulation parameters of the JHTDB, which are stored in a nondimensional form based on the half-channel height \(h\). One hundred temporally correlated velocity fields are extracted with a time separation of 2.5 stored DNS database timesteps (\(\delta t\)) from a subset of the DNS domain that includes the no-slip velocity grid point of the lower wall. The fields are of nondimensional size \(0.17h\times 0.17h\times(5\times 10^{-4})h\) and sampled from the database at a grid resolution of 1024 \(\times\) 1024 \(\times\) 3 using fourth-order Lagrange polynomial interpolation (Berrut & Trefethen, 2004).
### Particle Image Generation
Once the velocity fields from the DNS are extracted, it is necessary to determine tracer particle displacements between frames of each image pair as they are advected by the DNS velocity. For the initial frame of each image pair, synthetic particle tracer locations are initialized from a random distribution for each of the extracted DNS velocity fields. The velocity field is assumed to be constant between consecutive images and the tracers assumed to be spherical and massless. The displacement of each particle in each second frame is computed numerically using an explicit Runge-Kutta scheme (Dormand & Prince, 1980) and a modified Akima spline interpolation for the velocities at particle locations (Akima, 1974). The velocity fields and particle displacements are then scaled from the nondimensional DNS units to pixel displacements per unity interframe time interval (\(dt=1\)) such that the maximum image plane velocity magnitude corresponds to \(\sim\)3.5 \(px/dt\), and the maximum out-of-plane displacement is \(\sim\)0.8 \(px/dt\).
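A minimal sketch of this advection step is given below, assembled from SciPy building blocks: an explicit Runge-Kutta integrator (RK45) and a regular-grid interpolator as a stand-in for the modified Akima spline interpolation used in the actual image generation. The function name and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

def advect_particles(x1g, x2g, U1, U2, particles, dt=1.0):
    """Advect tracer positions through a frozen velocity field (constant between
    frames) using an explicit Runge-Kutta scheme. particles has shape (N, 2) = (x1, x2)."""
    # Stand-in interpolators for the velocity at arbitrary particle positions.
    interp_u1 = RegularGridInterpolator((x2g, x1g), U1, bounds_error=False, fill_value=0.0)
    interp_u2 = RegularGridInterpolator((x2g, x1g), U2, bounds_error=False, fill_value=0.0)

    def rhs(t, y):
        pos = y.reshape(-1, 2)                         # positions as (x1, x2)
        pts = np.column_stack([pos[:, 1], pos[:, 0]])  # interpolators expect (x2, x1)
        return np.column_stack([interp_u1(pts), interp_u2(pts)]).ravel()

    sol = solve_ivp(rhs, (0.0, dt), particles.ravel(), method="RK45", rtol=1e-8)
    return sol.y[:, -1].reshape(-1, 2)                 # positions in the second frame
```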
The particle image pixel intensities are determined using classical methods of synthetic particle image generation (Raffel et al., 2018). The maximum particle intensity is governed by its diameter \(d_{p}\) and out-of-plane position \(x_{3,p}\) within a Gaussian profile synthetic laser sheet:
\begin{table}
\begin{tabular}{|l c|}
\hline Bulk velocity \(U_{b}\) & 0.99994 \\
\hline Centreline velocity \(U_{c}\) & 1.1312 \\
\hline Friction velocity \(u_{\tau}\) & 0.0499 \\
\hline Kinematic viscosity \(\nu\) & \(5\times 10^{-5}\) \\
\hline Bulk velocity Reynolds number \(Re_{b}=U_{b}2h/\nu\) & \(3.9998\times 10^{4}\) \\
\hline Centreline velocity Reynolds number \(Re_{c}=U_{c}h/\nu\) & \(2.2625\times 10^{4}\) \\
\hline Friction velocity Reynolds number \(Re_{\tau}=u_{\tau}h/\nu\) & 999.35 \\
\hline DNS database timestep \(\delta t\) & 0.0065 \\
\hline Full domain size & \(8\pi h\times 2h\times 3\pi h\) \\
\hline Full grid resolution & \(2048\times 512\times 1536\) \\
\hline
\end{tabular}
\end{table}
Table 1: Simulation parameters (nondimensional) of the JHTDB channel flow DNS.
\[I_{p}=d_{p}^{2}\exp\left(\frac{-8\big{(}x_{3,LS}-x_{3,p}\big{)}^{2}}{2\sigma_{LS}^ {2}}\right) \tag{11}\]
The laser sheet position \(x_{3,LS}\) is centred in the middle of the extracted DNS domain. The standard deviation of the laser sheet profile is set to \(\sigma_{LS}=2\) such that the \(1/e^{2}\) profile thickness is equal to the out-of-plane \(x_{3}\) thickness of the DNS volume subsection. In this way, the out-of-plane particle displacement is less than 1/4 of the laser sheet thickness as recommended by (Adrian and Westerweel, 2011). Each particle is randomly assigned a diameter that is drawn from a log-normal distribution of values:
\[PDF=\frac{1}{x\sigma\sqrt{2\pi}}\exp\left(\frac{-(\log x-\mu)^{2}}{2\sigma^{2}}\right) \tag{12}\]
with parameters \(\mu=0.90\ px\) and \(\sigma=0.76\ px\) for the mean and standard deviation, respectively. The particle seeding density is 0.03 particles per pixel\({}^{2}\) (PPP), representative of that estimated from the experimental data presented in Sect. 5. The in-plane pixel intensity is computed from the integral form of the Gaussian function solved analytically using error functions. This is a more representative method of how a camera integrates the light intensity over individual pixels compared to simply using the analytical Gaussian expression. Finally, after the pixel intensities have been determined, the values are scaled to the dynamic range of an 8-bit camera sensor and rounded to integers to mimic discretisation.
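The sketch below illustrates how the in-plane intensity of a single particle can be integrated over one pixel using error functions. The convention relating the particle diameter to the Gaussian standard deviation and the absence of the final 8-bit scaling are illustrative assumptions, not the exact settings used here.

```python
import numpy as np
from scipy.special import erf

def pixel_intensity(xp, yp, dp, Ip, x_edges, y_edges):
    """Integrate a Gaussian particle image of peak intensity Ip and e^-2
    diameter dp over one pixel bounded by x_edges=(x0,x1) and y_edges=(y0,y1)."""
    s = dp / 4.0                                   # sigma from the e^-2 diameter (assumed convention)
    gx = 0.5 * (erf((x_edges[1] - xp) / (np.sqrt(2) * s))
                - erf((x_edges[0] - xp) / (np.sqrt(2) * s)))
    gy = 0.5 * (erf((y_edges[1] - yp) / (np.sqrt(2) * s))
                - erf((y_edges[0] - yp) / (np.sqrt(2) * s)))
    # Analytical integral of the separable 2D Gaussian over the pixel area.
    return Ip * 2.0 * np.pi * s**2 * gx * gy

# Example: intensity contribution of one particle to the pixel [10,11] x [20,21].
I = pixel_intensity(xp=10.3, yp=20.6, dp=2.5, Ip=200.0,
                    x_edges=(10, 11), y_edges=(20, 21))
```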
Once the particle images are rendered, the images are vertically shifted upwards by 160 pixels to create a masked wall region of zero intensity. This vertical shift avoids having the flow region near the bottom of the image where boundary conditions in the wavelet transforms of wOFV can affect the near-wall velocity estimates. Moreover, this shift of the flow region from the image boundary is consistent with experimental images presented in Sect. 5. As this masked region in the images has 0 intensity, it does not contribute to the DFD and is effectively ignored in the minimization (Schmidt and Woike, 2021). An example of a rendered particle field image is shown in Fig. 1.
## 4 wOFV assessment using synthetic data
In this section, wOFV performance is assessed using the ground truth DNS data for comparison. In particular, the sensitivity of the wOFV results to the regularization weighting \(\lambda\) is evaluated to understand how \(\lambda\) selection affects estimation of turbulent boundary layer motion. wOFV findings are reported for 6 values of \(\lambda=[2,40,100,180,520,1000]\). This range of \(\lambda\) covers velocity estimates ranging from under-regularized (visibly noise-dominated) to over-regularized (over-smoothed). The wOFV error throughout the \(\lambda\) range is determined to identify suitable \(\lambda\) values where the wOFV error outperforms PIV. In Sect. 4.1, characteristic velocity estimates for each \(\lambda\) are presented together with the error over the entire image domain. Sect. 4.2 evaluates the effect of \(\lambda\) on the calculation of wall units and the effect this has on the interpretation of the mean velocity within each region of the boundary layer.
Figure 1: Example rendered particle field image from the channel flow DNS.
PIV is also applied to the synthetic data, providing a benchmark to compare wOFV with the current state-of-the-art. A commercial cross-correlation-based PIV software (DaVis 10.0.05, LaVision) was used for PIV processing. The cross-correlation algorithm used 2 and 3 passes for the initial and final Gaussian-weighted interrogation windows (IWs) of size \(64\times 64\) down to \(16\times 16\) with 75% overlap. The anisotropic denoising filter in DaVis was applied to the PIV vector fields. The filter strength was selected for the most accurate results for the given IW size. Thus, it should be strongly emphasized that the PIV results presented are _optimized_. A geometric mask was placed 1 pixel below the no-slip grid point to capture the entire particle image region. This results in the first PIV velocity vector being 11 pixels above the no-slip pixel (\(x_{2}=0\)). For both PIV and wOFV the particle images were preprocessed using a min-max filter (Adrian & Westerweel, 2011) to account for changes in particle intensity resulting from out-of-plane motion within the synthetic laser sheet.
A notable feature of wOFV is its ability to provide dense velocity estimates with per-pixel vector spacing. Although this impressive vector spacing is achievable, the true spatial resolution of wOFV is a subject not often discussed and requires thorough analysis which is beyond the scope of this work. The average spacing between particles can be considered to be a conservative estimate of wOFV spatial resolution, since this is the average maximum distance between image features containing a genuine intensity signal. In the synthetic data the average spacing between particles is 5.8 pixels. This value is considered as an upper bound for wOFV's spatial resolution, as this estimate only considers the average distance between particle centres and does not take into account each particles' local intensity distribution for which additional valid vectors are associated. Additionally, because an explicit regularization scheme is used, the vectors in regions without particles will contain physically-sound flow field information from regions containing genuine signals (Schmidt & Sutton, 2021). Such features would decrease the true spatial resolution, but this requires further analysis. Thus, we report 5.8 pixels as the spatial resolution for wOFV, while vectors are resolved per pixel. The PIV spatial resolution is reported as the final IW size (i.e., 16 pixels), while the vector spacing is 4 pixels.
### \(\lambda\) sensitivity based on entire image domain
#### 4.1.1 Single image analysis
The influence of \(\lambda\) is first described by evaluating features of the wOFV velocity field within the entire image domain. Vector accuracy over the entire image domain is assessed by the normalized root mean square error:
\[\varepsilon_{u}=\sqrt{\frac{1}{n_{v}}\sum_{i}\frac{\left(U_{1}-\ U_{1,DNS} \right)^{2}+\ \left(U_{2}-\ U_{2,DNS}\right)^{2}}{U_{1,DNS}{}^{2}+\ U_{2,DNS}{}^{2}}} \tag{13}\]
In Eq. 13, \(n_{v}\) is the total number of vectors and \(U_{i}\) is the individual velocity value in the streamwise and normal direction denoted by subscripts \(i=1,2\) respectively. Normalization by the DNS velocity magnitude ensures that errors in regions of very low velocities near the wall are properly accounted for and not dominated by errors from large velocity magnitudes (McCane et al., 2001). Vectors from wOFV outside the PIV masked boundary are ignored in the error calculation of wOFV for equivalent comparison. For comparison with PIV, the DNS ground truth velocity is subsampled to a lower resolution grid using spline interpolation.
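For reference, Eq. 13 amounts to the following few lines of Python; the optional mask argument is an illustrative way to exclude the PIV-masked wall region from the calculation.

```python
import numpy as np

def epsilon_u(U1, U2, U1_dns, U2_dns, mask=None):
    """Normalized RMS error of Eq. 13. All arrays share the same grid; `mask`
    is an optional boolean array selecting the vectors to include."""
    num = (U1 - U1_dns) ** 2 + (U2 - U2_dns) ** 2
    den = U1_dns ** 2 + U2_dns ** 2          # normalization by DNS velocity magnitude
    if mask is not None:
        num, den = num[mask], den[mask]
    return np.sqrt(np.mean(num / den))
```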
Figure 2 shows the instantaneous velocity field magnitude from a subset of 4 selected \(\lambda\) values. For comparison, the true velocity field from DNS and the corresponding velocity field from PIV are also shown. The associated \(\varepsilon_{u}\) value for each result is reported above each sub-figure. For wOFV with \(\lambda=2\), the velocity estimate is under-regularized, leading to fine-scale noise visible as a speckle-like pattern within the velocity field and yields the highest \(\varepsilon_{u}\) of the results shown. As \(\lambda\) increases to 40, the regularity of the estimated flow field is increased and the noise becomes noticeably suppressed. At \(\lambda=180\), the noise is effectively removed from the velocity field and achieves the lowest \(\varepsilon_{u}\), thus providing the closest agreement with the DNS. This \(\lambda\) value producing the minimum \(\varepsilon_{u}\) will be referred to as \(\lambda^{*}\). Far beyond \(\lambda^{*}\) at \(\lambda=1000\), the flow field is considered over-regularized; the noise has been eliminated entirely at the expense of over-smoothing the flow and therefore deviating from the DNS with \(\varepsilon_{u}\) nearly doubling. PIV produces a high-quality velocity estimate with \(\varepsilon_{u}\) as low as the \(\lambda=40\) wOFV result. With \(\lambda^{*}\), a modest improvement of \(\sim\)23% in \(\varepsilon_{u}\) is achieved over PIV, demonstrating wOFV's improved accuracy over the state-of-the-art.
Further assessment of how \(\lambda\) influences the estimated wOFV velocity field is shown by evaluating local velocity profiles. Figure 3 shows the instantaneous streamwise velocity \(U_{1}\) profiles extracted normal to the wall at pixel location \(x_{1}=400\) marked by the gray dashed line in Fig. 2. This \(x_{1}\) location was chosen arbitrarily but reveals trends consistent across all \(x_{1}\) locations. The characteristic noise present for the under-regularized \(\lambda=2\) is clearly seen as spurious small-scale velocity oscillations. These fluctuations are reduced significantly as \(\lambda\) is increased to \(\lambda^{*}\), leading to overall better agreement with the DNS. In regions that contain high velocity gradients as shown near the inflection point at \(x_{2}=240\) in Fig. 3b, it is shown that \(\lambda=40\) follows the DNS better than \(\lambda^{*}=180\). Thus, even though \(\lambda^{*}\) is on average optimal for the entire imaged motion, localized regions of sharp velocity gradients may prefer a slightly lower \(\lambda\) to avoid washing out small-scale flow features. This aspect is further discussed in Sect 4.1.3. As \(\lambda\) exceeds \(\lambda^{*}\), the over-smoothing effect is seen as a deviation from the DNS with velocity gradients becoming increasingly underestimated as clearly visible in Fig. 3b. As a benchmark, PIV processing achieves good agreement with the DNS, but with the reduced vector spacing (1 vector per 4 pixels) as visible in Fig. 3b.
Figure 2: Instantaneous velocity magnitude for the DNS, PIV and wOFV (\(\lambda=[2,40,180,1000]\)) results. The gray dashed line marks the location of the velocity profiles shown in Fig. 3.
Figure 3: **a** Velocity profile along grey line location in Fig. 2. **b** Zoomed view of high velocity gradient region within velocity profile (extracted region marked by the square in **a**).
The wOFV \(\varepsilon_{u}\) sensitivity to \(\lambda\) is further evaluated for a broader range of \(\lambda\) computed across 118 values shown in Fig. 4. Increments of \(\delta\lambda=1\) are used for the first 20 values to resolve the initial rapid \(\varepsilon_{u}\) variation, starting from \(\lambda=0.001\) to \(\lambda=20\), before changing to a coarser spacing of \(\delta\lambda=10\) for the remaining values. While this \(\lambda\) sensitivity is shown for a single image pair, trends are consistent for all image pairs within the 100 image sequence. The selected \(\lambda=[2,40,100,180,520,1000]\) values discussed throughout this work are shown in Fig. 4. For clarity, these chosen \(\lambda\) values correspond to under-regularized (\(\lambda=2\)), slightly under-regularized with \(\varepsilon_{u}\) equivalent to PIV (\(\lambda=40\)), near minimum \(\varepsilon_{u}\) (\(\lambda=100\)), minimum \(\varepsilon_{u}\) (\(\lambda^{*}=180\)), slightly over-regularized with \(\varepsilon_{u}\) equivalent to PIV (\(\lambda=520\)), and over-regularized (\(\lambda=1000\)). The corresponding PIV error for the same image pair is marked by the red line for comparison.
As shown in Fig. 4, wOFV results for \(\lambda<40\) give unacceptable \(\varepsilon_{u}\) values significantly greater than PIV. The large \(\varepsilon_{u}\) is a result of the noise introduced into the under-regularized flow field. As \(\lambda\) increases above \(40\), \(\varepsilon_{u}\) decreases more gradually, reaching its minimum at \(\lambda^{*}=180\). A gradual, linear increase in \(\varepsilon_{u}\) past \(\lambda^{*}\) ensues as the estimated velocity field becomes increasingly over-regularized. For the flow field in Fig. 2, \(\lambda\) values in the range \(\lambda=40-520\) provide modestly more accurate velocity fields than PIV, at best reaching a \(\sim\)23% improvement in \(\varepsilon_{u}\) at \(\lambda^{*}\). The exact range of \(\lambda\) yielding improvements over PIV varies slightly from image to image. However, it is encouraging that such a broad range of \(\lambda\) yields error values better than the current state-of-the-art, which demonstrates the strength of the current wOFV approach.
#### 4.1.2 Image sequence analysis
The findings in Sect. 4.1.1 consider a single image pair from the synthetic dataset. The influence of \(\lambda\) for the complete 100 image sequence, which involves a temporally varying wall-bounded flow, will now be considered. Figure 5a shows the \(\varepsilon_{u}\) values for wOFV at selected \(\lambda\) values, as well as for PIV across the image sequence. The 100 image average \(\langle\varepsilon_{u}\rangle\) values are shown by the bar chart in Fig. 5b.
For all results presented, the \(\varepsilon_{u}\) for a given image pair can be seen to vary slightly across the image sequence. This variation is dependent on the complexity of the instantaneous flow dynamics for a given image pair as coherent structures and streaks propagate across the image. Despite varying \(\varepsilon_{u}\) values across the image sequence, the trends remain consistent with those presented for the single image pair. In particular, \(\varepsilon_{u}\) values are exceptionally large for the under-regularized \(\lambda=2\) value, but \(\varepsilon_{u}\) decreases substantially as \(\lambda\) increases. \(\varepsilon_{u}\) values are lowest for \(\lambda^{*}=180\), but wOFV findings with \(\lambda=100\) yield similarly low \(\varepsilon_{u}\) values, which is consistent with the broad local minimum curve feature shown in Fig. 4. wOFV findings with \(\lambda=40,520\) yield comparable \(\varepsilon_{u}\) values as PIV. As \(\lambda\) increases beyond 180, \(\varepsilon_{u}\) values gradually increase but remain lower than for \(\lambda=2\).
Overall, the error analysis reveals that wOFV can surpass PIV accuracy for a relatively broad range of \(\lambda\) values consistent with Fig. 4. wOFV can provide improvements in accuracy up to 24% compared to PIV. In addition, the gradual increase in \(\varepsilon_{u}\) for over-regularized \(\lambda\) values compared to the sharp rise in \(\varepsilon_{u}\) for under-regularized \(\lambda\) values suggests that in the absence of a ground truth reference, it may be preferable to select over-regularized as opposed to under-regularized velocity estimates. However, it should be emphasized that \(\varepsilon_{u}\) represents a _spatially averaged_ value across the _entire_ image domain. It is unlikely that a single \(\lambda\) value is optimal for all locations of the velocity field. This has already been seen in Fig. 3b, where it was shown that \(\lambda\) values closer towards the under-regularized side of \(\lambda^{*}\) were better able to resolve sharp velocity gradients than \(\lambda^{*}\). This finding is discussed further in the following section.
Figure 4: Sensitivity of wOFV \(\varepsilon_{u}\) as a function of \(\lambda\) for the velocity field in Fig. 2.
#### 4.1.3 Regional \(\lambda\) sensitivity
A single \(\lambda\) value weighting applied to an entire image can lead to a non-optimal velocity estimation in various regions of an image. It is thus important to understand the local distribution of error across the individual boundary layer regions. In this section, the error from the ground truth is evaluated within each boundary layer region contained within the synthetic images. This analysis reveals how the \(\lambda\) value that optimizes wOFV accuracy varies between boundary layer regions. For clarity, Fig. 6a shows the physical domain of the viscous sublayer (\(y^{+}<5\)), buffer layer (\(5<y^{+}<30\)) and logarithmic region which covers the remainder of the image field of view (\(30<y^{+}<138\)) in this dataset. In addition, the full viscous sublayer resolvable by wOFV is considered in this analysis, as opposed to only considering the equivalent PIV region as performed for \(\varepsilon_{u}\).
The _unnormalized_ root mean square error (RMSE) is calculated to quantify the absolute error within each boundary layer region:
\[\text{RMSE}=\sqrt{\frac{1}{n_{\nu}}\sum\left(U_{1}-\ U_{1,DNS}\right)^{2}+\ \left(U_{2}-\ U_{2,DNS}\right)^{2}} \tag{14}\]
In contrast to \(\varepsilon_{u}\), the absence of normalization by the DNS magnitude in the RMSE avoids an exaggeration of errors closest to the wall where the velocity magnitude approaches zero. The 100 image average RMSE in each region is shown in Fig. 6 b-d for the various \(\lambda\) values and for PIV.
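A sketch of this regional evaluation, assuming a \(y^{+}\) value is available at every vector location, is given below; the region bounds follow those quoted above for this dataset.

```python
import numpy as np

def regional_rmse(U1, U2, U1_dns, U2_dns, y_plus):
    """Unnormalized RMSE of Eq. 14 evaluated separately in each boundary layer
    region. y_plus has the same shape as the velocity fields."""
    sq_err = (U1 - U1_dns) ** 2 + (U2 - U2_dns) ** 2
    regions = {"viscous sublayer": y_plus < 5,
               "buffer layer": (y_plus >= 5) & (y_plus < 30),
               "logarithmic region": (y_plus >= 30) & (y_plus < 138)}
    return {name: float(np.sqrt(np.mean(sq_err[m]))) for name, m in regions.items()}
```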
In Fig. 6, it can be seen that the RMSE trend as a function of \(\lambda\) is regionally dependent. In the logarithmic region, wOFV performs exceptionally well for the \(\lambda=520,1000\) values previously identified as over-regularized, but suffers from high RMSE as the velocity field becomes more under-regularized. In the buffer layer, wOFV is sensitive to both under- and over-regularization; while the under-regularized \(\lambda=2\) still yields the highest RMSE, the RMSE for the over-regularized \(\lambda=520,1000\) more than doubles compared to the logarithmic region and the optimal \(\lambda\) decreases from \(\lambda=520\) to \(\lambda=100\). In the viscous sublayer, wOFV now becomes more sensitive to over-regularization, as \(\lambda=1000\) now has the highest RMSE, while the RMSE for \(\lambda=2\) decreases substantially and the optimal \(\lambda\) decreases to 40. PIV performs consistently well in each boundary layer region; however, wOFV at its optimized \(\lambda\) values achieves RMSE improvements of 21%, 11% and 29% in accuracy over PIV in the viscous sublayer, buffer layer and logarithmic region, respectively.
This trend of wOFV preferring lower \(\lambda\) values and becoming more sensitive to higher \(\lambda\) values as the wall is approached can be explained by considering the particular flow dynamics in these regions. In the logarithmic region, velocity gradients are weaker compared to closer to the wall. Therefore, effects of over-smoothing in the logarithmic region will have less of a detrimental effect on accuracy as the motion is predominantly uniform. As the wall is approached in the buffer layer, stronger velocity gradients exist requiring a slightly lower \(\lambda\) to resolve them without over-smoothing. In the viscous sublayer, where the lowest velocities are present, even lower \(\lambda\) values are preferred to resolve the sub-pixel particle displacements and consistently large velocity gradients, which are both significantly more sensitive to over-smoothing than noise compared to the regions away from the wall.
Figure 5: **a** wOFV and PIV \(\varepsilon_{u}\) across 100 image sequence. **b** 100 image average value.
These findings demonstrate that a single \(\lambda\) value can slightly compromise the wOFV accuracy within the various regions of the boundary layer. While spatially adaptive regularization schemes have been proposed in the literature (Stark, 2013; Lu et al., 2021; Ouyang et al., 2021; Zhang et al., 2020), implementation of these schemes is non-trivial and is beyond the scope of this work. Although wOFV cannot be fully optimized using a single \(\lambda\) value, these findings positively demonstrate that values between \(\lambda=100-180\), including the on-average optimal \(\lambda^{*}\), provide well-balanced solutions in each region with wOFV offering up to 23% improved accuracy over PIV.
### \(\lambda\) sensitivity in the near-wall region
This section evaluates wOFV's ability to estimate the mean velocity behavior within the boundary layer by analyzing the normalized velocity profiles depicted by \(u^{+},y^{+}\). The effect of \(\lambda\) on the ability of wOFV to accurately calculate \(\langle U_{1}\rangle\) and the near-wall velocity gradient \(\gamma\) is first assessed, since these variables are necessary to calculate \(u^{+}\) and \(y^{+}\). Subsequently, the fidelity of wOFV to resolve the various turbulent boundary layer regions is evaluated. The ensemble-mean velocity field presented in this section is composed from 100 velocity images, and is evaluated separately for each \(\lambda\) as well as for PIV.
#### 4.2.1 Viscous sublayer mean velocity
The ensemble average streamwise velocity \(\langle U_{1}\rangle\) within the viscous sublayer (\(y^{+}<5\)) is shown in Fig. 7a and a zoomed view closest to the wall is shown in Fig. 7b. The \(\langle U_{1}\rangle\) profiles shown are extracted from the \(x_{1}=400\) location marked by the dashed line in Fig. 2. Immediately obvious in Fig. 7 is the finer vector resolution for wOFV compared to PIV; wOFV provides vectors per pixel all the way to the wall, while PIV has a vector spacing four times larger and resolves approximately half of the viscous sublayer region.
Figure 6: DNS velocity field with marked regions. Variation of average RMSE across image sequence as a function of \(\lambda\) for the \(\mathbf{b}\) Logarithmic region, \(\mathbf{c}\) Buffer layer and \(\mathbf{d}\) Viscous sublayer.
The effect of increasing \(\lambda\) on wOFV is evident in Fig. 7b. As \(\lambda\) increases, the streamwise velocity approaching the wall is elevated and increasingly deviates from the no-slip condition at the wall as the vector field becomes over-smoothed. For \(\lambda\leq 180\), the deviation from the DNS is significantly less with low velocities of \(\langle U_{1}\rangle=0.003\) to 0.01 at the wall. wOFV with \(\lambda=2,40\) provides the best agreement with DNS, which is consistent with the error distribution analysis in Sect. 4.1.3 showing slightly under-regularized wOFV results produce the most accurate vector estimates in the viscous sublayer. However, the differences between \(\lambda=2,40\) and \(\lambda^{*}=180\) are small (\(<3\%\)). This smoothing effect of the regularization term \(J_{R}\) becomes particularly apparent for \(\lambda>180\). This tendency for the regularization term to dominate and oversmooth at motion and image intensity discontinuities is well-known in optical flow literature and is also influenced by using a quadratic penalty in \(J_{R}\) (Zach et al., 2007). Compared to the DNS and wOFV results, PIV estimates a slightly lower velocity within the resolved PIV region down to \(x_{2}=14\). Although relatively minor, this systematic error occurring in the vicinity of the wall is absent in all of the wOFV results.
#### 4.2.2 Near-wall gradient
Having established how \(\lambda\) affects estimates of \(\langle U_{1}\rangle\) in the vicinity of the wall, it is necessary to understand how these effects propagate into deriving the near-wall gradient \(\gamma\) and therefore the friction velocity \(u_{\tau}\) needed for the normalization of boundary layer quantities. Accurate and direct estimation of \(\gamma\) can be challenging for several reasons. In particular, there is the need to resolve reliable velocity vectors as close to the wall as possible and maximise the spatial resolution. The sharp velocity gradient also needs to be resolved reliably in the presence of the image discontinuity (i.e., the masked wall region).
The calculation of \(\gamma\) is performed using a linear regression routine. For PIV, linear regression is performed from \(y^{+}=4.8\) to the final vector at \(y^{+}=2.2\) as illustrated by the dashed line in Fig. 7. For wOFV, linear regression is applied from \(y^{+}=4.8\) down to \(y^{+}=0.32\) to avoid the no-slip pixel at \(x_{2}=0\). The regression calculation includes 5 vectors for PIV and 28 vectors for wOFV. A normalized percentage error in \(\gamma\) is calculated by:
\[\varepsilon_{\gamma}=\frac{|\gamma-\gamma_{DNS}|}{\gamma_{DNS}}\times 100 \tag{15}\]
The true \(\gamma_{DNS}\) is calculated using a linear regression across the same respective regions for each technique. The near-wall gradient error \(\varepsilon_{\gamma}\) is calculated at each valid pixel position for wOFV away from the image edges and a subsampled \(\gamma_{DNS}\) is used for comparison with the lower resolution PIV grid. The mean average of the normalized near-wall gradient error \(\varepsilon_{\gamma}\) across the available \(x_{1}\) distance is shown in Fig. 8.
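The following minimal sketch illustrates this procedure, assuming the mean streamwise velocity and the corresponding \(y^{+}\) values are available along a wall-normal profile; the fitting range limits are passed as parameters so that the PIV and wOFV ranges quoted above can both be reproduced.

```python
import numpy as np

def near_wall_gradient(x2, u1_mean, y_plus, y_plus_max=4.8, y_plus_min=0.0):
    """Estimate gamma = d<U1>/dx2 by a linear fit of the mean streamwise
    velocity over the selected y+ range (Sect. 4.2.2)."""
    sel = (y_plus >= y_plus_min) & (y_plus <= y_plus_max)
    gamma, _ = np.polyfit(x2[sel], u1_mean[sel], deg=1)   # slope of the linear fit
    return gamma

def epsilon_gamma(gamma, gamma_dns):
    """Normalized percentage error in gamma (Eq. 15)."""
    return abs(gamma - gamma_dns) / gamma_dns * 100.0
```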
Figure 8: Average \(\varepsilon_{\gamma}\) error across the \(x_{1}\) distance.
Figure 7: **a** Mean streamwise velocity profiles in the viscous sublayer taken at the grey dashed line in Fig. 2. The region used for the \(\gamma\) calculation for PIV is marked by the dashed line. **b** Highlight of the final wOFV vectors
Figure 8 shows that the under-regularized \(\lambda\) values are more conducive to reduced error in \(\gamma\), with \(\lambda=40\) achieving the minimum \(\langle\varepsilon_{\gamma}\rangle\). This trend is similar to the RMSE within the viscous sublayer (Fig. 6d); however, now the under-regularized \(\lambda=2\) outperforms \(\lambda=100,180\). wOFV with \(\lambda=2\) performs better for \(\varepsilon_{\gamma}\) than for the RMSE because the noise present in each image at low \(\lambda\) is mostly washed out when calculating the ensemble mean velocity \(\langle U_{1}\rangle\). Despite the preference towards under-regularization, it must be emphasized that wOFV results for \(\lambda=2\) to \(\lambda^{*}~{}=~{}180\) all provide higher accuracy than PIV. The \(\varepsilon_{\gamma}\) values for this \(\lambda\) range remain less than 1% and are a 45%-83% improvement over PIV. In contrast, the over-regularized \(\lambda=520\) and \(1000\) cases have serious and unacceptable levels of error, which are 147%-425% greater than PIV. These unacceptable errors are a result of over-smoothing the velocity field at the wall as shown in Fig. 7. Clearly over-regularization should be avoided when evaluating velocity quantities closest to the wall.
#### 4.2.3 Normalized Mean Velocity Profile
The normalized \(u^{+}\) velocity profiles are analysed to understand the effect of \(\lambda\) on wOFV's ability to interpret the mean streamwise velocity in each region of the boundary layer. The inner scaled profiles are presented in Fig. 9, taken at the location marked by the dashed line in Fig. 2. The relations for the linear \(u^{+}=y^{+}\) viscous sublayer and logarithmic region \(u^{+}=1/\kappa\ln(y^{+})+\beta\) with the constants \(\kappa=0.41\) and \(\beta=5.2\) (Pope, 2002) are indicated by the dashed lines. For the results in Fig. 9, each profile is normalised using its respective \(u_{\tau}\) calculated from \(\gamma\).
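A short sketch of this normalization is given below. It assumes the conventional definitions \(u_{\tau}=\sqrt{\nu\gamma}\), \(y^{+}=x_{2}u_{\tau}/\nu\) and \(u^{+}=\langle U_{1}\rangle/u_{\tau}\); the reference curves correspond to the dashed lines in Fig. 9.

```python
import numpy as np

def inner_scale(x2, u1_mean, gamma, nu, kappa=0.41, beta=5.2):
    """Normalize a mean velocity profile to wall units, assuming the usual
    definitions u_tau = sqrt(nu * gamma), y+ = x2 u_tau / nu, u+ = <U1>/u_tau."""
    u_tau = np.sqrt(nu * gamma)
    y_plus = x2 * u_tau / nu
    u_plus = u1_mean / u_tau
    # Reference relations for comparison with the measured profile.
    u_plus_sublayer = y_plus                               # viscous sublayer: u+ = y+
    u_plus_log = (1.0 / kappa) * np.log(y_plus) + beta     # logarithmic region
    return y_plus, u_plus, u_plus_sublayer, u_plus_log
```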
When considering the results in Fig. 9, it is first important to discuss the effect of \(\gamma\) on the velocity profiles. Recall that over-regularized \(\lambda=520,1000\) yields an underestimation of \(\gamma\) due to over-smoothing the velocity field. According to Eqns. 1-4, an underestimated \(\gamma\) will decrease \(y^{+}\) and increase \(u^{+}\), creating a slight vertical and leftward shift in the normalized velocity profiles for \(\lambda=520,1000\). In the viscous sublayer (\(y^{+}<5\)), this shift, as well as general over-smoothing of \(\langle U_{1}\rangle\), creates a deviation from DNS and the established linear \(u^{+}=y^{+}\) relationship. This shift also causes a mild deviation from the DNS throughout the buffer layer (\(y^{+}\approx 5-30\)) followed by a more noticeable deviation in the logarithmic region (\(y^{+}\approx 30-200\)). wOFV findings from \(\lambda=2\) to \(\lambda^{*}=180\), which are not over-regularized, show excellent agreement with DNS throughout each region of the boundary layer. As shown in Fig. 9, the noise associated with the under-regularized \(\lambda=2\) result is mostly washed out when considering the ensemble average velocity \(\langle U_{1}\rangle\). Although minor fluctuations due to under-regularization can be seen in the logarithmic region for \(\lambda=2\) (see Fig. 9c), these fluctuations are smaller than the deviations present for over-regularized wOFV results.
Figure 9 shows that PIV is broadly in good agreement with the DNS. A slight discrepancy in the logarithmic region exists for PIV, but not to the extent of \(\lambda=520,1000\). PIV resolves down to a minimum \(y^{+}=2.21\) in the viscous sublayer. Excluding the over-regularized \(\lambda=520,1000\), wOFV resolves 2 decades in wall units further than PIV, down to \(y^{+}=0.15\) while maintaining agreement with the DNS with an error less than \(0.05~{}\delta u^{+}\) for the final vector at the wall. Assuming a suitable \(\lambda\) is selected, these results show highly encouraging performance characteristics of wOFV in terms of improved accuracy and increased vector density, which enables better interpretation of the viscous sublayer.
## 5 Application to experimental data
Figure 9: Inner scaled mean velocity profiles. Zoomed regions of the \(\mathbf{b}\) Viscous sublayer and \(\mathbf{c}\) Logarithmic region. Theoretical relations for the viscous sublayer and logarithmic region are shown in the dashed lines.
While synthetic data is key for quantifying and understanding error characteristics of wOFV, it is essential to further evaluate the performance of the method on a real experimental dataset which departs from the simplicity of synthetic data. In Sect. 4, while it was shown that wOFV provides improvements in accuracy over PIV, PIV performed exceptionally well for the synthetic data, which is absent of noise and other imaging artifacts. It should also be emphasized again that the degree of smoothing in the PIV results was optimized for maximum accuracy. This was only possible since the DNS ground truth velocity was available for comparison. True experimental data, on the other hand, does not have such a reference and often suffers from inherent camera noise, laser pulse variation and non-uniform illumination from reflections near the wall, which can present additional difficulties to obtain accurate velocity measurements. While image pre-processing methods can alleviate some of these effects, in practice it is not possible to avoid them entirely.
Applying the knowledge gained from the synthetic data, in this section wOFV is applied to experimental particle images of a developing turbulent boundary layer. wOFV results from a selection of \(\lambda\) values are compared to PIV as well as a PIV + PTV approach, which provides higher spatial resolution than PIV. This comparison demonstrates the advantages of wOFV over PIV and PTV to resolve the turbulent boundary layer flow features with improved vector resolution and accuracy.
### Experimental Setup
Experiments are conducted in a flow facility in which a jet flow impinges onto a parallel wall, creating a developing turbulent boundary layer. The flow facility was originally designed to study flame-wall interactions in a side-wall quenching (SWQ) configuration (Jainski et al., 2018; Kosaka et al., 2018; Kosaka et al., 2020; Zentgraf et al., 2021; Zentgraf et al., 2022). For the purposes of this study, the flow facility operates under non-reacting, cold-flow conditions (i.e., no combustion). This experimental setup was recently presented in (Zentgraf, 2022) for characterizing the nozzle exit velocity profiles of the SWQ-burner. A schematic of the facility is shown in Fig. 10a. The central main flow (fully premixed CH\({}_{4}\)/air at \(\phi=1.00\); not ignited) was homogenized by meshes as well as a honeycomb structure and subsequently guided through a converging nozzle. At the square nozzle exit (\(\approx 40\times 40\) mm\({}^{2}\)) the Reynolds number was maintained at 5900 and the inflow conditioning yielded a streamwise (\(\mathbf{x_{1}}\)) velocity profile with a nearly top-hat shape (Zentgraf, 2022). For turbulent conditions at the nozzle exit, a turbulence grid was used, providing a turbulence intensity of 6-7% (Jainski et al., 2018). The outlet flow impinged the sharp leading edge of a stainless-steel wall. The wall's surface has a mild curvature for improved optical access (radius: 300 mm, see top view in Fig. 10b). The central main flow was shielded from the lab environment by a concentric square air co-flow. All flows operated at ambient temperature, which was in agreement with the wall temperature.
Figure 10: Schematic of \(\mathbf{a}\) Flow facility (SWQ-burner) in a side view \(\mathbf{b}\) Applied laser diagnostics in a side and top view. Numbers without units indicate spatial dimensions in millimeters.
A low-speed (10 Hz) PIV setup was used as shown in Fig. 10b. This setup was used previously to characterize the velocity profiles at the nozzle exit as boundary conditions (Zentgraf, 2022) and its optical arrangement closely matched the high-resolution, high-speed realization in (Zentgraf et al., 2021). The main flow was seeded with Al\({}_{2}\)O\({}_{3}\) particles (Zentgraf et al., 2021) which were illuminated using a dual-cavity Nd:YAG PIV laser (New Wave Research, Gemini PIV, G200, 10 Hz, 532 nm). Laser pulses were separated by \(\Delta t=40\)\(\mu\)s. The laser sheets were guided vertically downward to the wall to minimize reflections at the wall. Measurements were taken in the \(x_{1}x_{2}\)-symmetry plane of the facility, at the wall's centerline (\(x_{3}=0\) mm). The origin of the coordinate system is defined at the leading edge of the wall along its centerline. Optics exposed to seeding were continuously purged by nitrogen during operation.
The resulting Mie-scattering was detected by a sCMOS camera (LaVision GmbH, Imager sCMOS) with an exposure time of 15 \(\mu\)s each frame. The camera was equipped with a 180-mm objective lens (Sigma, APO Macro DG HSM D, \(\ell 78\)) and a bandpass filter (Edmund Optics Inc., \(\#65\)-\(216\), central wavelength 532 nm, FWHM 10 nm) to suppress ambient light. The field of view (FOV) comprises \((\Delta x_{1},\Delta x_{2})\approx(40\text{ mm},47.5\text{ mm})\). For velocimetry, the images are cropped to \((\Delta x_{1},\Delta x_{2})\approx(38\text{ mm},38\text{ mm})\), comprised of 2048 \(\times\) 2048 pixels with the FOV beginning at the wall's leading edge. At the downstream edge of the FOV, the Reynolds numbers based on the momentum thickness and friction velocity are \(Re_{\theta}=100\), \(Re_{\tau}=70\).
### Vector field calculation
For the experimental dataset, wOFV is benchmarked against PIV as well as PIV + PTV, the latter of which is often used in experiments to improve vector resolution over PIV. It is emphasized that experiments were originally optimized for PIV/PTV. The seeding density was optimized to provide 6-8 particles per final interrogation window and particle displacement was within \(\nicefrac{{1}}{{4}}\) of the final interrogation window size in the near-wall region of investigation. Velocity vector fields achieved average cross-correlation values of 0.77. It is therefore emphasized that the PIV quality is not intentionally compromised to exaggerate the advantages of wOFV.
The wOFV and PIV velocity fields were processed similarly to that of the synthetic data in Sect. 4. Mie scattering images were first pre-processed with subtraction of the ensemble minimum image followed by a min-max intensity normalization (Adrian and Westerweel, 2011). PIV vector processing was performed using a multi-pass correlation with an initial IW size of \(64\times 64\) down to \(16\times 16\) with 75% overlap. The same anisotropic denoising filter used to optimize the PIV results on the synthetic data was applied to the experimental PIV vector fields. \(\text{PIV}+\text{PTV}\) processing was initialized from PIV. PTV was calculated for a particle size range from 1 - 8 pixels and with a correlation window size of 8 pixels. PTV vectors were converted to a structured \(4\times 4\) pixels\({}^{2}\) grid, as performed in previous boundary layer studies (Ding et al., 2019; M. Schmidt et al., 2021). This step was performed in DaVis using a "simple averaging / strong filter" scheme, which provided the most reliable PTV results. A \(3\times 3\) Gaussian smoothing filter was applied to remove noise in the PTV vector fields. wOFV was performed as described in Sect. 2.
Each velocimetry method provides a different vector spacing and spatial resolution. For PIV, the spatial resolution is 298 \(\mu\)m, as defined by the final IW size of \(16\times 16\), while 75% overlap provides a vector spacing of \(74.3\)\(\mu\)m (every 4 pixels). The structured \(4\times 4\) pixels\({}^{2}\) grid used for PIV + PTV provides a vector spacing of 4 pixels or \(74.3\)\(\mu\)m, which is equivalent to PIV. Since PTV assigns a vector to the centroid of each detected particle, an approximate PIV + PTV spatial resolution is reported as the average particle distance of 5.8 pixels or 107.8 \(\mu\)m. wOFV provides a per-pixel vector spacing of \(18.6\)\(\mu\)m. A conservative estimate of wOFV's spatial resolution is reported as the average particle spacing of \(107.8\)\(\mu\)m. As mentioned in Sect. 4, wOFV's true spatial resolution is likely to be smaller than the average particle spacing since each particle pixel contains a valid vector, which likely makes the particle centroid spacing an upper limit.
The first vectors from the wall are located 279 \(\mu\)m, 204 \(\mu\)m and 149 \(\mu\)m for PIV, PIV + PTV and wOFV respectively. These distances are based on geometric masks used to calculate vector fields that are offset from the wall location to avoid light reflections and reduce the frequency of spurious vectors at the wall for all methods. The wall location is approximated using the maximum intensity of the reflection present at the wall. This is then refined using the no-slip pixel position estimate from PIV and wOFV \(\lambda=0.1\) \(\langle U_{1}\rangle\) profiles averaged over the downstream distance.
It should be emphasized that the experimental dataset is appreciably different from the synthetic dataset in Sect. 4, and this influences the optimization of wOFV. For example, the near-wall velocity gradient \(\gamma\) is larger and the viscous sublayer is much thinner for the experimental data than the synthetic data. In addition, the image size is \(2048\times 2048\)\(px\) compared to \(1024\times 1024\)\(px\) in the synthetic data. Therefore, not only is the absolute pixel-wise length halved, but the viscous sublayer comprises a smaller and less significant proportion of the full image FOV. Lastly, while the synthetic data has an average freestream particle displacement of 3 pixels, the experimental freestream flow field has a more substantial displacement of 6 pixels. All of these aspects, in addition to different image characteristics, will influence the regularization weighting for wOFV, such that a suitable range of \(\lambda\) values will be significantly different between the experimental and synthetic datasets. This aspect is common within optical flow literature (Kadri-Harouna et al., 2013). In fact, the experimental data are significantly stricter and less forgiving compared to the synthetic data regarding selection of an acceptable \(\lambda\). The absence of ground truth data means determining the true optimal \(\lambda\) is not possible. For the experimental data, wOFV results from
three values of \(\lambda=0.1,1,20\) are presented and the most appropriate \(\lambda\) is justified _a posteriori_ based on physical principles as well as general comparison to PIV.
### Instantaneous velocity fields
Assessment of wOFV first considers the instantaneous velocity field in comparison to PIV and PIV + PTV.
Figure 11 shows an instantaneous velocity magnitude field for PIV, PIV + PTV, and the three wOFV results. An insert is shown for each image, which highlights details of a low-speed streak emerging from the wall.
PIV does a good job of resolving the overall velocity field. However, PIV often struggles to resolve the velocity near the wall, as shown by the pockets of unresolved velocity regions there. PIV + PTV resolves closer to the wall than PIV but produces a noisier velocity field, similar to the noise seen in \(\lambda=0.1\). It is noted that converting PTV to a larger grid size of \(8\times 8\) pixels\({}^{2}\) did not reduce the noise level in the PIV + PTV result.
All three wOFV results resolve similar general features as PIV, but the quality of the velocity field is determined by the choice of \(\lambda\). wOFV with \(\lambda=0.1\) exhibits the high-frequency, speckle-like noise commonly associated with highly under-regularized findings. While \(\lambda=0.1\) resolves much of the same larger scale features as PIV and PIV + PTV, several artefacts of locally higher and lower velocities exist throughout the image. For wOFV with \(\lambda=1\), the high-frequency noise is removed and the velocity field has strong agreement with PIV and PIV + PTV. The primary differences between \(\lambda=1\) and PIV are that wOFV does not have spurious or missing vectors near the wall and wOFV resolves velocities closer to the wall. In addition, \(\lambda=1\) does not contain the speckle-like noise present in PIV + PTV. For \(\lambda=20\), the larger flow features are well captured, but the finer scale features present in PIV, PIV + PTV, and \(\lambda=1\) are mostly removed, likely due to over-smoothing.
The fact that noticeable changes in the velocity field occur over a significantly smaller \(\lambda\) range confirms the challenge in selecting the appropriate \(\lambda\) for the experimental dataset. While it is not possible to determine a \(\lambda\) value that provides the highest accuracy, it would appear that \(\lambda=0.1\) is too under-regularized and \(\lambda=20\) is likely over-regularized. Further analysis of the findings within the turbulent boundary layer is performed to evaluate these aspects and to determine the suitability of \(\lambda=1\).
### Mean velocity profiles
The near-wall velocity profiles are shown in Fig. 12a, with the normalized profiles shown in Fig. 12b. The \(\langle U_{1}\rangle\) values are produced from a 100 image mean and the profiles are spatially averaged over a 2mm streamwise \(x_{1}\) distance centered at the location marked by the gray dashed line in Fig. 11. The near-wall gradient \(\gamma\) is calculated from the ensemble average streamwise velocity fields using a linear regression in a similar manner to that described in Sect. 4.2.2. In the viscous sublayer (\(y^{+}<5\)), there are 15 velocity vectors for wOFV compared to 3 velocity vectors for PIV and 4 for PIV \(+\) PTV. The linear regression for wOFV and PIV \(+\) PTV uses each of the available vectors, while for PIV only 2 out of the 3 available vectors are used since the final PIV vector nearest to the wall is frequently spurious. As described in Sect. 4.2, the estimation of \(\gamma\) has a direct effect on normalized wall units through \(u_{\tau}\). The \(\gamma\) values calculated are \(\gamma_{PIV}=1919\), \(\gamma_{PTV}=2402\), \(\gamma_{\lambda=0.1}=2469\), \(\gamma_{\lambda=1}=2288\), \(\gamma_{\lambda=20}=1701\) 1/s, which provide the corresponding \(u_{\tau}\) values \(u_{\tau,PIV}=0.1697\), \(u_{\tau,PTV}=0.1989\), \(u_{\tau,\lambda=0.1}=0.1924\), \(u_{\tau,\lambda=1}=0.1853\), \(u_{\tau,\lambda=20}=0.1597\). Incorrect estimates of \(\gamma\) can result in a strong offset from the exact \(u^{+}=y^{+}\) formulation shown by the green dotted line in Fig. 12b. Comparison with this linear relation will be used as an approximate measure to judge the quality of the near-wall vectors in the absence of a ground truth velocity.
In Fig. 12a, the profiles above 1mm from the wall are in excellent agreement. For \(x_{2}<1mm\), the \(\lambda=20\) profile shows increasing deviation from all other profiles as the wall is approached. In particular, velocity gradients are weaker leading to a flatter curve and significantly higher velocities at the wall. These features clearly indicate that \(\lambda=20\) is over-regularized; the excessive smoothing washes out the velocity gradient at the wall, creating an underestimate of \(u_{\tau}\). The resulting normalization creates a strong deviation from \(u^{+}=y^{+}\) as shown in Fig. 12b, and demonstrates that \(\lambda=20\) is not appropriate since the \(\gamma\) estimation is compromised.
In Fig. 12a, good agreement is shown between \(\lambda=0.1,1\), PIV, and PIV \(+\) PTV until \(x_{2}<0.4\ mm\), where PIV shows a milder gradient for \(0.3\leq x_{2}\leq 0.4\ mm\) followed by a sharper velocity gradient at the last
Figure 11: Instantaneous velocity magnitude fields. The insert shows a low-speed velocity streak emanating from the wall. The gray dashed line denotes the location where velocity profiles are extracted and analyzed in Fig. 12.
PIV data point. As will be shown, the last PIV data point is often erroneous, which biases the interpreted flow behavior. In Fig. 12b, PIV is offset from the \(u^{+}=y^{+}\) relation, with an abnormal deviation in the curve for the last data point. PTV stays in closer agreement with \(\lambda=0.1,1\) and is able to resolve closer to the wall than PIV, although not to the same extent as wOFV. The resulting normalization to inner variables yields significantly closer alignment with \(u^{+}=y^{+}\) for PIV + PTV, although a slight offset remains. The \(\lambda=1\) result, on the other hand, aligns with the \(u^{+}=y^{+}\) relation and remains in close agreement down to the final vector, with a discrepancy of only 0.04 \(\delta u^{+}\). This suggests that \(\lambda=1\) provides accurate velocity estimates near the wall, as well as an accurate \(\gamma\) estimate. This also indicates that PIV, and to a lesser extent PTV, struggles to correctly estimate \(\gamma\), causing a slight shift in the normalized velocity profile, but not to the same extreme as \(\lambda=20\). The \(\lambda=0.1\) result exhibits the highest \(\gamma\) at the wall, creating a down- and rightward shift in the normalized velocity profile. This shift was not seen for the under-regularized values in the synthetic dataset, which further emphasizes the higher sensitivity of \(\lambda\) for the more challenging experimental dataset compared to the synthetic data.
### Normalized velocity fluctuations
The turbulent velocity fluctuations \(u_{1}=U_{1}-\langle U_{1}\rangle\) are analyzed to further evaluate the capabilities of the velocimetry techniques. Velocity fluctuations provide an assessment of the data quality beyond the ensemble mean and are equally important to evaluate turbulent quantities in the boundary layer. Figure 13 shows the profile of the normalized streamwise velocity fluctuations \(\langle u_{1}u_{1}\rangle^{+}\). The fluctuations and wall-normal coordinate in Fig. 13 are normalized by the \(u_{\tau}\) estimated from \(\lambda=1\) since this \(u_{\tau}\) value provided the strongest agreement with \(u^{+}=y^{+}\). Normalizing each case by \(u_{\tau,\lambda=1}\) removes the biased curve shifts as shown in Fig. 12. Similar to \(\langle U_{1}\rangle\), the fluctuation profiles are spatially averaged across the 2mm streamwise distance with the extent of a single standard deviation of these fluctuations illustrated by the shaded area in Fig. 13. The standard deviation of the fluctuations within this 2 mm distance can be considered indicative of the reliability of the velocity estimate and its susceptibility to error.
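As a minimal sketch of how such a profile can be assembled (NumPy, assuming the instantaneous fields are stacked into an array of shape (n_images, n_x2, n_x1) that has already been cropped to the 2 mm streamwise window; this is an illustration rather than the processing actually used):

```python
import numpy as np

def fluctuation_profile(U1_stack, u_tau):
    # U1_stack: instantaneous streamwise velocity fields, shape (n_images, n_x2, n_x1)
    # u_tau   : friction velocity used for inner scaling (here the value from lambda = 1)
    u1 = U1_stack - U1_stack.mean(axis=0, keepdims=True)  # fluctuations u1 = U1 - <U1>
    uu = (u1 ** 2).mean(axis=0)                           # <u1 u1> at every grid point
    profile = uu.mean(axis=1) / u_tau ** 2                # streamwise average of <u1 u1>^+
    spread = uu.std(axis=1) / u_tau ** 2                  # scatter across the window (shaded band)
    return profile, spread
```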
In Fig. 13, each curve follows a relatively similar trend from \(y^{+}=200\) to \(y^{+}=10\); \(\langle u_{1}u_{1}\rangle^{+}\) values increase from the freestream region and exhibit a local maximum in the buffer layer at \(y^{+}\approx 10\) as seen in other boundary layer studies (e.g., (Spalart, 1988)). From \(y^{+}=10\) towards the wall, each curve exhibits different trends. For PIV, \(\langle u_{1}u_{1}\rangle^{+}\) values continue to increase quite substantially into the viscous sublayer. This trend is non-physical as the turbulent fluctuations are expected to decrease in the viscous sublayer as the wall is approached. PIV also exhibits a very large standard deviation below \(y^{+}=10\), which is primarily caused by spurious vectors within the last 2-3 PIV vectors. This feature illustrates the challenges of PIV to accurately resolve small-scale fluctuations in the presence of strong velocity gradients. Reliable PIV measurements are often challenging directly near surfaces. While ensemble-average PIV quantities can be represented with sufficient accuracy, higher order velocity statistics and instantaneous velocity fields more clearly reveal challenges with PIV. PIV + PTV shows improvement from PIV; PTV resolves a greater extent of the buffer layer peak and initially shows the expected decrease in \(\langle u_{1}u_{1}\rangle^{+}\) towards the wall. However, PIV + PTV still shows the non-physical increase in \(\langle u_{1}u_{1}\rangle^{+}\)within the final 2-3 vectors at the wall and contains a large standard deviation. Although these artifacts are less severe compared to PIV, they demonstrate that PIV + PTV can still struggle to accurately resolve the flow nearest the wall.
wOFV findings, on the other hand, do not exhibit such large deviations in \(\langle u_{1}u_{1}\rangle^{+}\), indicating that wOFV is less susceptible to the same errors as PIV and PIV + PTV near the wall. Indeed, \(\langle u_{1}u_{1}\rangle^{+}\) values are large for
Figure 12: **a** Mean streamwise velocity profiles, **b** Inner-scaled mean profiles. Profiles are spatially averaged across a 2 mm streamwise \(x_{1}\) distance at the location marked by the gray dashed line in Fig. 11.
\(\lambda=0.1\) due to the results being under-regularized; however, \(\langle\mathbf{u}_{1}\mathbf{u}_{1}\rangle^{+}\) values and their deviation are significantly lower than those for PIV or PIV + PTV near the wall. Below \(y^{+}=10\), all wOFV findings show the expected decrease in \(\langle\mathbf{u}_{1}\mathbf{u}_{1}\rangle^{+}\). The \(\lambda=20\) profile shows a milder peak near \(y^{+}=10\) and a milder decrease near the wall compared to the other wOFV findings. The \(\lambda=20\) velocity field is over-regularized; the excessive smoothing reduces the variation between peak and trough in the curve from \(y^{+}=10\) to \(y^{+}=1\). \(\langle\mathbf{u}_{1}\mathbf{u}_{1}\rangle^{+}\) values for \(\lambda=1\) show the greatest decrease as the wall is approached, which follows the expected trend of turbulent fluctuations being suppressed within the viscous sublayer in close proximity to the wall. In addition, the extent of the shaded region for wOFV remains constant near the wall, suggesting that velocity errors are not influenced by the proximity of the wall. Overall, wOFV with \(\lambda=1\) shows the most promising findings in terms of ensemble-average values as well as behavior of the velocity fluctuations.
### Vorticity and turbulent flow structure
An example is presented which highlights the advantages of wOFV in resolving turbulent flow phenomena within a boundary layer. This example is demonstrated for an instantaneous velocity field comparing the optimized wOFV with \(\lambda=1\), PIV and PIV + PTV.
One of the added benefits of wOFV over PIV or PIV + PTV is the improved vector spacing together with physically-sound smoothing, and with that, the ability to better resolve velocity gradient quantities. Figure 14 shows the instantaneous vorticity field \(\omega\) for PIV, PIV + PTV, and wOFV. The vorticity is calculated using the 8-point circulation approach described in (Raffel et al., 2018). Individual turbulent structures with relatively high vorticity magnitude are generated near the wall's leading edge and are advected downstream within the developing boundary layer. The inlays shown in Fig. 14 highlight a region that captures a prograde vortex that was generated from the wall's leading edge. This is a particularly challenging region because of the vortex's proximity to the wall, where small pixel displacements coupled with the spatially varying sharp velocity gradients present difficulties for velocimetry techniques. Indeed, PIV has been used to resolve small scale vortex structures near walls, but this is often accomplished by using high image magnifications yielding FOVs smaller than 5x5 mm\({}^{2}\) (Jainski et al., 2013), rather than the large FOV used in the current work.
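For reference, a minimal NumPy sketch of the 8-point circulation estimate in the spirit of Raffel et al. (2018) is given below; it assumes a uniform vector grid, velocity arrays indexed as [row = wall-normal, column = streamwise], and a wall-normal coordinate increasing with the row index (the sign convention of \(\omega\) follows from this orientation).

```python
import numpy as np

def vorticity_8point(U, V, dx, dy):
    # Circulation around the eight neighbours of each interior point divided by the enclosed area.
    omega = np.full_like(U, np.nan)
    circ = (
        0.5 * dx * (U[:-2, :-2] + 2 * U[:-2, 1:-1] + U[:-2, 2:])     # bottom edge, +x direction
        + 0.5 * dy * (V[:-2, 2:] + 2 * V[1:-1, 2:] + V[2:, 2:])      # right edge, +y direction
        - 0.5 * dx * (U[2:, 2:] + 2 * U[2:, 1:-1] + U[2:, :-2])      # top edge, -x direction
        - 0.5 * dy * (V[2:, :-2] + 2 * V[1:-1, :-2] + V[:-2, :-2])   # left edge, -y direction
    )
    omega[1:-1, 1:-1] = circ / (4 * dx * dy)  # the contour encloses an area of 2*dx by 2*dy
    return omega
```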
The overall vorticity fields calculated from the velocimetry techniques are in good agreement; all methods show similar overall features such as the high vorticity regions extending from the wall's leading edge. PIV + PTV shows higher fluctuations in the vorticity field than PIV and wOFV. This is undoubtedly due to the higher degree of speckle-like noise present for PTV, as shown in Fig. 11. The inserts in Fig. 14 highlight the capability of each method to resolve the finer vorticity structures near the wall. Overall, the same spatial distribution of positive/negative vorticity structures is captured by each method; however, the effect of greater vector resolution is immediately seen; in particular, PIV and PIV + PTV images are substantially more pixelated compared to wOFV. PIV can exhibit larger discontinuities in the vorticity field (i.e., larger changes from pixel-
Figure 13: Streamwise turbulent fluctuations normalized by \(\mathbf{u}_{\tau,\lambda=1}\). The shaded regions indicate one standard deviation of the \(\langle\mathbf{u}_{1}\mathbf{u}_{1}\rangle^{+}\) values within the 2 mm region centered by the gray dashed line shown in Fig. 11.
to-pixel), which are absent in the \(\text{PIV}+\text{PTV}\) and wOFV results, with wOFV achieving a highly resolved and more continuous vorticity field. The PIV + PTV vorticity field deviates more substantially from PIV and wOFV with several strands of high vorticity extending from the larger vorticity structures. These elevated vorticity strands are likely due to elevated noise levels present in \(\text{PIV}+\text{PTV}\) as discussed in Fig. 11 and 13. wOFV is able to resolve the vorticity much closer to the wall and without troublesome unresolved regions from erroneous vector calculation as in the PIV and PIV + PTV fields. wOFV faithfully preserves the features shown in both the PIV and \(\text{PIV}+\text{PTV}\) results, but achieves a much finer-detailed vorticity field.
Figure 15 shows the corresponding vector field within the green rectangle shown in Fig. 14. The vector field shows all available vectors for PIV, \(\text{PIV}+\text{PTV}\) and wOFV shown in red, blue and black, respectively. wOFV is capable of resolving the prograde vortex in much more detail than the other methods. While the vortex is visible in PIV and \(\text{PIV}+\text{PTV}\), the vortex structure is more difficult to interpret due to sparser vector spacing and the presence of quasi-erroneous vectors that deviate from a vortical flow pattern. In Fig. 15, all vector fields show good agreement above \(x_{2}=0.6\ mm\); most vectors are in alignment and are of the same magnitude. However, closer to the wall there are larger disagreements between wOFV and PIV. In many locations, PIV vectors are aligned orthogonally to wOFV vectors. Some PIV vectors are clearly erroneous as they differ significantly from their neighboring PIV vectors. Additionally, PIV vectors are absent in the upper left corner where spurious vectors are detected and removed during post-processing. Closest to the wall, PIV vectors point inwards towards the wall with a relatively large velocity magnitude, which strongly disagree with the wOFV vectors directed parallel or outward from wall with a velocity magnitude more consistent with the neighboring vectors. \(\text{PIV}+\text{PTV}\) improves on PIV in this regard with suitable quality vectors at the wall and calculates vectors in all regions. However, \(\text{PIV}+\text{PTV}\) exhibits select vectors that disagree with PIV and wOFV. In addition, PIV + PTV vectors near its vortex core center are misaligned with its circulation and struggle to resemble a coherent vortex core. It is likely that \(\text{PIV}+\text{PTV}\) struggles to successfully resolve the strong gradients present in this region. The velocity field features shown in Fig. 15 reveal some challenges cross-correlation-based PIV and combined \(\text{PIV}+\text{PTV}\) experience in resolving small-scale intricate flow dynamics with high velocity gradients in the vicinity of physical boundaries. Assuming a suitable \(\lambda\) is selected, these findings positively indicate that wOFV is better suited to resolve these turbulent flow structures in the boundary layer region.
Figure 14: Instantaneous vorticity calculated for PIV (left), PIV + PTV (middle) and wOFV (right). Inlays show a 0.8x0.8 mm\({}^{2}\) zoomed view of a vortex. The green rectangle indicates the location of the velocity vector field shown in Fig. 15.
### Turbulent energy spectra
Lastly, to assess the potential of resolving fine-scale turbulent velocity fluctuations using wOFV, the normalized streamwise turbulent kinetic energy spectrum (\(E_{11}^{*}(\kappa_{1})\)) is analyzed. This is calculated using the Fourier transform of the streamwise velocity fluctuations (\(u_{1}\)) across the entire field of view. The 1D turbulent kinetic energy spectrum, normalized by its peak value, is presented in Fig. 16 for PIV, PIV \(+\) PTV and the optimized wOFV with \(\lambda=1\). Due to the moderately low turbulence level, there is insufficient separation of scales to produce a significant inertial subrange (-5/3 region). The spectra reveal high-frequency noise for PIV at increasing wavenumbers. The PIV spectra do not show the classical energy decay at increasing wavenumbers, indicating that the velocity measurement noise floor and spectral resolution limit have already been reached. The PIV \(+\) PTV spectra do not show the high-frequency noise present in the PIV profile. However, the PIV \(+\) PTV spectra show elevated energy at all wavenumbers compared to PIV together with a non-physical modulation for \(\kappa_{1}>2\times 10^{4}\). wOFV is in close agreement with PIV and PIV \(+\) PTV at the low wavenumbers, but shows an energy decay at higher wavenumbers and resolves a significantly greater proportion of the energy spectrum without obvious indications that the measurement is being corrupted by noise or accuracy issues at high wavenumbers.
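A compact sketch of this spectrum computation is given below (NumPy; the array layout and the uniform vector spacing dx are assumptions of this illustration).

```python
import numpy as np

def streamwise_spectrum(U1_stack, dx):
    # U1_stack: instantaneous streamwise velocity fields, shape (n_images, n_x2, n_x1)
    u1 = U1_stack - U1_stack.mean(axis=0, keepdims=True)             # streamwise fluctuations u1
    e = np.abs(np.fft.rfft(u1, axis=-1)) ** 2                        # 1D FFT along the streamwise direction
    E11 = e.mean(axis=(0, 1))                                        # average over images and wall-normal rows
    kappa1 = 2 * np.pi * np.fft.rfftfreq(U1_stack.shape[-1], d=dx)   # streamwise wavenumbers
    return kappa1, E11 / E11.max()                                   # spectrum normalized by its peak value
```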
Figure 15: Vector field for PIV (red), PIV \(+\) PTV (blue) and wOFV (black) within the green rectangle shown in Fig. 14. Vector fields are shown at their original sampling resolutions.
## Conclusions
The performance of a wavelet-based optical flow velocimetry (wOFV) method was assessed in detail on synthetic and experimental particle images of turbulent wall-bounded flows. The ability to extract high-resolution estimates of instantaneous, mean and derived flow properties was evaluated in the vicinity of the wall. This was analyzed with regard to the selection of the regularization parameter \(\lambda\), an aspect largely not discussed in other OFV works, and compared to results from correlation-based PIV.
Using synthetic PIV data generated from DNS of a turbulent boundary layer channel flow, a \(\lambda\)-sensitivity analysis was performed over the entire field-of-view to establish a range of under-regularized, over-regularized and optimal wOFV results. A regional \(\lambda\)-sensitivity was investigated to understand the localized error behaviour and considerations necessary to optimize wOFV within each region of the boundary layer. Away from the wall in the logarithmic layer, wOFV is more sensitive to under-regularization, which introduces non-physical noise into the otherwise uniform velocity field. This noise causes significant deviation from the ground truth, leading to unacceptable errors nearly three times greater than PIV. The logarithmic region is less sensitive to over-regularization, since over-smoothing imposed by over-regularization removes noise and produces little deviation to the uniform velocity field. In the buffer layer, wOFV is sensitive to both under- and over-regularization. Over-regularization becomes problematic because over-smoothing washes out velocity gradients present in the buffer layer. In the viscous sublayer, wOFV performs optimally when slightly under-regularized, which better resolves the velocity gradients at the wall in addition to sub-pixel particle displacements. In contrast, over-regularization yields the highest errors as it underestimates the near-wall velocity gradient (\(\gamma\)). This latter aspect is important when evaluating wall units (\(u^{+},y^{+}\)) since an underestimated \(\gamma\) directly yields an over-estimated \(u^{+}\) and under-estimated \(y^{+}\). Although wOFV vectors at all locations cannot be optimal using a single \(\lambda\) value, results confirm a suitable range of \(\lambda\) values exist that outperform PIV in each boundary layer region with wOFV also achieving significant improvement in resolving the viscous sublayer more effectively.
The accuracy and resolution improvement is more pronounced when wOFV is applied to experimental images. Physically motivated selection of \(\lambda\) based on the expected linear relationship in the viscous sublayer allowed for wOFV to better resolve the mean velocity closer to the wall and stay in excellent agreement with \(u^{+}=y^{+}\) down to the final vector. wOFV further provided impressive vector resolution offering 15 vectors in the viscous sublayer, as opposed to PIV and PIV + PTV which respectively offered 3 and 4 vectors in the viscous sublayer with the last vector often being erroneous for PIV. Although PIV performed acceptably when resolving the mean velocity near the wall, evaluation of higher-order velocity statistics and instantaneous flow fields revealed the lower reliability of PIV near walls. In particular, estimates of the turbulent velocity fluctuations from PIV featured a non-physical increase near the wall with unreasonably high standard deviation for the last three vectors closest to the wall. PIV + PTV improved upon such errors, but still exhibited the non-physical increase in turbulent velocity fluctuations and large standard deviation near the wall. wOFV did not exhibit these artifacts. Instantaneous velocity fields further demonstrate the spurious velocity estimations at the wall with PIV. While PIV + PTV exhibited less spurious velocity estimations, noise levels were comparable to the under-regularized
Figure 16: Normalized streamwise turbulent kinetic energy spectrum for PIV, PIV + PTV and wOFV.
wOFV findings, which made it more difficult for PIV + PTV to provide reliable vorticity fields. wOFV does not yield such erroneous velocity estimates, which, together with the improved spatial resolution, allowed for more accurate estimates of derivative quantities detailing complex flow structure in the vicinity of the wall. These findings positively indicate that wOFV is well-suited to estimate the flow dynamics in the presence of physical boundaries.
The authors point out that the wOFV algorithm does not feature direct modifications or explicit constraints for handling physical boundaries within the image. It is expected that such enhancements, although beyond the scope of the current work, would bring further improvement to the results and enhance the technique's performance for velocimetry in more complex geometries.
## Declarations
### Authors' contributions
AN implemented the wOFV code, set up the synthetic test case and performed the data analysis. The flow experiment was set up and measurements were taken by FZ, with PIV/PIV + PTV vector processing done jointly by FZ and AN. BP, ML and AD contributed funding acquisition, technical expertise, supervision and reviewing. All authors were involved in the preparation of the manuscript.
### Ethics approval and consent to participate
Not applicable
### Consent for publication
Not applicable
### Availability of data and materials
Not applicable
### Competing interests
The authors have no competing interests to declare.
## Acknowledgements
Funding for wOFV from the European Research Council (grant #759456) and Engineering and Physical Science Research Council (EP/V003283/1) is gratefully acknowledged. Funding for PIV and the experimental setup from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 237267381 - TRR 150 is also gratefully acknowledged.
|
2310.03758 | A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing | In generative compressed sensing (GCS), we want to recover a signal
$\mathbf{x}^* \in \mathbb{R}^n$ from $m$ measurements ($m\ll n$) using a
generative prior $\mathbf{x}^*\in G(\mathbb{B}_2^k(r))$, where $G$ is typically
an $L$-Lipschitz continuous generative model and $\mathbb{B}_2^k(r)$ represents
the radius-$r$ $\ell_2$-ball in $\mathbb{R}^k$. Under nonlinear measurements,
most prior results are non-uniform, i.e., they hold with high probability for a
fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously. In this
paper, we build a unified framework to derive uniform recovery guarantees for
nonlinear GCS where the observation model is nonlinear and possibly
discontinuous or unknown. Our framework accommodates GCS with 1-bit/uniformly
quantized observations and single index models as canonical examples.
Specifically, using a single realization of the sensing ensemble and
generalized Lasso, {\em all} $\mathbf{x}^*\in G(\mathbb{B}_2^k(r))$ can be
recovered up to an $\ell_2$-error at most $\epsilon$ using roughly
$\tilde{O}({k}/{\epsilon^2})$ samples, with omitted logarithmic factors
typically being dominated by $\log L$. Notably, this almost coincides with
existing non-uniform guarantees up to logarithmic factors, hence the uniformity
costs very little. As part of our technical contributions, we introduce the
Lipschitz approximation to handle discontinuous observation models. We also
develop a concentration inequality that produces tighter bounds for product
processes whose index sets have low metric entropy. Experimental results are
presented to corroborate our theory. | Junren Chen, Jonathan Scarlett, Michael K. Ng, Zhaoqiang Liu | 2023-09-25T17:54:19Z | http://arxiv.org/abs/2310.03758v2 | # A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing
###### Abstract
In generative compressed sensing (GCS), we want to recover a signal \(\mathbf{x}^{\star}\in\mathbb{R}^{n}\) from \(m\) measurements (\(m\ll n\)) using a generative prior \(\mathbf{x}^{\star}\in G(\mathbb{B}^{k}_{2}(r))\), where \(G\) is typically an \(L\)-Lipschitz continuous generative model and \(\mathbb{B}^{k}_{2}(r)\) represents the radius-\(r\)\(\ell_{2}\)-ball in \(\mathbb{R}^{k}\). Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed \(\mathbf{x}^{\star}\) rather than for all \(\mathbf{x}^{\star}\) simultaneously. In this paper, we build a unified framework to derive uniform recovery guarantees for nonlinear GCS where the observation model is nonlinear and possibly discontinuous or unknown. Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples. Specifically, using a single realization of the sensing ensemble and generalized Lasso, _all_\(\mathbf{x}^{\star}\in G(\mathbb{B}^{k}_{2}(r))\) can be recovered up to an \(\ell_{2}\)-error at most \(\epsilon\) using roughly \(\tilde{O}(k/\epsilon^{2})\) samples, with omitted logarithmic factors typically being dominated by \(\log L\). Notably, this almost coincides with existing non-uniform guarantees up to logarithmic factors, hence the uniformity costs very little. As part of our technical contributions, we introduce the Lipschitz approximation to handle discontinuous observation models. We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy. Experimental results are presented to corroborate our theory.
## 1 Introduction
In compressed sensing (CS) that concerns the reconstruction of low-complexity signals (typically sparse signals) [5; 6; 15], it is standard to employ a random measurement ensemble, i.e., a random sensing matrix and other randomness that produces the observations. Thus, a recovery guarantee involving a single draw of the measurement ensemble could be _non-uniform_ or _uniform_ -- the non-uniform one ensures the accurate recovery of any fixed signal with high probability, while the uniform one states that one realization of the measurements works simultaneously for all structured signals of interest. Uniformity is a highly desired property in CS, since in applications the measurement ensemble is typically fixed and should work for all signals [17]. Besides, the derivation of a uniform guarantee is often significantly harder than a non-uniform one, making uniformity an interesting theoretical problem in its own right.
Inspired by the tremendous success of deep generative models in different applications, it was recently proposed to use a generative prior to replace the commonly used sparse prior in CS [2], which led to numerical success such as a significant reduction of the measurement number. This new perspective for CS, which we call generative compressed sensing (GCS), has attracted a large volume of research interest, e.g., nonlinear GCS [29, 33, 45], MRI applications [24, 46], and information-theoretic bounds [27, 34], among others. This paper focuses on the uniform recovery problem for nonlinear GCS, which is formally stated below. Our main goal is to build a unified framework that can produce uniform recovery guarantees for various nonlinear measurement models.
**Problem:** Let \(\mathbb{B}^{k}_{2}(r)\) be the \(\ell_{2}\)-ball with radius \(r\) in \(\mathbb{R}^{k}\). Suppose that \(G\,:\,\mathbb{B}^{k}_{2}(r)\to\mathbb{R}^{n}\) is an \(L\)-Lipschitz continuous generative model, \(\mathbf{a}_{1},...,\mathbf{a}_{m}\in\mathbb{R}^{n}\) are the sensing vectors, \(\mathbf{x}^{\star}\in\mathcal{K}:=G(\mathbb{B}^{k}_{2}(r))\) is the underlying signal, and we have the observations \(y_{i}=f_{i}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\star}),\ i=1,\ldots,m\), where \(f_{1}(\cdot),\ldots,f_{m}(\cdot)\) are possibly unknown,2 possibly random non-linearities. Given a single realization of \(\{\mathbf{a}_{i},f_{i}\}_{i=1}^{m}\), under what conditions can we _uniformly_ recover all \(\mathbf{x}^{\star}\in\mathcal{K}\) from the corresponding \(\{\mathbf{a}_{i},y_{i}\}_{i=1}^{m}\) up to an \(\ell_{2}\)-norm error of \(\epsilon\)?
Footnote 2: In order to establish a unified framework, our recovery method (2.1) involves a parameter \(T\) that should be chosen according to \(f_{i}\). For the specific single index model with possibly unknown \(f_{i}\), we can follow prior works [33, 43] to assume that \(T\mathbf{x}^{\star}\in\mathcal{K}\), and recover \(\mathbf{x}^{\star}\) without using \(T\). See Remark 5 for more details.
### Related Work
We divide the related works into nonlinear CS (based on traditional structures like sparsity) and nonlinear GCS.
**Nonlinear CS:** Beyond the standard linear CS model where one observes \(y_{i}=\mathbf{a}_{i}^{\top}\mathbf{x}^{\star}\), recent years have witnessed rapidly increasing literature on nonlinear CS. An important nonlinear CS model is 1-bit CS that only retains the sign \(y_{i}=\operatorname{sign}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\star})\)[3, 22, 41, 42]. Subsequent works also considered 1-bit CS with dithering \(y_{i}=\operatorname{sign}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\star}+\tau_{i})\) to achieve norm reconstruction under sub-Gaussian sensing vectors [9, 14, 48]. Besides, the benefit of using dithering was found in uniformly quantized CS with observation \(y_{i}=\mathcal{Q}_{\delta}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\star}+\tau_{i})\), where \(\mathcal{Q}_{\delta}(z)=\delta\big(\lfloor\frac{z}{\delta}\rfloor+\frac{1}{2}\big)\) is the uniform quantizer with resolution \(\delta\)[48, 8]. Moreover, the authors of [16, 43, 44] studied the more general single index model (SIM) where the observation \(y_{i}=f_{i}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\star})\) involves a (possibly) unknown nonlinearity \(f_{i}\).
While the restricted isometry property (RIP) of the sensing matrix \(\mathbf{A}=[\mathbf{a}_{1},...,\mathbf{a}_{m}]^{\top}\) leads to uniform recovery in linear CS [4, 15, 49], this is not true in nonlinear CS. In fact, many existing results are non-uniform [9, 16, 21, 41, 43, 44, 48], and some uniform guarantees can be found in [7, 8, 14, 17, 41, 42]. Most of these uniform guarantees suffer from a slower error rate.
The most relevant work to this paper is the recent work [17] that described a unified approach to uniform signal recovery for nonlinear CS. The authors of [17] showed that in the aforementioned models with \(k\)-sparse \(\mathbf{x}^{\star}\), a uniform \(\ell_{2}\)-norm recovery error of \(\epsilon\) could be achieved via generalized Lasso using roughly \(k/\epsilon^{4}\) measurements [17, Section 4]. In this work, we build a unified framework for uniform signal recovery in nonlinear GCS. To achieve a uniform \(\ell_{2}\)-norm error of \(\epsilon\) in the above models with the generative prior \(\mathbf{x}^{\star}\in G(\mathbb{B}^{k}_{2}(r))\), our framework only requires a number of samples proportional to \(k/\epsilon^{2}\). Unlike [17] that used the technical results [36] to bound the product process, we develop a concentration inequality that produces a tighter bound in the setting of generative prior, thus allowing us to derive a sharper uniform error rate.
**Nonlinear GCS:** Building on the seminal work by Bora _et al._[2], numerous works have investigated linear or nonlinear GCS [1, 11, 12, 19, 20, 23, 25, 30, 39, 40, 51], with a recent survey [47] providing a comprehensive overview. Particularly for nonlinear GCS, 1-bit CS with generative models has been studied in [26, 31, 45], and generative priors have been used for SIM in [29, 32, 33]. In addition, score-based generative models have been applied to nonlinear CS in [10, 38].
The majority of research for nonlinear GCS focuses on non-uniform recovery, with only a few exceptions [33, 45]. Specifically, under a generative prior, [33, Section 5] presented uniform recovery guarantees for SIM where \(y_{i}=f_{i}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\star})\) with deterministic Lipschitz \(f_{i}\) or \(f_{i}(x)=\operatorname{sign}(x)\). Their proof technique is based on the local embedding property developed in [31], which is a geometric property that is often problem-dependent and currently only known for 1-bit measurements and deterministic Lipschitz link functions. In contrast, our proof technique does not rely on such
geometric properties and yields a unified framework with more generality. Furthermore, [33] did not consider dithering, which limits their ability to estimate the norm of the signal.
The authors of [45] derived a uniform guarantee from dithered 1-bit measurements under bias-free ReLU neural network generative models, while we obtain a uniform guarantee with the comparable rate for more general Lipschitz generative models. Additionally, their recovery program differs from the generalized Lasso approach (_cf._ Section 2.1) used in our work. Specifically, they minimize an \(\ell_{2}\) loss with \(\|\mathbf{x}\|_{2}^{2}\) as the quadratic term, while generalized Lasso uses \(\|\mathbf{A}\mathbf{x}\|_{2}^{2}\) that depends on the sensing vector. As a result, our approach can be readily generalized to sensing vectors with an unknown covariance matrix [33, Section 4.2], unlike [45] that is restricted to isotropic sensing vectors. Under random dithering, while [45] only considered 1-bit measurements, we also present new results for uniformly quantized measurements (also referred to as multi-bit quantizer in some works [13]).
### Contributions
In this paper, we build a unified framework for uniform signal recovery in nonlinear GCS. We summarize the paper structure and our main contributions as follows:
* We present Theorem 1 as our main result in Section 2. Under rather general observation models that can be discontinuous or unknown, Theorem 1 states that the uniform recovery of all \(\mathbf{x}^{\star}\in G(\mathbb{B}_{2}^{k}(r))\) up to an \(\ell_{2}\)-norm error of \(\epsilon\) can be achieved using roughly \(O\big{(}\frac{k\log L}{\epsilon^{2}}\big{)}\) samples. Specifically, we obtain uniform recovery guarantees for 1-bit GCS, 1-bit GCS with dithering, Lipschitz-continuous SIM, and uniformly quantized GCS with dithering.
* We provide a proof sketch in Section 3. Without using the embedding property as in [33], we handle the discontinuous observation model by constructing a Lipschitz approximation. Compared to [17], we develop a new concentration inequality (Theorem 2) to derive tighter bounds for the product processes arising in the proof.
We also perform proof-of-concept experiments on the MNIST [28] and CelebA [35] datasets for various nonlinear models to demonstrate that by using a single realization of \(\{\mathbf{a}_{i},f_{i}\}_{i=1}^{m}\), we can obtain reasonably accurate reconstruction for multiple signals. Due to the page limit, the experimental results and detailed proofs are provided in the supplementary material.
### Notation
We use boldface letters to denote vectors and matrices, while regular letters are used for scalars. For a vector \(\mathbf{x}\), we let \(\|\mathbf{x}\|_{q}\) (\(1\leq q\leq\infty\)) denote its \(\ell_{q}\)-norm. We use \(\mathbb{B}_{q}^{n}(r):=\{\mathbf{z}\in\mathbb{R}^{n}\,:\,\|\mathbf{z}\|_{q}\leq r\}\) to denote the \(\ell_{q}\) ball in \(\mathbb{R}^{n}\), and \((\mathbb{B}_{q}^{n}(r))^{c}\) represents its complement. The unit Euclidean sphere is denoted by \(\mathbb{S}^{n-1}:=\{\mathbf{x}\in\mathbb{R}^{n}\,:\,\|\mathbf{x}\|_{2}=1\}\). We use \(C,C_{i},c_{i},c\) to denote absolute constants whose values may differ from line to line. We write \(A=O(B)\) or \(A\lesssim B\) (resp. \(A=\Omega(B)\) or \(A\gtrsim B\)) if \(A\leq CB\) for some \(C\) (resp. \(A\geq cB\) for some \(c\)). We write \(A\asymp B\) if \(A=O(B)\) and \(A=\Omega(B)\) simultaneously hold. We sometimes use \(\tilde{O}(\cdot)\) to further hide logarithmic factors, where the hidden factors are typically dominated by \(\log L\) in GCS, or \(\log n\) in CS. We let \(\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\) be the Gaussian distribution with mean \(\mathbf{\mu}\) and covariance matrix \(\mathbf{\Sigma}\). Given \(\mathcal{K}_{1},\mathcal{K}_{2}\subset\mathbb{R}^{n}\), \(\mathbf{a}\in\mathbb{R}^{n}\) and some \(a\in\mathbb{R}\), we define \(\mathcal{K}_{1}\pm\mathcal{K}_{2}:=\{\mathbf{x}_{1}\pm\mathbf{x}_{2}:\mathbf{x}_{1}\in \mathcal{K}_{1},\mathbf{x}_{2}\in\mathcal{K}_{2}\}\), \(\mathbf{a}+\mathcal{K}_{1}:=\{\mathbf{a}\}+\mathcal{K}_{1}\), and \(a\mathcal{K}_{1}:=\{a\mathbf{x}:\mathbf{x}\in\mathcal{K}_{1}\}\). We also adopt the conventions of \(a\wedge b=\min\{a,b\}\), and \(a\lor b=\max\{a,b\}\).
## 2 Main Results
We first give some preliminaries.
**Definition 1**.: _For a random variable \(X\), we define the sub-Gaussian norm \(\|X\|_{\psi_{2}}:=\inf\{t>0:\mathbb{E}\exp(X^{2}/t^{2})\leq 2\}\) and the sub-exponential norm \(\|X\|_{\psi_{1}}:=\inf\{t>0:\mathbb{E}\exp(|X|/t)\leq 2\}\). \(X\) is sub-Gaussian (resp. sub-exponential) if \(\|X\|_{\psi_{2}}<\infty\) (resp. \(\|X\|_{\psi_{1}}<\infty\)). For a random vector \(\mathbf{x}\in\mathbb{R}^{n}\), we let \(\|\mathbf{x}\|_{\psi_{2}}:=\sup_{\mathbf{v}\in\mathbb{S}^{n-1}}\|\mathbf{v}^{\top}\mathbf{x}\|_{ \psi_{2}}\)._
**Definition 2**.: _Let \(\mathcal{S}\) be a subset of \(\mathbb{R}^{n}\). We say that a subset \(\mathcal{S}_{0}\subset\mathcal{S}\) is an \(\eta\)-net of \(\mathcal{S}\) if every point in \(\mathcal{S}\) is at most \(\eta\) distance away from some point in \(\mathcal{S}_{0}\), i.e., \(\mathcal{S}\subset\mathcal{S}_{0}+\mathbb{B}_{2}^{n}(\eta)\). Given a radius \(\eta\), we
define the covering number \(\mathcal{N}(\mathcal{S},\eta)\) as the minimal cardinality of an \(\eta\)-net of \(\mathcal{S}\). The metric entropy of \(\mathcal{S}\) with respect to radius \(\eta\) is defined as \(\mathscr{H}(\mathcal{S},\eta)=\log\mathcal{N}(\mathcal{S},\eta)\)._
### Problem Setup
We make the following assumptions on the observation model.
**Assumption 1**.: _Let \(\mathbf{a}\sim\mathcal{N}(0,\mathbf{I}_{n})\) and let \(f\) be a possibly unknown, possibly random non-linearity that is independent of \(\mathbf{a}\). Let \((\mathbf{a}_{i},f_{i})_{i=1}^{m}\) be i.i.d. copies of \((\mathbf{a},f)\). With a single draw of \((\mathbf{a}_{i},f_{i})_{i=1}^{m}\), for \(\mathbf{x}^{\star}\in\mathcal{K}=G(\mathbb{B}_{2}^{k}(r))\), where \(G:\mathbb{B}_{2}^{k}(r)\to\mathbb{R}^{n}\) is an \(L\)-Lipschitz generative model, we observe \(\big{\{}y_{i}:=f_{i}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\star})\big{\}}_{i=1}^{m}\). We can express the model more compactly as \(\mathbf{y}=\mathbf{f}(\mathbf{A}\mathbf{x}^{\star})\), where \(\mathbf{A}=[\mathbf{a}_{1},...,\mathbf{a}_{m}]^{\top}\in\mathbb{R}^{m\times n}\), \(\mathbf{f}=(f_{1},...,f_{m})^{\top}\) and \(\mathbf{y}=(y_{1},...,y_{m})^{\top}\in\mathbb{R}^{m}\)._
In this work, we consider the generalized Lasso as the recovery method [16, 33, 43], whose core idea is to ignore the non-linearity and minimize the regular \(\ell_{2}\) loss. In addition, we need to specify a constraint that reflects the low-complexity nature of \(\mathbf{x}^{\star}\), and specifically, we introduce a problem-dependent scaling factor \(T\in\mathbb{R}\) and use the constraint "\(\mathbf{x}\in T\mathcal{K}\)". Note that this is necessary even if the problem is linear; for example, with observations \(\mathbf{y}=2\mathbf{A}\mathbf{x}^{\star}\), one needs to minimize the \(\ell_{2}\) loss over "\(\mathbf{x}\in 2\mathcal{K}\)". Also, when the generative prior is given by \(Tx^{\star}\in\mathcal{K}=G(\mathbb{B}_{2}^{k}(r))\), we should simply use "\(\mathbf{x}\in\mathcal{K}\)" as constraint; this is technically equivalent to the treatment adopted in [33] (see more discussions in Remark 5 below). Taken collectively, we consider
\[\mathbf{\hat{x}}=\arg\min_{\mathbf{x}\in T\mathcal{K}}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}. \tag{2.1}\]
Importantly, we want to achieve uniform recovery of all \(\mathbf{x}^{\star}\in\mathcal{K}\) with a single realization of \((\mathbf{A},\mathbf{f})\).
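In practice, (2.1) can be solved approximately by optimizing over the latent variable, as in prior GCS works. The sketch below (PyTorch; it assumes \(G\) is available as a differentiable module, and it is only a heuristic solver, whereas our analysis concerns the exact minimizer of (2.1)) runs gradient steps on \(\mathbf{z}\in\mathbb{B}_{2}^{k}(r)\) with \(\mathbf{x}=T\,G(\mathbf{z})\):

```python
import torch

def generalized_lasso(A, y, G, k, r, T, steps=2000, lr=1e-2):
    # Heuristic solver for (2.1): minimize ||y - A x||_2 over x = T * G(z) with z in B_2^k(r).
    z = torch.zeros(k, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.linalg.norm(y - A @ (T * G(z)))
        loss.backward()
        opt.step()
        with torch.no_grad():                  # project the latent code back onto B_2^k(r)
            nrm = torch.linalg.norm(z)
            if nrm > r:
                z.mul_(r / nrm)
    return (T * G(z)).detach()                 # estimate of T x*
```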
### Assumptions
Let \(f\) be the function that characterizes our nonlinear measurements. We introduce several assumptions on \(f\) here, and then verify them for specific models in Section 2.3. We define the set of discontinuities as
\[\mathscr{D}_{f}=\{a\in\mathbb{R}:\text{$f$ is discontinuous at $a$}\}.\]
We define the notion of jump discontinuity as follows.
**Definition 3**.: (Jump discontinuity)_. A function \(f:\mathbb{R}\to\mathbb{R}\) has a jump discontinuity at \(x_{0}\) if both \(L^{-}:=\lim_{x\to x_{0}^{-}}f(x)\) and \(L^{+}:=\lim_{x\to x_{0}^{+}}f(x)\) exist but \(L^{-}\neq L^{+}\). We simply call the oscillation at \(x_{0}\), i.e., \(|L^{+}-L^{-}|\), the jump._
Roughly put, our framework applies to piece-wise Lipschitz continuous \(f_{i}\) with (at most) countably infinite jump discontinuities, which have bounded jumps and are well separated. The precise statement is given below.
**Assumption 2**.: _For some \((B_{0},L_{0},\beta_{0})\), the following statement unconditionally holds true for any realization of \(f\) (specifically, \(f_{1},\dots,f_{m}\) in our observations):_
* \(\mathscr{D}_{f}\) _is one of the following:_ \(\varnothing\)_, a finite set, or a countably infinite set;_
* _All discontinuities of_ \(f\) _(if any) are jump discontinuities with the jump bounded by_ \(B_{0}\)_;_
* \(f\) _is_ \(L_{0}\)_-Lipschitz on any interval_ \((a,b)\) _satisfying_ \((a,b)\cap\mathscr{D}_{f}=\varnothing\)_._
* \(|a-b|\geq\beta_{0}\) _holds for any_ \(a,b\in\mathscr{D}_{f}\)_,_ \(a\neq b\) _(we set_ \(\beta_{0}=\infty\) _if_ \(|\mathscr{D}_{f}|\leq 1\)_)._
_For simplicity, we assume \(f(x_{0})=\lim_{x\to x_{0}^{+}}f(x)\) for \(x_{0}\in\mathscr{D}_{f}\).3_
Footnote 3: This is very mild because the observations are \(f_{i}(\mathbf{a}_{i}^{\top}\mathbf{x})\), while \(\mathbb{P}\left(\mathbf{a}^{\top}\mathbf{x}\in\mathscr{D}_{f_{i}}\right)=0\) (as \(\mathscr{D}_{f_{i}}\) is at most countably infinite and \(\mathbf{a}\sim\mathcal{N}(0,\mathbf{I}_{n})\)).
We note that Assumption 2 is satisfied by \(L\)-Lipschitz \(f\) with \((B_{0},L_{0},\beta_{0})=(0,L,\infty)\), 1-bit quantized observation \(f(\cdot)=\operatorname{sign}(\cdot+\tau)\) (\(\tau\) is the potential dither, similarly below) with \((B_{0},L_{0},\beta_{0})=(2,0,\infty)\), and uniformly quantized observation \(f(\cdot)=\delta\big{(}\lfloor\frac{+\tau}{\delta}\rfloor+\frac{1}{2}\big{)}\) with \((B_{0},L_{0},\beta_{0})=(\delta,0,\delta)\).
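For concreteness, the two discontinuous examples above can be written as follows (NumPy sketch with illustrative values of the dither and the resolution; the tuples record the corresponding \((B_{0},L_{0},\beta_{0})\)).

```python
import numpy as np

delta, tau = 0.5, 0.1   # illustrative quantization resolution and dither value
models = {
    # name: (link function f, Assumption-2 parameters (B0, L0, beta0))
    "1-bit with dither": (lambda t: np.sign(t + tau), (2.0, 0.0, np.inf)),
    "uniform quantizer": (lambda t: delta * (np.floor((t + tau) / delta) + 0.5), (delta, 0.0, delta)),
}
```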
Under Assumption 2, for any \(\beta\in[0,\frac{\beta_{0}}{2})\) we construct \(f_{i,\beta}\) as the Lipschitz approximation of \(f_{i}\) to deal with the potential discontinuity of \(f_{i}\) (i.e., \(\mathscr{D}_{f_{i}}\neq\varnothing\)). Specifically, \(f_{i,\beta}\) modifies \(f_{i}\) in \(\mathscr{D}_{f_{i}}+[-\frac{\beta}{2},\frac{\beta}{2}]\) to be piece-wise linear and Lipschitz continuous; see its precise definition in (3.4).
We develop Theorem 2 to bound certain product processes appearing in the analysis, which produces bounds tighter than [36] when the index sets have low metric entropy. To make Theorem 2 applicable, we further make the following Assumption 3, which can be checked case-by-case by estimating the sub-Gaussian norm and probability tail. Also, \(U_{g}^{(1)}\) and \(U_{g}^{(2)}\) can even be a bit crude because the measurement number in Theorem 1 depends on them in a logarithmic manner.
**Assumption 3**.: _Let \(\boldsymbol{a}\sim\mathcal{N}(0,\boldsymbol{I}_{n})\), under Assumptions 1-2, we define the Lipschitz approximation \(f_{i,\beta}\) as in (3.4). We let_
\[\xi_{i,\beta}(a):=f_{i,\beta}(a)-Ta,\ \varepsilon_{i,\beta}(a):=f_{i,\beta}(a)-f_ {i}(a). \tag{2.2}\]
_For all \(\beta\in(0,\frac{\beta_{0}}{2})\), we assume the following holds with some parameters \((A_{g}^{(1)},U_{g}^{(1)},P_{0}^{(1)})\) and \((A_{g}^{(2)},U_{g}^{(2)},P_{0}^{(2)})\):_
* \(\sup_{\boldsymbol{x}\in\mathcal{K}}\|\xi_{i,\beta}(\boldsymbol{a}^{\top} \boldsymbol{x})\|_{\psi_{2}}\leq A_{g}^{(1)}\)_,_ \(\mathbb{P}\big{(}\sup_{\boldsymbol{x}\in\mathcal{K}}|\xi_{i,\beta}( \boldsymbol{a}^{\top}\boldsymbol{x})|\leq U_{g}^{(1)}\big{)}\geq 1-P_{0}^{(1)};\)__
* \(\sup_{\boldsymbol{x}\in\mathcal{K}}\|\varepsilon_{i,\beta}(\boldsymbol{a}^{ \top}\boldsymbol{x})\|_{\psi_{2}}\leq A_{g}^{(2)}\)_,_ \(\mathbb{P}\big{(}\sup_{\boldsymbol{x}\in\mathcal{K}}|\varepsilon_{i,\beta}( \boldsymbol{a}^{\top}\boldsymbol{x})|\leq U_{g}^{(2)}\big{)}\geq 1-P_{0}^{(2)}.\)__
To build a more complete theory we further introduce two useful quantities. For some \(\boldsymbol{x}\in\mathcal{K}\), we define the target mismatch \(\rho(\boldsymbol{x})\) as in [17, Definition 1]:
\[\rho(\boldsymbol{x})=\big{\|}\mathbb{E}\big{[}f_{i}(\boldsymbol{a}_{i}^{\top} \boldsymbol{x})\boldsymbol{a}_{i}\big{]}-T\boldsymbol{x}\big{\|}_{2}. \tag{2.3}\]
It is easy to see that \(\mathbb{E}\big{[}f_{i}(\boldsymbol{a}_{i}^{\top}\boldsymbol{x})\boldsymbol{a} _{i}\big{]}\) minimizes the expected \(\ell_{2}\) loss \(\mathbb{E}\big{[}\|\boldsymbol{y}-\boldsymbol{A}\boldsymbol{x}\|_{2}^{2}\big{]}\), thus one can roughly understand \(\mathbb{E}\big{[}f_{i}(\boldsymbol{a}_{i}^{\top}\boldsymbol{x})\boldsymbol{a }_{i}\big{]}\) as the expectation of \(\hat{\boldsymbol{x}}\). Since \(T\boldsymbol{x}\) is the desired ground truth, a small \(\rho(\boldsymbol{x})\) is intuitively an important ingredient for generalized Lasso to succeed. Fortunately, in many models, \(\rho(\boldsymbol{x})\) with a suitably chosen \(T\) will vanish (e.g., linear model [2], single index model [33], 1-bit model [31]) or at least be sufficiently small (e.g., 1-bit model with dithering [45]).
As mentioned before, our method to deal with discontinuity of \(f_{i}\) is to introduce its approximation \(f_{i,\beta}\), which differs from \(f_{i}\) only in \(\mathscr{D}_{f_{i}}+[-\frac{\beta}{2},\frac{\beta}{2}]\). This will produce some bias because the actual observation is \(f_{i}(\boldsymbol{a}_{i}^{\top}\boldsymbol{x}^{*})\) rather than \(f_{i,\beta}(\boldsymbol{a}_{i}^{\top}\boldsymbol{x}^{*})\). Hence, for some \(\boldsymbol{x}\in\mathcal{K}\) we define the following quantity to measure the bias induced by \(f_{i,\beta}\):
\[\mu_{\beta}(\boldsymbol{x})=\mathbb{P}\Big{(}\boldsymbol{a}^{\top}\boldsymbol {x}\in\mathscr{D}_{f_{i}}+\Big{[}-\frac{\beta}{2},\frac{\beta}{2}\Big{]} \Big{)},\ \ \boldsymbol{a}\sim\mathcal{N}(0,\boldsymbol{I}_{n}). \tag{2.4}\]
The following assumption can often be satisfied by choosing suitable \(T\) and sufficiently small \(\beta_{1}\).
**Assumption 4**.: _Suppose Assumptions 1-3 hold true with parameters \(B_{0},L_{0},\beta_{0},A_{g}^{(1)},A_{g}^{(2)}\). For the \(T\) used in (2.1), \(\rho(\boldsymbol{x})\) defined in (2.3) satisfies_
\[\sup_{\boldsymbol{x}\in\mathcal{K}}\rho(\boldsymbol{x})\lesssim(A_{g}^{(1)} \lor A_{g}^{(2)})\sqrt{\frac{k}{m}}. \tag{2.5}\]
_Moreover, there exists some \(0<\beta_{1}<\frac{\beta_{0}}{2}\) such that_
\[(L_{0}\beta_{1}+B_{0})\sup_{\boldsymbol{x}\in\mathcal{K}}\sqrt{\mu_{\beta_{1}}( \boldsymbol{x})}\lesssim(A_{g}^{(1)}\lor A_{g}^{(2)})\sqrt{\frac{k}{m}}. \tag{2.6}\]
In the proof, the estimation error \(\|\hat{\boldsymbol{x}}-T\boldsymbol{x}^{\star}\|_{2}\) is the sum of a concentration term of order \(\tilde{O}\big{(}(A_{g}^{(1)}\lor A_{g}^{(2)})\sqrt{k/m}\big{)}\) and some bias terms. The main aim of Assumption 4 is to make the bias terms small enough that the concentration term dominates.
### Main Theorem and its Implications
We now present our general theorem and apply it to some specific models.
**Theorem 1**.: _Under Assumptions 1-4, given any recovery accuracy \(\epsilon\in(0,1)\), if it holds that \(m\gtrsim(A_{g}^{(1)}\lor A_{g}^{(2)})^{2}\frac{k\mathcal{L}}{\epsilon^{2}}\), then with probability at least \(1-m(P_{0}^{(1)}+P_{0}^{(2)})-m\exp(-\Omega(n))-C\exp(-\Omega(k))\) on a single realization of \((\mathbf{A},\mathbf{f}):=(\mathbf{a}_{i},f_{i})_{i=1}^{m}\), we have the uniform signal recovery guarantee \(\|\mathbf{\hat{x}}-T\mathbf{x}^{\bullet}\|_{2}\leq\epsilon\) for all \(\mathbf{x}^{\bullet}\in\mathcal{K}\), where \(\mathbf{\hat{x}}\) is the solution to (2.1) with \(\mathbf{y}=\mathbf{f}(\mathbf{A}\mathbf{x}^{\bullet})\), and \(\mathcal{L}=\log\widetilde{P}\) is a logarithmic factor with \(\widetilde{P}\) being polynomial in \((L,n)\) and other parameters that typically scale as \(O(L+n)\). See (C.11) for the precise expression of \(\mathcal{L}\)._
To illustrate the power of Theorem 1, we specialize it to several models to obtain concrete uniform signal recovery results. Starting with Theorem 1, the remaining work is to select parameters that justify Assumptions 2-4. We summarize the strategy as follows: (i) Determine the parameters in Assumption 2 from the measurement model; (ii) Set \(T\) to verify (2.5) (see Lemmas 8-11 for the following models); (iii) Set the parameters in Assumption 3, for which bounding the norm of a Gaussian vector is useful; (iv) Set \(\beta_{1}\) to guarantee (2.6) based on a standard probability argument. We only provide suitable parameters for the following concrete models due to the space limit, leaving more details to Appendix E.
**(A) 1-bit GCS.** Assume that we have the 1-bit observations \(y_{i}=\operatorname{sign}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\bullet})\); then \(f_{i}(\cdot)=f(\cdot)=\operatorname{sign}(\cdot)\) satisfies Assumption 2 with \((B_{0},L_{0},\beta_{0})=(2,0,\infty)\). In this model, it is hopeless to recover the norm \(\|\mathbf{x}^{\bullet}\|_{2}\); as done in previous work, we assume \(\mathbf{x}^{\bullet}\in\mathcal{K}\subset\mathbb{S}^{n-1}\)[31, Remark 1]. We set \(T=\sqrt{2/\pi}\) and take the parameters in Assumption 3 as \(A_{g}^{(1)}\asymp 1,U_{g}^{(1)}\asymp\sqrt{n},P_{0}^{(1)}\asymp\exp(-\Omega(n)),A_{g}^{(2)}\asymp 1,U_{g}^{(2)}\asymp 1,P_{0}^{(2)}=0\). We take \(\beta=\beta_{1}\asymp\frac{k}{m}\) to guarantee (2.6). With these choices, Theorem 1 specializes to the following:
**Corollary 1**.: _Consider Assumption 1 with \(f_{i}(\cdot)=\operatorname{sign}(\cdot)\) and \(\mathcal{K}\subset\mathbb{S}^{n-1}\), let \(\epsilon\in(0,1)\) be any given recovery accuracy. If \(m\gtrsim\frac{k}{\epsilon^{2}}\log\left(\frac{Lr\sqrt{m}n}{\epsilon\wedge(k/m )}\right)\),4 then with probability at least \(1-2m\exp(-cn)-m\exp(-\Omega(k))\) on a single draw of \((\mathbf{a}_{i})_{i=1}^{m}\), we have the uniform signal recovery guarantee \(\left\|\mathbf{\hat{x}}-\sqrt{\frac{2}{\pi}}\mathbf{x}^{\bullet}\right\|_{2}\leq\epsilon\) for all \(\mathbf{x}^{\bullet}\in\mathcal{K}\), where \(\mathbf{\hat{x}}\) is the solution to (2.1) with \(\mathbf{y}=\operatorname{sign}(\mathbf{A}\mathbf{x}^{\bullet})\) and \(T=\sqrt{\frac{2}{\pi}}\)._
Footnote 4: Here and in other similar statements, we implicitly assume a large enough implied constant.
**Remark 1**.: _A uniform recovery guarantee for generalized Lasso in 1-bit GCS was obtained in [33, Section 5]. Their proof relies on the local embedding property in [31]. Note that such geometric property is often problem-dependent and highly nontrivial. By contrast, our argument is free of geometric properties of this kind._
**Remark 2**.: _For traditional 1-bit CS, [17, Corollary 2] requires \(m\gtrsim\tilde{O}(k/\epsilon^{4})\) to achieve uniform \(\ell_{2}\)-accuracy of \(\epsilon\) for all \(k\)-sparse signals, which is inferior to our \(\tilde{O}(k/\epsilon^{2})\). This is true for all remaining examples. To obtain such a sharper rate, the key technique is to use our Theorem 2 (rather than [36]) to obtain tighter bound for the product processes, as will be discussed in Remark 8._
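The choice \(T=\sqrt{2/\pi}\) can be checked numerically: for unit-norm \(\mathbf{x}\) and \(\mathbf{a}\sim\mathcal{N}(0,\mathbf{I}_{n})\) one has \(\mathbb{E}[\operatorname{sign}(\mathbf{a}^{\top}\mathbf{x})\mathbf{a}]=\sqrt{2/\pi}\,\mathbf{x}\), so the target mismatch \(\rho(\mathbf{x})\) in (2.3) vanishes. A small Monte Carlo sketch (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 200_000                                    # illustrative sizes
x = rng.standard_normal(n); x /= np.linalg.norm(x)    # a unit-norm signal
A = rng.standard_normal((m, n))
y = np.sign(A @ x)                                    # 1-bit observations
est = A.T @ y / m                                     # Monte Carlo estimate of E[sign(a^T x) a]
print(np.linalg.norm(est - np.sqrt(2 / np.pi) * x))   # close to zero, i.e., rho(x) ~ 0 with T = sqrt(2/pi)
```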
**(B) 1-bit GCS with dithering.** Assume that \(\mathbf{a}_{i}^{\top}\mathbf{x}^{\bullet}\) is quantized to 1 bit with dither5 \(\tau_{i}\stackrel{{ iid}}{{\sim}}\mathscr{U}\left[-\lambda,\lambda\right]\) for some \(\lambda\) to be chosen, i.e., we observe \(y_{i}=\operatorname{sign}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\bullet}+\tau_{i})\). Following [45], we assume \(\mathcal{K}\subset\mathbb{B}_{2}^{n}(R)\) for some \(R>0\). Here, using dithering allows the recovery of the signal norm \(\|\mathbf{x}^{\bullet}\|_{2}\), so we do not need to assume \(\mathcal{K}\subset\mathbb{S}^{n-1}\) as in Corollary 1. We set \(\lambda=CR\sqrt{\log m}\) with sufficiently large \(C\), and \(T=\lambda^{-1}\). In Assumption 3, we take \(A_{g}^{(1)}\asymp 1,\ U_{g}^{(1)}\asymp\sqrt{n},\ P_{0}^{(1)}\asymp\exp(-\Omega(n)),\ A_{g}^{(2)}\asymp 1,\ U_{g}^{(2)}\asymp 1\), and \(P_{0}^{(2)}=0\). Moreover, we take \(\beta=\beta_{1}=\frac{\lambda k}{m}\) to guarantee (2.6). Now we can invoke Theorem 1 to get the following.
Footnote 5: Throughout this work, the random dither is independent of the \(\{\mathbf{a}_{i}\}_{i=1}^{m}\).
**Corollary 2**.: _Consider Assumption 1 with \(f_{i}(\cdot)=\operatorname{sign}(\cdot+\tau_{i})\), \(\tau_{i}\sim\mathscr{U}[-\lambda,\lambda]\) and \(\mathcal{K}\subset\mathbb{B}_{2}^{n}(R)\), and \(\lambda=CR\sqrt{\log m}\) with sufficiently large \(C\). Let \(\epsilon\in(0,1)\) be any given recovery accuracy. If \(m\gtrsim\frac{k}{\epsilon^{2}}\log\left(\frac{Lr\sqrt{m}n}{\lambda(\epsilon \wedge(k/m))}\right)\), then with probability at least \(1-2m\exp(-cn)-m\exp(-\Omega(k))\) on a single draw of \((\mathbf{a}_{i},\tau_{i})_{i=1}^{m}\), we have the uniform signal recovery guarantee \(\|\mathbf{\hat{x}}-\lambda^{-1}\mathbf{x}^{\bullet}\|_{2}\leq\epsilon\) for all \(\mathbf{x}^{\bullet}\in\mathcal{K}\), where \(\mathbf{\hat{x}}\) is the solution to (2.1) with \(\mathbf{y}=\operatorname{sign}(\mathbf{A}\mathbf{x}^{\bullet}+\mathbf{\tau})\) (here, \(\mathbf{\tau}=[\tau_{1},...,\tau_{m}]^{\top}\)) and \(T=\lambda^{-1}\)._
**Remark 3**.: _To our knowledge, the only related prior result is in [45, Theorem 3.2]. However, their result is restricted to ReLU networks. By contrast, we deal with the more general Lipschitz generative models; by specializing our result to the ReLU network that is typically \((n^{\Theta(d)})\)-Lipschitz [2] (\(d\) is
the number of layers), our error rate coincides with theirs up to a logarithmic factor. Additionally, as already mentioned in the Introduction Section, our result can be generalized to a sensing vector with an unknown covariance matrix, unlike theirs which is restricted to isotropic sensing vectors. The advantage of their result is in allowing sub-exponential sensing vectors._
**(C) Lipschitz-continuous SIM with generative prior.** Assume that any realization of \(f\) is unconditionally \(\hat{L}\)-Lipschitz, which implies Assumption 2 with \((B_{0},L_{0},\beta_{0})=(0,\hat{L},\infty)\). We further assume \(\mathbb{P}(f(0)\leq\hat{B})\geq 1-P_{0}^{\prime}\) for some \((\hat{B},P_{0}^{\prime})\). Because the norm of \(\mathbf{x}^{\bullet}\) is absorbed into the unknown \(f(\cdot)\), we assume \(\mathcal{K}\subset\mathbb{S}^{n-1}\). We set \(\beta=0\) so that \(f_{i,\beta}=f_{i}\). We introduce the quantities \(\mu=\mathbb{E}[f(g)g],\psi=\|f(g)\|_{\psi_{2}},\text{ where }g\sim\mathcal{N}(0,1)\). We choose \(T=\mu\) and set parameters in Assumption 3 as \(A_{g}^{(1)}\asymp\psi+\mu,\;U_{g}^{(1)}\asymp(\hat{L}+\mu)\sqrt{n}+\hat{B},\;P_ {0}^{(1)}\asymp P_{0}^{\prime}+\exp(-\Omega(n)),\;A_{g}^{(2)}\asymp\psi+\mu, \;U_{g}^{(2)}=0,\;P_{0}^{(2)}=0\). Now we are ready to apply Theorem 1 to this model. We obtain:
**Corollary 3**.: _Consider Assumption 1 with \(\hat{L}\)-Lipschitz \(f\), suppose that \(\mathbb{P}\left(f(0)\leq\hat{B}\right)\geq 1-P_{0}^{\prime}\), and define the parameters \(\mu=\mathbb{E}[f(g)g]\), \(\psi=\|f(g)\|_{\psi_{2}}\) with \(g\sim\mathcal{N}(0,1)\). Let \(\epsilon\in(0,1)\) be any given recovery accuracy. If \(m\gtrsim\frac{(\mu+\psi)^{2}k}{\epsilon^{2}}\log\left(\frac{Lr\sqrt{m}\,[(\hat{L}+\mu)\sqrt{n}+\hat{B}+\psi]}{\epsilon^{2}}\right)\), then with probability at least \(1-2m\exp(-cn)-mP_{0}^{\prime}-c_{1}\exp(-\Omega(k))\) on a single draw of \((\mathbf{a}_{i},f_{i})_{i=1}^{m}\), we have the uniform signal recovery guarantee \(\|\mathbf{\hat{x}}-\mu\mathbf{x}^{\bullet}\|_{2}\leq\epsilon\) for all \(\mathbf{x}^{\bullet}\in\mathcal{K}\), where \(\mathbf{\hat{x}}\) is the solution to (2.1) with \(\mathbf{y}=\mathbf{f}(\mathbf{A}\mathbf{x}^{\bullet})\) and \(T=\mu\)._
**Remark 4**.: _While the main result of [33] is non-uniform, it was noted in [33, Section 5] that a similar uniform error rate can be established for any deterministic \(1\)-Lipschitz \(f\). Our result here is more general in that the \(\hat{L}\)-Lipschitz \(f\) is possibly random. Note that randomness on \(f\) is significant because it provides much more flexibility (e.g., additive random noise)._
**Remark 5**.: _For SIM with unknown \(f_{i}\) it may seem impractical to use (2.1) as it requires \(\mu=\mathbb{E}\left[f(g)g\right]\) where \(g\sim\mathcal{N}(0,1)\). However, by assuming \(\mu\mathbf{x}^{\bullet}\in\mathcal{K}=G(\mathbb{B}_{2}^{k}(r))\) as in [33], which is natural for sufficiently expressive \(G(\cdot)\), we can simply use \(\mathbf{x}\in\mathcal{K}\) as constraint in (2.1). Our Corollary 3 remains valid in this case under some inessential changes of \(\log\mu\) factors in the sample complexity._
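To illustrate the role of \(\mu=\mathbb{E}[f(g)g]\) as the scaling \(T\), the snippet below estimates \(\mu\) by Monte Carlo for a made-up random Lipschitz link (\(f(t)=\tanh(t)\) plus Gaussian noise); both the link and the noise level are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal(1_000_000)
f = lambda t: np.tanh(t) + 0.1 * rng.standard_normal(t.shape)  # hypothetical random Lipschitz link
mu_hat = np.mean(f(g) * g)    # Monte Carlo estimate of mu = E[f(g) g]; this is the T used in (2.1)
print(mu_hat)                 # approximately E[tanh(g) g], since the added noise is zero-mean
```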
**(D) Uniformly quantized GCS with dithering.** The uniform quantizer with resolution \(\delta>0\) is defined as \(\mathcal{Q}_{\delta}(a)=\delta\big(\lfloor\frac{a}{\delta}\rfloor+\frac{1}{2}\big)\) for \(a\in\mathbb{R}\). Using dithering \(\tau_{i}\sim\mathscr{U}[-\frac{\delta}{2},\frac{\delta}{2}]\), we suppose that the observations are \(y_{i}=\mathcal{Q}_{\delta}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\bullet}+\tau_{i})\). This satisfies Assumption 2 with \((B_{0},L_{0},\beta_{0})=(\delta,0,\delta)\). We set \(T=1\) and take parameters for Assumption 3 as follows: \(A_{g}^{(1)},U_{g}^{(1)},A_{g}^{(2)},U_{g}^{(2)}\asymp\delta\), and \(P_{0}^{(1)}=P_{0}^{(2)}=0.\) We take \(\beta=\beta_{1}\asymp\frac{k\delta}{m}\) to confirm (2.6). With these parameters, we obtain the following from Theorem 1.
**Corollary 4**.: _Consider Assumption 1 with \(f(\cdot)=\mathcal{Q}_{\delta}(\cdot+\tau)\), \(\tau\sim\mathscr{U}[-\frac{\delta}{2},\frac{\delta}{2}]\) for some quantization resolution \(\delta>0\). Let \(\epsilon>0\) be any given recovery accuracy. If \(m\gtrsim\frac{\delta^{2}k}{\epsilon^{2}}\log\Big{(}\frac{Lr\sqrt{mn}}{\epsilon\wedge[k\delta/(m\sqrt{n})]}\Big{)}\), then with probability at least \(1-2m\exp(-cn)-c_{1}\exp(-\Omega(k))\) on a single draw of \((\mathbf{a}_{i},\tau_{i})_{i=1}^{m}\), we have the uniform signal recovery guarantee \(\|\mathbf{\hat{x}}-\mathbf{x}^{\bullet}\|_{2}\leq\epsilon\) for all \(\mathbf{x}^{\bullet}\in\mathcal{K}\), where \(\mathbf{\hat{x}}\) is the solution to (2.1) with \(\mathbf{y}=\mathcal{Q}_{\delta}(\mathbf{A}\mathbf{x}^{\bullet}+\mathbf{\tau})\) and \(T=1\) (here, \(\mathbf{\tau}=[\tau_{1},\ldots,\tau_{m}]^{\top}\))._
**Remark 6**.: _While this dithered uniform quantized model has been widely studied in traditional CS (e.g., non-uniform recovery [48, 8], uniform recovery [17, 52]), it has not been investigated in GCS even for non-uniform recovery. Thus, this is new to the best of our knowledge._
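As a concrete numerical illustration (not part of the analysis), the following Python snippet simulates the dithered uniformly quantized observation model with Gaussian measurement vectors; the dimensions and the unconstrained least-squares baseline are illustrative only, and the estimator (2.1) would additionally constrain the solution to \(T\mathcal{K}\) with \(T=1\).

```python
import numpy as np

def uniform_quantizer(a, delta):
    # Q_delta(a) = delta * (floor(a / delta) + 1/2)
    return delta * (np.floor(a / delta) + 0.5)

rng = np.random.default_rng(0)
n, m, delta = 50, 500, 0.3
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)                    # signal on the unit sphere

A = rng.standard_normal((m, n))                     # rows a_i ~ N(0, I_n)
tau = rng.uniform(-delta / 2, delta / 2, size=m)    # dithering
y = uniform_quantizer(A @ x_star + tau, delta)      # observations

# With dithering, the quantization error behaves like bounded zero-mean
# noise, so even a plain least-squares estimate is close to x_star;
# (2.1) would additionally restrict the estimate to T*K with T = 1.
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.linalg.norm(x_ls - x_star))
```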
A simple extension to the noisy model \(\mathbf{y}=\mathbf{f}(\mathbf{A}\mathbf{x}^{\bullet})+\mathbf{\eta}\) where \(\mathbf{\eta}\in\mathbb{R}^{m}\) has i.i.d. sub-Gaussian entries can be obtained by a fairly straightforward extension of our analysis; see Appendix F.
## 3 Proof Sketch
To provide a sketch of our proof, we begin with the optimality condition \(\|\mathbf{y}-\mathbf{A}\mathbf{\hat{x}}\|_{2}^{2}\leq\|\mathbf{y}-\mathbf{A}(T\mathbf{x}^{\bullet})\|_{2}^ {2}\). We expand the square and plug in \(\mathbf{y}=\mathbf{f}(\mathbf{A}\mathbf{x}^{\bullet})\) to obtain
\[\left\|\frac{\mathbf{A}}{\sqrt{m}}(\mathbf{\hat{x}}-T\mathbf{x}^{\bullet})\right\|_{2}^{2} \leq\frac{2}{m}\big{\langle}\mathbf{f}(\mathbf{A}\mathbf{x}^{\bullet})-T\mathbf{A}\mathbf{x}^{ \bullet},\mathbf{A}(\mathbf{\hat{x}}-T\mathbf{x}^{\bullet})\big{\rangle}. \tag{3.1}\]
For the final goal \(\|\mathbf{\hat{x}}-T\mathbf{x}^{\bullet}\|_{2}\leq\epsilon\), up to rescaling, it is enough to prove \(\|\mathbf{\hat{x}}-T\mathbf{x}^{\bullet}\|_{2}\leq 3\epsilon\). We assume for convenience that \(\|\mathbf{\hat{x}}-T\mathbf{x}^{\bullet}\|_{2}>2\epsilon\), without loss of generality. Combined with
\(\hat{\mathbf{x}},Tx^{\bullet}\in T\mathcal{K}\), we know \(\hat{\mathbf{x}}-Tx^{\bullet}\in\mathcal{K}^{-}_{\epsilon}\), where \(\mathcal{K}^{-}_{\epsilon}:=(T\mathcal{K}^{-})\cap\left(\mathbb{B}_{2}^{n}(2 \epsilon)\right)^{c},\ \mathcal{K}^{-}=\mathcal{K}-\mathcal{K}\). We further define
\[(\mathcal{K}^{-}_{\epsilon})^{*}:=\left\{\mathbf{z}/\|\mathbf{z}\|_{2}:\mathbf{z}\in \mathcal{K}^{-}_{\epsilon}\right\} \tag{3.2}\]
where the normalized error lives, i.e. \(\frac{\hat{\mathbf{x}}-T\mathbf{x}^{\bullet}}{\|\hat{\mathbf{x}}-T\mathbf{x}^{\bullet}\|_{2}}\in(\mathcal{K}^{-}_{\epsilon})^{*}\). Our strategy is to establish a uniform lower bound (resp., upper bound) for the left-hand side (resp., the right-hand side) of (3.1). We emphasize that these bounds must hold uniformly for all \(\mathbf{x}^{\bullet}\in\mathcal{K}\).
It is relatively easy to use set-restricted eigenvalue condition (S-REC) [2] to establish a uniform lower bound for the left-hand side of (3.1), see Appendix B.1 for more details. It is significantly more challenging to derive an upper bound for the right-hand side of (3.1). As the upper bound must hold uniformly for all \(\mathbf{x}^{\bullet}\), we first take the supremum over \(\mathbf{x}^{\bullet}\) and \(\hat{\mathbf{x}}\) and consider bounding the following:
\[\mathscr{R}:=\frac{1}{m}\big{\langle}\mathbf{f}(\mathbf{Ax}^{\bullet})-T \mathbf{Ax}^{\bullet},\mathbf{A}(\hat{\mathbf{x}}-T\mathbf{x}^{\bullet})\big{\rangle} \tag{3.3}\] \[=\frac{1}{m}\sum_{i=1}^{m}\big{(}f_{i}(\mathbf{a}_{i}^{\top}\mathbf{x}^{ \bullet})-T\mathbf{a}_{i}^{\top}\mathbf{x}^{\bullet}\big{)}\cdot\big{(}\mathbf{a}_{i}^{ \top}[\hat{\mathbf{x}}-Tx^{\bullet}]\big{)}\] \[\leq\|\hat{\mathbf{x}}-T\mathbf{x}^{\bullet}\|_{2}\cdot\sup_{\mathbf{x}\in \mathcal{K}}\sup_{\mathbf{v}\in(\mathcal{K}^{-}_{\epsilon})^{*}}\frac{1}{m}\sum_{ i=1}^{m}\big{(}f_{i}(\mathbf{a}_{i}^{\top}\mathbf{x})-T\mathbf{a}_{i}^{\top}\mathbf{x} \big{)}\cdot\big{(}\mathbf{a}_{i}^{\top}\mathbf{v}\big{)}:=\|\hat{\mathbf{x}}-T\mathbf{x}^{ \bullet}\|_{2}\cdot\mathscr{R}_{u},\]
where \((\mathcal{K}^{-}_{\epsilon})^{*}\) is defined in (3.2). Clearly, \(\mathscr{R}_{u}\) is the supremum of a product process, whose factors are indexed by \(\mathbf{x}\in\mathcal{K}\) and \(\mathbf{v}\in(\mathcal{K}^{-}_{\epsilon})^{*}\). It is, in general, challenging to control a product process, and existing results often require both factors to satisfy a certain "sub-Gaussian increments" condition (e.g., [36; 37]). However, the first factor of \(\mathscr{R}_{u}\) (i.e., \(f_{i}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\bullet})-T\mathbf{a}_{i}^{\top}\mathbf{x}^{\bullet}\)) does not admit such a condition when \(f_{i}\) is not continuous (e.g., the 1-bit model \(f_{i}=\operatorname{sign}(\cdot)\)). We will construct the Lipschitz approximation of \(f_{i}\) to overcome this difficulty shortly in Section 3.1.
**Remark 7**.: _We note that these challenges stem from our pursuit of uniform recovery. In fact, a non-uniform guarantee for SIM was presented in [33, Theorem 1]. In its proof, the key ingredient is [33, Lemma 3] that bounds \(\mathscr{R}_{u}\) without the supremum on \(\mathbf{x}\). This can be done as long as \(f_{i}(\mathbf{a}_{i}^{\top}\mathbf{x}^{\bullet})\) is sub-Gaussian, while the potential discontinuity of \(f_{i}\) is totally unproblematic._
### Lipschitz Approximation
For any \(x_{0}\in\mathscr{D}_{f_{i}}\) we define the one-sided limits as \(f_{i}^{-}(x_{0})=\lim_{x\to x_{0}^{-}}f_{i}(x)\) and \(f_{i}^{+}(x_{0})=\lim_{x\to x_{0}^{+}}f_{i}(x)\), and write their average as \(f_{i}^{a}(x_{0})=\frac{1}{2}(f_{i}^{-}(x_{0})+f_{i}^{+}(x_{0}))\). Given any approximation accuracy \(\beta\in(0,\frac{\beta_{0}}{2})\), we construct the Lipschitz continuous function \(f_{i,\beta}\) as:
\[f_{i,\beta}(x)=\begin{cases}f_{i}(x),&\text{if }x\notin\mathscr{D}_{f_{i}}+[-\frac{\beta}{2},\frac{\beta}{2}]\\ f_{i}^{a}(x_{0})-\frac{2[f_{i}^{a}(x_{0})-f_{i}(x_{0}-\frac{\beta}{2})](x_{0}-x)}{\beta},&\text{if }\exists x_{0}\in\mathscr{D}_{f_{i}}\text{ s.t. }x\in[x_{0}-\frac{\beta}{2},x_{0}]\\ f_{i}^{a}(x_{0})+\frac{2[f_{i}(x_{0}+\frac{\beta}{2})-f_{i}^{a}(x_{0})](x-x_{0})}{\beta},&\text{if }\exists x_{0}\in\mathscr{D}_{f_{i}}\text{ s.t. }x\in[x_{0},x_{0}+\frac{\beta}{2}]\end{cases}. \tag{3.4}\]
We have defined the approximation error \(\varepsilon_{i,\beta}(\cdot)=f_{i,\beta}(\cdot)-f_{i}(\cdot)\) in Assumption 3. An important observation is that both \(f_{i,\beta}\) and \(|\varepsilon_{i,\beta}|\) are Lipschitz continuous (see Lemma 1 below). Here, it is crucial to consider \(|\varepsilon_{i,\beta}|\) rather than \(\varepsilon_{i,\beta}\) as the latter is not continuous; see Figure 1 for an intuitive graphical illustration and more explanations in Appendix B.2.
**Lemma 1**.: _With \(B_{0},L_{0},\beta_{0}\) given in Assumption 2, for any \(\beta\in(0,\frac{\beta_{0}}{2})\), \(f_{i,\beta}\) is \(\big{(}L_{0}+\frac{B_{0}}{\beta}\big{)}\)-Lipschitz over \(\mathbb{R}\), and \(|\varepsilon_{i,\beta}|\) is \(\big{(}2L_{0}+\frac{B_{0}}{\beta}\big{)}\)-Lipschitz over \(\mathbb{R}\)._
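To make the construction concrete, the following Python snippet (an illustration only) instantiates \(f_{i,\beta}\) for the 1-bit model \(f_{i}=\operatorname{sign}(\cdot)\), where (3.4) reduces to a linear ramp on \([-\frac{\beta}{2},\frac{\beta}{2}]\), and numerically checks the Lipschitz constants predicted by Lemma 1.

```python
import numpy as np

def f_sign(x):
    # one realization of the 1-bit model f_i = sign(.), using the convention
    # f(0) = f^+(0) = 1 (the value at the single discontinuity x0 = 0 is immaterial)
    return np.where(x >= 0, 1.0, -1.0)

def f_sign_beta(x, beta):
    # construction (3.4) specialized to sign(.): since f^-(0) = -1, f^+(0) = 1 and
    # f^a(0) = 0, both branches reduce to the linear ramp 2x/beta on [-beta/2, beta/2]
    return np.clip(2.0 * x / beta, -1.0, 1.0)

beta = 0.5
x = np.linspace(-2.0, 2.0, 100000)                   # grid avoiding x = 0 exactly
eps_abs = np.abs(f_sign_beta(x, beta) - f_sign(x))   # |epsilon_{i,beta}|

# empirical Lipschitz constants via finite differences; sign(.) has jump size
# B0 = 2 and L0 = 0, so Lemma 1 predicts L0 + B0/beta = 4 for f_{i,beta}
# and 2*L0 + B0/beta = 4 for |epsilon_{i,beta}|
dx = x[1] - x[0]
print(np.max(np.abs(np.diff(f_sign_beta(x, beta)))) / dx)   # ~4
print(np.max(np.abs(np.diff(eps_abs))) / dx)                # ~4
```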
### Bounding the product process
We now present our technique to bound \(\mathscr{R}_{u}\). Recall that \(\xi_{i,\beta}(a)\) and \(\varepsilon_{i,\beta}(a)\) were defined in (2.2). By Lemma 1, \(\xi_{i,\beta}\) is \(\big{(}L_{0}+T+\frac{B_{0}}{\beta}\big{)}\)-Lipschitz. Now we use \(f_{i}(a)-Ta=\xi_{i,\beta}(a)-\varepsilon_{i,\beta}\) to decompose \(\mathscr{R}_{u}\) (in the following, we sometimes shorten "\(\sup_{\mathbf{x}\in\mathcal{K}}\sup_{\mathbf{v}\in(\mathcal{K}^{-}_{\epsilon})^{*}}\)" as "\(\sup_{\mathbf{x},\mathbf{v}}\)"):
\[\mathscr{R}_{u}\leq\underbrace{\sup_{\mathbf{x},\mathbf{v}}\frac{1}{m}\sum_{i=1}^{m}\xi_ {i,\beta}(\mathbf{a}_{i}^{\top}\mathbf{x})\cdot\big{(}\mathbf{a}_{i}^{\top}\mathbf{v})}_{ \mathscr{R}_{u1}}+\underbrace{\sup_{\mathbf{x},\mathbf{v}}\frac{1}{m}\sum_{i=1}^{m}| \varepsilon_{i,\beta}(\mathbf{a}_{i}^{\top}\mathbf{x})|\,\big{|}\mathbf{a}_{i}^{\top}\mathbf{v}| }_{\mathscr{R}_{u2}}. \tag{3.5}\]
It remains to control \(\mathscr{R}_{u1}\) and \(\mathscr{R}_{u2}\). By the Lipschitz continuity of \(\xi_{i,\beta}\) and \(|\varepsilon_{i,\beta}|\), the factors of \(\mathscr{R}_{u1}\) and \(\mathscr{R}_{u2}\) admit sub-Gaussian increments, so it is natural to first center them and then invoke the concentration inequality for product processes due to Mendelson [36, Theorem 1.13], which we restate in Lemma 5 (Appendix A). However, this does not produce a tight bound and would eventually require a sample size of \(\tilde{O}(k/\epsilon^{4})\) to achieve a uniform \(\ell_{2}\)-error of \(\epsilon\), as is the case in [17, Section 4].
In fact, Lemma 5 is based on _Gaussian width_ and hence blind to the fact that \(\mathcal{K},(\mathcal{K}_{e}^{-})^{*}\) here have low _metric entropy_ (Lemma 6). By characterizing the low intrinsic dimension of index sets via metric entropy, we develop the following concentration inequality that can produce tighter bound for \(\mathscr{R}_{u1}\) and \(\mathscr{R}_{u2}\). This also allows us to derive uniform error rates sharper than those in [17, Section 4].
**Theorem 2**.: _Let \(g_{\mathbf{x}}=g_{\mathbf{x}}(\mathbf{a})\) and \(h_{\mathbf{v}}=h_{\mathbf{v}}(\mathbf{a})\) be stochastic processes indexed by \(\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{p_{1}},\mathbf{v}\in\mathcal{V}\subset \mathbb{R}^{p_{2}}\), both defined with respect to a common random variable \(\mathbf{a}\). Assume that:_
* _(A1.)_ \(g_{\mathbf{x}}(\mathbf{a}),\ h_{\mathbf{v}}(\mathbf{a})\) _are sub-Gaussian for some_ \((A_{g},A_{h})\) _and admit sub-Gaussian increments regarding_ \(\ell_{2}\) _distance for some_ \((M_{g},M_{h})\)_:_ \[\begin{split}&\|g_{\mathbf{x}}(\mathbf{a})-g_{\mathbf{x}^{\prime}}(\mathbf{a}) \|_{\psi_{2}}\leq M_{g}\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2},\ \|g_{\mathbf{x}}(\mathbf{a})\|_{\psi_{2}}\leq A_{g},\ \forall\ \mathbf{x},\mathbf{x}^{\prime}\in \mathcal{X};\\ &\|h_{\mathbf{v}}(\mathbf{a})-h_{\mathbf{v}^{\prime}}(\mathbf{a})\|_{\psi_{2}} \leq M_{h}\|\mathbf{v}-\mathbf{v}^{\prime}\|_{2},\ \|h_{\mathbf{v}}(\mathbf{a})\|_{\psi_{2}}\leq A_{h},\ \forall\ \mathbf{v},\mathbf{v}^{\prime}\in \mathcal{V}.\end{split}\] (3.6)
* _(A2.)_ _On a single draw of_ \(\mathbf{a}\)_, for some_ \((L_{g},U_{g},L_{h},U_{h})\) _the following events simultaneously hold with probability at least_ \(1-P_{0}\)_:_ \[\begin{split}&|g_{\mathbf{x}}(\mathbf{a})-g_{\mathbf{x}^{\prime}}(\mathbf{a})| \leq L_{g}\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2},\ |g_{\mathbf{x}}(\mathbf{a})|\leq U_{g},\ \forall\ \mathbf{x},\mathbf{x}^{\prime}\in \mathcal{X};\\ &|h_{\mathbf{v}}(\mathbf{a})-h_{\mathbf{v}^{\prime}}(\mathbf{a})|\leq L_{h}\|\bm {v}-\mathbf{v}^{\prime}\|_{2},\ |h_{\mathbf{v}}(\mathbf{a})|\leq U_{h},\ \forall\ \mathbf{v},\mathbf{v}^{\prime}\in \mathcal{V}.\end{split}\] (3.7)
_Let \(\mathbf{a}_{1},...,\mathbf{a}_{m}\) be i.i.d. copies of \(\mathbf{a}\), and introduce the shorthand \(S_{g,h}=L_{g}U_{h}+M_{g}A_{h}\) and \(T_{g,h}=L_{h}U_{g}+M_{h}A_{g}\). If \(m\gtrsim\mathscr{H}\left(\mathcal{X},\frac{A_{g}A_{h}}{\sqrt{m}S_{g,h}}\right) +\mathscr{H}\left(\mathcal{V},\frac{A_{g}A_{h}}{\sqrt{m}T_{g,h}}\right)\), where \(\mathscr{H}(\cdot,\cdot)\) is the metric entropy defined in Definition 2, then with probability at least \(1-mP_{0}-2\exp\big{[}-\Omega\big{(}\mathscr{H}(\mathcal{X},\frac{A_{g}A_{h}}{ \sqrt{m}S_{g,h}})+\mathscr{H}(\mathcal{V},\frac{A_{g}A_{h}}{\sqrt{m}T_{g,h}}) \big{)}\big{]}\) we have \(I\lesssim\frac{A_{g}A_{h}}{\sqrt{m}}\sqrt{\mathscr{H}(\mathcal{X},\frac{A_{g} A_{h}}{\sqrt{m}S_{g,h}})+\mathscr{H}(\mathcal{V},\frac{A_{g}A_{h}}{\sqrt{m}T_{g,h}})}\), where \(I:=\sup_{\mathbf{x}\in\mathcal{X}}\sup_{\mathbf{v}\in\mathcal{V}}\big{|}\frac{1}{m} \sum_{i=1}^{m}\big{(}g_{\mathbf{x}}(\mathbf{a}_{i})h_{\mathbf{v}}(\mathbf{a}_{i})-\mathbb{E} [g_{\mathbf{x}}(\mathbf{a}_{i})h_{\mathbf{v}}(\mathbf{a}_{i})]\big{)}\big{|}\) is the supremum of a product process._
**Remark 8**.: _We use \(\mathscr{R}_{u2}\) as an example to illustrate the advantage of Theorem 2 over Lemma 5. The key step is on bounding the centered process_
\[\mathscr{R}_{u2,c}:=\sup_{\mathbf{x}\in\mathcal{K}}\sup_{\mathbf{v}\in(\mathcal{K}_{e }^{-})^{*}}\big{\{}|\varepsilon_{i,\beta}(\mathbf{a}_{i}^{\top}\mathbf{x})||\mathbf{a}_{i} ^{\top}\mathbf{v}|-\mathbb{E}[|\varepsilon_{i,\beta}(\mathbf{a}_{i}^{\top}\mathbf{x})||\mathbf{ a}_{i}^{\top}\mathbf{v}|]\big{\}}.\]
_Let \(g_{\mathbf{x}}(\mathbf{a}_{i})=|\varepsilon_{i,\beta}(\mathbf{a}_{i}^{\top}\mathbf{x})|\) and \(h_{\mathbf{v}}(\mathbf{a}_{i})=|\mathbf{a}_{i}^{\top}\mathbf{v}|\), then one can use Theorem 2 or Lemma 5 to bound \(\mathscr{R}_{u2,c}\). Note that \(||\mathbf{a}_{i}^{\top}\mathbf{v}||_{\psi_{2}}=O(1)\) justifies the choice \(A_{h}=O(1)\), and both \(\mathscr{H}(\mathcal{K},\eta)\) and \(\mathscr{H}((\mathcal{K}_{e}^{-})^{*},\eta)\) depend linearly on \(k\) but only logarithmically on \(\eta\) (Lemma 6), so Theorem 2 could bound \(\mathscr{R}_{u2,c}\) by \(\tilde{O}\big{(}A_{g}\sqrt{k/m}\big{)}\) that depends on \(M_{g}\) in a logarithmic manner. However, the bound produced by Lemma 5 depends linearly on \(M_{g}\); see term \(\frac{M_{g}A_{h}\omega(\mathcal{K})}{\sqrt{m}}\) in (A.1). From (3.6), \(M_{g}\) should be proportional to the Lipschitz constant of \(|\varepsilon_{i,\beta}|\), which scales as \(\frac{1}{\beta}\) (Lemma 1). The issue is that in many cases we need to take extremely small \(\beta\) to guarantee that (2.6) holds true (e.g., we take \(\beta\asymp k/m\) in 1-bit GCS). Thus, Lemma 5 produces a worse bound compared to our Theorem 2._
Figure 1: (Left): \(f_{i}\) and its approximation \(f_{i,0.5}\); (Right): approximation error \(\varepsilon_{i,0.5},|\varepsilon_{i,0.5}|\).
## 4 Conclusion
In this work, we built a unified framework for uniform signal recovery in nonlinear generative compressed sensing. We showed that using the generalized Lasso, a sample size of \(\tilde{O}(k/\epsilon^{2})\) suffices to uniformly recover all \(\mathbf{x}\in G(\mathbb{B}_{2}^{k}(r))\) up to an \(\ell_{2}\)-error of \(\epsilon\). We specialized our main theorem to 1-bit GCS with/without dithering, the single index model, and uniformly quantized GCS, deriving uniform guarantees that are new or exhibit advantages over existing ones. Unlike [33], our proof is free of any non-trivial embedding property. As part of our technical contributions, we constructed a Lipschitz approximation to handle potential discontinuity in the observation model, and developed a concentration inequality that yields tighter bounds for the product processes arising in the proof, allowing us to obtain a uniform error rate faster than that of [17]. Possible future directions include extending our framework to handle adversarial noise and representation error.
**Acknowledgment.** J. Chen was supported by a Hong Kong PhD Fellowship from the Hong Kong Research Grants Council (RGC). J. Scarlett was supported by the Singapore National Research Foundation (NRF) under grant A-0008064-00-00. M. K. Ng was partially supported by the HKRGC GRF 17201020, 17300021, CRF C7004-21GF and Joint NSFC-RGC N-HKU76921.
|
2309.09470 | Face-Driven Zero-Shot Voice Conversion with Memory-based Face-Voice
Alignment | This paper presents a novel task, zero-shot voice conversion based on face
images (zero-shot FaceVC), which aims at converting the voice characteristics
of an utterance from any source speaker to a newly coming target speaker,
solely relying on a single face image of the target speaker. To address this
task, we propose a face-voice memory-based zero-shot FaceVC method. This method
leverages a memory-based face-voice alignment module, in which slots act as the
bridge to align these two modalities, allowing for the capture of voice
characteristics from face images. A mixed supervision strategy is also
introduced to mitigate the long-standing issue of the inconsistency between
training and inference phases for voice conversion tasks. To obtain
speaker-independent content-related representations, we transfer the knowledge
from a pretrained zero-shot voice conversion model to our zero-shot FaceVC
model. Considering the differences between FaceVC and traditional voice
conversion tasks, systematic subjective and objective metrics are designed to
thoroughly evaluate the homogeneity, diversity and consistency of voice
characteristics controlled by face images. Through extensive experiments, we
demonstrate the superiority of our proposed method on the zero-shot FaceVC
task. Samples are presented on our demo website. | Zheng-Yan Sheng, Yang Ai, Yan-Nian Chen, Zhen-Hua Ling | 2023-09-18T04:08:02Z | http://arxiv.org/abs/2309.09470v1 | # Face-Driven Zero-Shot Voice Conversion with Memory-based Face-Voice Alignment
###### Abstract.
This paper presents a novel task, zero-shot voice conversion based on face images (zero-shot FaceVC), which aims at converting the voice characteristics of an utterance from any source speaker to a newly coming target speaker, solely relying on a single face image of the target speaker. To address this task, we propose a face-voice memory-based zero-shot FaceVC method. This method leverages a memory-based face-voice alignment module, in which slots act as the bridge to align these two modalities, allowing for the capture of voice characteristics from face images. A mixed supervision strategy is also introduced to mitigate the long-standing issue of the inconsistency between training and inference phases for voice conversion tasks. To obtain speaker-independent content-related representations, we transfer the knowledge from a pretrained zero-shot voice conversion model to our zero-shot FaceVC model. Considering the differences between FaceVC and traditional voice conversion tasks, systematic subjective and objective metrics are designed to thoroughly evaluate the homogeneity, diversity and consistency of voice characteristics controlled by face images. Through extensive experiments, we demonstrate the superiority of our proposed method on the zero-shot FaceVC task. Samples are presented on our demo website1.
voice conversion, zero-shot, face-voice alignment

Footnote †: Corresponding author.
by reconstructing face images. Another study (Kumar et al., 2017) employed a three-stage training strategy, including face-voice reparameterization and facial-to-audio transformation, to achieve better performance.
The most critical challenge in FaceVC is face-voice alignment, i.e., deriving the corresponding voice representation given a face representation. In previous studies, face representations were estimated either by relying on the supervision of mel-spectrum reconstruction (Kumar et al., 2017) or by minimizing the mean square error (MSE) between speaker embeddings and face embeddings (Kumar et al., 2017). The former strategy (Kumar et al., 2017) cannot be extended to zero-shot FaceVC since the recordings of target speakers are unavailable. The latter (Kumar et al., 2017) adopted a simple MSE loss, assuming that the distribution of voice representations given face representations is unimodal. Both of them fail to describe the complex mapping relationship between the voice and face spaces, and cannot fulfill the requirements of the zero-shot FaceVC task.
Therefore, this paper proposes a face-voice memory-based zero-shot FaceVC (FVMVC) method. In this method, a memory-based face-voice alignment (MFVA) module is developed that utilizes trainable slots to quantize the common characteristics between face and voice spaces. At the training stage, the slot values in MFVA are optimized by not only minimizing the reconstruction loss of speaker embeddings, but also reducing the Kullback-Leibler divergence between the slot weight distributions in both spaces. At the inference stage, given a face image of an unseen target speaker, a recalled face embedding is calculated using the slot weights estimated from the reference image and the slot values in the voice space.
In addition, zero-shot VC usually adopts an auto-encoder framework (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Li et al., 2017), which suffers from an inconsistency between the training and inference phases. More specifically, the speaker representations and content representations come from the same speaker at the training stage, while they come from different speakers at the conversion stage. To mitigate this problem for zero-shot FaceVC, we propose a mixed supervision strategy, introducing a simple yet effective inter-speaker supervision in addition to the intra-speaker supervision in traditional auto-encoder frameworks. The inter-speaker supervision is achieved by creating pseudo-parallel training data using the speaker embeddings extracted from the recordings of another speaker in the training set. Besides, in order to obtain speaker-independent content representations, we initially pretrain a zero-shot VC model and transfer the knowledge from zero-shot VC to zero-shot FaceVC.
We have noticed that it should be impossible to recover the exact voice of target speakers from only face images. Instead, we focus on three properties of the speech generated by zero-shot FaceVC. The first one is the homogeneity among the voice characteristics of the speech converted using different face images of the same target speaker. The second is the diversity of the voice characteristics converted using the face images of different target speakers. And the third is the consistency between the voice characteristics of the converted utterances and their corresponding face images in some important aspects, e.g., gender. Therefore, a series of subjective and objective metrics are designed in this paper to evaluate these properties mentioned above.
In summary, our main contributions are as follows. First, we propose a new task named zero-shot voice conversion based on face images (zero-shot FaceVC). Second, we propose a face-voice memory-based zero-shot FaceVC (FVMVC) method for this task, which contains a memory-based face-voice alignment module, a mixed supervision strategy and zero-shot VC pretraining. Third, we design a series of metrics to evaluate the proposed task and conduct extensive experiments to demonstrate the effectiveness of our proposed method.
## 2. Related Work
### Voice Conversion
VC is a task that automatically converts the speech from a source speaker to a speech sound like being spoken by a target speaker while preserving the linguistic content (Kumar et al., 2017; Li et al., 2017). This task can be categorized into two categories: parallel and non-parallel conditions. Since parallel data are not always available, many non-parallel VC techniques have been proposed, including the methods based on variational auto-encoder (VAE) (Kumar et al., 2017; Li et al., 2017), generative adversarial network (GAN) (Kumar et al., 2017; Li et al., 2017; Li et al., 2017; Li et al., 2017), recognition-synthesis (Li et al., 2017; Li et al., 2017; Li et al., 2017) and disentanglement (Li et al., 2017).
As a special case of non-parallel VC, zero-shot VC has attracted widespread attention in recent years. Zero-shot VC methods usually follow auto-encoder frameworks, where the encoder extracts content and speaker representations from speech respectively, and the decoder reconstructs speech by combining the above representations. Hence, speech representation disentanglement is crucial for this task (Li et al., 2017; Li et al., 2017). Recently, several zero-shot VC methods (Li et al., 2017; Li et al., 2017; Li et al., 2017) based on information theory have emerged, with the aim of disentangling the content-related and speaker identity-related information. IDE-VC (Li et al., 2017) employed mutual information (MI) with speaker labels as supervision for disentanglement. VQMIVC (Li et al., 2017) combined vector quantization with contrastive predictive coding (VQCP) (Li et al., 2017; Li et al., 2017) and MI for fully unsupervised training.
### Learning Voice-Face Association
In recent years, learning the voice-face association has aroused the interest of researchers. As face and voice are inherently correlated, various cross-modal generation tasks involving both face and voice have been proposed, in addition to voice conversion supported by face images. Examples of such tasks include generating the talking face video from the audio (Li et al., 2017; Li et al., 2017; Li et al., 2017; Li et al., 2017), synthesizing speech from the talking face images (Li et al., 2017; Li et al., 2017; Li et al., 2017), reconstructing the face image from the corresponding voice (Li et al., 2017; Li et al., 2017; Li et al., 2017; Li et al., 2017) and synthesizing the speaker's voice with a face image during text-to-speech (FaceTTS) (Kumar et al., 2017; Li et al., 2017; Li et al., 2017; Li et al., 2017; Li et al., 2017).
The most relevant task to voice conversion based on face images is FaceTTS, as both utilize face images to extract speaker identities for controlling voice characteristics. As far as we know, Face2Speech (Kumar et al., 2017) was the first work to address FaceTTS; it pretrained a face encoder under the supervision of the generalized end-to-end (GE2E) (Li et al., 2017) loss and then replaced the speaker encoder with the face encoder in a multi-speaker TTS model. Following the Face2Speech framework, more elaborate model structures and training strategies (Li et al., 2017; Li et al., 2017; Li et al., 2017) have been proposed to promote the quality of synthetic speech. Recently, 3D face shapes and refined face attributes have also been utilized to generate speech (Li et al., 2017; Li et al., 2017), which provides a referable approach to voice editing. However, the voice-face alignment in FaceTTS has yet to be explored. This paper
elaborately designs a memory-based module for the alignment between these two modalities for zero-shot FaceVC, which can also be inserted into the FaceTTS framework for voice control.
## 3. Method
As shown in Figure 1, our proposed FVMVC follows the standard auto-encoder paradigm, consisting of a content encoder, a speaker encoder, a face encoder, a pitch extractor, a decoder, and a memory-based face-voice alignment (MFVA) module.
During the inference phase, our proposed FVMVC utilizes three inputs, including a face image from the target speaker, together with the waveforms and the mel-spectrograms of the utterance to be converted from the source speaker. By processing the face image through a face encoder and an MFVA module sequentially, the voice characteristics representation based on the face image (i.e., the recalled face embedding) is obtained. Similar to zero-shot VC, the mel-spectrograms of the source speaker provide a speaker-independent content representation, while waveforms are used for extracting normalized fundamental frequencies. All the representations mentioned above are ultimately sent into the decoder, which generates mel-spectrograms of the converted utterance. These mel-spectrograms are then converted to waveforms through the vocoder. During the training phase, we incorporate speaker embeddings, which are extracted from mel-spectrograms via the speaker encoder, to supervise the training of the MFVA module.
### Memory-based Face-Voice Alignment
The alignment between face and voice is intended to retrieve the corresponding speaker embedding when only a face image is available. The retrieved speaker embedding from a face image is referred to as the recalled face embedding. We introduce the MFVA module to improve the modeling of voice-face alignment, thereby promoting the performance of zero-shot FaceVC.
As shown in Figure 2, during the training phase, the MFVA module takes a pair of a face embedding \(\mathbf{h}\in\mathbb{R}^{D}\) and a speaker embedding \(\mathbf{s}\in\mathbb{R}^{D}\) as input, and generates a recalled face embedding \(\hat{\mathbf{h}}\in\mathbb{R}^{D}\) for voice control, where the face embedding \(\mathbf{h}\) and the speaker embedding \(\mathbf{s}\) are extracted by the face encoder and the speaker encoder respectively, and \(D\) represents the dimension of the projected face or speech embedding. MFVA is composed of a voice-value memory \(\mathbf{M}_{voice}=[\mathbf{m}_{v}^{1},\mathbf{m}_{v}^{2},\cdots,\mathbf{m}_{v}^{N}]^{\top}\in\mathbb{R}^{N\times D}\) and a face-key memory \(\mathbf{M}_{face}=[\mathbf{m}_{f}^{1},\mathbf{m}_{f}^{2},\cdots,\mathbf{m}_{f}^{N}]^{\top}\in\mathbb{R}^{N\times D}\), where \(N\) denotes the number of slots and \(D\) is the dimension of each slot, which equals the dimension of the projected speaker embedding or face embedding. The training of the MFVA module contains two objectives, i.e., (1) storing sufficient voice characteristics information in the voice-value
Figure 1. The overall flowchart of our proposed FVMVC, where Rec. Loss represents reconstruct loss. During the training phase, two pairs of utterances and the corresponding face images from speaker A and speaker B are used for training simultaneously. Speaker A is used for intra-speaker training and is selected as the source speaker for inter-speaker training, while speaker B is chosen as the target speaker for inter-speaker training.
Figure 2. The architecture of the MFVA module, where Rec. Loss and KL Loss represent reconstruct loss and Kullback-Leibler divergence loss, respectively.
memory, and (2) minimizing the distance between the distributions of two modalities.
**The sufficiency of the voice characteristics information.** Voice-value memory \(\mathbf{M}_{voice}\) is made up of a bank of trainable slots \(\{\mathbf{m}_{v}^{i}\}_{i=1}^{N}\), where \(\mathbf{m}_{v}^{i}\in\mathbb{R}^{D}\) is the \(i\)-th slot. The voice-value memory is designed to exclusively capture the voice-related information and is expected to be able to generate any voice. Specifically, when we take a speaker embedding \(\mathbf{s}\) as a query, the attention weight between the query and each slot is computed with cosine similarity and a softmax normalization function as follows,
\[w_{v}^{i}=softmax\big{(}\frac{\mathbf{s}^{\intercal}\mathbf{m}_{v}^{i}}{\|\mathbf{s}\|_{2}\|\mathbf{m}_{v}^{i}\|_{2}}\big{)}, \tag{1}\]
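A minimal PyTorch sketch of the slot-attention read-out is given below. The helper `slot_attention`, the batch handling, and the exact reductions of the storage and alignment losses are illustrative assumptions based on the description of the two training objectives, rather than an exact reimplementation.

```python
import torch
import torch.nn.functional as F

def slot_attention(query, memory):
    # cosine similarity between the query and every slot, followed by
    # softmax normalization over the N slots (cf. Eq. (1))
    sims = F.cosine_similarity(query.unsqueeze(1), memory.unsqueeze(0), dim=-1)  # (B, N)
    return F.softmax(sims, dim=-1)

N, D, B = 96, 256, 4                               # slots, slot dim, batch size
M_voice = torch.randn(N, D, requires_grad=True)    # voice-value memory
M_face = torch.randn(N, D, requires_grad=True)     # face-key memory

s = torch.randn(B, D)    # speaker embeddings (training-time queries)
h = torch.randn(B, D)    # face embeddings (inference-time queries)

w_v = slot_attention(s, M_voice)   # speaker-embedding weights over voice-value slots
w_f = slot_attention(h, M_face)    # face-embedding weights over face-key slots

# recalled face embedding: face-derived slot weights applied to the
# voice-value slots, so only voice-space information is read out
h_rec = w_f @ M_voice              # (B, D)

# training signals sketched here: reconstruct the speaker embedding from the
# voice-value memory, and pull the two slot-weight distributions together
rec_loss = F.mse_loss(w_v @ M_voice, s)
kl_loss = F.kl_div(w_f.log(), w_v, reduction="batchmean")
```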
For inter-speaker supervision, the mel-spectrograms of an extra speaker \(B\) are taken as input, and the speaker embedding \(\mathbf{s}_{B}\) and the recalled face embedding \(\hat{\mathbf{h}}_{B}\) are obtained in the same way as in the inference phase. Then speaker \(A\) is treated as the source speaker and speaker \(B\) is viewed as the target speaker, and the voice is converted as follows,
\[\mathbf{X}_{speech}=D(\mathbf{c}_{A},\mathbf{s}_{B},\mathbf{f}_{A}), \tag{9}\] \[\mathbf{X}_{face}=D(\mathbf{c}_{A},\hat{\mathbf{h}}_{B},\mathbf{f}_{A}), \tag{10}\]
where \(\mathbf{X}_{speech}\) refers to the converted mel-spectrograms that are supported by the speaker embedding \(\mathbf{s}_{B}\), and \(\mathbf{X}_{face}\) denotes the converted mel-spectrograms obtained using the recalled face embedding \(\hat{\mathbf{h}}_{B}\). Then, we optimize the MFVA by minimizing the reconstruction loss,
\[\mathcal{L}_{Inter}=\|\mathbf{X}_{speech}-\mathbf{X}_{face}\|_{2}^{2}+\|\mathbf{X}_{speech}-\mathbf{X}_{face}\|_{1}. \tag{11}\]
In summary, the final loss function during the training phase of zero-shot FaceVC is as follows,
\[\mathcal{L}=\lambda_{1}\mathcal{L}_{store}+\lambda_{2}\mathcal{L}_{align}+ \lambda_{3}\mathcal{L}_{Inter}+\mathcal{L}_{Intra}, \tag{12}\]
where \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are constant weights that control the importance of each term, and \(\mathcal{L}_{store}\) and \(\mathcal{L}_{align}\) are described in Section 3.1.
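A compact sketch of how the inter-speaker loss (11) and the total objective (12) can be assembled is given below; the mean reductions and the detaching of \(\mathbf{X}_{speech}\) as a pseudo-parallel target are implementation assumptions.

```python
import torch
import torch.nn.functional as F

def inter_speaker_loss(x_face, x_speech):
    # Eq. (11): L2 + L1 distance between the mel-spectrograms converted with
    # the speaker embedding s_B and with the recalled face embedding h_B;
    # x_speech is detached so that it acts as a pseudo-parallel target
    target = x_speech.detach()
    return F.mse_loss(x_face, target) + F.l1_loss(x_face, target)

def total_loss(l_store, l_align, l_inter, l_intra, lam1=1.0, lam2=10.0, lam3=0.2):
    # Eq. (12); the weights follow the values reported in Section 4.2
    return lam1 * l_store + lam2 * l_align + lam3 * l_inter + l_intra
```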
### Pretraining Strategy
It is widely recognized that the content encoder may encode speaker identity-related information. Only when the speaker and content representations are disentangled can the voice characteristics of an utterance be converted by changing the speaker-identity representation (Song et al., 2016; Wang et al., 2017). Hence, speech representation disentanglement is a critical factor that significantly impacts the performance of zero-shot FaceVC. Taking this into consideration, we first pretrain a zero-shot VC model and then transfer the content encoder, speaker encoder, and decoder to the zero-shot FaceVC model for better performance. Specifically, during the training phase of zero-shot FaceVC, the pretrained content encoder and the pretrained speaker encoder are fixed, while the pretrained decoder is further optimized with the other modules.
In order to achieve speech representation disentanglement, mutual information (MI) is introduced to evaluate the dependency between different representations. We minimize the MI between content embeddings, speaker embeddings and fundamental frequencies utilizing the variational contrastive log-ratio upper bound (vCLUB) (Chen et al., 2016) during the training phase of zero-shot VC. In addition to the MI loss, the reconstruction loss, InfoNCE loss (Wang et al., 2017) and VQ loss (Wang et al., 2018) are used to optimize the zero-shot VC model. For more information on these loss functions, please refer to the VQMIVC (Wang et al., 2018) paper.
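As a rough sketch (following the publicly described CLUB formulation rather than the exact VQMIVC implementation, so the network and training details are assumptions), MI minimization with vCLUB can be realized with a small variational network that predicts one representation from the other:

```python
import torch
import torch.nn as nn

class VCLUB(nn.Module):
    """Variational CLUB estimator of an MI upper bound between x and y."""
    def __init__(self, x_dim, y_dim, hidden=256):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))
        self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, y_dim), nn.Tanh())

    def log_likelihood(self, x, y):
        # used to fit the variational approximation q(y|x) = N(mu(x), diag(exp(logvar(x))))
        mu, logvar = self.mu(x), self.logvar(x)
        return (-((y - mu) ** 2) / logvar.exp() - logvar).sum(dim=-1).mean()

    def mi_upper_bound(self, x, y):
        # sampled CLUB bound: positive pairs (x_i, y_i) minus shuffled (negative) pairs
        mu, logvar = self.mu(x), self.logvar(x)
        pos = -((y - mu) ** 2) / logvar.exp()
        neg = -((y[torch.randperm(y.size(0))] - mu) ** 2) / logvar.exp()
        return (pos.sum(dim=-1) - neg.sum(dim=-1)).mean()

# usage: alternately (1) fit the variational net by maximizing log_likelihood on
# (content, speaker) pairs, and (2) add mi_upper_bound(content, speaker) to the VC
# training loss so the encoders are pushed to reduce their mutual information
```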
## 4. Experiments
### Datasets
Zero-shot FaceVC tasks place high demands on datasets, requiring not only a large volume of background-noise-free speech from various speakers but also clear face images of the corresponding individuals. Based on the above considerations, we conducted the experiments on the LRS3-TED (Chen et al., 2016) dataset to evaluate the zero-shot FaceVC task. This dataset includes 5,594 TED and TEDx talks in English, totaling over 400 hours of video content. The cropped face tracks in the video are provided at a resolution of 224\(\times\)224 with a frame rate of 25 frames per second. The audio tracks are also available in a single-channel 16-bit 16 kHz format. In this dataset, the duration of speech from different speakers follows a long-tail distribution, which means that the majority of speakers have only a small number of utterances. To address the issue of uneven video distribution among speakers, we opted to use the top 200 speakers with the highest number of videos as our training set and validation set. During inference, we randomly selected a total of 12 newly coming speakers not in the training and validation sets: 8 target speakers (4 female and 4 male) and 4 source speakers (2 female and 2 male) for evaluation.
### Implementation Details
To extract acoustic features, we first extracted the audio from the video clips with the FFmpeg tool (Wang et al., 2017). Then 80-dim mel-spectrograms and normalized fundamental frequencies were calculated with a 25ms Hanning window, a 10ms frame shift and a 400-point short-time Fourier transform (STFT). We extracted a 512-dimensional face embedding from each frame of the video using MTCNN (Wang et al., 2017) and FaceNet (Wang et al., 2017) subsequently. The dimension of face embeddings was projected into \(D=256\), which was the same as the dimension of speaker embeddings and the slot dimension. The number of slots in MFVA was \(N=96\). We applied the pretrained Parallel WaveGAN (PWG) vocoder (Wang et al., 2017) to convert the mel-spectrograms to waveforms.
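A sketch of this feature-extraction pipeline is shown below; the choice of `librosa` and `facenet-pytorch`, the log compression of the mel-spectrogram, and the file names are illustrative assumptions rather than the exact tooling used.

```python
import librosa
import numpy as np
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image

# 80-dim mel-spectrogram with a 25 ms Hanning window, 10 ms shift,
# and a 400-point STFT at 16 kHz
wav, sr = librosa.load("clip.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_fft=400, hop_length=160,
                                     win_length=400, window="hann", n_mels=80)
log_mel = np.log(np.maximum(mel, 1e-10))   # log compression is an assumption

# 512-dim face embedding from one video frame: MTCNN for detection,
# FaceNet (InceptionResnetV1) for the embedding
mtcnn = MTCNN(image_size=160)
facenet = InceptionResnetV1(pretrained="vggface2").eval()
face = mtcnn(Image.open("frame.jpg"))               # cropped face tensor
with torch.no_grad():
    face_embedding = facenet(face.unsqueeze(0))     # shape (1, 512)
```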
For the pretraining strategy, the zero-shot VC model was trained for 1000 epochs using a batch size of 256. The mini-batch Adam optimizer was initialized with a learning rate of 1e-6 and was warmed up to 1e-3 after 2000 iterations. The learning rate was then decayed by a factor of 0.5 at epochs 300, 400, and 500. At the zero-shot FaceVC training stage, the model was updated for 2000 epochs using a batch size of 256. Similar to pretraining the zero-shot VC model, the learning rate of the Adam optimizer was initialized as 1e-6 and warmed up to 2.5e-4 after 3000 iterations. The learning rate was then decayed by a factor of 0.5 at epochs 800, 1200, and 1600. The constant weights \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) in Equation 12 were respectively set to be 1, 10, and 0.2.
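A minimal sketch of the warm-up-then-decay learning-rate schedule is given below; `steps_per_epoch` and the training-loop skeleton are hypothetical, since the exact number of iterations per epoch is not specified.

```python
import torch

def lr_lambda(step, warmup_steps=3000, base_lr=2.5e-4, init_lr=1e-6,
              decay_epochs=(800, 1200, 1600), steps_per_epoch=100, gamma=0.5):
    # linear warm-up from init_lr to base_lr, then 0.5x decay at the given epochs
    if step < warmup_steps:
        lr = init_lr + (base_lr - init_lr) * step / warmup_steps
    else:
        epoch = step // steps_per_epoch
        lr = base_lr * gamma ** sum(epoch >= e for e in decay_epochs)
    return lr / base_lr   # LambdaLR multiplies the optimizer's base learning rate

model = torch.nn.Linear(10, 10)   # stand-in for the FVMVC model
optimizer = torch.optim.Adam(model.parameters(), lr=2.5e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(5000):          # training-loop skeleton (loss/backward omitted)
    optimizer.step()
    scheduler.step()
```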
### Comparison systems
As we are the first to attempt the zero-shot FaceVC task and there are no existing comparable methods, we compared our proposed method with the following systems to evaluate its performance:
(1) **Ground Truth**: This method transferred the natural mel-spectrograms of target speakers to waveforms using the pretrained PWG vocoder. Since there are no parallel utterances between source and target speakers, the ground truth results cannot be compared directly with the converted results, and are only used to indicate the upper bound of the various metrics.
(2) **SpeechVC**: This is our pretrained zero-shot VC model using natural reference utterances of target speakers for inference.
(3) **Auto-FaceVC**(Wang et al., 2017): This method originally adopted AutoVC (Wang et al., 2017) as the backbone. To better adapt the model to the zero-shot FaceVC task, we altered its backbone AutoVC to VQMIVC while preserving its original training strategy.
(4) **attentionCVAE**(Wang et al., 2017): This method used the face attributes to control the voice characteristics in the multi-speaker text-to-speech
task. To adapt it to our zero-shot FaceVC task, we inserted its face attributes-based voice control module to replace the face encoder and the MFVA module in our proposed model.
### Metrics
We developed several objective metrics to evaluate the homogeneity, diversity and consistency of the converted voice. The underlying motivation for measuring homogeneity is that the zero-shot FaceVC system should generate homogeneous voice characteristics from different face images of the same target speaker, regardless of the image's shooting angle and background. We applied the well-known open-source speaker verification toolkit, _Resemblyzer_3, to extract the speaker embedding from converted utterances of the same speaker and calculate the cosine similarity between them. The greater the value of cosine similarity, the higher the homogeneity between different utterances. Based on the descriptions above, we employed two methods to match the utterances and calculate the cosine similarity between them. (1) We employed a randomized approach to match utterances converted by different face images of the same target speaker. To avoid chance, we shuffled the utterances 500 times and calculated the average cosine similarity between all pairs, which we refer to as the speaker homogeneity score by random matching (**SHR**). (2) In addition to the random matching, we also conducted one-to-one matching of all utterances converted from the same utterance to the same target speaker by different face images. We then averaged the cosine similarity between these pairs to obtain the speaker homogeneity score by one-to-one matching (**SHO**).
Footnote 3: [https://github.com/research-ai/Resemblyzer](https://github.com/research-ai/Resemblyzer)
Apart from homogeneity, it is crucial for the voice characteristics converted from different target speakers to be diverse rather than uniform and indistinguishable. Similar to homogeneity, we obtained the speaker embeddings of utterances converted from the same source speaker using different target speakers' face images with _Resemblyzer_ and calculated the cosine similarity between them. We hypothesize that a lower similarity indicates higher diversity in the voice characteristics between target speakers. To measure speaker diversity, we also matched the utterances in two ways. (1) We randomly matched the converted utterances from the same source speaker to different target speakers and averaged the cosine similarity between them over 100 shuffles, which we refer to as the speaker diversity score by random matching (**SDR**). (2) We conducted a one-to-one matching of utterances converted from the same utterance to different target speakers and averaged the cosine similarity between them to obtain the speaker diversity score by one-to-one matching (**SDO**).
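The sketch below illustrates how such embedding-similarity scores can be computed with _Resemblyzer_; the file names are placeholders, and the full SHR/SDR protocols additionally average over the repeated random shuffles described above.

```python
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

def embed(paths):
    # 256-dim speaker embedding per utterance
    return np.stack([encoder.embed_utterance(preprocess_wav(p)) for p in paths])

def mean_pairwise_cosine(embs_a, embs_b):
    # average cosine similarity over matched pairs (rows are already paired)
    a = embs_a / np.linalg.norm(embs_a, axis=1, keepdims=True)
    b = embs_b / np.linalg.norm(embs_b, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

# SHO-style score: the same source utterance converted to the same target
# speaker via two different face images (higher similarity = more homogeneous)
sho = mean_pairwise_cosine(embed(["spk1_face1_utt1.wav"]),
                           embed(["spk1_face2_utt1.wav"]))

# SDO-style score: the same source utterance converted to two different
# target speakers (lower similarity = more diverse)
sdo = mean_pairwise_cosine(embed(["spk1_face1_utt1.wav"]),
                           embed(["spk2_face1_utt1.wav"]))
```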
When assessing the consistency between the voice characteristics and the corresponding face images, the gender attribute is the primary factor to consider. Hence, we used the open-source speech segmentation toolkit, _inaSpeechSegmenter_4, to calculate gender accuracy (**GA**) for each converted utterance. Specifically, a speech segmenter (Chen et al., 2017) was first used to discard segments that did not contain any speech. Next, the remaining speech segments were classified into either male or female using convolutional neural networks. The gender of a converted utterance was considered consistent with that of the target speaker if all the speech segments in this utterance were classified as the gender of the target speaker.
Footnote 4: [https://github.com/ina-foss/inaSpeechSegmenter](https://github.com/ina-foss/inaSpeechSegmenter)
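Gender accuracy can be computed along the following lines (a sketch; treating an utterance as consistent only if every detected speech segment matches the target gender, as described above):

```python
from inaSpeechSegmenter import Segmenter

seg = Segmenter()

def gender_consistent(wav_path, target_gender):
    # the segmenter returns (label, start, stop) tuples; labels include
    # 'male', 'female', 'music', 'noEnergy', 'noise'
    segments = seg(wav_path)
    speech = [lab for lab, _, _ in segments if lab in ("male", "female")]
    # consistent only if every speech segment matches the target gender
    return len(speech) > 0 and all(lab == target_gender for lab in speech)

ga = gender_consistent("converted.wav", "female")   # placeholder file name
```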
For subjective evaluation, we adopted two mean opinion scores in terms of face-voice consistent degree (**MOS-FVC**) and speech naturalness (**MOS-SN**). MOS-FVC was used to evaluate whether the face image and the voice characteristics were consistent with each other, e.g., a middle-aged man's face with the little girl's voice would be considered inconsistent. MOS-SN was used to quantitatively measure the naturalness of the converted voice. The listeners were asked to score each converted utterance on a scale from 1 (completely unnatural or completely inconsistent) to 5 (completely natural or completely consistent) for two metrics.
### Evaluation Results
We chose 6 utterances from each of the 4 source speakers and randomly selected one face frame in 3 videos from each of the 8 target speakers for inference. Then we matched them pairwise and converted a total of 576 utterances for objective evaluation. Two subjective metrics were evaluated on the Amazon Mechanical Turk platform5. 20 converted utterances were randomly selected from each system and a total of 20 listeners participated in the test. All objective and subjective evaluation results are reported in Table 1.
Footnote 5: [https://www.mturk.com/](https://www.mturk.com/)
We can observe that the proposed **FVMVC** outperformed the **Auto-FaceVC** and **attentionCVAE** systems on all objective metrics significantly (\(p<0.05\) in paired t-tests). Compared with **Auto-FaceVC**, the slots in MFVA quantify the voice characteristics space, which makes the voice control via face images more homogeneous. Additionally, our proposed **FVMVC** incorporates the MFVA module to alleviate the problem of over-smoothing, resulting in a more
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{Homogeneity} & \multicolumn{2}{c}{Diversity} & \multicolumn{2}{c}{Consistency} & Quality \\ \cline{2-7} & SHR \(\uparrow\) & SHO \(\uparrow\) & SDR \(\downarrow\) & SDO \(\downarrow\) & GA \(\uparrow\) & MOS-FVC \(\uparrow\) & MOS-SN \(\uparrow\) \\ \hline
**Ground Truth** & 0.8245 & 1.0000 & 0.5524 & 0.5524 & 1.0000 & 3.7042 & 4.2183 \\
**SpeechVC** & 0.7267 & 0.8229 & 0.5890 & 0.6408 & 0.9895 & 3.5917 & 3.6022 \\ \hline
**Auto-FaceVC**(Kumar et al., 2017) & 0.7186 & 0.8132 & 0.6351 & 0.7042 & 0.9239 & 3.4289 & 3.5969 \\
**attentionCVAE**(Kumar et al., 2017) & 0.7153 & 0.8081 & 0.6874 & 0.7789 & 0.9166 & 3.4292 & 3.5982 \\
**FVMVC** & **0.7313** & **0.8692** & **0.6188** & **0.6781** & **0.9791** & **3.5417** & **3.5993** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Objective and subjective evaluation results of comparison systems. The definitions of all metrics can be found in Section 4.4.
diverse range of voice characteristics. In **attentionCVAE**, facial attributes can only provide limited information such as gender, age, and ethnicity. As a result, while this method ensures relatively consistent voice characteristics and accurate gender, it also tends to generate very similar voices for different target speakers, resulting in a loss of diversity. In addition, following the method described in **attentionCVAE**(Wang et al., 2018), we found that the facial attributes may vary across different face images of the same speaker, which can lead to heterogeneity among the voice characteristics of the same target speakers. With regard to GA, our proposed **FVMVC** has demonstrated a significant improvement, increasing from 90.85% and 91.66% in the two aforementioned methods to 97.91%. The MOS-FVC has a strong correlation with the GA. In cases where the voice and the face image display a clear gender mismatch, the consistency score between them tends to be significantly low. Since the three methods employed the same backbone, their performance in terms of MOS-SN is quite comparable.
We can observe that our proposed **FVMVC** performs better than **SpeechVC** with respect to SHR and SHO. This could be attributed to the presence of stable identity information in face images and several speaker-independent factors included in natural reference speech, such as prosody and emotions. These factors can influence the homogeneity between the converted utterances of the same target speakers. In terms of SDR, SDO and MOS-FVC, our proposed method is less effective than **SpeechVC**, which is caused by the limited amount of voice characteristics information contained in the face image compared to that contained in the natural reference speech. Additionally, for GA and MOS-SN, the performance of our proposed **FVMVC** and **SpeechVC** is essentially similar.
### Visual Analysis
We utilized _Resemblyzer_ to extract speaker embeddings from the utterances generated by the **Ground Truth** and those converted by three other systems, i.e., **SpeechVC**, **Auto-FaceVC**, and **FVMVC**. We present their t-SNE (Beng et al., 2017) visualization in Figure 4. For the **Auto-FaceVC** system, some embedding clusters contained both male and female target speakers, as shown in the red box of Figure 4(c). As a result, a single converted utterance could contain both male and female voices in different segments. On the other hand, our proposed **FVMVC** model produced embeddings with a clear boundary between the two genders, which further demonstrates the effectiveness of our method on the GA and MOS-FVC metrics. In addition, the embeddings of different target speakers overlapped considerably for the **Auto-FaceVC** system. In contrast, similar to **SpeechVC**, our proposed **FVMVC** model showed much less embedding overlap across target speakers, which also further confirms the better speaker diversity achieved by our method.
### Case Study
We selected 6 face images of 2 target speakers, taken from different perspectives, for voice conversion, as shown in Figure 3. The first
Figure 4. The t-SNE visualization of the speaker embeddings extracted from 576 utterances converted by different systems. Each point corresponds to a single utterance, with its colour indicating the identity of the target speaker; \(\bullet\) represents male target speakers and \(\times\) represents female target speakers.
Figure 3. Target face images and their corresponding slots weights calculated by the MFVA module at the inference stage. The third row represents the mel-spectrograms of the converted utterance by FVMVC.
three columns belong to the first target speaker, and the last three columns belong to the second target speaker. The slot weights and corresponding mel-spectrograms of converted utterances based on the face images are visualized. We discover that the distributions of slot weights remain consistent across different face images of the same speaker, regardless of the angle or expression displayed in the images. This finding suggests that the speaker's facial features are of decisive importance in the process of aligning face and voice, and are minimally affected by external factors such as camera position, background, and other sources of noise. As we can see from the third row in Figure 3, with the aid of stable recalled face embeddings, the mel-spectrograms converted by different face images exhibit a high level of uniformity.
Additionally, we attempted to achieve voice characteristics interpolation by manipulating the slot weights in the MFVA module. We chose a male and a female target speaker for creating new voices by interpolation, as depicted in Figure 5. Specifically, we blended the slot weights of two face images with distinct mixing weights to obtain the new recalled face embedding. From left to right, as the slot weights of the male speaker \(B\) increase, the voice characteristics gradually shift from female to male, and the fundamental frequency gradually decreases. This further validates the effectiveness of the MFVA module for face-based voice control.
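A minimal sketch of the slot-weight interpolation described above, assuming the recalled face embedding is a convex combination of memory-slot values; the slot count, dimensionality, and weights are made-up placeholders rather than the MFVA module's actual parameters.

```python
import numpy as np

def recalled_embedding(slot_weights, slot_values):
    """Recalled face embedding as a weighted sum over memory slots.
    slot_weights: (K,) nonnegative weights summing to 1; slot_values: (K, D)."""
    return slot_weights @ slot_values

def interpolate_voice(weights_a, weights_b, alpha, slot_values):
    """Blend the slot weights of speakers A and B with ratio alpha:(1-alpha),
    e.g. alpha=0.6 corresponds to the '0.6A + 0.4B' setting in Figure 5."""
    mixed = alpha * weights_a + (1.0 - alpha) * weights_b
    mixed = mixed / mixed.sum()          # keep a valid convex combination
    return recalled_embedding(mixed, slot_values)

# Toy example with K=8 slots and D=192-dimensional slot values.
rng = np.random.default_rng(0)
slot_values = rng.normal(size=(8, 192))
w_female_A = rng.dirichlet(np.ones(8))
w_male_B = rng.dirichlet(np.ones(8))
for alpha in (1.0, 0.6, 0.4, 0.0):
    emb = interpolate_voice(w_female_A, w_male_B, alpha, slot_values)
    print(f"alpha={alpha:.1f}, embedding norm={np.linalg.norm(emb):.3f}")
```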
### Ablation Study
In this section, we conducted ablation experiments on our proposed **FVMVC** to explore the effectiveness of each module. As shown in Table 2, we conducted experiments by removing the inter-speaker supervision, the MFVA module and the pretraining strategy from the proposed **FVMVC**, respectively. The results show that when inter-speaker supervision was removed, the model's performance on SDR, SDO and GA decreased. This suggests that alleviating the inconsistency between training and inference phases can help to fit the recalled face embedding to the decoder, resulting in more diverse voice generation. After removing the MFVA module, the output of the face encoder was directly fed into the decoder without any constraints imposed by speaker embeddings. We found that the model's performance decreased in all aspects, highlighting the crucial role of alignment between face and voice in the model's performance. We also attempted to train the model from scratch without the pretraining strategy. Our results show that the pretraining on the zero-shot VC task has a significant positive impact on the proposed **FVMVC** model.
## 5. Conclusion
In this paper, we propose the FVMVC model to tackle a novel task of zero-shot FaceVC. The slots in the MFVA module act as a link between face and voice, promoting the performance of voice control based on face images of unseen speakers. In addition, we have implemented a mixed supervision strategy to alleviate the long-standing issue of inconsistency between training and inference in VC tasks. As a result, based on the face images of newly coming speakers, the proposed FVMVC is able to generate a more consistent and diverse voice.
As a future direction, we aim to explore unified pre-training for face-voice alignment, with a specific emphasis on voice control within text-to-speech, voice conversion, and singing synthesis tasks. Additionally, we plan to compile a comprehensive, large-scale video dataset featuring multiple speakers, ensuring its cleanliness and incorporating speaker details such as age, gender, race, and physical appearance descriptions.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Method & SHR\(\uparrow\) & SHO\(\uparrow\) & SDR\(\downarrow\) & SDO\(\downarrow\) & GA\(\uparrow\) \\ \hline
**FVMVC** & **0.7313** & **0.8692** & **0.6188** & **0.6781** & **0.9791** \\ \hline w/o Inter-speaker & 0.7301 & 0.8629 & 0.6262 & 0.6908 & 0.9444 \\ w/o MFVA & 0.7124 & 0.8257 & 0.6321 & 0.7111 & 0.9167 \\ w/o Pretraining & 0.7013 & 0.8113 & 0.6436 & 0.7140 & 0.8925 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Objective evaluation results of Ablation Studies. The definitions of all metrics can be found in Section 4.4.
Figure 5. Voice characteristics interpolation by mixing the slot weights of a female speaker \(A\) and a male speaker \(B\). From left to right, the slot weights of the male speaker increase sequentially. Specifically, \(0.6A+0.4B\) means that we combine the slot weights of speaker \(A\) and speaker \(B\) with a ratio of 6:4 to obtain a new recalled face embedding. The third row shows the mel-spectrograms of the utterances converted with the corresponding slot weights.
2309.14347 | Continuous-time control synthesis under nested signal temporal logic
specifications | In this work, we propose a novel approach for the continuous-time control
synthesis of nonlinear systems under nested signal temporal logic (STL)
specifications. While the majority of existing literature focuses on control
synthesis for STL specifications without nested temporal operators, addressing
nested temporal operators poses a notably more challenging scenario and
requires new theoretical advancements. Our approach hinges on the concepts of
signal temporal logic tree (sTLT) and control barrier function (CBF).
Specifically, we detail the construction of an sTLT from a given STL formula
and a continuous-time dynamical system, the sTLT semantics (i.e., satisfaction
condition), and the equivalence or under-approximation relation between sTLT
and STL. Leveraging the fact that the satisfaction condition of an sTLT is
essentially keeping the state within certain sets during certain time
intervals, it provides explicit guidelines for the CBF design. The resulting
controller is obtained through the utilization of an online CBF-based program
coupled with an event-triggered scheme for online updating the activation time
interval of each CBF, with which the correctness of the system behavior can be
established by construction. We demonstrate the efficacy of the proposed method
for single-integrator and unicycle models under nested STL formulas. | Pian Yu, Xiao Tan, Dimos V. Dimarogonas | 2023-09-17T13:29:40Z | http://arxiv.org/abs/2309.14347v2 | # Continuous-time control synthesis under nested signal temporal logic specifications
###### Abstract
Signal temporal logic (STL) has gained popularity in robotics for expressing complex specifications that may involve timing requirements or deadlines. While the control synthesis for STL specifications without nested temporal operators has been studied in the literature, the case of nested temporal operators is substantially more challenging and requires new theoretical advancements. In this work, we propose an efficient continuous-time control synthesis framework for nonlinear systems under nested STL specifications. The framework is based on the notions of signal temporal logic tree (sTLT) and control barrier function (CBF). In particular, we detail the construction of an sTLT from a given STL formula and a continuous-time dynamical system, the sTLT semantics (i.e., satisfaction condition), and the equivalence or under-approximation relation between sTLT and STL. Leveraging the fact that the satisfaction condition of an sTLT is essentially keeping the state within certain sets during certain time intervals, the sTLT provides explicit guidelines for the CBF design. The resulting controller is obtained through an online CBF-based program coupled with an event-triggered scheme for updating the activation time interval of each CBF online, with which the correctness of the system behavior can be established by construction. We demonstrate the efficacy of the proposed method for single-integrator and unicycle models under nested STL formulas.
Signal temporal logic, control barrier function, control synthesis, continuous-time nonlinear systems
## I Introduction
High-level formal languages, which originated in computer science for the specification and verification of computer programs [1], have attracted increasing attention from a wider audience over the last decades, ranging from biological networks [2, 3] to robotics [4, 5]. Temporal logics, such as Linear Temporal Logic (LTL) and Signal Temporal Logic (STL) [6], provide a rigorous, mathematical language for characterizing the expected behaviors of systems. LTL focuses on the Boolean satisfaction of events over a discrete-time state sequence. In comparison, STL allows for characterizing system properties over dense time and is thus more favorable for continuous-time dynamical systems, e.g., robotic and cyber-physical system applications [7, 8].
Designing control strategies for systems to satisfy high-level specifications is known as the control synthesis problem. For LTL specifications, the classic automaton-based control synthesis scheme has been well studied [9, 10] for hybrid and discrete-time dynamical systems. In recent years, several different control synthesis schemes have been proposed for STL specifications. One popular approach is to evaluate the satisfaction of the STL specification over sampled time instants, encode it as a mixed-integer program (MIP), and then solve it in a model predictive control framework [11, 12, 13]. However, the exponential computational complexity with respect to the number of integer variables makes this approach difficult to apply to STL formulas with long time horizons, even for low-dimensional dynamical systems. To address the exponential complexity of integer-based optimization, recent work smoothly approximates the robustness metric of STL and then applies sequential quadratic programming [14] or convex-concave programming [15] to find a solution. In [16], STL formulas are interpreted over stochastic processes and the STL synthesis is reformulated as a probabilistic inference problem. Nevertheless, all these results are restricted to discrete-time systems.
There have been some endeavours in recent years on the continuous-time control synthesis problem for STL specifications, including, to name a few, control barrier function-based [17, 18, 19], automaton-based [20, 21], heuristic-based [22], sampling-based [23], and learning-based [24, 25, 26] methods. Different from the discrete-time control synthesis methods, most of the aforementioned approaches can only handle STL formulas with non-nested temporal operators (we will refer to these formulas as non-nested STL for simplicity in the following). To be more specific, the CBF-based method [17] deals with a fragment of non-nested STL formulas and linear predicates. The recent work in [19] considers a richer STL fragment and provides heuristics on the decomposition and the ordering of sub-tasks, which are then used to construct CBFs. In [21], the sampling-based automaton-guided control synthesis approach allows the consideration of nonlinear predicates, yet it is still restricted to non-nested STL formulas. In [20], a fragment of signal interval temporal logic formulas is considered for automaton-based control synthesis. Moreover, a timed abstraction of the dynamical system is needed, which relies on the assumption that suitable feedback control laws exist. The case of STL formulas with nested temporal operators is substantially more challenging and requires new theoretical advancements. To the best of our knowledge, continuous-time control synthesis for STL specifications with nested temporal operators is still an open problem.
In this work, we aim to develop an efficient control synthesis framework for continuous-time dynamical systems under STL specifications with nested temporal operators, e.g., \(\mathsf{G}_{[a_{1},b_{1}]}\mathsf{F}_{[a_{2},b_{2}]}\mu\), \(\mathsf{F}_{[a_{1},b_{1}]}\mathsf{G}_{[a_{2},b_{2}]}\mu\), \(\mathsf{F}_{[a_{1},b_{1}]}(\mu_{1}\mathsf{U}_{[a_{2},b_{2}]}(\mathsf{F}_{[a_{3},b_{3}]}\mu_{2}))\). Compared to previous CBF-based control synthesis works [17, 18, 19], we provide
a tangible tool, coined as the _signal temporal logic tree (sTLT)_, that explicitly transforms the satisfaction of an STL formula to a series of set invariance conditions, which naturally guides the design of corresponding CBFs. The main contributions of this work are summarized as follows.
1) We introduce the notion of sTLT, detail its construction from a given STL formula and its semantics (i.e., satisfaction condition), and establish the equivalence or under-approximation relation between sTLT and STL.
2) We show how to design CBFs and update their activation time intervals online under the guidance of the sTLT. The control synthesis scheme is given by an online CBF-based program.
3) We deduce the correctness of the system behavior under certain assumptions.
The remainder of this paper is organized as follows. In Sec. II, we give some technical preliminaries and introduce the continuous-time control synthesis problem. In Sec. III, the notion of sTLT is introduced as well as its semantics. Then, we derive the equivalence or under-approximation relation between an STL formula and its constructed sTLT. Finally, we show how to design the CBFs, online update their activation time intervals, and the overall control synthesis scheme. Case studies with single integrator and unicycle dynamics are presented in Sec. IV. The work is then concluded in Sec. V.
## II Preliminaries and Problem Formulation
**Notation.** Let \(\mathbb{R}:=(-\infty,\infty)\), \(\mathbb{R}_{\geq 0}:=[0,\infty)\), and \(\mathbb{N}:=\{0,1,2,\ldots\}\). Denote \(\mathbb{R}^{n}\) as the \(n\) dimensional real vector space, \(\mathbb{R}^{n\times m}\) as the \(n\times m\) real matrix space. Throughout this paper, vectors are denoted in italics, \(x\in\mathbb{R}^{n}\), and boldface \(\mathbf{x}\) is used for continuous-time signals. Let \(\|x\|\) and \(\|A\|\) be the Euclidean norm of vector \(x\) and matrix \(A\). Given a set \(S\subset\mathbb{R}^{n}\), \(\overline{S}\) denotes its complement and \(\partial S\) denotes its boundary. Given a point \(x\in\mathbb{R}^{n}\) and a set \(S\subset\mathbb{R}^{n}\), the distance function is defined as \(\texttt{dist}(x,S):=\inf_{y\in S}\|x-y\|\). The signed distance function \(\texttt{sdist}(x,S)\) is defined as
\[\texttt{sdist}(x,S)=\begin{cases}-\texttt{dist}(x,\overline{S}),&\text{if }x \in S,\\ \texttt{dist}(x,S),&\text{if }x\notin S.\end{cases}\]
Consider a continuous-time dynamical system of the form
\[\Sigma:\dot{x}=f(x,u), \tag{1}\]
where \(x\in\mathbb{R}^{n}\) and \(u\in U\subseteq\mathbb{R}^{m}\) are respectively the state and input of the system, the function \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is locally Lipschitz continuous in \(x\) and \(u\).
Let \(\mathcal{U}\) be the set of all measurable functions that take their values in \(U\) and are defined on \(\mathbb{R}_{\geq 0}\). A curve \(\mathbf{x}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\) is said to be a trajectory of (1) if there exists an input signal \(\mathbf{u}\in\mathcal{U}\) satisfying (1) for almost all \(t\in\mathbb{R}_{\geq 0}\). We use \(\mathbf{x}_{x_{0}}^{\mathbf{u}}(t)\) to denote the trajectory point reached at time \(t\) under the input signal \(\mathbf{u}\) from the initial state \(x_{0}\).
### _Signal temporal logic_
Signal temporal logic (STL) [6] is a predicate logic based on continuous-time signals. When \(\mathbf{x}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\) is considered, the predicate \(\mu\) at time \(t\) is obtained after evaluation of a predicate function \(g_{\mu}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) as follows
\[\mu:=\begin{cases}\top,&\text{if}\quad g_{\mu}(\mathbf{x}(t))\geq 0\\ \bot,&\text{if}\quad g_{\mu}(\mathbf{x}(t))<0.\end{cases}\]
In [13], it was shown that each STL formula has an equivalent STL formula in positive normal form (PNF), _i.e._, negations only occur adjacent to predicates. The syntax of the PNF STL is given by
\[\begin{split}\varphi::=\top\mid\mu\mid&\neg\mu\mid\varphi_{1} \wedge\varphi_{2}\mid\varphi_{1}\vee\varphi_{2}\\ &\mid\varphi_{1}\mathsf{U}_{[a,b]}\varphi_{2}\mid\mathsf{F}_{[a,b]} \varphi\mid\mathsf{G}_{[a,b]}\varphi,\end{split} \tag{2}\]
where \(\varphi,\varphi_{1},\varphi_{2}\) are STL formulas and \([a,b],0\leq a\leq b<\infty\), denotes a time interval. Here, \(\wedge\) and \(\vee\) are logic operators "conjunction" and "disjunction", \(\mathsf{U}_{[a,b]}\), \(\mathsf{F}_{[a,b]}\), and \(\mathsf{G}_{[a,b]}\) are temporal operators "until", "eventually", and "always", respectively.
**Definition 1** (STL semantics [27]).: _The validity of an STL formula \(\varphi\) with respect to a continuous-time signal \(\mathbf{x}\) evaluated at time \(t\), is defined inductively as follows:_
\[\begin{split}(\mathbf{x},t)\vDash\mu&\Leftrightarrow g_{\mu}(\mathbf{x}(t))\geq 0,\\ (\mathbf{x},t)\vDash\neg\mu&\Leftrightarrow\neg((\mathbf{x},t)\vDash\mu),\\ (\mathbf{x},t)\vDash\varphi_{1}\wedge\varphi_{2}&\Leftrightarrow(\mathbf{x},t)\vDash\varphi_{1}\wedge(\mathbf{x},t)\vDash\varphi_{2},\\ (\mathbf{x},t)\vDash\varphi_{1}\vee\varphi_{2}&\Leftrightarrow(\mathbf{x},t)\vDash\varphi_{1}\vee(\mathbf{x},t)\vDash\varphi_{2},\\ (\mathbf{x},t)\vDash\varphi_{1}\mathsf{U}_{[a,b]}\varphi_{2}&\Leftrightarrow\exists t^{\prime}\in[t+a,t+b]\text{ s.t. }(\mathbf{x},t^{\prime})\vDash\varphi_{2}\\ &\qquad\wedge\ \forall t^{\prime\prime}\in[t,t^{\prime}],\ (\mathbf{x},t^{\prime\prime})\vDash\varphi_{1},\\ (\mathbf{x},t)\vDash\mathsf{F}_{[a,b]}\varphi&\Leftrightarrow\exists t^{\prime}\in[t+a,t+b]\text{ s.t. }(\mathbf{x},t^{\prime})\vDash\varphi,\\ (\mathbf{x},t)\vDash\mathsf{G}_{[a,b]}\varphi&\Leftrightarrow\forall t^{\prime}\in[t+a,t+b],\ (\mathbf{x},t^{\prime})\vDash\varphi.\end{split}\]
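Although the semantics above is defined over dense time, it can be checked approximately by evaluating the operators over a finely sampled trajectory; the sketch below does this for predicates, \(\mathsf{F}\), \(\mathsf{G}\), and \(\mathsf{U}\), and makes no claim about satisfaction between samples.

```python
import numpy as np

def sat_predicate(g, xs):
    # Per-sample Boolean satisfaction of a predicate mu with function g_mu.
    return np.array([g(x) >= 0.0 for x in xs])

def sat_eventually(sat_phi, ts, t, a, b):
    # (x, t) |= F_[a,b] phi : phi holds at some sample in [t+a, t+b].
    mask = (ts >= t + a) & (ts <= t + b)
    return bool(np.any(sat_phi[mask]))

def sat_always(sat_phi, ts, t, a, b):
    # (x, t) |= G_[a,b] phi : phi holds at every sample in [t+a, t+b].
    mask = (ts >= t + a) & (ts <= t + b)
    return bool(np.all(sat_phi[mask]))

def sat_until(sat_p1, sat_p2, ts, t, a, b):
    # (x, t) |= phi1 U_[a,b] phi2 : phi2 holds at some t' in [t+a, t+b]
    # and phi1 holds at every sample in [t, t'].
    for k, tp in enumerate(ts):
        if t + a <= tp <= t + b and sat_p2[k]:
            if np.all(sat_p1[(ts >= t) & (ts <= tp)]):
                return True
    return False

# Example: 1-D signal x(t) = t on [0, 10]; mu1: x <= 8, mu2: x >= 5.
ts = np.linspace(0.0, 10.0, 1001)
xs = ts.copy()
s1 = sat_predicate(lambda x: 8.0 - x, xs)   # g_mu1(x) = 8 - x
s2 = sat_predicate(lambda x: x - 5.0, xs)   # g_mu2(x) = x - 5
print(sat_eventually(s2, ts, 0.0, 0.0, 6.0))  # True: x reaches 5 at t = 5
print(sat_always(s1, ts, 0.0, 0.0, 9.0))      # False: x exceeds 8 after t = 8
print(sat_until(s1, s2, ts, 0.0, 2.0, 6.0))   # True
```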
**Definition 2**.: _Consider the dynamical system \(\Sigma\) in (1) and the STL formula \(\varphi\) in (2). We say \(\varphi\) is satisfiable from the initial state \(x_{0}\) if there exists a control signal \(\mathbf{u}\in\mathcal{U}\) such that \((\mathbf{x}_{x_{0}}^{\mathbf{u}},0)\vDash\varphi\)._
Given an STL formula \(\varphi\), the set of initial states from which \(\varphi\) is satisfiable is denoted by
\[\mathbb{S}_{\varphi}:=\{x_{0}\in\mathbb{R}^{n}|\varphi\text{ is satisfiable from }x_{0}\}. \tag{3}\]
For simplicity, we will refer to \(\mathbb{S}_{\varphi}\) as _the satisfying set_ for \(\varphi\) in the following. Please be aware that the computation of the set \(\mathbb{S}_{\varphi}\) is tailored to the dynamical system \(\Sigma\) under consideration. Here we omit it for notation simplicity.
### _Reachability operators_
In this section, we define two reachability operators.
**Definition 3**.: _Consider the system (1), a set \(\mathcal{S}\subseteq\mathbb{R}^{n}\), and a time interval \([a,b]\). The maximal reachable set \(\mathcal{R}^{M}(\mathcal{S},[a,b])\) is defined as_
\[\mathcal{R}^{M}(\mathcal{S},[a,b])=\Big{\{}x_{0}\in\mathbb{R}^{n}:\exists\mathbf{u}\in\mathcal{U},\ \exists t^{\prime}\in[a,b],\ \text{s.t. }\mathbf{x}_{x_{0}}^{\mathbf{u}}(t^{\prime})\in\mathcal{S}\Big{\}}.\]
**Definition 4**.: _Consider the system (1), the set \(\mathcal{S}\subseteq\mathbb{R}^{n}\), and a time interval \([a,b]\). The minimal reachable set \(\mathcal{R}^{m}(\mathcal{S},[a,b])\) is defined as_
\[\mathcal{R}^{m}(\mathcal{S},[a,b])=\Big{\{}x_{0}\in\mathbb{R}^{n}:\forall\mathbf{u}\in\mathcal{U},\ \exists t^{\prime}\in[a,b],\ \text{s.t. }\mathbf{x}_{x_{0}}^{\mathbf{u}}(t^{\prime})\in\mathcal{S}\Big{\}}.\]
The set \(\mathcal{R}^{M}(\mathcal{S},[a,b])\) collects all states in \(\mathbb{R}^{n}\) from which there exists an input signal \(\mathbf{u}\in\mathcal{U}\) that drives the system to the target set \(\mathcal{S}\) at some time instant \(t^{\prime}\in[a,b]\). The set \(\mathcal{R}^{m}(\mathcal{S},[a,b])\) collects all states in \(\mathbb{R}^{n}\) from which, no matter what input signal \(\mathbf{u}\in\mathcal{U}\) is applied, the system reaches the target set \(\mathcal{S}\) at some time instant \(t^{\prime}\in[a,b]\).
Let \(\mathcal{S}\) be represented by the zero superlevel set of a continuous function: \(\mathcal{S}=\{x\in\mathbb{R}^{n}:h_{\mathcal{S}}(x)\geq 0\}\). Similarly, let \(\mathcal{R}^{M}(\mathcal{S},[a,b])\) and \(\mathcal{R}^{m}(\mathcal{S},[a,b])\) be represented by the zero superlevel set of some continuous functions, i.e.,
\[\mathcal{R}^{M}(\mathcal{S},[a,b]):=\{x:h_{\mathcal{R}^{M}( \mathcal{S},[a,b])}(x)\geq 0\},\] \[\mathcal{R}^{m}(\mathcal{S},[a,b]):=\{x:h_{\mathcal{R}^{m}( \mathcal{S},[a,b])}(x)\geq 0\}.\]
As shown in [28], the calculation of maximal and minimal reachable sets can be cast as an optimal control problem, given below:

\[h_{\mathcal{R}^{M}(\mathcal{S},[a,b])}(x) =\max_{\mathbf{u}\in\mathcal{U}}\max_{s\in[a,b]}h_{\mathcal{S}}(\mathbf{x}_{x}^{\mathbf{u}}(s)),\] \[h_{\mathcal{R}^{m}(\mathcal{S},[a,b])}(x) =\min_{\mathbf{u}\in\mathcal{U}}\max_{s\in[a,b]}h_{\mathcal{S}}(\mathbf{x}_{x}^{\mathbf{u}}(s)).\]
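As a concrete instance of these value functions, consider a single integrator \(\dot{x}=u\) with \(\|u\|\leq u_{\max}\) and a disc target set; since any distance up to \(u_{\max}b\) can be covered within the horizon (and the state can stop inside the target), \(h_{\mathcal{R}^{M}(\mathcal{S},[a,b])}\) admits the closed form sketched below. This is specific to this dynamics/target pair and is only an illustration.

```python
import numpy as np

def h_disc(x, c, r):
    # Value function of the disc S = {x : r - ||x - c|| >= 0}.
    return r - np.linalg.norm(np.asarray(x) - np.asarray(c))

def h_reach_max_disc(x, c, r, u_max, b):
    """Closed-form h_{R^M(S,[a,b])} for a single integrator and a disc target:
    within time b the state can cover any distance up to u_max*b, so
    max_u max_{s in [a,b]} h_S(x(s)) = r - max(0, ||x - c|| - u_max*b)."""
    d = np.linalg.norm(np.asarray(x) - np.asarray(c))
    return r - max(0.0, d - u_max * b)

# A point 5 m from the centre of a radius-1 disc: unreachable now (value -4),
# but reachable within 5 s at u_max = 1 (value 1).
print(h_disc([5.0, 0.0], [0.0, 0.0], 1.0))                               # -4.0
print(h_reach_max_disc([5.0, 0.0], [0.0, 0.0], 1.0, u_max=1.0, b=5.0))   #  1.0
```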
In the following, relations are established between the STL temporal operators \(\mathsf{F}_{[a,b]}\) and \(\mathsf{G}_{[a,b]}\) and the maximal/minimal reachable sets.
**Lemma 1** ([28]).: _Given the system (1) and the STL predicate \(\mu_{1}\), one has_
1. \(\mathsf{S}_{\mathsf{F}_{[a,b]}\mu_{1}}=\mathcal{R}^{M}(\mathsf{S}_{\mu_{1}},[a,b])\)_, and_
2. \(\mathsf{S}_{\mathsf{G}_{[a,b]}\mu_{1}}=\overline{\mathcal{R}^{m}(\overline{\mathsf{S}_{\mu_{1}}},[a,b])}\)_,_
_where \(\mathsf{S}_{\mathsf{F}_{[a,b]}\mu_{1}}\) and \(\mathsf{S}_{\mathsf{G}_{[a,b]}\mu_{1}}\) are the satisfying sets for \(\mathsf{F}_{[a,b]}\mu_{1}\) and \(\mathsf{G}_{[a,b]}\mu_{1}\), respectively._
**Definition 5**.: _Given any STL formula \(\varphi\), let its satisfying set \(\mathbb{S}_{\varphi}\) in (3) be represented by the zero superlevel set of a function \(h_{\mathbb{S}_{\varphi}}:\mathbb{R}^{n}\to\mathbb{R}\), i.e., \(\mathbb{S}_{\varphi}=\{x\in\mathbb{R}^{n}:h_{\mathbb{S}_{\varphi}}(x)\geq 0\}\). We refer to such function \(h_{\mathbb{S}_{\varphi}}\) as a value function associated with the STL formula \(\varphi\)._
Note that given a predicate \(\mu\), \(g_{\mu}\) is a value function associated with \(\mu\). In the general case, we can use the signed distance function to denote a value function, i.e., \(h_{\mathbb{S}_{\varphi}}(x)=-\operatorname{\mathtt{sdist}}(x,\mathbb{S}_{ \varphi})\).
### _Time-varying control barrier functions_
Define a differentiable function \(\mathfrak{b}:X\times[t_{0},t_{1}]\to\mathbb{R}\) and the associated set
\[\mathcal{C}(t):=\{x\in X|\mathfrak{b}(x,t)\geq 0\}. \tag{4}\]
Then, we have the following definition.
**Definition 6** (CBF [29]).: _A function \(\mathfrak{b}:X\times[t_{0},t_{1}]\to\mathbb{R}\) is called a valid control barrier function (vCBF) for (1) if there exists a locally Lipschitz continuous class \(\mathcal{K}\) function \(\alpha\) such that, for all \((x,t)\in\mathcal{C}(t)\times[t_{0},t_{1}]\),_
\[\sup_{u\in U}\left\{\frac{\partial\mathfrak{b}(x,t)}{\partial x }f(x,u)+\frac{\partial\mathfrak{b}(x,t)}{\partial t}\right\}\geq-\alpha( \mathfrak{b}(x,t)). \tag{5}\]
If \(x_{0}\in\mathcal{C}(t_{0})\) and \(\mathfrak{b}(x,t)\) is a vCBF, then any locally Lipschitz control \(\mathbf{u}\) satisfying (5) guarantees \(\mathbf{x}_{x_{0}}^{\mathbf{u}}(t)\in\mathcal{C}(t)\) for all \(t\in[t_{0},t_{1}]\). This can be shown by, for example, invoking the Comparison Lemma [30].
### _Problem formulation_
Before moving on, we first introduce the notion of nested STL formulas.
**Definition 7** (Nested STL formula).: _We call an STL formula \(\varphi\) nested if it can be written in one of the following forms:_
\[\varphi=\mathsf{F}_{[a,b]}\varphi_{1}, \tag{6}\] \[\varphi=\mathsf{G}_{[a,b]}\varphi_{1},\] (7) \[\varphi=\varphi_{1}\mathsf{U}_{[a,b]}\varphi_{2}, \tag{8}\]
_where \(\varphi_{1}\) in (6)-(7) and at least one of \(\varphi_{1}\) and \(\varphi_{2}\) in (8) include temporal operators. In addition, \(\varphi_{1}\) in (6)-(7) and \(\varphi_{1},\varphi_{2}\) in (8) are called the argument(s) of the STL formula \(\varphi\)._
Examples of nested STL formulas include \(\mathsf{F}_{[a_{1},b_{1}]}\mathsf{G}_{[a_{2},b_{2}]}\mu,\mathsf{G}_{[a_{1},b_{1 }]}\mathsf{F}_{[a_{2},b_{2}]}\mu\), and \(\mu_{1}\mathsf{U}_{[a_{1},b_{1}]}(\mathsf{G}_{[a_{2},b_{2}]}\mu_{2}\wedge \mathsf{F}_{[a_{3},b_{3}]}\mu_{3})\), etc.
In [17, 18], a continuous-time control-affine system of the form
\[\dot{x}=f(x)+g(x)u \tag{9}\]
is considered, and appropriate CBFs are designed for a fragment of non-nested STL formulas, which we briefly recap here:
* For \(\mathsf{G}_{[a,b]}\mu_{1}\), select \(\mathfrak{b}(x,t)\) s.t. \(\mathfrak{b}(x,t^{\prime})\leq g_{\mu_{1}}(x)\) for all \(t^{\prime}\in[a,b]\),
* For \(\mathsf{F}_{[a,b]}\mu_{1}\), select \(\mathfrak{b}(x,t)\) s.t. \(\mathfrak{b}(x,t^{\prime})\leq g_{\mu_{1}}(x)\) for some \(t^{\prime}\in[a,b]\).
* For \(\mu_{1}\mathsf{U}_{[a,b]}\mu_{2}\), it is encoded as \(\mathsf{G}_{[0,b]}\mu_{1}\wedge\mathsf{F}_{[a,b]}\mu_{2}\),
where \(g_{\mu_{1}}\) and \(g_{\mu_{2}}\) are the predicate functions of \(\mu_{1}\) and \(\mu_{2}\), respectively. Once a vCBF is obtained (the explicit CBF construction is investigated in [18]) for the non-nested STL formula, the control strategy for (9) is given by solving a quadratic program (QP)
\[\min_{u\in U}\quad u^{T}Qu\] \[\text{s.t.}\ \frac{\partial\mathfrak{b}(x,t)}{\partial x}(f(x)+g(x)u)+\frac{\partial\mathfrak{b}(x,t)}{\partial t}\geq-\alpha(\mathfrak{b}(x,t)). \tag{10}\]
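For intuition, with \(Q=I\), a single constraint, and input bounds ignored, the program (10) admits a closed-form min-norm solution. The sketch below pairs it with a hand-picked time-varying CBF \(\mathfrak{b}(x,t)=\gamma(t)-\|x-x_{\text{goal}}\|^{2}\) for a single integrator; the goal, the schedule \(\gamma\), and the class-\(\mathcal{K}\) function are illustrative assumptions, not the CBFs constructed later from the sTLT.

```python
import numpy as np

def min_norm_cbf_qp(a, rhs):
    """Closed-form solution of  min ||u||^2  s.t.  a^T u >= rhs  (Q = I, no
    input bounds).  Returns None if infeasible (a = 0 while rhs > 0)."""
    if rhs <= 0.0:
        return np.zeros_like(a)
    na2 = float(a @ a)
    if na2 < 1e-12:
        return None
    return (rhs / na2) * a

# Hand-picked time-varying CBF for a single integrator xdot = u:
#   b(x, t) = gamma(t) - ||x - x_goal||^2, with gamma decreasing so that
#   keeping b >= 0 forces progress towards the goal.
x_goal = np.array([4.0, 3.0])
gamma = lambda t: 25.0 - 2.0 * t       # assumed schedule (illustrative only)
dgamma = lambda t: -2.0
alpha = lambda s: 1.0 * s              # linear class-K function

x, t, dt = np.array([0.0, 0.0]), 0.0, 0.01
for _ in range(1000):                  # simulate 10 s with forward Euler
    b = gamma(t) - np.sum((x - x_goal) ** 2)
    grad_x = -2.0 * (x - x_goal)       # db/dx for this choice of b
    u = min_norm_cbf_qp(grad_x, -alpha(b) - dgamma(t))
    if u is None:
        break
    x, t = x + dt * u, t + dt

print("final distance to goal:", np.linalg.norm(x - x_goal))
print("final b(x,t) (stays near 0 up to discretization error):",
      gamma(t) - np.sum((x - x_goal) ** 2))
```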
In this work, we consider the continuous-time control synthesis for nested STL formulas as per Definition 7. Formally, the problem is stated as follows.
**Problem 1**.: _Consider the dynamical system in (1) and a nested STL formula \(\varphi\). Derive a continuous-time control strategy \(\mathbf{u}\) such that the resulting trajectory \(\mathbf{x}\) of (1) with initial state \(x_{0}\) satisfies \(\varphi\), i.e., \((\mathbf{x}_{x_{0}}^{\mathbf{u}},0)\models\varphi\)._
## III Solving the Control Synthesis Problem
In this work, we aim to formulate the continuous-time control synthesis problem for a nested STL specification \(\varphi\) as a CBF-based program as in [17]. Here, the difficulty is how to encode the task satisfaction constraint (i.e., \((\mathbf{x}_{x_{0}}^{\mathbf{u}},0)\models\varphi\)) as a set of constraints on the system input \(u\) when \(\varphi\) is nested. Appropriate CBFs have been proposed in [17] for control-affine systems under non-nested STL formulas, e.g., \(\varphi=\mathsf{F}_{[a,b]}\mu\). However, extending the CBF design methodology in [17] to nested STL formulas is nontrivial.
To tackle this problem, in this work, we propose the notion of sTLT. This tree structure serves as a tool for guiding the design of CBFs for nested STL formulas.
This section is structured as follows. First, we introduce the notion of sTLT and its construction in Section III.A. Then we define the sTLT semantics in Section III.B. The equivalence or under-approximation relation between STL and sTLT is discussed in Section III.C. Next, we explain the design of the CBFs based on the sTLT in Section III.D. In Section III.E, we show the overall algorithm. Finally, in Section III.F, the computational complexity of the overall approach is discussed.
### _sTLT and its construction_
An sTLT refers to a tree with linked set nodes and operator nodes. The formal definition is given as follows.
**Definition 8** (sTLT).: _An sTLT is a tree for which the next holds:_
* _each node is either a set node that is a subset of_ \(\mathbb{R}^{n}\) _or an_ operator _node that belongs to_ \(\{\wedge,\vee,\mathsf{U}_{[a,b]},\mathsf{F}_{[a,b]},\mathsf{G}_{[a,b]}\}\)_;_
* _the root node and the leaf nodes are set nodes;_
* _if a set node is not a leaf node, its unique child is an_ operator _node;_
* _the children of any_ operator _node are_ set _nodes._
The sTLT is motivated by the notion of TLT defined in [31] for LTL formulas. Although graphically similar, the sTLT construction and its satisfaction condition are substantially different from TLT in [31]. We will provide additional clarification regarding the differences later in Remark 1.
_Construct an sTLT from an STL formula \(\varphi\):_ Before presenting the construction procedure of such an sTLT from a given STL formula \(\varphi\) and a continuous-time dynamical system \(\Sigma\), we give the following definition.
**Definition 9** (Desired form).: _Given an STL formula \(\varphi\) in Definition 2, we say \(\varphi\) is in desired form if i) it contains no "until" operators and ii) the argument of every "always" operator contains no "disjunction" operator._
Next we detail the construction of sTLT from an STL formula \(\varphi\) using the reachability operators \(\mathcal{R}^{M}\) and \(\mathcal{R}^{m}\), which can be completed in 3 steps.
_Step 1:_ Rewrite the STL formula \(\varphi\) into the desired form \(\hat{\varphi}\) as per Definition 9. That is, i) if \(\varphi\) contains "until" operator, e.g., \(\varphi=\varphi_{1}\mathsf{U}_{[a,b]}\varphi_{2}\), it is encoded as \(\hat{\varphi}=\mathsf{G}_{[0,b]}\varphi_{1}\wedge\mathsf{F}_{[a,b]}\varphi_{2}\) and ii) if the argument of a temporal operator contains a "disjunction" operator, e.g., \(\varphi=\Theta_{[a,b]}(\varphi_{1}\vee\varphi_{2})\), it is encoded as \(\hat{\varphi}=\Theta_{[a,b]}\varphi_{1}\vee\Theta_{[a,b]}\varphi_{2}\), where \(\Theta\in\{\mathsf{G},\mathsf{F}\}\). After this step, one has that \(\hat{\varphi}\) contains no "until" operator and the "disjunction" operator, if it exists, appears in the form of \(\hat{\varphi}=\varphi_{1}\vee\varphi_{2}\vee\ldots\vee\varphi_{N}\), and \(\varphi_{i},i=1,2,\ldots,N,\) contain no "disjunction" operator. We call the fragment of STL formulas \(\hat{\varphi}\), identified by Definition 9, _desired form_. This is because we will later observe that the constructed sTLT \(\mathcal{T}_{\hat{\varphi}}\) is equivalent to \(\hat{\varphi}\) in the sense that every trajectory that satisfies the sTLT \(\mathcal{T}_{\hat{\varphi}}\) also satisfies the STL formula \(\hat{\varphi}\), and conversely.
_Step 2:_ For each predicate \(\mu\) or its negation \(\neg\mu\), construct the sTLT with only a single set node \(\mathbb{X}_{\mu}=\mathbb{S}_{\mu}=\{x:g_{\mu}(x)\geq 0\}\) or \(\mathbb{X}_{\neg\mu}=\mathbb{S}_{\neg\mu}=\{x:-g_{\mu}(x)\geq 0\}\). The sTLT of \(\top\) or \(\bot\) has only a single set node, which is \(\mathbb{R}^{n}\) or \(\emptyset\), respectively.
_Step 3:_ Construct the sTLT \(\mathcal{T}_{\hat{\varphi}}\) inductively. More specifically, for given STL formulas \(\varphi_{1}\) and \(\varphi_{2}\) and their corresponding constructed sTLTs \(\mathcal{T}_{\varphi_{1}},\mathcal{T}_{\varphi_{2}}\), the sTLT from a) \(\varphi_{1}\wedge\varphi_{2}\), b) \(\varphi_{1}\vee\varphi_{2}\), c) \(\mathsf{F}_{[a,b]}\varphi_{1}\), and d) \(\mathsf{G}_{[a,b]}\varphi_{1}\) can be constructed following the rules detailed below. Denote by \(\mathbb{X}_{\varphi_{1}}:=\{x:h_{\mathbb{X}_{\varphi_{1}}}\geq 0\}\) and \(\mathbb{X}_{\varphi_{2}}:=\{x:h_{\mathbb{X}_{\varphi_{2}}}\geq 0\}\) the root nodes of \(\mathcal{T}_{\varphi_{1}}\) and \(\mathcal{T}_{\varphi_{2}}\), respectively.
Case a): Boolean operator \(\wedge\). The sTLT \(\mathcal{T}_{\varphi_{1}\wedge\varphi_{2}}\) can be constructed by connecting \(\mathbb{X}_{\varphi_{1}}\) and \(\mathbb{X}_{\varphi_{2}}\) through the operator node \(\wedge\) and taking
\[\mathbb{X}_{\varphi_{1}\wedge\varphi_{2}}:=\{x:(h_{\mathbb{X}_{\varphi_{1}}} \geq 0)\wedge(h_{\mathbb{X}_{\varphi_{2}}}\geq 0)\}\]
to be the root node. An illustrative diagram for \(\varphi_{1}\wedge\varphi_{2}\) is given in Fig. 1(a).
Case b): Boolean operator \(\vee\). The sTLT \(\mathcal{T}_{\varphi_{1}\vee\varphi_{2}}\) can be constructed by connecting \(\mathbb{X}_{\varphi_{1}}\) and \(\mathbb{X}_{\varphi_{2}}\) through the operator node \(\vee\) and taking
\[\mathbb{X}_{\varphi_{1}\vee\varphi_{2}}:=\{x:(h_{\mathbb{X}_{\varphi_{1}}} \geq 0)\vee(h_{\mathbb{X}_{\varphi_{2}}}\geq 0)\}\]
to be the root node. An illustrative diagram for \(\varphi_{1}\vee\varphi_{2}\) is given in Fig. 1(b).
Case c): Eventually operator \(\mathsf{F}_{[a,b]}\). The sTLT \(\mathcal{T}_{\mathsf{F}_{[a,b]}\varphi_{1}}\) can be constructed by connecting \(\mathbb{X}_{\varphi_{1}}\) through the operator \(\mathsf{F}_{[a,b]}\) and making the set \(\mathcal{R}^{M}(\mathbb{X}_{\varphi_{1}},[a,b])\) the root node. An illustrative diagram for \(\mathsf{F}_{[a,b]}\varphi_{1}\) is given in Fig. 1(c).
Case d): Always operator \(\mathsf{G}_{[a,b]}\). The sTLT \(\mathcal{T}_{\mathsf{G}_{[a,b]}\varphi_{1}}\) can be constructed by connecting \(\mathbb{X}_{\varphi_{1}}\) through the operator \(\mathsf{G}_{[a,b]}\) and making the set \(\overline{\mathcal{R}^{m}(\overline{\mathbb{X}_{\varphi_{1}}},[a,b])}\) the root node. An illustrative diagram for \(\mathsf{G}_{[a,b]}\varphi_{1}\) is given in Fig. 1(d).
Thus we complete the construction of an sTLT from an STL formula \(\varphi\). In what follows, if not stated otherwise, we will use \(\hat{\varphi}\) as the desired form of \(\varphi\) obtained from Step 1 and \(\mathcal{T}_{\hat{\varphi}}\) as the constructed sTLT for brevity.
Let us use the following example to show how to construct an sTLT from a nested STL formula.
**Example 1**.: _Consider the nested STL formula \(\varphi=\mathsf{F}_{[0,15]}(\mathsf{G}_{[2,10]}\mu_{1}\vee\mu_{2}\mathsf{U}_{[5,10]}\mu_{3})\), where \(\mu_{i},i\in\{1,2,3\}\), are predicates. Following Step 1, we can rewrite \(\varphi\) into the desired form \(\hat{\varphi}=\mathsf{F}_{[0,15]}\mathsf{G}_{[2,10]}\mu_{1}\vee\mathsf{F}_{[0,15]}(\mathsf{G}_{[0,10]}\mu_{2}\wedge\mathsf{F}_{[5,10]}\mu_{3})\). The constructed sTLT \(\mathcal{T}_{\hat{\varphi}}\) is plotted in Fig. 2. Recall that the sTLT is constructed in a bottom-up manner, i.e., we first construct the leaf nodes corresponding to the three predicates, i.e., \(\mathbb{X}_{5}=\mathbb{S}_{\mu_{1}}\), \(\mathbb{X}_{8}=\mathbb{S}_{\mu_{2}}\), \(\mathbb{X}_{9}=\mathbb{S}_{\mu_{3}}\); then, building upon them,
one can compute_
\[\mathbb{X}_{3} =\overline{\mathcal{R}^{m}(\overline{\mathbb{X}_{5}},[2,10])},\] \[\mathbb{X}_{1} =\mathcal{R}^{M}(\mathbb{X}_{3},[0,15]),\] \[\mathbb{X}_{6} =\overline{\mathcal{R}^{m}(\overline{\mathbb{X}_{8}},[0,10])},\] \[\mathbb{X}_{7} =\mathcal{R}^{M}(\mathbb{X}_{9},[5,10]),\] \[\mathbb{X}_{4} =\{x:h_{\mathbb{X}_{6}}(x)\geq 0\wedge h_{\mathbb{X}_{7}}(x)\geq 0\},\] \[\mathbb{X}_{2} =\mathcal{R}^{M}(\mathbb{X}_{4},[0,15]),\] \[\mathbb{X}_{0} =\{x:h_{\mathbb{X}_{1}}(x)\geq 0\lor h_{\mathbb{X}_{2}}(x)\geq 0\}.\]
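To make these set-node computations tangible, the sketch below instantiates them on a 2-D grid for a single integrator with \(\|u\|\leq u_{\max}\) and disc-shaped predicate regions; for this dynamics, the root of an \(\mathsf{F}_{[a,b]}\)-fragment is a Euclidean dilation of its child by \(u_{\max}b\), and the root of a \(\mathsf{G}_{[a,b]}\)-fragment (reach the set by \(a\), then hover) is a dilation by \(u_{\max}a\). The regions and parameters are made up for illustration and are not the paper's case study.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# 2-D grid over a 10 m x 10 m workspace.
res = 0.05
axis = np.arange(0.0, 10.0 + res, res)
X, Y = np.meshgrid(axis, axis, indexing="ij")

def disc(center, radius):
    # Boolean mask of a disc-shaped predicate region S_mu.
    return (X - center[0]) ** 2 + (Y - center[1]) ** 2 <= radius ** 2

def dilate(mask, radius):
    # Points within `radius` (Euclidean) of `mask`; equals `mask` for radius 0.
    return distance_transform_edt(~mask, sampling=res) <= radius

u_max = 0.2
# Made-up predicate regions standing in for S_mu1, S_mu2, S_mu3.
X5, X8, X9 = disc((2, 8), 1.0), disc((7, 7), 1.5), disc((7, 4), 1.0)

# For this dynamics: F_[a,b]-fragments dilate the child by u_max*b,
# G_[a,b]-fragments (reach by a, then hover) dilate it by u_max*a.
X3 = dilate(X5, u_max * 2.0)    # complement(R^m(complement(X5), [2,10]))
X1 = dilate(X3, u_max * 15.0)   # R^M(X3, [0,15])
X6 = dilate(X8, u_max * 0.0)    # complement(R^m(complement(X8), [0,10])) = X8
X7 = dilate(X9, u_max * 10.0)   # R^M(X9, [5,10]): within u_max*10 of X9
X4 = X6 & X7                    # conjunction node
X2 = dilate(X4, u_max * 15.0)   # R^M(X4, [0,15])
X0 = X1 | X2                    # disjunction root node
print("fraction of the workspace inside the root node X0:", X0.mean())
```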
### _sTLT semantics_
Before defining the _sTLT semantics_, i.e., the satisfaction relation between a trajectory \(\mathbf{x}\) and an sTLT \(\mathcal{T}\), the definitions of complete path, temporal fragment, and time interval coding for an sTLT \(\mathcal{T}\) are needed.
**Definition 10** (Complete path).: _A complete path \(\mathbf{p}\) of an sTLT is a path that starts from the root node and ends at a leaf node. It can be encoded in the form of \(\mathbf{p}=\mathbb{X}_{0}\Theta_{1}\mathbb{X}_{1}\Theta_{2}\ldots\Theta_{N_{f}} \mathbb{X}_{N_{f}}\), where \(N_{f}\) is the number of operator nodes contained in the complete path, \(\mathbb{X}_{i},i\in\{0,1,\ldots,N_{f}\}\) represent set nodes, and \(\Theta_{j},\forall j\in\{1,\ldots,N_{f}\}\) represent operator nodes._
**Definition 11** (Temporal fragment).: _A temporal fragment of a complete path is a fragment that starts from one temporal operator node, i.e., the node \(\mathsf{U}_{[a,b]},\mathsf{F}_{[a,b]}\) or \(\mathsf{G}_{[a,b]}\), and ends at its child set node._
**Definition 12** (Time interval coding).: _A time interval coding of a complete path involves assigning a time interval \([\underline{t}_{i},\overline{t}_{i}],0\leq\underline{t}_{i}\leq\overline{t}_ {i}\) to each set node \(\mathbb{X}_{i}\) in the complete path._
Given a time instant \(\hat{t}\) and two time intervals \([a_{1},b_{1}],[a_{2},b_{2}]\), define
\[\hat{t}+[a_{1},b_{1}]:=[\hat{t}+a_{1},\hat{t}+b_{1}],\] \[[a_{1},b_{1}]+[a_{2},b_{2}]:=[a_{1}+a_{2},b_{1}+b_{2}].\]
Now, we further define the satisfaction relation between a trajectory \(\mathbf{x}\) and a complete path of the sTLT.
**Definition 13**.: _Consider a trajectory \(\mathbf{x}\) and a complete path \(\mathbf{p}=\mathbb{X}_{0}\Theta_{1}\mathbb{X}_{1}\Theta_{2}\ldots\Theta_{N_{f}}\mathbb{X}_{N_{f}}\). We say \(\mathbf{x}\) satisfies \(\mathbf{p}\) from time \(t\), denoted by \((\mathbf{x},t)\cong\mathbf{p}\), if there exists a time interval coding for \(\mathbf{p}\) such that \(\underline{t}_{0}=\overline{t}_{0}=t\) and, for \(i=1,2,\ldots,N_{f}\),_
1. _if_ \(\Theta_{i}\in\{\wedge,\vee\}\)_, then_ \([\underline{t}_{i},\overline{t}_{i}]=[\underline{t}_{i-1},\overline{t}_{i-1}]\)_;_
2. _if_ \(\Theta_{i}\in\{\mathsf{U}_{[a,b]},\mathsf{F}_{[a,b]}\}\)_, then_ \(\exists t^{\prime}\in[a,b]\) _s.t._ \([\underline{t}_{i},\overline{t}_{i}]=t^{\prime}+[\underline{t}_{i-1},\overline {t}_{i-1}]\)_;_
3. _if_ \(\Theta_{i}=\mathsf{G}_{[a,b]}\)_, then_ \([\underline{t}_{i},\overline{t}_{i}]=[a,b]+[\underline{t}_{i-1},\overline{t}_ {i-1}]\)_;_
_and, for \(i=0,1,\ldots,N_{f}\),_
1. \(\mathbf{x}(t)\in\mathbb{X}_{i},\forall t\in[\underline{t}_{i},\overline{t}_{i}]\)_._
With Definition 13, the sTLT semantics, i.e., the satisfaction relation between a trajectory \(\mathbf{x}\) and an sTLT, can be defined as follows.
**Definition 14** (sTLT semantics).: _Consider a trajectory \(\mathbf{x}\) and an sTLT \(\mathcal{T}\). We say \(\mathbf{x}\) satisfies \(\mathcal{T}\) from time \(t\), denoted by \((\mathbf{x},t)\cong\mathcal{T}\), if the output of Algorithm 1 is \(\mathrm{true}\)._
Algorithm 1 takes as inputs a trajectory \(\mathbf{x}\) and an sTLT \(\mathcal{T}\). The output is \(\mathrm{true}\) or false. It works as follows. Given the sTLT \(\mathcal{T}\), we first remove all its temporal fragments (line 1). When removing a temporal fragment, we reconnect the parent node of the corresponding temporal operator node and the child of the corresponding set node. In this way the resulting compressed tree \(\mathcal{T}^{c}\) contains only Boolean operator nodes and set nodes. For the sTLT \(\mathcal{T}_{\hat{\varphi}}\) shown in Fig. 2, the compressed tree \(\mathcal{T}^{c}\) is depicted in Fig. 3. Then for each complete path \(\mathbf{p}\) of \(\mathcal{T}\), if \((\mathbf{x},0)\cong\mathbf{p}\), one sets the corresponding leaf node of \(\mathbf{p}\) in \(\mathcal{T}^{c}\) (note that \(\mathcal{T}^{c}\) and \(\mathcal{T}\) have the same set of leaf nodes)
to \(\mathrm{true}\). Otherwise, one sets the corresponding leaf node of \(\mathbf{p}\) in \(\mathcal{T}^{c}\) to \(\mathrm{false}\) (lines 2-8). After that, we set all the non-leaf set nodes of \(\mathcal{T}^{c}\) to \(\mathrm{false}\) (line 9) and the resulting tree becomes a Boolean tree (a tree with Boolean operator and Boolean variable nodes). Finally, we backtrack the Boolean tree \(\mathcal{T}^{c}\) using Algorithm _Backtracking_, given in Algorithm 2, and return the root node (lines 10-11).
```
0: a tree \(\mathcal{T}^{c}\) with Boolean operator and Boolean variable nodes.
0: the updated \(\mathcal{T}^{c}\).
1:for each operator node \(\Theta\) of \(\mathcal{T}^{c}\) through a bottom-up traversal, do
2:if\(\Theta=\wedge\), then
3:\(\mathrm{PA}(\Theta)\leftarrow\mathrm{PA}(\Theta)\vee(\mathrm{CH}_{1}(\Theta) \wedge\mathrm{CH}_{2}(\Theta))\),
4:else
5:\(\mathrm{PA}(\Theta)\leftarrow\mathrm{PA}(\Theta)\vee(\mathrm{CH}_{1}(\Theta)\vee\mathrm{CH}_{2}(\Theta))\),
6:endif
7:endfor
```
**Algorithm 2**_Backtracking_
In Algorithm _Backtracking_, \(\mathrm{PA}(\Theta)\) and \(\mathrm{CH}_{1}(\Theta),\mathrm{CH}_{2}(\Theta)\) represent the parent and two children nodes of the Boolean operator node \(\Theta\in\{\wedge,\vee\}\), respectively.
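A minimal sketch of the backtracking step on a Boolean tree whose leaves hold the per-path results; the node class is a stand-in for the compressed tree \(\mathcal{T}^{c}\) (the compression itself is not reproduced), and the example mirrors the satisfaction condition of Example 1 (path \(\mathbf{p}_{1}\), or both \(\mathbf{p}_{2}\) and \(\mathbf{p}_{3}\)).

```python
class BoolNode:
    """Node of the compressed tree T^c after Algorithm 1 has assigned a Boolean
    to every leaf (one per complete path) and False to every non-leaf node."""
    def __init__(self, value=False, op=None, children=()):
        self.value = value        # Boolean payload
        self.op = op              # "and", "or", or None for a leaf
        self.children = list(children)

def backtrack(node):
    """Bottom-up pass in the spirit of Algorithm 2: the children of each
    operator are combined with that operator and OR-ed into the parent."""
    if node.op is None:
        return node.value
    child_vals = [backtrack(c) for c in node.children]
    combined = all(child_vals) if node.op == "and" else any(child_vals)
    node.value = node.value or combined
    return node.value

# Example 1: satisfaction of the sTLT requires p1, or (p2 and p3).
p1, p2, p3 = False, True, True
root = BoolNode(op="or", children=[BoolNode(value=p1),
                                   BoolNode(op="and",
                                            children=[BoolNode(value=p2),
                                                      BoolNode(value=p3)])])
print(backtrack(root))   # True, since p2 and p3 both hold
```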
**Remark 1**.: _In [31], the TLT is introduced for the model checking and control synthesis of discrete-time systems under LTL tasks. In this work, the sTLT is designed to guide the design of CBFs for continuous-time dynamical systems under nested STL formulas. The much more complex time constraints encoded in STL formulas have naturally led to different construction procedures and semantics of sTLT when compared to TLT in [31], which we highlight as follows. First, the construction of sTLT is largely different from TLT. To incorporate the time constraints encoded in an STL formula, our construction of the sTLT relies on the finite time reachability analysis, i.e., the maximal and minimal reachability operators \(\mathcal{R}^{M}\) and \(\mathcal{R}^{m}\) given respectively in Definitions 3 and 4. In [31], the TLT construction relies on the infinite time controlled invariant set and robust controlled invariant set. Second, the sTLT semantics is largely different from TLT semantics. In order to monitor the time constraint satisfaction in an STL formulas, we introduce in this work the definition of \(\mathrm{time}\) interval coding (cf. Definition 12) for a complete path of an sTLT. On one hand, we show in Definitions 13 and 14 that the satisfaction of an sTLT can be characterized by the existence of a well-defined time interval coding. On the other hand, it will become clear later that the time interval coding is also vital in control synthesis. In [31], the TLT semantics is much simpler as it only requires an assignment of ascending integers as time indices for each complete path of TLT._
To better understand the sTLT semantics, i.e., Definition 14, the following definitions are needed.
**Definition 15**.: _We say an sTLT \(\mathcal{T}\) contains \(\vee\) operator nodes only at its top layers if for every complete path \(\mathbf{p}=\mathbb{X}_{0}\Theta_{1}\mathbb{X}_{1}\Theta_{2}\ldots\Theta_{N_{f}}\mathbb{X}_{N_{f}}\) of \(\mathcal{T}\) that contains \(\vee\) operator nodes, there exists a \(1\leq k\leq N_{f}\) such that_

\[\Theta_{j}\in\begin{cases}\{\vee\},&j\in\{1,\ldots,k\},\\ \{\wedge,\mathsf{F}_{[a,b]},\mathsf{G}_{[a,b]}\},&j\in\{k+1,\ldots,N_{f}\}.\end{cases} \tag{11}\]
**Remark 2**.: _For any nested STL formula \(\varphi\), the \(\vee\) operator nodes, if they exist, only appear in the top layers of the constructed sTLT \(\mathcal{T}_{\hat{\varphi}}\). This can be seen from the fact that \(\hat{\varphi}\) is in the form of \(\hat{\varphi}=\varphi_{1}\vee\varphi_{2}\vee\ldots\vee\varphi_{N}\), and \(\varphi_{i},i=1,2,\ldots,N,\) contain no \(\vee\) operator, as discussed in Step 1._
**Definition 16**.: _Let \(\mathbf{p}_{l}=\mathbb{X}_{0}\Theta_{1}^{l}\mathbb{X}_{1}^{l}\Theta_{2}^{l}\ldots\Theta_{N_{f}}^{l}\mathbb{X}_{N_{f}}^{l}\) and \(\mathbf{p}_{f}=\mathbb{X}_{0}\Theta_{1}^{f}\mathbb{X}_{1}^{f}\Theta_{2}^{f}\ldots\Theta_{N_{f}}^{f}\mathbb{X}_{N_{f}}^{f}\) be two complete paths of an sTLT \(\mathcal{T}\). Denote by \(k_{l}=\arg\max_{k}\{\Theta_{k}^{l}=\vee\}\) and \(k_{f}=\arg\max_{k}\{\Theta_{k}^{f}=\vee\}\). We say \(\mathbf{p}_{l}\) and \(\mathbf{p}_{f}\) belong to the same branch of \(\mathcal{T}\) if \(k_{l}=k_{f}\) and \(\mathbb{X}_{j}^{l}=\mathbb{X}_{j}^{f}\), \(\Theta_{j}^{l}=\Theta_{j}^{f},\forall j=1,\ldots,k_{l}\)._
**Remark 3**.: _Definition 14 can be interpreted as follows:_
1. _Consider the case where the sTLT_ \(\mathcal{T}\) _contains no_ \(\vee\) _operator. Then Definition_ 14 _dictates that_ \((\mathbf{x},t)\cong\mathcal{T}\) _if and only if_ \((\mathbf{x},t)\) _satisfies every complete path of_ \(\mathcal{T}\)_._
2. _Consider the case where the sTLT_ \(\mathcal{T}\) _contains_ \(\vee\) _operator nodes only at its top layers. Then Definition_ 14 _dictates that_ \((\mathbf{x},t)\cong\mathcal{T}\) _if and only if_ \((\mathbf{x},t)\) _satisfies at least one branch of complete paths._
**Example** (continued).: Let us continue with Example 1. According to Definition 10, the sTLT\(\mathcal{T}_{\hat{\varphi}}\) (see Fig. 2) has in total
3 complete paths, i.e.,
\[\mathbf{p}_{1} =\mathbb{X}_{0}\vee\mathbb{X}_{1}\mathsf{F}_{[0,15]}\mathbb{X}_{3} \mathsf{G}_{[2,10]}\mathbb{X}_{5},\] \[\mathbf{p}_{2} =\mathbb{X}_{0}\vee\mathbb{X}_{2}\mathsf{F}_{[0,15]}\mathbb{X}_{4} \wedge\mathbb{X}_{6}\mathsf{G}_{[0,10]}\mathbb{X}_{8},\] \[\mathbf{p}_{3} =\mathbb{X}_{0}\vee\mathbb{X}_{2}\mathsf{F}_{[0,15]}\mathbb{X}_{4} \wedge\mathbb{X}_{7}\mathsf{F}_{[5,10]}\mathbb{X}_{9},\]
and 5 temporal fragments, which are encircled by the red dashed rectangles in Fig. 2.
The sTLT \(\mathcal{T}_{\hat{\varphi}}\) contains \(\vee\) operator nodes only at its top layers since one has \(k_{1}=k_{2}=k_{3}=1\) according to Definition 15. On one hand, one observes that \(\Theta_{1}^{2}=\Theta_{1}^{3}=\vee\) and \(\mathbb{X}_{1}^{2}=\mathbb{X}_{1}^{3}=\mathbb{X}_{2}\). Therefore, \(\mathbf{p}_{2}\) and \(\mathbf{p}_{3}\) belong to the same branch. On the other hand, since \(\mathbb{X}_{1}^{1}=\mathbb{X}_{1}\neq\mathbb{X}_{2}=\mathbb{X}_{1}^{2}=\mathbb{ X}_{1}^{3}\), neither \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\) nor \(\mathbf{p}_{1}\) and \(\mathbf{p}_{3}\) belong to the same branch. A trajectory \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\) if and only if either of the following 2 conditions is satisfied: (1) \((\mathbf{x},t)\cong\mathbf{p}_{1}\), (2) \((\mathbf{x},t)\cong\mathbf{p}_{2}\) and \((\mathbf{x},t)\cong\mathbf{p}_{3}\).
### _Relations between \(\mathcal{T}_{\hat{\varphi}}\) and \(\hat{\varphi}\) (\(\varphi\))_
In this section, we derive the relations between an STL formula \(\hat{\varphi}\) (\(\varphi\)) and its constructed sTLT \(\mathcal{T}_{\hat{\varphi}}\). First, we show the result for STL formulas in desired form, i.e., \(\hat{\varphi}\).
**Theorem 1**.: _Consider the system (1) and an STL task \(\hat{\varphi}\) in desired form as per Definition 9. The sTLT \(\mathcal{T}_{\hat{\varphi}}\) is equivalent to \(\hat{\varphi}\) in the sense that_
\[(\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\Leftrightarrow(\mathbf{x},t)\models\hat {\varphi}. \tag{12}\]
Proof.: For \(\top\), predicate \(\mu\), its negation \(\neg\mu\), \(\mu_{1}\wedge\mu_{2}\), and \(\mu_{1}\vee\mu_{2}\), it is trivial to verify that \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\Leftrightarrow(\mathbf{x},t)\models\hat {\varphi}\).
Next, we follow the induction rule to show that if \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}_{1}}\Leftrightarrow(\mathbf{x},t) \models\hat{\varphi}_{1}\) and \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}_{2}}\Leftrightarrow(\mathbf{x},t) \models\hat{\varphi}_{2}\), then the constructed sTLT \(\mathcal{T}_{\hat{\varphi}}\) satisfies \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\Leftrightarrow(\mathbf{x},t)\models \hat{\varphi}\) for a) \(\hat{\varphi}=\hat{\varphi}_{1}\wedge\hat{\varphi}_{2}\), b) \(\hat{\varphi}_{1}\vee\hat{\varphi}_{2}\), c)\(\mathsf{F}_{[a,b]}\hat{\varphi}_{1}\), and d) \(\mathsf{G}_{[a,b]}\hat{\varphi}_{1}\).
Case a): \(\hat{\varphi}=\hat{\varphi}_{1}\wedge\hat{\varphi}_{2}\). Assume that a trajectory \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\). According to Definition 14, we have \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}_{1}}\) and \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}_{2}}\). Under the assumption that \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}_{1}}\Leftrightarrow(\mathbf{x},t)\models\hat{\varphi}_{1}\) and \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}_{2}}\Leftrightarrow(\mathbf{x},t)\models\hat{\varphi}_{2}\), one can get that \((\mathbf{x},t)\models\hat{\varphi}_{1}\) and \((\mathbf{x},t)\models\hat{\varphi}_{2}\), which implies \((\mathbf{x},t)\models\hat{\varphi}\). Thus, \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\Rightarrow(\mathbf{x},t)\models\hat{\varphi}\). The proof of the other direction is similar and hence omitted.
Case b): \(\hat{\varphi}=\hat{\varphi}_{1}\vee\hat{\varphi}_{2}\). The proof is similar to Case a) and hence omitted.
Case c): \(\hat{\varphi}=\mathsf{F}_{[a,b]}\hat{\varphi}_{1}\). Assume that a trajectory \((\mathbf{x},t)\cong\mathcal{T}_{\mathsf{F}_{[a,b]}\hat{\varphi}_{1}}\). As depicted in Fig. 1(c), we know that each complete path of \(\mathcal{T}_{\hat{\varphi}}\) can be written in the form of \(\mathbf{p}=\mathbb{X}_{0}\Theta_{1}\mathbf{p}^{\prime}\), where \(\Theta_{1}=\mathsf{F}_{[a,b]}\) and \(\mathbf{p}^{\prime}\) is a complete path of \(\mathcal{T}_{\hat{\varphi}_{1}}\). According to Definitions 13 and 14, we have \(\exists t^{\prime}\in[t+a,t+b],\mathbf{x}(t^{\prime})\in\mathbb{S}_{\hat{\varphi}_{1}}\) and \((\mathbf{x},t^{\prime})\cong\mathcal{T}_{\hat{\varphi}_{1}}\). Under the assumption that \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}_{1}}\Leftrightarrow(\mathbf{x},t)\models\hat{\varphi}_{1}\), one can get that \(\exists t^{\prime}\in[t+a,t+b],(\mathbf{x},t^{\prime})\models\hat{\varphi}_{1}\), which implies \((\mathbf{x},t)\models\mathsf{F}_{[a,b]}\hat{\varphi}_{1}\) by Definition 1. Thus, \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\Rightarrow(\mathbf{x},t)\models\mathsf{F}_{[a,b]}\hat{\varphi}_{1}\). Assume now that \((\mathbf{x},t)\models\mathsf{F}_{[a,b]}\hat{\varphi}_{1}\). Then one has from Definition 1 that \(\exists t^{\prime}\in[t+a,t+b],\mathbf{x}(t^{\prime})\in\mathbb{S}_{\hat{\varphi}_{1}}\), which implies \(\mathbf{x}(t)\in\mathcal{R}^{M}(\mathbb{S}_{\hat{\varphi}_{1}},[a,b])=\mathbb{X}_{0}\). According to Definitions 13 and 14, it means that \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\). Therefore, \((\mathbf{x},t)\models\mathsf{F}_{[a,b]}\hat{\varphi}_{1}\Rightarrow(\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\). Case d): \(\hat{\varphi}=\mathsf{G}_{[a,b]}\hat{\varphi}_{1}\). Assume that a trajectory \((\mathbf{x},t)\cong\mathcal{T}_{\mathsf{G}_{[a,b]}\hat{\varphi}_{1}}\). As depicted in Fig. 1(d), we know that each complete path of \(\mathcal{T}_{\hat{\varphi}}\) can be written in the form of \(\mathbf{p}=\mathbb{X}_{0}\Theta_{1}\mathbf{p}^{\prime}\), where \(\Theta_{1}=\mathsf{G}_{[a,b]}\) and \(\mathbf{p}^{\prime}\) is a complete path of \(\mathcal{T}_{\hat{\varphi}_{1}}\). According to Definitions 13 and 14, we have \((\mathbf{x},t^{\prime})\cong\mathcal{T}_{\hat{\varphi}_{1}}\), \(\forall t^{\prime}\in[t+a,t+b]\). Under the assumption that \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}_{1}}\Leftrightarrow(\mathbf{x},t)\models\hat{\varphi}_{1}\), one can get that \((\mathbf{x},t^{\prime})\models\hat{\varphi}_{1},\forall t^{\prime}\in[t+a,t+b]\), which implies \((\mathbf{x},t)\models\mathsf{G}_{[a,b]}\hat{\varphi}_{1}\) by Definition 1. Thus, \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\Rightarrow(\mathbf{x},t)\models\mathsf{G}_{[a,b]}\hat{\varphi}_{1}\). Assume now that \((\mathbf{x},t)\models\mathsf{G}_{[a,b]}\hat{\varphi}_{1}\). Then one has from Definition 1 that \(\mathbf{x}(t^{\prime})\in\mathbb{S}_{\hat{\varphi}_{1}},\forall t^{\prime}\in[t+a,t+b]\), which implies \(\mathbf{x}(t)\in\overline{\mathcal{R}^{m}(\overline{\mathbb{S}_{\hat{\varphi}_{1}}},[a,b])}=\mathbb{X}_{0}\). According to Definitions 13 and 14, it means that \((\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\). Therefore, \((\mathbf{x},t)\models\mathsf{G}_{[a,b]}\hat{\varphi}_{1}\Rightarrow(\mathbf{x},t)\cong\mathcal{T}_{\hat{\varphi}}\). This completes the proof.
#### Iv-C1 Time encoding for the sTLT
Before proceeding, the following notations are needed. Given an STL operator \(\Theta\in\{\wedge,\vee,\mathsf{F}_{[a,b]},\mathsf{G}_{[a,b]}\}\), define the possible start time (interval) of \(\Theta\) (i.e., time to evaluate the satisfaction of \(\varphi_{1}\Theta\varphi_{2}\) or \(\Theta\varphi\)) as
\[[\underline{t}(\Theta),\bar{t}(\Theta)]:=\left\{\begin{array}{ll}[0,0],&\text{ if }\Theta\in\{\wedge,\vee\},\\ [a,b],&\text{if }\Theta\in\{\mathsf{F}_{[a,b]}\},\\ [a,a],&\text{if }\Theta\in\{\mathsf{G}_{[a,b]}\}.\end{array}\right. \tag{13}\]
The start time for logic operators \(\wedge\) and \(\vee\) is 0. For the temporal operator \(\mathsf{G}_{[a,b]}\), the start time is \(a\). Note that for the temporal operator \(\mathsf{F}_{[a,b]}\), any time instant in the interval \([a,b]\) fulfills item ii) of Definition 13. To accommodate this uncertainty, we set the start time for \(\mathsf{F}_{[a,b]}\) to be the interval \([a,b]\).
In addition, we define the duration of \(\Theta\) as
\[\mathcal{D}(\Theta):=\left\{\begin{array}{ll}0,&\text{if }\Theta\in\{ \wedge,\vee,\mathsf{F}_{[a,b]}\},\\ b-a,&\text{if }\Theta\in\{\mathsf{G}_{[a,b]}\}.\end{array}\right. \tag{14}\]
The root node of \(\mathcal{T}_{\hat{\varphi}}\) is denoted by \(\mathbb{X}_{\text{root}}\). Let \(\mathbb{X}\) be the set which collects all the set nodes of the sTLT \(\mathcal{T}_{\hat{\varphi}}\). For a set node \(\mathbb{X}_{i}\in\mathbb{X}\), define \([\underline{t}_{s}(\mathbb{X}_{i}),\bar{t}_{s}(\mathbb{X}_{i})]\) and \(\mathcal{D}(\mathbb{X}_{i})\) as the possible start time (interval) and the duration of \(\mathbb{X}_{i}\), respectively. \(\mathrm{PA}(\mathbb{X}_{i})\) denotes the parent of node \(\mathbb{X}_{i}\). Therefore, one has that \(\mathrm{PA}(\mathbb{X}_{i})\) is an operator node and \(\mathrm{PA}(\mathrm{PA}(\mathbb{X}_{i}))\) is a set node.
Now, the calculation of the start time (interval) for each set node \(\mathbb{X}_{i}\) (which is needed for ensuring the satisfaction of the sTLT \(\mathcal{T}_{\hat{\varphi}}\) as shown in Theorem 1) is outlined in Algorithm 3.
```
Input: The sTLT \(\mathcal{T}_{\hat{\varphi}}\).
Output: \(\underline{t}_{s}(\mathbb{X}_{i}),\bar{t}_{s}(\mathbb{X}_{i}),\mathcal{D}(\mathbb{X}_{i}),\forall\mathbb{X}_{i}\).
1:\(\underline{t}_{s}(\mathbb{X}_{\text{root}})\gets 0,\overline{t}_{s}( \mathbb{X}_{\text{root}})\gets 0,\mathcal{D}(\mathbb{X}_{\text{root}})\gets 0\)
2:for each non-root node \(\mathbb{X}_{i}\) of \(\mathcal{T}_{\hat{\varphi}}\) through a top-down traversal, do
3:\(\underline{t}_{s}(\mathbb{X}_{i})\leftarrow\underline{t}_{s}(\mathrm{PA}( \mathrm{PA}(\mathbb{X}_{i})))+\underline{t}(\mathrm{PA}(\mathbb{X}_{i})),\)
4:\(\overline{t}_{s}(\mathbb{X}_{i})\leftarrow\bar{t}_{s}(\mathrm{PA}(\mathrm{ PA}(\mathbb{X}_{i})))+\bar{t}(\mathrm{PA}(\mathbb{X}_{i})),\)
5:\(\mathcal{D}(\mathbb{X}_{i})\leftarrow\mathcal{D}(\mathrm{PA}(\mathrm{PA}( \mathbb{X}_{i})))+\mathcal{D}(\mathrm{PA}(\mathbb{X}_{i})),\)
6:endfor
```
**Algorithm 3**_calculateStartTimeInterval_
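To make the recursion in Algorithm 3 concrete, the following Python sketch runs the same top-down traversal on the sTLT of Example 1. The dictionary encoding of the tree (node names and parent-child edges) is our own illustrative reading of Fig. 2 and is not part of the algorithm itself.

```python
# A minimal sketch of Algorithm 3 (calculateStartTimeInterval).
# The tree layout below is an illustrative reading of the sTLT of Example 1.

def op_start(op):
    """Possible start time (13) of an operator node."""
    kind, a, b = op
    if kind in ("and", "or"):
        return (0.0, 0.0)
    if kind == "F":
        return (float(a), float(b))
    if kind == "G":
        return (float(a), float(a))
    raise ValueError(kind)

def op_duration(op):
    """Duration (14) of an operator node."""
    kind, a, b = op
    return float(b - a) if kind == "G" else 0.0

# sTLT of Example 1: set node -> (operator node, children set nodes)
tree = {
    "X0": (("or", 0, 0), ["X1", "X2"]),
    "X1": (("F", 0, 15), ["X3"]),
    "X3": (("G", 2, 10), ["X5"]),
    "X2": (("F", 0, 15), ["X4"]),
    "X4": (("and", 0, 0), ["X6", "X7"]),
    "X6": (("G", 0, 10), ["X8"]),
    "X7": (("F", 5, 10), ["X9"]),
}

def calculate_start_time_interval(tree, root="X0"):
    info = {root: {"ts": (0.0, 0.0), "D": 0.0}}   # line 1: initialise the root
    stack = [root]
    while stack:                                   # lines 2-6: top-down traversal
        parent = stack.pop()
        if parent not in tree:                     # leaf set node, nothing below
            continue
        op, children = tree[parent]
        lo, hi = op_start(op)
        for child in children:
            p_lo, p_hi = info[parent]["ts"]
            info[child] = {"ts": (p_lo + lo, p_hi + hi),              # lines 3-4
                           "D": info[parent]["D"] + op_duration(op)}  # line 5
            stack.append(child)
    return info

for node, data in sorted(calculate_start_time_interval(tree).items()):
    print(node, "start interval", data["ts"], "duration", data["D"])
```

Running it reproduces the values used in the continued example below, e.g. \([\underline{t}_{s}(\mathbb{X}_{5}),\bar{t}_{s}(\mathbb{X}_{5})]=[2,17]\) and \(\mathcal{D}(\mathbb{X}_{8})=10\).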
Due to the uncertainty of the start time for temporal operator \(\mathsf{F}_{[a,b]}\), one can see that the start times of some set nodes \(\mathbb{X}_{i}\) may be unknown and belong to an interval after running Algorithm 3. In the following, we show how to update the start times of such set nodes \(\mathbb{X}_{i}\) online.
We develop an event-triggered scheme to update the start times. For each set node \(\mathbb{X}_{i}\) such that \(\underline{t}_{s}(\mathbb{X}_{i})\neq\bar{t}_{s}(\mathbb{X}_{i})\), an event is triggered at time \(t\) if:
\[t\in[\underline{t}_{s}(\mathbb{X}_{i}),\bar{t}_{s}(\mathbb{X}_{i})]\ \wedge\ \boldsymbol{x}(t)\in\mathbb{X}_{i}. \tag{15}\]
Once an event is triggered, we run Algorithm 4 to update the start times of the set nodes. Note that once an event is triggered for a set node, its start time is fixed.
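Algorithm 4 itself is not reproduced in this excerpt, so the Python sketch below only mirrors the trigger test (15) together with the start-time fixing and propagation described in the continued example; the subtree encoding and the propagation rule (re-running the Algorithm 3 recursion below the triggered node) are assumptions on our part.

```python
# Event-triggered start-time update (sketch); values follow the continued example.

def event_triggered(t, ts, x_in_Xi):
    """Condition (15): t lies in a non-degenerate start interval and x(t) is in X_i."""
    lo, hi = ts
    return lo <= t <= hi and lo != hi and x_in_Xi

def propagate(info, children_of, op_offset, node):
    """Re-run lines 3-4 of Algorithm 3 for the subtree rooted at `node`."""
    stack = [node]
    while stack:
        parent = stack.pop()
        for child in children_of.get(parent, []):
            lo, hi = op_offset[child]          # start time (13) of PA(child)
            p_lo, p_hi = info[parent]
            info[child] = (p_lo + lo, p_hi + hi)
            stack.append(child)

# Subtree of Example 1 below X4 (illustrative encoding of Fig. 2).
children_of = {"X4": ["X6", "X7"], "X6": ["X8"], "X7": ["X9"]}
op_offset = {"X6": (0, 0), "X7": (0, 0),   # conjunction: start time [0, 0]
             "X8": (0, 0),                  # G_[0,10]:   start time [0, 0]
             "X9": (5, 10)}                 # F_[5,10]:   start time [5, 10]
info = {"X4": (0, 15), "X6": (0, 15), "X7": (0, 15), "X8": (0, 15), "X9": (5, 25)}

t, x_in_X4 = 5.0, True
if event_triggered(t, info["X4"], x_in_X4):
    info["X4"] = (t, t)                     # fix the start time of the triggered node
    propagate(info, children_of, op_offset, "X4")
print(info)   # X4, X6, X7, X8 fixed at 5 s; X9 tightened to (10, 15)
```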
**Example** (continued).: Let us continue with Example 1 to demonstrate the event-triggered online update scheme.
First, one can calculate the start time intervals for each set node \(\mathbb{X}_{i},i=\{0,1,\cdots,9\}\) in the sTLT \(\mathcal{T}_{\Downarrow}\) (see Fig. 2) according to Algorithm 3, which give \([\underline{t}_{s}(\mathbb{X}_{0}),\bar{t}_{s}(\mathbb{X}_{0})]=[\underline{t}_ {s}(\mathbb{X}_{1}),\bar{t}_{s}(\mathbb{X}_{1})]=[\underline{t}_{s}(\mathbb{ X}_{2}),\bar{t}_{s}(\mathbb{X}_{2})]=[0,0],\ [\underline{t}_{s}(\mathbb{X}_{3}),\bar{t}_{s}(\mathbb{X}_{3})]=[ \underline{t}_{s}(\mathbb{X}_{4}),\bar{t}_{s}(\mathbb{X}_{4})]=[0,15],\)\([\underline{t}_{s}(\mathbb{X}_{5}),\bar{t}_{s}(\mathbb{X}_{5})]=[2,17],\ [\underline{t}_{s}(\mathbb{X}_{6}),\bar{t}_{s}(\mathbb{X}_{6})]=[ \underline{t}_{s}(\mathbb{X}_{7}),\bar{t}_{s}(\mathbb{X}_{7})]=[0,15],\ [\underline{t}_{s}(\mathbb{X}_{8}),\bar{t}_{s}(\mathbb{X}_{8})]=[0,15],\) and \([\underline{t}_{s}(\mathbb{X}_{9}),\bar{t}_{s}(\mathbb{X}_{9})]=[5,25]\). Note that due to the 'eventually' operator \(\mathsf{F}_{[0,15]}\) which appears at the outermost layer of the nested STL formula \(\varphi=\mathsf{F}_{[0,15]}(\mathsf{G}_{[2,10]}\mu_{1}\vee\mu_{2}\mathsf{U}_{[ 5,10]}\mu_{3})\), the start times of all the set nodes that belong to temporal fragments (i.e., \(\mathbb{X}_{i},i\in\{3,4,5,8,9\}\)) are uncertain (i.e., belong to an interval). To reduce conservatism, we update the start time intervals of these set nodes online using the event-triggered scheme (15).
Assume that at time instant \(t=5s\), the event-triggered condition (15) is satisfied for set node \(\mathbb{X}_{4}\), i.e., \(5\in[\underline{t}_{s}(\mathbb{X}_{4}),\bar{t}_{s}(\mathbb{X}_{4})]=[0,15]\) and \(\boldsymbol{x}(5)\in\mathbb{X}_{4}\), then Algorithm 4 is activated. Following lines 1-3 of Algorithm 4, one has that \(\underline{t}_{s}(\mathbb{X}_{4})=\bar{t}_{s}(\mathbb{X}_{4})=5\) (i.e., the start time of set node \(\mathbb{X}_{4}\) is fixed). Then one can further fix the start times of the set nodes \(\mathbb{X}_{6}=\mathbb{X}_{7}=\mathbb{X}_{8}\) (which are \(5s\)) and update the start time interval of the set node \(\mathbb{X}_{9}\) as \([\underline{t}_{s}(\mathbb{X}_{9}),\bar{t}_{s}(\mathbb{X}_{9})]=[10,15]\).
#### Iv-C2 CBF design for each temporal fragment
First, we have the following definition.
**Definition 17**.: _We call a temporal fragment \(f_{j}\) the predecessor of another temporal fragment \(f_{i}\) (or \(f_{i}\) the successor of \(f_{j}\)) if there exists a complete path \(\boldsymbol{p}\) such that \(\boldsymbol{p}=...f_{j}\boldsymbol{p}^{\prime}f_{i}...\) where \(\boldsymbol{p}^{\prime}\) does not contain any temporal fragments. We call \(f_{i}\) a top-layer temporal fragment if \(f_{i}\) has no predecessor temporal fragment._
Given the sTLT \(\mathcal{T}_{\hat{\varphi}}\) for a nested STL formula \(\varphi\), we need to design one CBF for each temporal fragment \(f_{i}\) in view of item iv) of Definition 13. Denote by \(f_{i}=\Theta_{f_{i}}\mathbb{X}_{f_{i}}\), where \(\Theta_{f_{i}}\) and \(\mathbb{X}_{f_{i}}\) are the temporal operator node and the set node contained in \(f_{i}\). Note that \(\mathbb{X}_{f_{i}}\) is represented by its value function \(\mathbb{X}_{f_{i}}=\{x:h_{\mathbb{X}_{f_{i}}}(x)\geq 0\}\). We require the corresponding CBF \(\mathfrak{b}_{i}(x,t)\) to satisfy the following conditions:
1. \(\mathfrak{b}_{i}(x,t)\) is continuously differentiable and is defined over \(\mathcal{C}(t)\times[\min\{t_{e}(\mathrm{PA}(\mathrm{PA}(\mathbb{X}_{f_{i}}))),\underline{t}_{s}(\mathbb{X}_{f_{i}})\},t_{e}(\mathbb{X}_{f_{i}})]\);
2. \(\{x:\mathfrak{b}_{i}(x,t)\geq 0\}\subseteq\mathbb{X}_{f_{i}}\) for all \(t\in[\bar{t}_{s}(\mathbb{X}_{f_{i}}),t_{e}(\mathbb{X}_{f_{i}})]\);
Define the _time domain_ of the CBF \(\mathfrak{b}_{i}(x,t)\) as
\[[\underline{t}_{\mathfrak{b}_{i}},\bar{t}_{\mathfrak{b}_{i}}]:=[\min\{t_{e}(\text{PA}(\text{PA}(\mathbb{X}_{f_{i}}))),\underline{t}_{s}(\mathbb{X}_{f_{i}})\},t_{e}(\mathbb{X}_{f_{i}})]. \tag{16}\]
This is to guarantee that the CBF \(\mathfrak{b}_{i}\), which corresponds to the temporal fragment \(f_{i}\), is activated at \(t_{e}(\text{PA}(\text{PA}(\mathbb{X}_{f_{i}})))\), for which the activation of the predecessor of \(f_{i}\) ends, or at \(\underline{t}_{s}(\mathbb{X}_{f_{i}})\), for which \(f_{i}\) becomes active at its earliest, whichever comes earlier. A formal statement on this is given in Lemma 2.
**Lemma 2**.: _Let \(f_{i}\) be a non top-layer temporal fragment, and \(f_{j}\) be the predecessor of \(f_{i}\) in the constructed sTLT. Denote their respective CBFs \(\mathfrak{b}_{j}(x,t),\mathfrak{b}_{i}(x,t)\). Then \(\underline{t}_{\mathfrak{b}_{j}}\leq\underline{t}_{\mathfrak{b}_{i}}\leq\bar{t}_{\mathfrak{b}_{j}}\leq\bar{t}_{\mathfrak{b}_{i}}\)._
Proof: It can be deduced from the tree structure that the predecessor of a non top-layer temporal fragment is unique. Denote the set nodes in the fragments \(f_{j}\) and \(f_{i}\) by \(\mathbb{X}_{f_{j}}\) and \(\mathbb{X}_{f_{i}}\), respectively. The inequalities can be obtained as follows: 1) in view of (16) and Algorithm 3, \(\underline{t}_{\mathfrak{b}_{j}}\leq t_{e}(\mathbb{X}_{f_{j}})\) and \(\underline{t}_{\mathfrak{b}_{j}}\leq\underline{t}_{s}(\mathbb{X}_{f_{j}})\leq\underline{t}_{s}(\mathbb{X}_{f_{i}})\), thus \(\underline{t}_{\mathfrak{b}_{j}}\leq\underline{t}_{\mathfrak{b}_{i}}=\min(t_{e}(\mathbb{X}_{f_{j}}),\underline{t}_{s}(\mathbb{X}_{f_{i}}))\); 2) from (16), \(\underline{t}_{\mathfrak{b}_{i}}\leq t_{e}(\mathbb{X}_{f_{j}})=\bar{t}_{\mathfrak{b}_{j}}\); 3) from Algorithm 3 and the definition of \(t_{e}(\cdot)\), \(\bar{t}_{\mathfrak{b}_{j}}=t_{e}(\mathbb{X}_{f_{j}})=\bar{t}_{s}(\mathbb{X}_{f_{j}})+\mathcal{D}(\mathbb{X}_{f_{j}})\leq\bar{t}_{s}(\mathbb{X}_{f_{i}})+\mathcal{D}(\mathbb{X}_{f_{i}})=t_{e}(\mathbb{X}_{f_{i}})=\bar{t}_{\mathfrak{b}_{i}}\).
If \(f_{i}\) is not a top-layer temporal fragment, then the third condition on the corresponding CBF \(\mathfrak{b}_{i}(x,t)\) is
3. \(\mathfrak{b}_{i}(x,\underline{t}_{\mathfrak{b}_{i}})\geq 0,\forall x\in\{x:\mathfrak{b}_{j}(x,\underline{t}_{\mathfrak{b}_{i}})\geq 0\}\), where \(f_{j}\) is the unique predecessor of \(f_{i}\).
Note that \(\mathfrak{b}_{j}(x,\underline{t}_{\mathfrak{b}_{i}})\) is well-defined in view of Lemma 2.
**Proposition 1**.: _Given a complete path \(\boldsymbol{p}\) and an initial condition \(x_{0}\), let \(f_{0},f_{1},...,f_{N}\) be the sequence of temporal fragments contained in \(\boldsymbol{p}\) and \(\mathfrak{b}_{0},\mathfrak{b}_{1},\ldots,\mathfrak{b}_{N}\) the corresponding CBFs. Assume that each \(\mathfrak{b}_{i},i\in 0,\ldots,N\) satisfies the conditions 1)-3). Furthermore, if \(\mathfrak{b}_{0}(x_{0},0)\geq 0\) and each of the CBFs \(\mathfrak{b}_{i}\) satisfies the condition (5) during the corresponding time domain, then the resulting trajectory satisfies this complete path \(\boldsymbol{p}\)._
Proof:: Without loss of generality, assume that \(f_{i}\) is the predecessor of \(f_{i+1}\), \(i=0,1,...,N-1\). For the top-level temporal fragment \(f_{0}\), since \(\mathfrak{b}_{0}(x_{0},0)\geq 0\) and the CBF condition (5) holds in \([0,\bar{t}_{\mathfrak{b}_{0}}]\), we have \(\mathfrak{b}_{0}(\boldsymbol{x}(t),t)\geq 0,\forall t\in[0,\bar{t}_{ \mathfrak{b}_{0}}]\). Now assume \(\mathfrak{b}_{i}(\boldsymbol{x}(t),t)\geq 0,\forall t\in[\underline{t}_{ \mathfrak{b}_{i}},\bar{t}_{\mathfrak{b}_{i}}]\). From condition 3), \(\mathfrak{b}_{i+1}(x(\underline{t}_{\mathfrak{b}_{i+1}}),\underline{t}_{ \mathfrak{b}_{i+1}})\geq 0\). In addition, the CBF condition (5) of \(\mathfrak{b}_{i+1}\) is satisfied for \(\forall t\in[\underline{t}_{\mathfrak{b}_{i+1}},\bar{t}_{\mathfrak{b}_{i+1}}]\), and then \(\mathfrak{b}_{i+1}(\boldsymbol{x}(t),t)\geq 0,\forall t\in[\underline{t}_{ \mathfrak{b}_{i+1}},\bar{t}_{\mathfrak{b}_{i+1}}]\). Inductively, we obtain \(\mathfrak{b}_{i}(\boldsymbol{x}(t),t)\geq 0,\forall t\in[\underline{t}_{ \mathfrak{b}_{i}},\bar{t}_{\mathfrak{b}_{i}}]\) for \(i=0,1,2,...,N\).
In addition, \(\mathfrak{b}_{i}(\boldsymbol{x}(t),t)\geq 0,\forall t\in[\underline{t}_{ \mathfrak{b}_{i}},\bar{t}_{\mathfrak{b}_{i}}]\) implies that \(\boldsymbol{x}(t)\in\mathbb{X}_{i},\forall t\in[\bar{t}_{s}(\mathbb{X}_{f_{i}}), t_{e}(\mathbb{X}_{f_{i}})]\) from condition 2). One verifies that \([\bar{t}_{s}(\mathbb{X}_{f_{i}}),t_{e}(\mathbb{X}_{f_{i}})],\forall f_{i}\) is a valid time interval coding of the complete path from Definition 13 items i-iii). Thus, the resulting trajectory satisfies the complete path \(\boldsymbol{p}\).
Up to now, we have shown the design of the CBFs and the online update of their time domains for each temporal fragment in the sTLT \(\mathcal{T}_{\hat{\varphi}}\). In what follows, we will show how to incorporate them to conduct the online control synthesis.
### _The overall algorithm_
In this subsection, we divide the nested STL formulas into two classes, i.e., nested STL formulas that contain no \(\vee\) operator and nested STL formulas that contain the \(\vee\) operator. We differentiate these two cases because they have different sTLT satisfaction conditions, as discussed in Remark 3.
#### Iv-E1 Nested STL formulas that contain no \(\vee\) operator
Let \(\varphi\) be a nested STL formula that contains no \(\vee\) operator. Then, the corresponding sTLT \(\mathcal{T}_{\hat{\varphi}}\) contains no \(\vee\) operator nodes. Let \(\Pi\) be the set which collects all the temporal fragments \(f_{i}\). Denote by \(\mathfrak{b}_{i}\) the CBF designed for the temporal fragment \(f_{i}\). Note that when the start time interval is updated online (Algorithm 4), the time domain of the CBF \(\mathfrak{b}_{i}\) will also be updated correspondingly. The continuous-time control synthesis problem (Problem 1) can be solved by the following program:
\[\begin{split}\min_{u\in U}\quad& u^{T}Qu\\ \text{s.t.}\quad&\theta_{i}(t)\Big(\frac{\partial\mathfrak{b}_{i}(x,t)}{\partial x}f(x,u)+\frac{\partial\mathfrak{b}_{i}(x,t)}{\partial t}+\alpha_{i}(\mathfrak{b}_{i}(x,t))\Big)\geq 0,\ \forall f_{i}\in\Pi,\end{split} \tag{17}\]
where \(\theta_{i}(t)=\left\{\begin{array}{ll}1,&\text{if }t\in[\underline{t}_{ \mathfrak{b}_{i}},\bar{t}_{\mathfrak{b}_{i}}]\\ 0,&\text{otherwise}\end{array}\right.\) is an indicator function assigned to each CBF \(\mathfrak{b}_{i}\). Note that since \(\underline{t}_{\mathfrak{b}_{i}},\bar{t}_{\mathfrak{b}_{i}}\) are updated online, \(\theta_{i}(t)\) is also updated online.
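When the dynamics are control-affine, the constraint in (17) is linear in \(u\), so each control update is a small QP. The sketch below, assuming the cvxpy package and placeholder numbers for the gradients of a single active CBF, illustrates one such update; it is not the implementation used for the results reported here.

```python
import numpy as np
import cvxpy as cp

# One control update of the CBF-QP (17) for x_dot = f(x) + g(x) u.
m = 2
Q = np.eye(m)
u = cp.Variable(m)

# For an active CBF b_i (theta_i(t) = 1), the constraint in (17) reads
#   db/dx (f + g u) + db/dt + alpha(b) >= 0, which is linear in u.
db_dx = np.array([1.0, -0.5])      # gradient of b_i w.r.t. x (placeholder)
f_x   = np.array([0.0,  0.0])      # drift at the current state (placeholder)
g_x   = np.eye(m)                  # input matrix at the current state (placeholder)
db_dt = -1.0                       # time derivative of b_i (placeholder)
b_val = 0.8                        # current value of b_i (placeholder)
alpha = lambda s: s                # class-K function alpha_i

constraints = [
    db_dx @ (f_x + g_x @ u) + db_dt + alpha(b_val) >= 0,
    cp.norm(u, "inf") <= 1.0,      # input set U = {|u_1| <= 1, |u_2| <= 1}
]
prob = cp.Problem(cp.Minimize(cp.quad_form(u, Q)), constraints)
prob.solve()
print("u* =", u.value)
```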
#### Iv-E2 Nested STL formulas that contain \(\vee\) operator
Let \(\varphi\) be a nested STL formula that contains \(\vee\) operators. Then, as discussed in Remark 2, the \(\vee\) operator nodes only appear in the top layers of \(\mathcal{T}_{\hat{\varphi}}\).
Recall from Remark 3 that to obtain \((\boldsymbol{x},0)\cong\mathcal{T}_{\hat{\varphi}}\), \((\boldsymbol{x},0)\) needs to satisfy at least one branch of complete paths. Deciding which group of complete paths to satisfy can be done offline or online. In the following we show the case where the branch is chosen offline.
Without loss of generality, let \(\Pi_{l}\) be the set which collects all the temporal fragments \(f_{i}\) that belong to the chosen branch. Then the online control synthesis is given by
\[\begin{split}u=\underset{u\in U}{\text{argmin}}\quad& u^{T}Qu\\ \text{s.t.}\quad&\theta_{i}(t)\Big(\frac{\partial\mathfrak{b}_{i}(x,t)}{\partial x}f(x,u)+\frac{\partial\mathfrak{b}_{i}(x,t)}{\partial t}+\alpha_{i}(\mathfrak{b}_{i}(x,t))\Big)\geq 0,\ \forall f_{i}\in\Pi_{l}.\end{split} \tag{18}\]
**Remark 5** (Branch selection).: _Other criteria can be considered when selecting the branch. For example, one can use performance indexes like robustness metrics, optimal energy, shortest path or online re-plan in the presence of environmental uncertainties. This is however out of the scope of this work and will be pursued in the future._
**Remark 6** (Online CBFs update).: _Even though the time domains of the offline designed CBFs change as the start time intervals update online, this does not impose a need to re-compute the barriers from scratch. Instead, a simple translation in time will suffice. To illustrate this point, assume that we have computed two barriers \(\mathfrak{b}_{j}(x,t),t\in[\underline{t}_{b_{j}},\bar{t}_{b_{j}}]\) and \(\mathfrak{b}_{i}(x,t),t\in[\underline{t}_{b_{i}},\bar{t}_{b_{i}}]\) for two consecutive temporal fragments \(f_{j}f_{i}=\Theta_{f_{j}}\mathbb{X}_{f_{j}}\Theta_{f_{i}}\mathbb{X}_{f_{i}}\). Denote \(\bar{t}_{s}(\mathbb{X}_{f_{j}})\) before the update by \(t_{1}\). If, at time \(t^{\prime}\in[\underline{t}_{s}(\mathbb{X}_{f_{j}}),\bar{t}_{s}(\mathbb{X}_{f_{j}})]\), \(\underline{t}_{s}(\mathbb{X}_{f_{j}})\neq\bar{t}_{s}(\mathbb{X}_{f_{j}})\) and \(\mathbf{x}(t^{\prime})\in\mathbb{X}_{f_{j}}\), then the event (15) is triggered and one sets \(\underline{t}_{s}(\mathbb{X}_{f_{j}})=\bar{t}_{s}(\mathbb{X}_{f_{j}})=t^{\prime}\). Accordingly, the new time domains of the barriers become \([\underline{t}_{b_{j}}^{\prime},\bar{t}_{b_{j}}^{\prime}]:=[t^{\prime},t^{\prime}+\mathcal{D}(\mathbb{X}_{f_{j}})]\) and \([\underline{t}_{b_{i}}^{\prime},\bar{t}_{b_{i}}^{\prime}]:=[\underline{t}_{b_{i}}+t^{\prime}-t_{1},\bar{t}_{b_{i}}+t^{\prime}-t_{1}]\). The updated barriers are \(\mathfrak{b}_{j}^{\prime}(x,t)=\mathfrak{b}_{j}(x,t+t_{1}-t^{\prime}),t\in[\underline{t}_{b_{j}}^{\prime},\bar{t}_{b_{j}}^{\prime}]\) and \(\mathfrak{b}_{i}^{\prime}(x,t)=\mathfrak{b}_{i}(x,t+t_{1}-t^{\prime}),t\in[\underline{t}_{b_{i}}^{\prime},\bar{t}_{b_{i}}^{\prime}]\), respectively._
**Remark 7**.: _Recall that the above analysis is done for nested STL formulas as per Definition 7. It is straightforward to extend the results to STL tasks that are given by conjunction and/or disjunction of nested STL formulas, for instance, \(\varphi=\mathsf{F}_{[0,15]}(\mathsf{G}_{[0,10]}\mu_{1}\vee\mu_{2}\mathsf{U}_{ [5,10]}\mu_{3})\wedge\mathsf{G}_{[a_{5},b_{6}]}\mu_{4}\). The sTLT thus is constructed for \(\hat{\varphi}=\mathsf{F}_{[0,15]}\mathsf{G}_{[2,10]}\mu_{1}\vee\mathsf{F}_{[0,15]}(\mathsf{G}_{[0,10]}\mu_{2}\wedge\mathsf{F}_{[5,10]}\mu_{3})\wedge\mathsf{ G}_{[a_{5},b_{6}]}\mu_{4}\). The implementation can be done by adding an extra barrier condition corresponding to \(\mathsf{G}_{[a_{5},b_{5}]}\mu_{4}\) into (18)._
Now we summarize our proposed solution in the following theorem.
**Theorem 3**.: _Consider a dynamical system (1) and a nested STL specification \(\varphi\). Let the sTLT be constructed according to Section III-A. If the initial condition \(x_{0}\in\mathbb{X}_{\text{root}}\) and the online program is feasible, then the resulting system trajectory satisfies \((\mathbf{x},0)\models\varphi\)._
Proof.: The proof follows from Proposition 1 and Theorem 2.
**Remark 8** (Nominal control as heuristics).: _In the literature dedicated to studying CBFs [29], it is common to incorporate nominal controls to improve overall performance. This is usually done by replacing the weighted quadratic cost in (17) and (18) with the form \((u-u_{nom})^{T}Q(u-u_{nom})\), where \(u_{nom}\) is usually designed based on heuristics. More details on designing \(u_{nom}\) are given in the Case Studies._
**Remark 9** (Online feasibility).: _Although we require each \(\mathfrak{b}_{i}\) to be a valid CBF, in general there is no guarantee that they are compatible [32], i.e., the online programs in (17) and (18) are feasible for all \((x,t)\). When the system is control-affine, the feasibility of QPs is guaranteed when the time domains of individual CBFs do not overlap. In the general case, one can verify or falsify the compatibility of multiple CBFs a priori using the method from [32]. More detailed discussions are given in Case Studies with several empirical remedies._
### _Computational complexity_
The computational complexity of the overall approach involves offline and online computational complexities. The offline phase is composed of 1) the construction of the sTLT and 2) the design of a CBF for each temporal fragment of the sTLT.
**Construction of sTLT:** Given an STL formula \(\varphi\) in desired form with \(K\) operators, the constructed sTLT contains at most \(3K+1\) nodes (\(K\) operator nodes and at most \(2K+1\) set nodes). The bottleneck for constructing the sTLT, however, is the computation of set nodes, which involves computing maximal or minimal reachable sets (defined in Definitions 4 and 5) for the continuous-time dynamical systems under consideration. In the case of a linear continuous-time system, one can compute reachable sets efficiently for large-scale linear systems with several thousand state variables for bounded, but arbitrarily varying inputs [33]. In the case of a nonlinear continuous-time system, the computation of backward reachable sets is in general undecidable [34]. Fortunately, over the past decade, new approaches (e.g., the decomposition approach [35] and the deep learning approach [36]) and software tools (e.g., the Hamilton-Jacobi Toolbox [37] and the CORA Toolbox [38]) have been developed for improving the efficiency of computing backward reachable sets. Once the sTLT is constructed, the design of a CBF further requires calculating the start time interval and duration of the set node in the corresponding temporal fragment (i.e., Algorithm 3). The complexity of Algorithm 3 is \(\mathcal{O}(1)\).
**Construction of CBFs:** The construction of CBFs can be computationally expensive for general nonlinear systems. Luckily, there are several remedies to simplify the computations. In view of the satisfaction condition of an sTLT, we could always construct a CBF based on an under-approximation of the set node in the corresponding temporal fragment when the exact reachable sets are difficult to calculate. Moreover, if the system is fully actuated, the CBF can in general be constructed analytically. One such example is the single integrator dynamics shown in the Case Studies. Other approaches include sum-of-squares techniques [39], learning-based approaches [40], and HJB reachability-based approaches [41]. In particular, we highlight that the construction of CBFs through HJB reachability analysis is a byproduct of computing the maximal/minimal reachable sets, which are essential for building the sTLT. The HJB reachability approach is also demonstrated in the Case Studies with unicycle dynamics.
**Online computations:** The online phase is composed of 1) the online update of the CBFs and 2) solving the optimization program (17) or (18). As pointed out in Remark 6, a simple translation in time is sufficient for updating the CBFs. Therefore, the complexity of this step is determined by online updating the start time intervals of set nodes (i.e., Algorithm 4), which is \(\mathcal{O}(1)\). The complexity of the optimization program (17) or (18) is determined by the system model. When the continuous-time dynamical system (1) is control-affine, i.e., (1) is of the form (9), the programs (17) and (18) are QPs.
## IV Case studies
In this section, we explain the explicit procedures to construct CBFs and formulate the online QP for the nested STL specification given in Example 1. It is worth noting that the developed theory is dynamics agnostic. We will show this by designing control synthesis schemes for both single-integrator dynamics and unicycle dynamics, where analytical and numerical CBFs are constructed, respectively. In the end of this section, we demonstrate the efficacy of our proposed method under a more complex STL specification. All the implementation code can be found at [https://github.com/xiaotanKTH/sTLT](https://github.com/xiaotanKTH/sTLT).
### _Single integrator model_
Consider a mobile robot with a single-integrator dynamics
\[\dot{x}=u, \tag{19}\]
where \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\) and \(u=(u_{1},u_{2})\in U\subset\mathbb{R}^{2}\), and the control input set \(U=\{u:|u_{1}|\leq 1,|u_{2}|\leq 1\}\). The STL task specification is given by \(\varphi=\mathsf{F}_{[0,15]}(\mathsf{G}_{[2,10]}\mu_{1}\vee\mu_{2}\mathsf{U}_{[ 5,10]}\mu_{3})\) (the same as in Example 1), where \(\mathsf{S}_{\mu_{1}}=\{x\in\mathbb{R}^{2}\mid(x_{1}+4)^{2}+(x_{2}+4)^{2}\leq 1\}\), \(\mathsf{S}_{\mu_{2}}=\{x\in\mathbb{R}^{2}\mid(x_{1}-4)^{2}+x_{2}^{2}\leq 4 ^{2}\}\), and \(\mathsf{S}_{\mu_{3}}=\{x\in\mathbb{R}^{2}\mid(x_{1}-1)^{2}+(x_{2}+4)^{2}\leq 2 ^{2}\}\). Recall from Example 1, the sTLT \(\mathcal{T}_{\hat{\varphi}}\) is plotted in Fig. 2.
One observation is that, for single integrator dynamics and a given set node \(\mathbb{X}_{\varphi_{1}}\), the sets \(\mathcal{R}^{M}(\mathbb{X}_{\varphi_{1}},[a,b])\) and \(\overline{\mathcal{R}^{m}(\mathbb{X}_{\varphi_{1}},[a,b])}\), which are the set nodes obtained using the temporal operators \(\mathsf{F}_{[a,b]}\) and \(\mathsf{G}_{[a,b]}\) respectively, are monotonic increasing with respect to the input set \(U\). Thus, to simplify the set calculation, we calculate subsets of the reachable sets by shrinking the input set \(U\) to \(U^{\prime}=\{u:\|u\|\leq 1\}\). Then one can get that
\[\mathbb{X}_{5} =\mathbb{S}_{\mu_{1}},\mathbb{X}_{8}=\mathbb{S}_{\mu_{2}}, \mathbb{X}_{9}=\mathbb{S}_{\mu_{3}},\] \[\mathbb{X}_{3} =\{x\in\mathbb{R}^{2}\mid(x_{1}+4)^{2}+(x_{2}+4)^{2}\leq 3^{2}\},\] \[\mathbb{X}_{4} =\{x\in\mathbb{R}^{2}\mid(x_{1}-4)^{2}+x_{2}^{2}\leq 4^{2}\},\] \[\mathbb{X}_{1} =\{x\in\mathbb{R}^{2}\mid(x_{1}+4)^{2}+(x_{2}+4)^{2}\leq 18^{2}\}, \tag{20}\] \[\mathbb{X}_{2} =\{x\in\mathbb{R}^{2}\mid(x_{1}-4)^{2}+x_{2}^{2}\leq 19^{2}\},\] \[\mathbb{X}_{0} =\{x\in\mathbb{R}^{2}\mid(x_{1}+4)^{2}+(x_{2}+4)^{2}\leq 18^{2} \text{ or }\] \[(x_{1}-4)^{2}+x_{2}^{2}\leq 19^{2}\}.\]
Here \(\mathbb{X}_{0},...,\mathbb{X}_{5}\) are subsets of what one could obtain with the input set \(U\). Yet the under-approximation relation still holds in view of condition iv) of Definition 13. Here we note that although the sets \(\mathbb{X}_{0},\mathbb{X}_{1},\mathbb{X}_{2}\) are not needed for CBF design (since they do not correspond to any temporal fragments), they still play an important role that will become clear later. The sets \(\mathbb{X}_{3},\mathbb{X}_{4},\mathbb{X}_{5},\mathbb{X}_{8},\mathbb{X}_{9}\) are depicted in Fig. 4.
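For the unit-speed single integrator and circular sets, the radii in (20) follow two simple rules: an \(\mathsf{F}_{[a,b]}\) node enlarges the radius by \(b\), while a \(\mathsf{G}_{[a,b]}\) node enlarges it by \(a\). The short Python sketch below only encodes this reading of (20) and is not a general reachability routine.

```python
# Radius bookkeeping behind (20); valid only for unit-speed single-integrator
# dynamics and circular sets, mirroring the example.
def F_radius(r, a, b):   # maximal reachable set for F_[a,b]
    return r + b

def G_radius(r, a, b):   # (closure of) minimal reachable set for G_[a,b]
    return r + a

r5 = 1.0                       # X5 = S_mu1, radius 1
r3 = G_radius(r5, 2, 10)       # X3: radius 3
r1 = F_radius(r3, 0, 15)       # X1: radius 18
r8 = 4.0                       # X8 = S_mu2, radius 4
r4 = G_radius(r8, 0, 10)       # X4: radius 4
r2 = F_radius(r4, 0, 15)       # X2: radius 19
print(r3, r1, r4, r2)          # 3.0 18.0 4.0 19.0, matching (20)
```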
Denote the temporal fragments \(f_{1}=\mathsf{F}_{[0,15]}\mathbb{X}_{3},f_{2}=\mathsf{G}_{[2,10]}\mathbb{X}_{ 5},f_{3}=\mathsf{F}_{[0,15]}\mathbb{X}_{4},f_{4}=\mathsf{G}_{[0,10]}\mathbb{X }_{8},f_{5}=\mathsf{F}_{[5,10]}\mathbb{X}_{9}\) and their corresponding control barrier functions \(\mathsf{b}_{1},...,\mathsf{b}_{5}\). Using Algorithm 3 and (16), one obtains the initial starting time interval, the duration, and the time domain of the corresponding CBFs:
* \([\underline{t}_{s}(\mathbb{X}_{3}),\bar{t}_{s}(\mathbb{X}_{3})]=[0,15],\mathcal{ D}(\mathbb{X}_{3})=0,[\underline{t}_{b_{1}},\bar{t}_{b_{1}}]=[0,15]\);
* \([\underline{t}_{s}(\mathbb{X}_{5}),\bar{t}_{s}(\mathbb{X}_{5})]=[2,17],\mathcal{D}(\mathbb{X}_{5})=8,[\underline{t}_{b_{2}},\bar{t}_{b_{2}}]=[2,25]\);
* \([\underline{t}_{s}(\mathbb{X}_{4}),\bar{t}_{s}(\mathbb{X}_{4})]=[0,15], \mathcal{D}(\mathbb{X}_{4})=0,[\underline{t}_{b_{3}},\bar{t}_{b_{3}}]=[0,15]\);
* \([\underline{t}_{s}(\mathbb{X}_{8}),\bar{t}_{s}(\mathbb{X}_{8})]=[0,15],\mathcal{ D}(\mathbb{X}_{8})=10,[\underline{t}_{b_{4}},\bar{t}_{b_{4}}]=[0,25]\);
* \([\underline{t}_{s}(\mathbb{X}_{9}),\bar{t}_{s}(\mathbb{X}_{9})]=[5,25],\mathcal{D}(\mathbb{X}_{9})=0,[\underline{t}_{b_{5}},\bar{t}_{b_{5}}]=[5,25]\).
Taking into account the velocity limit, we design the initial CBFs as
\[\mathsf{b}_{1}(x,t) =(18-t)^{2}-(x_{1}+4)^{2}-(x_{2}+4)^{2},t\in[0,15];\] \[\mathsf{b}_{2}(x,t) =\left\{\begin{array}{l}(18-t)^{2}-(x_{1}+4)^{2}-(x_{2}+4)^{2},t \in[2,17];\\ 1^{2}-(x_{1}+4)^{2}-(x_{2}+4)^{2},t\in[17,25];\\ \end{array}\right.\] \[\mathsf{b}_{3}(x,t) =(19-t)^{2}-(x_{1}-4)^{2}-x_{2}^{2},t\in[0,15];\] \[\mathsf{b}_{4}(x,t) =\left\{\begin{array}{l}(19-t)^{2}-(x_{1}-4)^{2}-x_{2}^{2},t\in[0,15 ];\\ 4^{2}-(x_{1}-4)^{2}-x_{2}^{2},t\in[15,25];\\ \end{array}\right. \tag{21}\] \[\mathsf{b}_{5}(x,t) =(27-t)^{2}-(x_{1}-1)^{2}-(x_{2}+4)^{2},t\in[5,25].\]
It is evident that the zero super-level sets of the barriers are circular, which either remain static or shrink in radius at a velocity of 1. If the robot is about to leave the safe region, i.e., when \(\mathsf{b}_{i}(x,t)=0\), the robot can always steer itself towards the center with unit velocity, and thus always stay safe. One could easily verify that, for \(i=1,2,...,5\), 1) \(\mathsf{b}_{i}(x,t)\) is a valid CBF for the single integrator dynamics in (19); 2) \(\mathsf{b}_{i}(x,t)=h_{\mathbb{X}_{f_{i}}}(x),\forall t\in[\bar{t}_{s}(\mathbb{X}_{ f_{i}}),t_{e}(\mathbb{X}_{f_{i}})]\), where \(\mathbb{X}_{f_{i}}\) is the set node in the corresponding temporal fragment \(f_{i}\); 3) \(\mathsf{b}_{i}(x,\underline{t}_{b_{j}})\geq\mathsf{b}_{j}(x,\underline{t}_{b_{j }}),\forall x\), where the corresponding temporal fragment \(f_{j}\) is the predecessor of \(f_{i}\). Thus, CBFs in (21) fulfill the conditions in Sec. III.D. Note that here we calculate the initial CBFs, which will be updated online according to Algorithm 4 and Remark 6.
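Because the CBFs in (21) are quadratic in the position and the dynamics are a single integrator, each constraint in (18) is linear in \(u\); with a single active barrier and \(\alpha_{i}(s)=s\), the minimum-energy control even admits a closed form. The sketch below works this out for \(\mathsf{b}_{1}\) at an illustrative state; it is a worked example under these assumptions, not the code used for the reported simulations.

```python
import numpy as np

# Closed-form minimum-energy control for one active CBF from (21) (here b_1)
# under single-integrator dynamics and alpha(s) = s.
def b1(x, t):
    return (18.0 - t) ** 2 - (x[0] + 4.0) ** 2 - (x[1] + 4.0) ** 2

def u_star(x, t):
    c = np.array([-4.0, -4.0])
    a = -2.0 * (x - c)                 # db1/dx
    db_dt = -2.0 * (18.0 - t)          # db1/dt
    d = -(db_dt + b1(x, t))            # constraint: a @ u >= d
    if d <= 0.0 or not np.any(a):      # constraint inactive at this state
        return np.zeros(2)
    return (d / (a @ a)) * a           # minimum-norm point of the half space

x, t = np.array([1.5, 1.5]), 10.0
u = u_star(x, t)
print(u, np.max(np.abs(u)) <= 1.0)     # control input and whether |u_i| <= 1 holds
```

The resulting control points toward the center of \(\mathbb{X}_{1}\), as expected; whenever this unconstrained minimizer exceeds the input bound, the box constraint \(u\in U\) in (18) must of course be kept explicitly.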
Since the nested STL formula contains \(\vee\) operator, we need to determine which branch out of two branches \(\{\mathbf{p}_{1}\}\) and \(\{\mathbf{p}_{2},\mathbf{p}_{3}\}\) (as in Example III-B) needs to be satisfied. The guideline to choose the branch is as follows: if the initial condition \(x_{0}\in\mathbb{X}_{1}\), we can choose \(\Pi_{l}=\{f_{1},f_{2}\}\); if \(x_{0}\in\mathbb{X}_{2}\), we can choose \(\Pi_{l}=\{f_{3},f_{4},f_{5}\}\); if \(x_{0}\notin\mathbb{X}_{0}\), then the proposed scheme fails to generate a control signal with correctness guarantee and a larger input bound is expected.
It is worth highlighting that in the special case of \(\Pi_{l}=\{f_{1},f_{2}\
STL specification is fulfilled if we choose the branch \(\{\mathbf{p}_{1}\}\); yet the online QP becomes infeasible if we choose the branch \(\{\mathbf{p}_{2},\mathbf{p}_{3}\}\). This is in line with the theoretical results.
When dealing with regions of irregular shapes or general nonlinear dynamics, the set nodes as well as the CBFs are difficult to calculate analytically. In the following we show a numerical construction scheme.
### _Unicycle model_
Consider a mobile robot with a unicycle dynamics
\[\begin{split}\dot{x}_{1}&=v\cos\theta,\\ \dot{x}_{2}&=v\sin\theta,\\ \dot{\theta}&=\omega,\end{split} \tag{22}\]
where the state \(x=(x_{1},x_{2},\theta)\), the control input \(u=(v,\omega)\). Here \((x_{1},x_{2})\) denotes the position, \(\theta\) the heading angle, and \(v\) the velocity, \(\omega\) the turning rate. We assume that the control input \(u=(v,\omega)\in U=\{u\ |\ |v|\leq 1,|\omega|\leq 1\}\). The STL task specification is again given by \(\varphi=\mathsf{F}_{[0,15]}(\mathsf{G}_{[2,10]}\mu_{1}\vee\mu_{2}\mathsf{U}_{[ 5,10]}\mu_{3})\) (the same as in Example 1), where \(\mathbb{S}_{\mu_{1}}=\{x\in\mathbb{R}^{2}\times S^{1}\ |\ (x_{1}+4)^{2}+(x_{2}+4)^{2} \leq 1\}\), \(\mathbb{S}_{\mu_{2}}=\{x\in\mathbb{R}^{2}\times S^{1}\ |\ (x_{1}-4)^{2}+x_{2}^{2} \leq 4^{2}\}\), and \(\mathbb{S}_{\mu_{3}}=\{x\in\mathbb{R}^{2}\times S^{1}\ |\ (x_{1}-1)^{2}+(x_{2}+4)^{2} \leq 2^{2}\}\). Recall from Example 1, the sTLT \(\mathcal{T}_{\widehat{\varphi}}\) is plotted in Fig. 2.
We note that the temporal fragments, their time encodings, the time domains for the barrier functions, and the branch choosing guidelines are similar to those as in the case of single integrator dynamics and thus omitted here. We will instead explain how the set nodes as well as the barrier functions are constructed through the use of a level-set reachability analysis toolbox [37, 42].
Here the set nodes with the input set \(U=\{u\ |\ |v|\leq 1,|\omega|\leq 1\}\) are computed via reachability analysis. We may also use a shrunken input set to mitigate the online QP infeasibility issue. For brevity, we numerically obtain the value function \(h_{\mathbb{X}_{i}}\) for the sets \(\mathbb{X}_{i},i=0,1,..,9,\) following the reachability operations in Example 1. The projection of the sets \(\mathbb{X}_{3},\mathbb{X}_{4},\mathbb{X}_{5},\mathbb{X}_{8},\mathbb{X}_{9}\) onto the first two dimensions is depicted in Fig. 5.
Now we show how the CBFs are constructed. Take the construction of \(\mathsf{b}_{2}\) as an example, which corresponds to \(f_{2}=\mathsf{G}_{[2,10]}\mathbb{X}_{5}\). Recall \(\mathbb{X}_{5}=\{x\ |\ h_{\mathbb{X}_{5}}(x)\geq 0\}\), \([\underline{t}_{s}(\mathbb{X}_{5}),\bar{t}_{s}(\mathbb{X}_{5})]=[2,17]\), \([\underline{t}_{\mathsf{b}_{2}},\bar{t}_{\mathsf{b}_{2}}]=[2,25]\). Here the function \(\mathsf{b}_{2}\) is expected to be a valid control barrier function for the unicycle dynamics in (22) which guides \(\mathbf{x}(t)\) towards the set \(\mathbb{X}_{5}\) for \(t\in[2,17]\) and keeps \(\mathbf{x}(t)\) in the set \(\mathbb{X}_{5}\) for \(t\in[17,25]\). We construct such a \(\mathsf{b}_{2}\) by solving the following optimal control problem:
\[V(x,t)=\max_{\mathbf{u}(s),\,s\in[t,17]}\ h_{\mathbb{X}_{5}}\big(\mathbf{x}^{\mathbf{u}}_{x,t}(17)\big)\quad\text{s.t. the dynamics in (22) and }\mathbf{u}(s)\in U,\ \forall s\in[t,17], \tag{23}\]

where \(\mathbf{x}^{\mathbf{u}}_{x,t}(\cdot)\) denotes the trajectory of (22) that starts from \(x\) at time \(t\) under the input signal \(\mathbf{u}\).
\((-2,3.5,\pi/2)\), both of which lie within \(\mathbb{X}_{1}\cap\mathbb{X}_{2}\). Here the \(\alpha_{i}\) in (18) is set to be \(\alpha_{i}(v)=v,v\in\mathbb{R},\forall i\), and \(Q\) in (18) an identity matrix. An intuitive nominal controller similar to the single integrator case is also utilized in this example. For all the trajectories, the input bound \(U\) is respected. Again, we observe that every trajectory satisfies the STL specification \(\varphi\).
### _Examples for more complex specifications_
In this subsection, we consider the more complex STL formula below
\[\varphi=\mathsf{G}_{[0,1]}\mathsf{F}_{[2,3]}\mu_{1}\wedge \mathsf{F}_{[6,7]}\mathsf{G}_{[1,2]}\mu_{2}\wedge\mathsf{F}_{[13,14]}(\mu_{3} \mathsf{U}_{[1,4]}\mu_{1})\\ \wedge\mathsf{G}_{[0,20]}\neg\mu_{4}\wedge\mathsf{F}_{[15,20]}\mu _{5}. \tag{24}\]
The control synthesis process consists of constructing the corresponding sTLT, calculating the set nodes using reachability analysis, calculating the time encodings, and constructing the corresponding CBFs. This offline design process is similar to what we have detailed before, except that the region associated with the predicate \(\mu_{5}\) is a square. We take two different approaches: in the case of single integrator dynamics (Fig. 6), we use the largest inscribed circular region to under-approximate \(\mathscr{S}_{\mu_{5}}\), and analytical CBFs are constructed; in the case of unicycle dynamics (Fig. 7), we use the signed distance function of the square as the superlevel set function and calculate the value function to the HJB equation as the CBF. Implementation details can be found in the online code repository. For the online synthesis, since the formula does not contain \(\vee\), the QP in (17) will be used. Figure 6 and Figure 7 demonstrate the resulting system behaviors for a mobile robot with single integrator dynamics in (19) and with the unicycle dynamics in (22), respectively. We note that all trajectories satisfy the STL specification in (24), while respecting the dynamics and input bounds.
## V Conclusions
In this paper, we develop an efficient control synthesis framework for continuous-time dynamical systems under nested STL specifications. To this purpose, we introduce the notion of a signal temporal logic tree (sTLT), detail its construction from a given STL formula, its semantics (i.e., satisfaction condition), and the equivalence or under-approximation relation between the sTLT and the STL formula. Under the guidance of the sTLT, we show how to design CBFs and update their activation time intervals online. The control signal is thus given by an online CBF-based program. For future work, we will tackle the motion coordination problem of multi-agent systems under STL specifications leveraging task decomposition and distributed CBF techniques.
|
2309.04154 | A novel model for layer jamming-based continuum robots | Continuum robots with variable stiffness have gained wide popularity in the
last decade. Layer jamming (LJ) has emerged as a simple and efficient technique
to achieve tunable stiffness for continuum robots. Despite its merits, the
development of a control-oriented dynamical model tailored for this specific
class of robots remains an open problem in the literature. This paper aims to
present the first solution, to the best of our knowledge, to close the gap. We
propose an energy-based model that is integrated with the LuGre frictional
model for LJ-based continuum robots. Then, we take a comprehensive theoretical
analysis for this model, focusing on two fundamental characteristics of
LJ-based continuum robots: shape locking and adjustable stiffness. To validate
the modeling approach and theoretical results, a series of experiments using
our \textit{OctRobot-I} continuum robotic platform was conducted. The results
show that the proposed model is capable of interpreting and predicting the
dynamical behaviors in LJ-based continuum robots. | Bowen Yi, Yeman Fan, Dikai Liu | 2023-09-08T06:43:11Z | http://arxiv.org/abs/2309.04154v2 | # A Novel Model for Layer Jamming-based Continuum Robots
###### Abstract
Continuum robots with variable stiffness have gained wide popularity in the last decade. Layer jamming (LJ) has emerged as a simple and efficient technique to achieve tunable stiffness for continuum robots. Despite its merits, the development of a control-oriented dynamical model tailored for this specific class of robots remains an open problem in the literature. This paper aims to present the first solution, to the best of our knowledge, to close the gap. We propose an energy-based model that is integrated with the LuGre frictional model for LJ-based continuum robots. Then, we take a comprehensive theoretical analysis for this model, focusing on two fundamental characteristics of LJ-based continuum robots: shape locking and adjustable stiffness. To validate the modeling approach and theoretical results, a series of experiments using our _OctRobot-I_ continuum robotic platform was conducted. The results show that the proposed model is capable of interpreting and predicting the dynamical behaviors in LJ-based continuum robots.
## I Introduction
Continuum robots can be used in many applications due to their inherent flexibility and light weight. When interacting with the environment or humans, there is a need to actively change the dynamical response of the robots, particularly mechanical impedance. Indeed, numerous continuum robots have integrated variable stiffness techniques within their design, allowing for flexible soft motion or rigid resistance and greatly expanding their range of applications [4, 16].
Among various stiffening techniques, jamming approaches have shown great success in adjustable stiffness continuum robots with rapid reversible responses [19, 5, 15]. They can be broadly classified into fiber, granular, and layer jamming. Notably, layer jamming (LJ), the concept of which was originally proposed in [12, 13], has received particular attention due to light weight and compactness. It utilizes thin plastic or paper layers as its jamming flaps. For LJ-based continuum robots, there is an airtight pneumatic chamber in which a series of overlapping layers are installed to cover the robot spine or wrapped as the robot body. This mechanism exploits the friction between layers that can be controlled by external pressure via a vacuum, and provides a large range of controllable stiffness.
Feedback control is one of the most important topics in the field of continuum robotics, though still in infancy. Over the past few years, _model-based_ approaches have gained resurgence since more experimental evidence has shown that feedback approaches are robust to approximations for continuum robot dynamics [7, 9]. However, as figured out in [17], the modeling of LJ-based continuum robots with variable stiffness has not been well addressed yet. There are some recent works on analytical or computational models to characterize the mechanism of stiffness variation in LJ-based continuum robots [17, 13, 22]. However, to the best of the authors' knowledge, there is no _control-oriented_ model in the literature that approximates their dynamical behaviors.
In this paper, we aim to close the above-mentioned gap by proposing a novel model for a class of LJ-based, tendon-driven continuum robots. It integrates the energy-based modeling technique and the LuGre frictional model [1]. The overall model is in port-Hamiltonian form with the vacuum pressure gradient as an additional control input. We theoretically prove the model's ability to illustrate the important phenomena of shape locking and adjustable stiffness. Besides, we present an analytical relation between stiffness and negative pressure.
_Notation._ All functions and mappings are assumed to be \(C^{2}\)-continuous. \(I_{n}\) is the \(n\times n\) identity matrix, \(0_{n\times s}\) is an \(n\times s\) matrix of zeros, and \(\mathbf{1}_{n}:=\text{col}(1,\ldots,1)\in\mathbb{R}^{n}\). For \(x\in\mathbb{R}^{n}\), \(S\in\mathbb{R}^{n\times n}\), \(S=S^{\top}>0\), we denote the Euclidean norm \(|x|^{2}:=x^{\top}x\), and the weighted-norm \(\|x\|_{S}^{2}:=x^{\top}Sx\). Given a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), we define the differential operators \(\nabla f:=(\frac{\partial f}{\partial x})^{\top},\ \nabla_{x_{i}}f:=(\frac{ \partial f}{\partial x_{i}})^{\top},\) where \(x_{i}\in\mathbb{R}^{p}\) is an element of the vector \(x\). The set \(\bar{n}\) is defined as \(\bar{n}:=\{1,\ldots,n\}\). We use \(\text{diag}\{x_{i}\}\ (i\in\bar{n})\) to represent the diagonal matrix \(\text{diag}\{x_{1},\ldots,x_{n}\}\), and define the set \(B_{\varepsilon}(\mathcal{X}):=\{x\in\mathbb{R}^{n}:\inf_{y\in\mathcal{X}}|x-y| \leq\varepsilon\}\) for a given
set \(\mathcal{X}\subset\mathbb{R}^{n}\). When clear from the context, the arguments of the functions may be omitted.

Fig. 1: Schematic of a layer-jamming structure in continuum robots
## II Dynamic Modeling
### _Preliminary of Jamming-free Model_
In our previous work [21], we consider the control-oriented modeling of a class of underactuated tendon-driven continuum robots. A high-dimensional rigid link model is used to approximate the dynamical behavior of continuum robots as follows:
\[\left[\begin{array}{c}\dot{q}\\ \dot{p}\end{array}\right]=\left[\begin{array}{cc}0_{n\times n}&I_{n}\\ -I_{n}&-D(q)\end{array}\right]\left[\begin{array}{c}\nabla_{q}H\\ \nabla_{p}H\end{array}\right]+\left[\begin{array}{c}0_{n}\\ G(q)u\end{array}\right] \tag{1}\]
with the configuration variable \(q\in\mathcal{X}\subset\mathbb{R}^{n}\), the generalized momentum \(p\in\mathbb{R}^{n}\), the input matrix \(G:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m}\), the damping matrix \(D(q)\in\mathbb{R}_{\succeq 0}^{n\times n}\), and the tension input \(u\in\mathbb{R}^{m}\). The total energy is characterized by the Hamiltonian as
\[H(q,p)=\frac{1}{2}p^{\top}M^{-1}(q)p+U(q), \tag{2}\]
where \(M:\mathbb{R}^{n}\rightarrow\mathbb{R}_{\succ 0}^{n\times n}\) is the positive definite inertial matrix satisfying \(m_{1}I\preceq M(q)\preceq m_{2}I\) for some \(m_{2}\geq m_{1}>0\), and the potential energy function \(U(q)\) contains the gravitational part \(U_{\mathtt{G}}\) and the elastic part \(U_{\mathtt{E}}\) that are functions of \(q\), i.e.
\[U(q)=U_{\mathtt{G}}(q)+U_{\mathtt{E}}(q). \tag{3}\]
It is shown in [21] that these functions can be modeled as
\[\begin{array}{rcl}U_{\mathtt{G}}&=&\alpha_{1}[1-\cos(q_{\Sigma})]\\ U_{\mathtt{E}}&=&\frac{1}{2}q^{\top}\Lambda q+U_{0}\end{array} \tag{4}\]
with the diagonal matrix \(\Lambda:=\text{diag}\{\alpha_{2},\ldots,\alpha_{2}\}\), some positive scalar \(U_{0}\) and \(q_{\Sigma}:=\sum_{i\in\bar{n}}q_{i}\). Note that \(\alpha_{1}\) and \(\alpha_{2}\) are the gravitational and elastic coefficients, respectively. We refer the interested reader to [21] for more details about the robotic structure and its modeling procedure.
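As a small illustration of (3)-(4), the sketch below evaluates \(U(q)\) and its gradient \(\nabla U(q)=\alpha_{1}\sin(q_{\Sigma})\mathbf{1}_{n}+\alpha_{2}q\), the quantity that enters the equilibrium analysis of Section III; the coefficient values are placeholders.

```python
import numpy as np

# Potential energy (3)-(4) and its gradient; alpha_1, alpha_2, U_0 are placeholders.
alpha1, alpha2, U0, n = 0.8, 2.5, 0.0, 3

def U(q):
    return alpha1 * (1.0 - np.cos(np.sum(q))) + 0.5 * alpha2 * q @ q + U0

def grad_U(q):
    return alpha1 * np.sin(np.sum(q)) * np.ones(n) + alpha2 * q

q = np.array([0.10, 0.20, -0.05])
print(U(q), grad_U(q))
```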
### _Friction Model for Layer Jamming_
In continuum robots, the layer jamming technique provides a lightweight and rapid-response approach to adjust the robots' stiffness [3, 10, 17]. In these robots, layer jamming - consisting of a laminate of flexible strips or sheets - is installed throughout the continuum robot's body and wound up into a tube sheath, as illustrated in Fig. 1. In [21], the robot is operated in a mode in which it behaves highly compliantly. Meanwhile, the jamming sheath forms an enclosed structure in which we may apply a negative pressure \(-u_{\mathtt{P}}\leq 0\) (relative to the atmospheric pressure) [8]. As a result, friction between strips or sheets would increase dramatically, thus changing the robot stiffness and dissipating energy [17].
As described above, the pressure value \(u_{\mathtt{P}}\in\mathbb{R}_{\geq 0}\) can be adjusted online and viewed as an additional input that changes the robotic dynamics. One of the main objectives of this paper is to propose a control-oriented dynamical model for LJ-based continuum robots. For control purposes, it is useful to have simple models that describe the essential properties of continuum robots with layer jamming; in particular, when the pressure \(u_{\mathtt{P}}=0\), the model to be obtained should degrade into the LJ-free model in Section II-A.
First, we discuss the dependence of the plant parameters \(\alpha_{i}\) (\(i=1,2\)) and the function \(D(q)\) on \(u_{\mathtt{P}}\). We make the following assumptions.
**Assumption 1**: _During the variation process of \(u_{\mathtt{P}}\), the continuum robot satisfies_
**(a)**: _The mass change of air in the airtight membrane is negligible. Hence, the gravitational parameter \(\alpha_{1}\) is constant and thus independent of the pressure \(u_{\mathtt{P}}\)._
**(b)**: _The elastic coefficient_ \(\alpha_{2}>0\) _is constant._
**Assumption 2**: _The energy dissipation of the robot is only derived from the lumped friction torque with \(D(q)=0\)._
Under the above assumptions, the model (1) of the LJ-based continuum robot can be compactly written as
\[\Sigma_{r}:\quad\begin{cases}\dot{x}=J\nabla H(x)+G_{r}(x)u-G_{f}\tau_{f}\\ v=G_{f}^{\top}\nabla H(x)\end{cases} \tag{5}\]
with the new variable \(x:=\text{col}(q,p)\) and
\[\begin{split} G_{r}&:=\left[\begin{array}{c}0_{n\times m }\\ G(q)\end{array}\right],\\ G_{f}&:=\left[\begin{array}{c}0_{n\times n}\\ I_{n}\end{array}\right],\\ J&:=\left[\begin{array}{cc}0_{n\times n}&I_{n}\\ -I_{n}&0_{n\times n}\end{array}\right],\end{split}\]
where \(\tau_{f}\in\mathbb{R}^{n}\) is the lumped frictional torque acting in the links. If we view the friction \(\tau_{f}\) as the "input", then the _passive output_\(v\in\mathbb{R}^{n}\) is, indeed, the generalized velocity [18, 20], i.e.,
\[v=M^{-1}(q)p. \tag{6}\]
The jamming phenomenon is due to the distributed friction along the layers, and the remaining task boils down to studying the modeling of the frictional effects from \(\tau_{f}\) and its interconnection to the system \(\Sigma_{r}\).
To take this behavior into account in the model, we consider the LuGre friction model which was proposed in [6]. It is a _dynamical_ model capable of describing many frictional properties, such as zero slip displacement (a.k.a. micromotion), stick-slip motion, invariance, state boundedness, and passivity [1].
Before presenting the LuGre model, we make the following assumption about the (lumped) normal force \(F_{n}>0\) between the surfaces.
**Assumption 3**: _The pressure along the layer is uniformly distributed with the value \((-u_{\mathtt{P}})\) and is proportional to the lumped normal force, i.e., \(F_{n}\propto u_{\mathtt{P}}\)._
To facilitate the following analysis, we adopt the port-Hamiltonian form of the LuGre model [14]:
\[\Sigma_{z}:\quad\begin{cases}\dot{z}=-R_{z}(v)\nabla H_{z}(z)+[\mathcal{N}(v) -\mathcal{P}(v)]v\\ \tau_{f}=[\mathcal{N}(v)+\mathcal{P}(v)]^{\top}\nabla H_{z}(z)+Sv,\end{cases} \tag{7}\]
where \(z\in\mathbb{R}^{n}\) represents the virtual bristle deflection at each joint, \(v\in\mathbb{R}^{n}\) is the input - the relative generalized velocity of the surfaces in contact given by (6), and the output
\(\tau_{f}\in\mathbb{R}^{n}\) is the frictional torque. The mappings in \(\Sigma_{z}\) include the virtual bristle potential energy
\[H_{z}(z)=\frac{1}{2}\sigma_{0}u_{\mathsf{P}}|z|^{2}, \tag{8}\]
the damping matrix
\[R_{z}(v) = \text{diag}(\beta_{1}(v),\ldots,\beta_{n}(v)) \tag{9}\] \[\beta_{i}(v) := \frac{|v_{i}|}{u_{\mathsf{P}}\rho(v_{i})},\quad i\in\bar{n}\]
the state-modulation matrices
\[\mathcal{N}(v) := I_{n}-\frac{1}{2}\sigma_{1}u_{\mathsf{P}}R_{z}(v) \tag{10}\] \[\mathcal{P}(v) := -\frac{1}{2}\sigma_{1}u_{\mathsf{P}}R_{z}(v)\] \[S := (\sigma_{1}+\sigma_{2})u_{\mathsf{P}}I_{n},\]
and the function
\[\rho(v_{i})=\mu_{C}+(\mu_{S}-\mu_{C})\exp\left(-\left|\frac{v_{i}}{v_{s}} \right|^{\sigma_{3}}\right). \tag{11}\]
Physical meanings of coefficients in the above model are summarized in Table I. The interested reader may refer to [2, 6, 14] for additional details.
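To get a feel for (7)-(11), the following sketch integrates the scalar (single-joint) LuGre dynamics under a prescribed velocity profile and a constant pressure using forward Euler; all coefficient values are illustrative placeholders rather than identified parameters of the robot.

```python
import numpy as np

# Forward-Euler simulation of the scalar LuGre model (7)-(11) for one joint.
sigma0, sigma1, sigma2, sigma3 = 1e3, 10.0, 0.5, 1.0
mu_C, mu_S, v_s = 0.3, 0.5, 0.05
u_P = 20.0                                     # constant vacuum pressure

def rho(v):                                    # (11)
    return mu_C + (mu_S - mu_C) * np.exp(-np.abs(v / v_s) ** sigma3)

dt, T = 1e-4, 2.0
ts = np.arange(0.0, T, dt)
v = 0.2 * np.sin(2 * np.pi * ts)               # prescribed relative velocity
z, tau = 0.0, np.zeros_like(ts)
for k, vk in enumerate(v):
    beta = abs(vk) / (u_P * rho(vk))           # (9)
    tau[k] = (1.0 - sigma1 * u_P * beta) * sigma0 * u_P * z \
             + (sigma1 + sigma2) * u_P * vk    # output equation in (7), using (10)
    z += dt * (-beta * sigma0 * u_P * z + vk)  # z-dynamics in (7), since N - P = I
print("peak friction torque:", np.max(np.abs(tau)))
print("bristle bound mu_S/sigma0:", mu_S / sigma0, " |z(T)| =", abs(z))
```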
**Remark 1**: _The model \(\Sigma_{z}\) is well-posed for all \(u_{\mathsf{P}}\geq 0\) even though \(u_{\mathsf{P}}\) appears in the denominator of the function \(\beta_{i}\) in (9). This is due to the product \(R_{z}(v)\nabla H_{z}\) in the dynamics and \(\nabla H_{z}\) being linear in \(u_{\mathsf{P}}\). If the pressure \(u_{\mathsf{P}}=0\), we have \(\tau_{f}=0\), in which case we may roughly regard no friction as being injected into the robotic mechanical dynamics \(\Sigma_{r}\). The friction torque \(\tau_{f}\) at the steady-state stage becomes \(\tau_{f}^{\text{ss}}=[\text{diag}\{\rho(v_{i})\}\text{sign}(v)+\sigma_{2}v]u_{\mathsf{P}}\), with \(\text{sign}(v):=\text{col}(\text{sign}(v_{1}),\ldots,\text{sign}(v_{n}))\) collecting the signs of \(v_{i}\)._
**Remark 2**: _The LuGre model has the boundedness property for the internal state, i.e., the set \(\mathcal{E}_{z}:=\{z\in\mathbb{R}^{n}:\;|z|\leq\frac{\mu_{S}}{\sigma_{0}}\}\)[1]. The input-output pair \((v,\tau_{f})\) satisfies the particularly appealing passivity property_
\[\int_{0}^{t}v^{\top}(s)\tau_{f}(s)ds\geq H_{z}(z(t))-H_{z}(z(0)),\quad\forall t\geq 0\]
_if the coefficients satisfy some inequality constraints [2]._
### _Variable Stiffness Model with Layer Jaming_
The overall dynamical model for the LJ-based continuum robots is the negative interconnection of \(\Sigma_{r}\) and \(\Sigma_{z}\). For convenience, we define the full systems state as
\[\chi:=\text{col}(q,p,z)\in\mathbb{R}^{3n}. \tag{12}\]
Its dynamics can be compactly written in port-Hamiltonian form as [14]
\[\dot{\chi}=[\mathcal{J}-\mathcal{R}]\nabla\mathcal{H}+\mathcal{G}(\chi)u \tag{13}\]
with the total Hamiltonian
\[\mathcal{H}(\chi,u_{\mathsf{P}}) := H(q,p)+H_{z}(z,u_{\mathsf{P}}) \tag{14}\] \[= \underbrace{\frac{1}{2}p^{\top}M^{-1}(q)p}_{\text{kinetic energy}}+\underbrace{\frac{1}{2}\sigma_{0}u_{\mathsf{P}}|z|^{2}+U(q)}_{\text{potential energy}}\]
and the matrices
\[\mathcal{J}(\chi,u_{\mathsf{P}}) := \left[\begin{array}{cc}J&-G_{f}\mathcal{N}^{\top}\\ \mathcal{N}G_{f}^{\top}&0_{n\times n}\end{array}\right]\] \[\mathcal{R}(\chi,u_{\mathsf{P}}) := \left[\begin{array}{cc}G_{f}S(v)G_{f}^{\top}&G_{f}\mathcal{P}^{ \top}\\ \mathcal{P}^{\top}G_{f}^{\top}&R_{z}\end{array}\right] \tag{15}\] \[\mathcal{G}(\chi) := \left[G_{r}^{\top}&0_{n\times m}^{\top}\right]^{\top}.\]
Note that \(\mathcal{N},\mathcal{P}\) and \(S\) are linear functions of the pressure \(u_{\mathsf{P}}\). The overall model has an \((m+1)\)-dimensional input
\[u_{\chi}=\left[\begin{array}{c}u\\ u_{\mathsf{P}}\end{array}\right]\]
with all the elements non-negative.
**Remark 3**: _The damping matrix \(\mathcal{R}\) can be expanded as \(\mathcal{R}=\text{diag}(0_{n\times n},\mathcal{R}_{22})\) with_
\[\mathcal{R}_{22}:=\begin{bmatrix}(\sigma_{1}+\sigma_{2})u_{\mathsf{P}}I_{n}&- \frac{1}{2}\sigma_{1}u_{\mathsf{P}}R_{z}(v)\\ -\frac{1}{2}\sigma_{1}u_{\mathsf{P}}R_{z}^{\top}(v)&R_{z}(v)\end{bmatrix}.\]
_Clearly, the positive semidefiniteness of \(\mathcal{R}\) is equivalent to_
\[\sigma_{1}+\sigma_{2}-\frac{\sigma_{1}^{2}|v_{i}|}{4\rho(v_{i})}\geq 0,\quad i\in\bar{n}. \tag{16}\]
_For any coefficient, a small \(|v|\) can always guarantee (16), thus making \(\mathcal{R}\) qualified as a damping matrix._
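A quick numerical check of Remark 3: for each joint, the corresponding \(2\times 2\) block of \(\mathcal{R}_{22}\) is positive semidefinite exactly when the scalar condition (16) holds, as the sketch below illustrates for placeholder coefficients.

```python
import numpy as np

# Per-joint 2x2 block of R_22 versus the scalar condition (16); placeholders only.
sigma0, sigma1, sigma2, sigma3 = 1e3, 10.0, 0.5, 1.0
mu_C, mu_S, v_s, u_P = 0.3, 0.5, 0.05, 20.0

def rho(v):
    return mu_C + (mu_S - mu_C) * np.exp(-np.abs(v / v_s) ** sigma3)

for v in (1e-3, 0.05, 0.5):
    beta = abs(v) / (u_P * rho(v))
    R22 = np.array([[(sigma1 + sigma2) * u_P,   -0.5 * sigma1 * u_P * beta],
                    [-0.5 * sigma1 * u_P * beta, beta]])
    psd = np.linalg.eigvalsh(R22).min() >= -1e-9
    cond = sigma1 + sigma2 - sigma1 ** 2 * abs(v) / (4.0 * rho(v)) >= 0
    print(f"|v| = {v}: block psd = {psd}, condition (16) = {cond}")
```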
## III Interpretation to Key Phenomena
In this section, we theoretically verify that the model proposed in Section II-C can interpret the two phenomena in LJ-based continuum robots - shape locking and tunable stiffness.
### _Shape Locking_
Shape locking is one of the important capabilities of LJ structures when applied to continuum robots [17, 19, 22]. Tensions along the cables can change the robot's configuration from its undeformed shape; when a vacuum with negative pressure \((-u_{\mathsf{P}})\) is applied before the release of tension actuation, the continuum robot is able to preserve its current shape. This phenomenon is known as shape locking. In this subsection, we aim to illustrate that shape locking can be characterized by the proposed model. First, we formulate its mathematical definition as follows.
**Definition 1**: _(Shape Locking) Consider the LJ-based continuum robotic model with zero input \(u\). If the deformed
configuration \(\bar{q}\neq 0_{3}\) renders the set \(\mathcal{E}_{\mathfrak{SL}}:=\{(q,p,z)\in\mathbb{R}^{3n}:q=\bar{q},\ p=0_{3}\}\) forward invariant under \(u_{\mathfrak{p}}>0\), i.e.,
\[\chi(0)\in\mathcal{E}_{\mathfrak{SL}} \Longrightarrow [\ \dot{q}(t)=0,\ \dot{p}(t)=0,\ \forall t\geq 0\ ], \tag{17}\]
then we call this invariance shape locking.
The following proposition gives a theoretical analysis of the shape-locking phenomenon using the proposed model. Immediately after its proof, we will show an intuitive illustration.
**Proposition 1**: _Consider the LJ-based continuum robot model (13) without external input, i.e., \(u=0_{m}\). For arbitrary configuration \(q_{a}\in\mathbb{R}^{n}\) and a constant pressure \(u_{\mathfrak{p}}>0\),_
* _There exists a vector_ \(z_{a}\in\mathbb{R}^{n}\) _such that_ \((q_{a},0_{n},z_{a})\) _is an equilibrium;_
* _The equilibria manifold_ \[\mathcal{M}:=\{(q,p,z)\in\mathbb{R}^{3n}:p=0,\ \nabla U(q)=\sigma_{0}u_{ \mathfrak{p}}z\}\] _is locally asymptotically stable._
First, let us verify the existence of \(z_{a}\) such that \((q_{a},0_{n},z_{a})\) is an equilibrium. From (6), \(p=0\) implies the velocity \(v=0\), thus
\[\dot{q}=\nabla_{p}H=M^{-1}(q)p=0.\]
The dynamics of \(z\) is given by
\[\dot{z}=-R_{z}(0)\nabla H_{z}+[\mathcal{N}-\mathcal{P}]v\Big{|}_{v=0}=0,\]
where we have used the fact \(R_{z}(0)=0\) from (9). For the momentum, we have the following:
\[\dot{p} = -\frac{\partial}{\partial q}\left\{\frac{1}{2}p^{\top}M^{-1}(q)p \right\}-\nabla U(q_{a})+Sv\] \[+[\mathcal{N}+\mathcal{P}]\nabla H_{z}\Big{|}_{p=0}\] \[= -\nabla U(q_{a})+[I_{n}-\sigma_{1}u_{\mathfrak{p}}R_{z}(v)] \sigma_{0}u_{\mathfrak{p}}z\] \[= -\nabla U(q_{a})+\left(I_{n}-\sigma_{1}\mbox{diag}\left\{\frac{| v_{i}|}{\rho(v_{i})}\right\}\right)\bigg{|}_{v=0}\sigma_{0}u_{\mathfrak{p}}z\] \[= -\nabla U(q_{a})+\sigma_{0}u_{\mathfrak{p}}z. \tag{18}\]
Hence, for any non-zero \(u_{\mathfrak{p}}\), the point \(\chi_{\star}:=\mbox{col}(q_{a},0_{n},z_{a})\) with
\[z_{a}:=\frac{1}{\sigma_{0}u_{\mathfrak{p}}}\nabla U(q_{a}) \tag{19}\]
is an equilibrium.
The next step of the proof is to show the local asymptotic stability of the manifold \(\mathcal{M}\). Calculating the time derivative of the overall Hamiltonian, it yields for \(\chi\in B_{\varepsilon}(\mathcal{M})\) with a small \(\varepsilon>0\),
\[\dot{\mathcal{H}} = -\left[\nabla\mathcal{H}(\chi,u_{\mathfrak{p}})\right]^{\top} \mathcal{R}(\chi,u_{\mathfrak{p}})\nabla\mathcal{H}(\chi,u_{\mathfrak{p}})\] \[\leq -\left\|\begin{bmatrix}\nabla_{p}\mathcal{H}\\ \nabla_{z}\mathcal{H}\end{bmatrix}\right\|_{\mathcal{R}_{22}}^{2}\] \[\leq 0,\]
in which we have used the fact that in \(B_{\varepsilon}(\mathcal{M})\) the matrix \(\mathcal{R}\) is positive semidefinite from Remark 3. Thus, in the neighborhood of the manifold \(\mathcal{M}\), the system is Lyapunov stable. In the set
\[\{\chi\in\mathbb{R}^{3n}:\|\mbox{col}(\nabla_{p}\mathcal{H},\nabla_{z} \mathcal{H})\|_{\mathcal{R}_{22}}=0\}, \tag{21}\]
it should verify
\[(\sigma_{1}+\sigma_{2})M^{-1}(q)p-\frac{1}{2}\sigma_{0}\sigma_{1 }u_{\mathfrak{p}}R_{z}(v)z = 0 \tag{22}\] \[-\frac{1}{2}\sigma_{1}R_{z}(v)M^{-1}(q)p+\sigma_{0}R_{z}(v)z = 0. \tag{23}\]
Let us first consider (23). There are two possible cases:
* case (i): \(R_{z}(v)=0\) (or equivalently \(p=0\)).
* case (ii): For some \(j\in\bar{n}\), \(\beta_{j}(v)\neq 0\), and thus \[M^{-1}(q)p=2\frac{\sigma_{0}}{\sigma_{1}}z.\] (24)
For case (i), the trajectory verifies \(p(t)\equiv 0\), thus
\[\dot{p}=-\nabla U(q)+\sigma_{0}u_{\mathfrak{p}}z=0,\]
which is exactly the manifold \(\mathcal{M}\). For case (ii), we substitute (24) into (22), resulting in
\[4(\sigma_{1}+\sigma_{2})z=\sigma_{1}^{2}\beta_{j}(v)u_{\mathfrak{p}}z. \tag{25}\]
There are two sub-cases: case (ii-1) \(z=0\) and case (ii-2) \(z\neq 0\). For case (ii-1), the trajectory should guarantee \(z\equiv 0\) and thus
\[\dot{z} = -\mathcal{R}_{z}(v)\nabla H_{z}(0)+[\mathcal{N}-\mathcal{P}]v \Big{|}_{v\neq 0}\] \[= [\mathcal{N}-\mathcal{P}]v\Big{|}_{v\neq 0}=0.\]
Since \(\mathcal{N}-\mathcal{P}=I_{n}\), it contradicts with \(v\neq 0\) in case (ii). Thus, there is no feasible trajectory. For case (ii-2), the equation (25) can be rewritten as
\[\sigma_{1}+\sigma_{2}=\sigma_{1}^{2}\frac{|v_{j}|}{4\rho(v_{j})}. \tag{26}\]
Note that \(\lim_{|v|\to 0}\rho(v_{j})=\mu_{C}\). For given coefficients \(\sigma_{1},\sigma_{2}\), the equation (26) does not admit any feasible solution for a sufficiently small \(\varepsilon>0\). Therefore, the only feasible solutions in \(B_{\varepsilon}(\mathcal{M})\) are all on the equilibria manifold \(\mathcal{M}\).
The system is time invariant since we consider constant pressure \(u_{\mathfrak{p}}\). As we have shown above, \(\mathcal{M}\) is the largest invariant set in the neighborhood \(B_{\varepsilon}(\mathcal{M})\subset\mathbb{R}^{3n}\). Applying the LaSalle's invariance principle [11, Sec. 3], the manifold \(\mathcal{M}\) is locally asymptotically stable.
**Remark 4**: _The above proposition shows that_
* _(i) If the initial condition_ \(\chi(0)\) _starts from any configuration_ \(q_{a}\) _with zero momentum_ \(p(0)=0\)_, we may always find a virtual bristle vector_ \(z_{a}\) _such that the system trajectory remains at the initial values over time, and we also note_ \(\mathcal{M}\subset\mathcal{E}_{\mathfrak{SL}}\)_. In this way, it achieves shape locking._
* _(ii) A more realistic scenario is that the continuum robot achieves deformation with the tension input_ \(u\in\mathbb{R}^{m}\)_; then we apply a vacuum and release the actuator. Once the tension release is completed, the initial condition is given by_ \(\chi(0)=(q(0),0_{3},0_{3})\) _instead of_ \((q_{a},0_{3},z_{a})\)_._
Proposition 1(b) shows the _local_ asymptotic stability of the manifold \(\mathcal{M}\), which means if the initial distance \[d(\chi(0),\mathcal{M}):=\inf_{\chi^{\prime}\in\mathcal{M}}|\chi^{\prime}-\chi(0)| <\varepsilon_{0}\] (27) is small, the trajectory ultimately converges to equilibrium \((q_{a},0_{3},z_{a})\in\mathcal{M}\).
* _(iii)_ From (ii), the convergence only happens when \(\varepsilon_{0}>0\) is small. Note that the vector \(z_{a}\) is parameterized as \(z_{a}=\frac{1}{\sigma_{0}u_{\mathsf{P}}}\nabla U(q_{a})\). Thus, a large value of \(u_{\mathsf{P}}\) can place the initial condition \((q(0),0_{3},0_{3})\) in a small neighborhood of \(\mathcal{M}\); see Fig. 2 for an intuitive illustration. _Physically, it means that a large pressure value \(u_{\mathsf{P}}\) is capable of achieving shape locking._
* _(iv)_ The above item shows that after releasing the actuation, the system will change from the initial configuration \((q(0),0_{3},0_{3})\) to the new equilibrium \((q_{a},0_{3},z_{a})\), and the two will be close to each other under a high pressure \(u_{\mathsf{P}}\). This means that when the continuum robot changes from flexible to stiff, we may observe a tiny positional change, which has been experimentally verified in [4, Sec. III-B].
### _Adjustable Open-loop Stiffness_
The open-loop equilibrium \(\chi_{\star}:=(q_{\star},p_{\star},z_{\star})\) is the origin. In the stiffness analysis, we assume that there is an external torque \(\tau_{\mathtt{ext}}\) acting on the dynamics of \(p\), i.e., the dynamics with \(u=0\) becomes
\[\dot{\bar{\chi}}=[\mathcal{J}-\mathcal{R}]\nabla\mathcal{H}+G_{0}\tau_{ \mathtt{ext}}. \tag{28}\]
with \(G_{0}=\text{col}(0_{3\times 3},I_{3},0_{3\times 3})\), under which there is a shifted equilibrium \(\bar{\chi}:=\text{col}(\bar{q},0,\bar{z})\).
**Definition 2**: (Stiffness) _Assume that we can find a positive semidefinite matrix \(K\in\mathbb{R}^{3\times 3}\) such that_
\[\tau_{\mathtt{ext}}:=K(\bar{q}-q_{\star}) \tag{29}\]
_solves (28)-(29). When taking \(\bar{q}\to q_{\star}\) and \(\bar{z}\to z_{\star}\), if the limit of \(K\) exists, we call \(K\) the overall stiffness._
We are now in position to present the open-loop stiffness of the proposed LJ-based continuum robotic model.
**Proposition 2**: _Consider the LJ-based continuum robotic model (13). Its overall stiffness in the sense of Definition 2 at the open-loop equilibrium \(\chi_{\star}\) is given by_
\[K=\alpha_{1}\mathbf{1}_{n\times n}+[\alpha_{2}+\sigma_{0}u_{\mathsf{P}}]I_{n}, \tag{30}\]
_with \(\mathbf{1}_{n\times n}\in\mathbb{R}^{n\times n}\) the all-ones matrix._
Let us consider a tiny displacement \((\delta q,\delta z)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\) around \((q_{\star},z_{\star})\), i.e.,
\[q=q_{\star}+\delta q,\quad z=z_{\star}+\delta z. \tag{31}\]
For ease of analysis, we rewrite the model in an Euler-Lagrangian form
\[M(q)\ddot{q}+C(q,\dot{q})\dot{q}+\nabla U(q) = \tau_{\mathtt{ext}}-\tau_{f} \tag{32}\] \[\dot{z} = -\sigma_{0}\text{diag}\left\{\frac{|\dot{q}_{i}|}{\rho(\dot{q}_{i})}\right\}z+\dot{q}\] \[\tau_{f} = (\sigma_{0}z+\sigma_{1}\dot{z}+\sigma_{2}\dot{q})u_{\mathsf{P}},\]
with zero initial condition, in which \(C(q,\dot{q})\) is the Coriolis and Centrifugal term [18].
Linearizing the dynamics (32) around \(q_{\star}=0,\dot{q}_{\star}=0\) and \(z_{\star}=0\) and invoking (31), we obtain the model
\[\begin{split} M_{\star}\delta\ddot{q}+[\sigma_{1}u_{\mathsf{P}}] \delta\dot{q}+[\nabla^{2}U(q_{\star})+&\sigma_{0}u_{\mathsf{P}} I_{3}]\delta q\\ &=\tau_{\mathtt{ext}}+\mathcal{O}(\delta q^{2})\end{split} \tag{33}\]
with \(M_{\star}:=M(q_{\star})\) and high-order remainder term \(\mathcal{O}(\delta q^{2})\), in which we have used the facts
\[C(q_{\star},0)=0,\quad\nabla U(q_{\star})=0.\]
Since
\[\sigma_{1}u_{\mathsf{P}}>0\] \[\nabla^{2}U(q_{\star})+\sigma_{0}u_{\mathsf{P}}I_{3}\succ 0,\]
the linearized dynamics (33) is exponentially stable at equilibrium
\[\delta q=[\nabla^{2}U(q_{\star})+\sigma_{0}u_{\mathsf{P}}I_{3}]^{-1}\tau_{ \mathtt{ext}}+\mathcal{O}(\delta q^{2}).\]
By taking \(|\delta q|\to 0\), the algebraic equation (29) is obtained with \(K\) given by
\[K=\nabla^{2}U(q_{\star})+\sigma_{0}u_{\mathsf{P}}I_{3}.\]
Substituting the function \(U\) in (3) into the above, we obtain (30) and complete the proof.
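To make the final step concrete, here is a minimal symbolic sketch (in Python with sympy) of the computation \(K=\nabla^{2}U(q_{\star})+\sigma_{0}u_{\mathsf{P}}I\); the explicit quadratic form of \(U\) below is an assumption chosen only so that its Hessian matches the structure of (30), not a definition taken from the model.

```python
import sympy as sp

n = 3
q = sp.Matrix(sp.symbols('q1:%d' % (n + 1)))
alpha1, alpha2, sigma0, u_p = sp.symbols('alpha1 alpha2 sigma0 u_p', positive=True)

# Assumed (illustrative) elastic potential whose Hessian has the structure of (30):
# U(q) = alpha1/2 * (sum_i q_i)^2 + alpha2/2 * |q|^2
U = sp.Rational(1, 2) * alpha1 * (sum(q))**2 + sp.Rational(1, 2) * alpha2 * (q.T * q)[0]

# Hessian of U (constant, since U is quadratic), evaluated at the equilibrium q_star
hess_U = sp.hessian(U, list(q))

# Overall stiffness K = Hess U(q_star) + sigma0 * u_p * I, as at the end of the proof
K = sp.simplify(hess_U + sigma0 * u_p * sp.eye(n))
print(K)  # alpha1 * (all-ones matrix) + (alpha2 + sigma0 * u_p) * identity
```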
Fig. 3: Relation between \(u_{\mathsf{P}}\) and the transverse stiffness from the experiments in which we used the jamming sheath with 5 layers. (“\(\times\)” shows the mean values; color band represents the \(\pm 1\) standard deviation.)
Fig. 2: An illustration of the initial condition and the equilibria manifold \(\mathcal{M}\). For a given initial condition \((q(0),0_{3},0_{3})\), a larger \(u_{\mathsf{P},1}\) implies a smaller distance from \(\chi(0)\) to \(\mathcal{M}\), thus \(\chi(0)\) located in its domain of attraction; a smaller \(u_{\mathsf{P},2}\) may cause the initial condition outside the domain of attraction, failing to achieve shape locking.
## IV Experiments
In this section, we verify the theoretical analysis of Section III on our continuum robotic platform _OctRobot-I_. Although the overall stiffness matrix \(K\succ 0\) cannot be measured directly, we are able to measure the transverse stiffness \(K_{\mathtt{T}}\in\mathbb{R}_{\geq 0}\) at the end-effector around the open-loop equilibrium \(q_{\star}\) of the continuum robot. We used the same testing setup and approach as in [21, Sec. VI-B], and repeated the measurement three times under the same conditions. The open-loop transverse stiffness under different negative pressures \((-u_{\mathtt{p}})\) is plotted in Fig. 3. The coefficient of determination \(R_{s}^{2}\) is 0.8083, showing good linearity with respect to \(u_{\mathtt{p}}\). This verifies the results in Section III-B.
The second experiment was designed to verify the results about shape locking. The robot was initialized from the open-loop configuration \(q_{\star}=0\) (Phase 1), and then driven to a bending angle of 60\({}^{\circ}\) via tendon (Phase 2). Once the system reached steady state, we vacuumed the jamming layer sheath and kept it at a negative pressure of -30 kPa (Phase 3), and then released the tendons (Phase 4). The sequence of photos during this process is presented in Fig. 4, and the force \(u_{1}\) was tuned following the trajectory in Fig. 5. Note that we use the time interval \([-5,0]\) s to denote the initial status before starting the motor drive. As illustrated in Figs. 4(c)-(d), shape locking was achieved after applying a negative pressure (\(-u_{\mathtt{p}}\)). To clearly show the shape-locking phenomenon, Fig. 6 illustrates the overlay photos of Phases 3 and 4 with two different \(u_{\mathtt{p}}\) (\(30\) and \(80\) kPa). Tiny positional changes can be observed, as theoretically predicted in Remark 4(iv) - a larger \(u_{\mathtt{p}}\) yielded a smaller displacement (3.8 mm for 80 kPa and 9.2 mm for 30 kPa).
## V Conclusion
In this paper, we have presented a novel dynamical model for layer jamming-based continuum robots, which integrates the energy-based modeling approach and the LuGre frictional model. Based on the proposed model, we theoretically analyze its dynamical behavior and show its usefulness in interpreting, with quantitative results, the two important phenomena (i.e., shape locking and adjustable stiffness) in this kind of robot. These results have been experimentally verified on our robotic platform.
The motivation of this work is to propose a _control-oriented_ model, and our future work will naturally center on feedback controller synthesis using the proposed model. Another important direction is to revisit Assumption 3 regarding the relation between the pressure and the lumped normal force, which would help improve the fitting accuracy of the relation between robot stiffness and negative pressure.
|
2310.20195 | Generating Continuations in Multilingual Idiomatic Contexts | The ability to process idiomatic or literal multiword expressions is a
crucial aspect of understanding and generating any language. The task of
generating contextually relevant continuations for narratives containing
idiomatic (or literal) expressions can allow us to test the ability of
generative language models (LMs) in understanding nuanced language containing
non-compositional figurative text. We conduct a series of experiments using
datasets in two distinct languages (English and Portuguese) under three
different training settings (zero-shot, few-shot, and fine-tuned). Our results
suggest that the models are only slightly better at generating continuations
for literal contexts than idiomatic contexts, with exceedingly small margins.
Furthermore, the models studied in this work perform equally well across both
languages, indicating the robustness of generative models in performing this
task. | Rhitabrat Pokharel, Ameeta Agrawal | 2023-10-31T05:40:33Z | http://arxiv.org/abs/2310.20195v2 | # Generating Continuations in Multilingual Idiomatic Contexts
###### Abstract
The ability to process idiomatic or literal multiword expressions is a crucial aspect of understanding and generating any language. The task of generating contextually relevant continuations for narratives containing idiomatic (or literal) expressions can allow us to test the ability of generative language models (LMs) in understanding nuanced language containing non-compositional figurative text. We conduct a series of experiments using datasets in two distinct languages (English and Portuguese) under three different training settings (zero-shot, few-shot, and fine-tuned). Our results suggest that the models are only slightly better at generating continuations for literal contexts than idiomatic contexts, with exceedingly small margins. Furthermore, the models studied in this work perform equally well across both languages, indicating the robustness of generative models in performing this task.
## 1 Introduction
Idiomatic expressions are a common feature of all human languages and are often used to convey emotions, cultural references, and implied meanings. These are phrases or expressions that have a figurative meaning that is different from the literal meaning of the words that make it up. In particular, it is the notion of non-compositionality that makes an idiomatic phrase often challenging as it requires understanding the phrase's meaning as a whole. As such, the ability to understand and generate idiomatic expressions is an important task for natural language processing systems, as it allows them to better understand and generate human languages. This is particularly important for applications such as machine translation, language generation, and dialogue systems, where idiomatic expressions are often used to convey meaning. As an example, consider Figure 1 where the multiword expression "big picture" can convey vastly different meanings depending on the context (idiomatic vs. literal) in which it is being used.
In the field of idiomaticity, prior works have focused on detecting idioms (Tayyar Madabushi et al., 2021; Tan and Jiang, 2021; Tedeschi et al., 2022; Tedeschi and Navigli, 2022), paraphrasing idiomatic sentences to literal paraphrases (Zhou et al., 2021), cloze task such as fill-in-the-blank language comprehension (Zheng et al., 2019), classifying idiomatic and literal expressions (Peng et al., 2015), translating idiomatic language (Tang, 2022), and generating continuations for idiomatic contexts (Chakrabarty et al., 2022).
The question remains whether generative language models (LMs), typically trained on extensive text corpora of human language, perform differently or similarly under contexts containing literal and idiomatic expressions, particularly in multilingual settings. We explore this by generating text continuations within contexts featuring multiword expressions in both idiomatic and literal forms. Our investigation considers two distinct languages - English and Portuguese. Both languages use Latin script and subject-verb-object sentence structure. However, notable differences exist between these two languages. English is classified as a language with the highest resource level ('5'), whereas Portuguese is categorized as '4' according
Figure 1: An example where a sentence (S2) contains the same multiword expression used in two contexts – idiomatic and literal. The task is to generate a coherent follow-up continuation (S3).
to the linguistic diversity taxonomy Joshi et al. (2020), which could potentially impact how well the models process texts in these languages. Moreover, the distinct traditions and historical influences of Portuguese-speaking and English-speaking cultures lead to differences in social norms and idiomatic expressions.
Using existing datasets of sentence sequences where multiword expressions are used in both literal and idiomatic senses, we empirically evaluate several language models under various settings including zero-shot, few-shot, and fully supervised, by generating logical continuations of narratives. Our findings suggest that while the models show a slight preference for the literal and compositional use of multiword expressions, resulting in more coherent continuations in literal contexts compared to idiomatic ones, this trend is only consistently observed in approximately half of the cases (with the performance being comparable in the other half). Moreover, the difference is extremely minor, typically not exceeding 0.02 metric points. In terms of multilingual models, our study indicates that all models perform comparably well in both languages, which is an encouraging outcome. Interestingly, the best results are obtained under the zero-shot setting (rather than few-shot setting) using the GPT-3 davinci model for both English and Portuguese, suggesting that for creative text generation tasks like continuation generation, zero-shot settings are not only effective but also efficient in terms of cost. The main contributions of this research include:
* Investigating the ability of generative language models to generate coherent subsequent sentences for idiomatic as well as literal contexts; we will make the code1 publicly accessible to facilitate further research; Footnote 1: [https://github.com/PortNLP/llm-in-idiomatic-context](https://github.com/PortNLP/llm-in-idiomatic-context)
* Studying and evaluating four generative models under three training settings (zero-shot, few-shot, and fully supervised) in two distinct languages (English and Portuguese).
## 2 Related Work
Prior research focusing on idioms can be broadly categorized into two areas: _classification_ and _generative_. Although our work relates to the latter, i.e., generating continuations in multilingual idiomatic contexts, we provide an overview of the background and current developments within both fields of research, and a brief summary in Table 1. In this context, the terms "idiomatic" and "figurative" are used interchangeably as they both denote language that conveys a meaning that is distinct from its literal or compositional interpretation.
### Idioms-related Classification Tasks
Tayyar Madabushi et al. (2021) studied several transformer-based models such as BERT, XLNet,
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Paper** & **Task** & **Languages** \\ \hline Tayyar Madabushi et al. (2021) & Idiomaticity detection & en, pt \\ Tedeschi et al. (2022) & Idiomaticity detection & en, dc, it, es \\ Tedeschi and Navigli (2022) & Idiomaticity detection & en, pt, gl \\ \hline Tan and Jiang (2021) & Idioms interpretation & en \\ Chakrabarty et al. (2022) & Idioms interpretation & en \\ \hline Moussallem et al. (2018) & Idiom translation, idiom linking & en, de, it, pt, ru \\ Fadaee et al. (2018) & Idiom translation & en, de \\ Tang (2022) & Idiom translation & cz, en \\ \hline Korkontzelos et al. (2013) & Semantic similarity & en, fr, de, it \\ Peng et al. (2015) & Idiomatic and literal expression classification & en \\ Zheng et al. (2019) & Cloze test & cz \\ Chakrabarty et al. (2021) & Idiomatic continuation generation & en \\ Dashtipour et al. (2022) & Sentiment analysis of idiomatic sentences & fa \\ Zhou et al. (2021) & Paraphrasing idioms & en \\ \hline \hline \end{tabular}
\end{table}
Table 1: A survey of works that have focused on idioms in different languages.
for detecting idiomatic expressions in a sentence as a binary classification task, and additionally proposed a similarity metric to assess the similarity between idiomatic and non-idiomatic expressions. Tedeschi et al. (2022) utilized a BERT-based architecture for idiomatic expression detection, while Tedeschi and Navigli (2022) measured the similarity between a potentially idiomatic expression and its context to detect idiomatic usage.
In addition to idiom detection, the classification method has also been applied to the comprehension of idioms, encompassing a variety of subjects. One of them is the classification of different sentiments conveyed through idiomatic expressions (Dashtipour et al., 2022). Jhamtani et al. (2021) investigated whether dialogue models are able to handle figurative language usage and concluded that they do not perform well in this area. Tan and Jiang (2021) evaluated the ability of BERT to understand idioms by selecting the correct paraphrase from a set of options. Liu et al. (2022) examined models by having them choose the correct metaphorical phrase between two opposite metaphorical phrases, concluding that language models do not make use of context when dealing with metaphorical phrases. In addition, one of the tasks conducted by Chakrabarty et al. (2022) involved the selection of a plausible continuation from two candidate options.
### Idioms-related Generative Tasks
In contrast to classification tasks, there has been limited exploration of generative tasks related to idiomatic expressions. Zhou et al. (2021) used the paraphrasing task to study the ability of models to understand idioms by replacing idiomatic expressions with literal paraphrases. They employed BART model and several metrics to compare the generated text with the reference text. Chakrabarty et al. (2022) explored the task of generating a coherent next sentence for English idiomatic contexts.
While similar in spirit, there are some notable differences between our work and prior work. Chakrabarty et al. (2022) exclusively focused on idiomatic usages, whereas our study takes a more comprehensive approach by encompassing and comparing the performance of generative models across _both_ idiomatic and literal language expressions, which is a novel analysis in this area. It offers a deeper understanding of how these models interpret idiomatic context. Specifically, it sheds light on whether these models consistently interpret idiomatic phrases in the same manner (either literally or idiomatically), or if their interpretation varies depending on the surrounding context. Moreover, whereas their work was conducted only in English, our investigation extends its reach to two languages: English (EN) and Portuguese (PT).
## 3 Method
### Problem Description
Given a text sequence of two consecutive sentences \(S1\) and \(S2\), such that \(S2\) contains a multiword expression used either in a literal sense or an idiomatic sense, the goal is to generate the next sentence \(S3^{\prime}\) that reasonably and logically continues the narrative and is relevant within the context formed by \(S1\) and \(S2\). To evaluate the quality of the generated continuation \(S3^{\prime}\), we can either compare \(S3^{\prime}\) to the reference text \(S3\) or assess it within the context formed by \(S1\) and \(S2\).
### Models
Figure 2 presents an overview of the modeling process. Generative language models are used to generate text by learning patterns and structures from large collections of data, allowing them to generate new, coherent sentences based on the learned patterns. To generate the \(S3^{\prime}\) sentences, we use
Figure 2: Overview of the modeling process.
three generative language models: GPT-2 (117M), OPT (125M), and GPT-3 (ada and davinci models), under three training settings:
Footnote 2: [https://huggingface.co/gpt2](https://huggingface.co/gpt2)
Footnote 3: [https://huggingface.co/facebook/opt-125m](https://huggingface.co/facebook/opt-125m)
Footnote 4: [https://openai.com](https://openai.com)
(a) _Zero-shot_: using the models without any further training,
(b) _Few-shot_: fine-tuning the models using a few examples each from idiomatic and literal contexts (full details in Table 2), and
(c) _Fully supervised_: fine-tuning the models using the entire training dataset.
To fine-tune the models (GPT-2 and OPT), we first tokenized the input sentences using the GPT2Tokenizer5. We then appended the special token \(<|endoftext|>\) at the end of each sample to ensure that the models could correctly recognize the end of the input text. After the output text was generated, we tokenized it using the NLTK tokenizer Bird (2006) and extracted only the first sentence of the generated output as \(S3^{\prime}\) in cases where the models generate more than one sentence.
Footnote 5: [https://huggingface.co/docs/transformers/v4_25.1/en/model_doc/gpt2#transformers](https://huggingface.co/docs/transformers/v4_25.1/en/model_doc/gpt2#transformers). GPT2Tokenizer
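As an illustration of this preprocessing step, the following is a minimal sketch using the HuggingFace tokenizer; the exact training script of the paper is not reproduced, and the helper name is ours.

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

def build_training_example(s1, s2, s3):
    # Concatenate the context (S1, S2) with the target continuation (S3) and
    # append the end-of-text token so the model learns where the sample ends.
    text = f"{s1} {s2} {s3}{tokenizer.eos_token}"   # eos_token is '<|endoftext|>'
    return tokenizer(text, truncation=True, max_length=400)

example = build_training_example(
    "Plant-based food has become a part of the mainstream culinary world.",
    "It's exciting to be leading this conversation at Flower Child.",
    "We work hard to maintain consistency and quality throughout the seasons.",
)
print(example["input_ids"][:10])
```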
For GPT-3 models, we only use few-shot and zero-shot settings with the default settings. As input, we provide the context using \(S1\) and \(S2\), followed by the prompt:
"\(\backslash\)n\(\backslash\)nQuestion: Generate a logical next sentence.\(\backslash\)nAnswer:"
appended to the end of each context. The generated text was cleaned by removing any HTML tags or trailing white spaces.
### Implementation Details
We experimented with three temperature settings (0.6, 0.8, and 1.0) which control the diversity or randomness of the generated output, with temperature = 1 generating the most diverse and creative text, and temperature = 0 generating the least diverse text. The GPT-2 and OPT models were trained for 20 epochs, while the GPT-3 models were trained for 4 epochs. We set the learning rate to \(2e^{-5}\) and use AdamW optimizer to train the models. The maximum sequence length was set to 400 and the batch size to 16. We used HuggingFace's utility function generate6 by turning on sampling. When sampling is turned on, the model generates text by randomly selecting the next word based on its predicted probabilities. This allows for more diverse and creative outputs, as compared to deterministic approaches like greedy decoding. Since the model does not know when to stop the text generation, we set the generated text's minimum length to 20 and maximum length to 100.
Footnote 6: [https://huggingface.co/docs/transformers/v4_25.1/en/main_classes/text_generation#transformers.GenerationMixin.generate](https://huggingface.co/docs/transformers/v4_25.1/en/main_classes/text_generation#transformers.GenerationMixin.generate)
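A minimal sketch of the generation step described above, using the HuggingFace generate API with sampling; the parameter choices and post-processing below are a simplified reconstruction, not the paper's exact script.

```python
import nltk
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
# nltk.download('punkt')  # needed once for sentence tokenization

def generate_s3(s1, s2, temperature=1.0):
    context = f"{s1} {s2}"
    inputs = tokenizer(context, return_tensors="pt")
    output = model.generate(
        **inputs,
        do_sample=True,             # sampling, for more diverse continuations
        temperature=temperature,    # 0.6 / 0.8 / 1.0 in the experiments
        min_new_tokens=20,          # approximates the min/max generated length of 20/100
        max_new_tokens=100,         # (requires a recent transformers version)
        pad_token_id=tokenizer.eos_token_id,
    )
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    continuation = text[len(context):].strip()
    sentences = nltk.sent_tokenize(continuation)   # keep only the first sentence as S3'
    return sentences[0] if sentences else continuation
```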
## 4 Evaluation
### Datasets
We use an existing dataset called Multilingual Idiomaticity Detection and Sentence Embedding dataset7 (Tayyar Madabushi et al., 2021). Specifically, we use the English and Portuguese subsets of the data which were collected by a team of 12 judges from naturally occurring sources. The dataset contains sequences of three consecutive sentences with the middle sentence \(S2\) containing multiword expressions in either an idiomatic or a literal sense. Note that this dataset describes these multiword expressions as _potentially idiomatic expressions_ (PIE), which means \(S2\) contains PIEs, which may or may not necessarily be idioms. However, this is the only available dataset that is closest to the task at hand and includes data from two languages. Table 2 presents the dataset's statistics, and some sample instances are shown in Table 3. In the test data8, the number of idiomatic and non-idiomatic instances was balanced using random undersampling.
Footnote 7: [https://github.com/H-TayyarMadabushi/SemEval_2022_Task2-idiomaticity](https://github.com/H-TayyarMadabushi/SemEval_2022_Task2-idiomaticity)
Footnote 8: We consider the development set from the original dataset as the test data in our experiments as we did not have access to the ground truth labels for the test set.
### Metrics
We conduct automatic and human evaluations of the generated continuations. For automatic evaluation, we use the following three metrics which compare the generated sentence \(S3^{\prime}\) with a reference
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{3}{c}{**Train**} & **Test** \\ \cline{2-5} & ZS & FS & Full & \\ \hline
**EN** & - & 87 & 3412 & 364 \\
**PT** & - & 53 & 1217 & 238 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dataset statistics. The test dataset for a language was the same under all the settings (zero-shot (ZS), few-shot (FS), and fully supervised (Full)).
sentence \(S3\) that is already available in the dataset.
* **ROUGE-L**Lin (2004), typically used to compare machine-generated text with human reference text, measures the longest common subsequence between the two texts.
* **METEOR**Banerjee and Lavie (2005) is another widely used evaluation metric that aims to measure the degree of lexical and phrasal overlap between a machine-generated text and one or more reference texts.
* **BERTScore**Zhang et al. (2019) is a semantic similarity metric that uses cosine similarity between the sentence embeddings to compare the meaning of two sentences. The embedding model we used was microsoft/deberta-xlarge-mnli He et al. (2021).
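The following is a hedged sketch of how these three metrics can be computed with the HuggingFace `evaluate` library; it only illustrates the metric calls and is not the paper's evaluation script.

```python
import evaluate

rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")
bertscore = evaluate.load("bertscore")

def score_continuations(generated, references):
    """generated / references: lists of S3' strings and reference S3 strings."""
    r = rouge.compute(predictions=generated, references=references)["rougeL"]
    m = meteor.compute(predictions=generated, references=references)["meteor"]
    b = bertscore.compute(
        predictions=generated,
        references=references,
        model_type="microsoft/deberta-xlarge-mnli",  # embedding model used for BERTScore
    )
    return {"ROUGE-L": r, "METEOR": m, "BERTScore": sum(b["f1"]) / len(b["f1"])}
```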
While the automatic evaluation measuring the similarity between \(S3^{\prime}\) and an existing \(S3\) serves as a quick and cost-effective method of evaluation, it may not comprehensively capture the nuances of natural language, particularly when several valid outputs are possible. Therefore, we complement our evaluation by obtaining human assessment of the outputs where \(S3^{\prime}\) is evaluated within the contexts formed by \(S1\) and \(S2\).
## 5 Results and Discussion
The results of our experiments are evaluated automatically, through human assessment, and qualitatively, as discussed next.
### Automatic Evaluation
Table 4 presents the main results of our experiments, from which we make some observations to answer the following questions.
**Are literal contexts easier for language models than idiomatic contexts?** Overall, in both the language datasets and all three metrics, the literal continuations obtain slightly higher scores than idiomatic continuations. However, looking closely, we observe that the literal continuations are better than idiomatic continuations in only about half the scenarios or less (11/20, 4/20, and 12/20 for ROUGE-L, METEOR, and BERTScore, respectively). When we consider the absolute difference in performance, it is interesting to note that the literal continuations are superior to idiomatic continuations only by a very small margin (maximum difference of 0.01, 0.02, and 0.02 points for ROUGE-L, METEOR, and BERTScore, respectively). The results of statistical significance testing (\(t\)-test) yield \(p\)-values > 0.4, indicating that the disparities between idiomatic and literal results lack statistical significance. Taken together, these results lead us to conclude that the generative language models process these distinct contexts somewhat similarly, and that idiomatic contexts are not necessarily more challenging than literal contexts in this task.
We analyze the lengths of the different context sentences (Figure 3). It is observed that the lengths of \(S1\), \(S2\), and \(S3\) are comparable between the idiomatic and literal contexts. Moreover, in both
\begin{table}
\begin{tabular}{c c c c c|c c|c c} \hline \hline \multirow{2}{*}{**Lang.**} & \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**ROUGE-L**} & \multicolumn{2}{c}{**METEOR**} & \multicolumn{2}{c}{**BERTScore**} \\ \cline{3-10} & & & **I** & **L** & **I** & **L** & **I** & **L** \\ \hline \multirow{10}{*}{EN} & \multirow{3}{*}{ZS} & GPT2 & **0.10** & 0.09 & **0.11** & 0.10 & 0.55 & 0.55 \\ & & OPT & 0.10 & 0.10 & 0.11 & **0.12** & 0.55 & 0.55 \\ & & GPT3 ada & 0.11 & **0.12** & 0.11 & **0.13** & 0.55 & 0.55 \\ & & GPT3 davinci & 0.12 & **0.13*** & 0.12 & **0.14*** & 0.59 & **0.60*** \\ \cline{2-10} & \multirow{3}{*}{FS} & GPT2 & 0.10 & 0.10 & 0.10 & **0.11** & 0.53 & **0.54** \\ & & OPT & 0.09 & **0.10** & 0.11 & 0.11 & 0.55 & **0.56** \\ & & GPT3 ada & 0.10 & 0.10 & 0.13 & 0.13 & 0.52 & **0.53** \\ & & GPT3 davinci & 0.10 & **0.11** & **0.14** & 0.13 & 0.54 & **0.55** \\ \cline{2-10} & \multirow{3}{*}{Full} & GPT2 & 0.10 & 0.10 & 0.13 & 0.13 & 0.53 & 0.53 \\ & & OPT & 0.10 & **0.11** & 0.12 & 0.12 & 0.55 & 0.55 \\ \hline \hline \multirow{10}{*}{PT} & \multirow{3}{*}{ZS} & GPT2 & 0.07 & 0.07 & 0.08 & 0.08 & 0.50 & **0.52** \\ & & OPT & 0.10 & **0.11** & 0.12 & 0.12* & 0.56 & **0.57** \\ & & GPT3 ada & 0.06 & 0.06 & 0.07 & 0.07 & 0.51 & **0.52** \\ & & GPT3 davinci & **0.12*** & 0.11 & **0.11** & 0.10 & 0.60 & **0.61*** \\ \cline{2-10} & \multirow{3}{*}{FS} & GPT2 & 0.08 & 0.08 & 0.09 & 0.09 & 0.52 & 0.52 \\ & & OPT & 0.10 & **0.11** & 0.11 & 0.11 & 0.58 & 0.58 \\ & & GPT3 ada & 0.09 & **0.10** & 0.08 & 0.08 & 0.56 & **0.58** \\ & & GPT3 davinci & 0.11 & **0.12** & 0.10 & 0.10 & 0.58 & 0.58 \\ \cline{2-10} & \multirow{3}{*}{Full} & GPT2 & 0.09 & **0.10** & 0.11 & 0.11 & 0.54 & **0.55** \\ \cline{2-10} & & OPT & 0.10 & **0.11** & 0.11 & 0.11 & 0.57 & **0.59** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of the models for different metrics with temperature set to 1.0. I = Idiomatic, L = Literal, ZS = Zero Shot, FS = Few Shot, Full = Fully finetuned. The higher score between idiomatic and literal comparison is shown in **bold**, for each metric the best result for each training setting is underlined, and for each metric the best overall result for each dataset is shown with an *asterisk (where multiple best overall results exist, the one in the more cost-effective setting is shown). The differences between idiomatic and literal scores are found to be _not_ statistically significant, with \(p\)-values > 0.4 using \(t\)-test.
contexts, \(S3^{\prime}\) generated under the zero-shot setting is similar in length to the original \(S3\), while \(S3^{\prime}\) under the few-shot setting is slightly longer. Furthermore, consistent results are obtained under all three temperature settings studied (Figure 4).
**How do language models compare between English and Portuguese?** In terms of comparing the performance of all LMs between the two different languages, it appears that the results are comparable, which is encouraging given that English is considered the highest resource language (level '5') whereas Portuguese is '4', a high resource level, in the taxonomy of linguistic diversity Joshi et al. (2020). For all the metrics, performance on the English dataset is superior to that on the Portuguese dataset by a maximum of 0.05 metric points, and in cases where the Portuguese set performs better than the English set, it is by at most about 0.04 points, suggesting that the performance across both languages remains largely similar.
**How do the models perform across different training settings?** In line with general expectations, the newer and larger model (GPT-3 davinci) generally outperforms the older and smaller models (GPT-2, OPT, GPT-3 ada), even with no training (zero-shot) or little training (few-shot), although the difference remains small. In comparing the freely available models such as GPT-2 and OPT, a few interesting results emerge: (i) OPT generally outperforms GPT-2 across all settings, but more clearly in Portuguese, (ii) these models benefit from some training especially in the case of Portuguese, and (iii) for English, zero-shot setting yields better results than few-shot setting, but for Portuguese, few-shot setting yields better results than zero-shot setting.
**How do the models perform under limited context?** As further analysis, we modify our experimental setup to use only \(S2\) as the input context (instead of both \(S1\) and \(S2\)). The results in Table 5 show that, as expected, the scores are generally lower when only \(S2\) is provided. However, this gap is noticeably larger in English than in Portuguese, suggesting that additional context is more useful in English than in Portuguese.
Figure 4: The results (BERTScore) of GPT-3 davinci under zero-shot for different temperature settings for English (top) and Portuguese (bottom).
Figure 3: The graph comparing the average lengths of the sentences (numbers of words) for English (top) and Portuguese (bottom).
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c}{**METEOR**} & \multicolumn{2}{c}{**BERTScore**} \\ \cline{2-5} & **I** & **L** & **I** & **L** \\ \hline \multicolumn{5}{l|}{**Only \(S2\) is used**} & & & \\ \hline EN & 0.10 & **0.11** & 0.58 & **0.59** \\ PT & **0.09** & 0.08 & 0.59 & **0.61** \\ \hline \multicolumn{5}{l|}{\(S1\) and \(S2\) are used} & & & \\ \hline EN & 0.12 & **0.14** & 0.59 & **0.60** \\ PT & 0.10 & 0.10 & 0.59 & **0.61** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of the GPT-3 davinci model under the zero-shot setting when only \(S2\) is used (without \(S1\)). ‘I’ denotes idiomatic contexts whereas ‘L’ denotes literal contexts. For comparison, we also include the corresponding results borrowed from Table 4.
### Human Evaluation
For the human evaluation of the generated outputs, two annotators were invited to assess the relevance and grammatical accuracy of the third sentence (\(S3^{\prime}\)) in the context of the first (\(S1\)) and second (\(S2\)) sentences, across 25 randomly selected English samples (12 idiomatic and 13 literal) generated by the GPT-3 davinci model.
The annotators were assigned two tasks.
**Task 1** involved rating the relevance of \(S3^{\prime}\) on a scale of 0 to 2, with 0 indicating no relevance, 1 representing neutrality, and 2 signifying relevance. The annotators reached an agreement on 15 samples, which accounts for approximately 60% of the total. For these 15 samples, both annotators assigned the same relevance scale. Within this subset, 9 samples (about 60%) were idiomatic, indicating a consistent interpretation across both idiomatic as well as literal contexts by both annotators. Additionally, within this subset, the majority of samples labeled as relevant were idiomatic (7 out of 8). This observation suggests that the model's generated idiomatic continuations were generally preferred.
Overall, considering all the 50 annotations (25 per annotator), the annotators marked a total of 26 samples (52%) as relevant (16 idiomatic and 10 literal), 21 (42%) as neutral (5 idiomatic and 16 literal), and 3 (6%) as not relevant at all (3 idiomatic). These findings indicate that GPT-3 performed well in generating relevant continuations across both contexts, but particularly so for idiomatic cases.
**Task 2** involved identifying any grammatical errors in the generated outputs. These errors primarily included instances where \(S3^{\prime}\) failed to form complete sentences or had some punctuation issues. Other errors included missing spaces after sentence endings, unexpected numbers or symbols inserted into the text, random dates appearing, sentences with unclear or nonsensical content, or unexpected underlined sections. 45 out of 50 annotations were flagged as exhibiting some of the above-mentioned grammatical errors to some degree, and the errors were distributed almost equally between the idiomatic and literal samples. In addition to highlighting the importance of human assessment in natural language generation tasks such as this one, these results suggest that natural language generation continues to present a challenge for these models.
### Qualitative Analysis
The evaluation of generative tasks, such as narrative continuation, often benefits from qualitative investigation. In this regard, Table 6 presents a selection of texts generated by the GPT-3 davinci model. It demonstrates that \(S3^{\prime}\) is a logical sentence when considered within its context. However, one can observe certain grammatical errors in the generated
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} l} \hline \hline
**S1** & **S2** & **S3** & **S3’** & **Label** \\ \hline He believes Mueller’s report on Russian interference in the 2016 election is devastating, but too complicated for mass consumption. & Attorney General William P. Barr spins up a dust storm of inaccuracies while Mueller “chose to submit his unreadable — and then go away without a sound,” Comey writes. & He could have found a way to speak to the American people in their language. & Mueller’s report favours the idea that US intelligence agencies were not effectively informed of the backdrop of Russian activity before the election, and Syria is details which could support or prove the theory’s inches. & \(I\) \\ \hline Plant-based food has become a part of the mainstream culinary world. & It’s exciting to be leading this conversation at Flower Child. & We work hard to maintain consistency and quality throughout the seasons. & We believe that everyone, regardless of their diet preference or lifestyle, should have options for amazing plant-based food that not only tastes great but is also good for you.” & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Sample contexts and generated continuations (\(S3^{\prime}\)) from the GPT-3 davinci model.
text, which contribute to the inconsistency in the results obtained from automated metrics.
## 6 Conclusion
In this work, we investigate the ability of generative language models to generate reasonable continuations under idiomatic and literal contexts. The results suggest that literal continuations seem less challenging for the models than idiomatic continuations, but only slightly so. In particular, the human annotators found the continuations in idiomatic contexts to be fairly relevant. These observations were consistent across English and Portuguese datasets. The GPT-3 davinci model consistently outperformed all other models, and, interestingly, its performance under a zero-shot setting was better than under a few-shot setting.
We have multiple directions for future work that we intend to explore. For example, in this work, we experimented with only a handful of prompts. There are several ways in any language to write the same prompt. As such, the generated text might depend on how the prompt is designed, which eventually affects the meaning of the generated text (Lu et al., 2021). In terms of models, especially in the case of GPT-3 models, we were somewhat limited in the number of versions that we could experiment with due to limited computational resources and access to it only as a paid service. Recent versions of the ChatGPT model as well as more open-source models could also be studied. Additionally, given the non-deterministic nature of text generation, multiple \(S3^{\prime}\) continuations could be generated and studied. Although this paper focused primarily on higher-resource languages within the same language family, we plan to extend the inquiry to include lower-resource languages from different language families.
## Ethics Consideration
The use of idiomatic expressions in natural language can potentially alter the intended meaning of a message. If a language model is unable to accurately interpret these idiomatic expressions, it can easily lead to a misinterpretation of the message and negatively impact the overall effectiveness of the model. Language models have also been shown to contain gender biases (Lucy and Bamman, 2021). As we used existing datasets from credible sources (SemEval 2022, Task 2) in our experiments, we did not verify every instance manually but considering that the data originated from 'naturally occurring sentences', it is possible that the data may contain unintended biases or offensive content.
## Limitations
We explored only a handful of prompts in this work. There are several ways in any language to write the same prompt. As such, the generated text might depend on how the prompt is designed, eventually affecting the meaning of the generated text (Lu et al., 2021). Another limitation of our work is that human assessment was only conducted on English samples. In terms of models, especially in the case of GPT-3 models, we were limited in the number of variants we could experiment with due to limited computational resources and access to it only as a paid service.
## Acknowledgments
We would like to thank the anonymous reviewers and the PortNLP research group for their insightful feedback. This research was supported by the National Science Foundation under Grant No. CRII:RI-2246174.
|
2310.00245 | Singularities and clusters | The aim of this note is to describe a geometric relation between simple plane
curve singularities classified by simply laced Cartan matrices and cluster
varieties of finite type also classified by the simply laced Cartan matrices.
We construct certain varieties of configurations of flags out of Dynkin
diagrams and out of singularities and show that they coincide if the Dynkin
diagram corresponds to the singularity. | Vladimir Fock | 2023-09-30T04:21:21Z | http://arxiv.org/abs/2310.00245v2 | # Singularities and Clusters.
###### Abstract.
The aim of this note is to describe a geometric relation between simple plane curve singularities classified by simply laced Cartan matrices and cluster varieties of finite type also classified by the simply laced Cartan matrices. We construct certain varieties of configurations of flags out of Dynkin diagrams and out of singularities and show that they coincide if the Dynkin diagram corresponds to the singularity.
2000 Mathematics Subject Classification: 14H20, 13F60, 34M40 The correspondence between singularities and cluster varieties was first observed by A.Fomin, P.Pilyavsky, D.Thurston, and E.Shustin in their remarkable paper [2]. Starting from certain real forms of deformations of the singularities introduced by N.A'Campo [1] and S.Gussein-Zade [3] they produced a cluster variety and showed that different resolutions of the same singularity give the same cluster variety. In this note we try to make this correspondence more geometrical and less mysterious. In particular we show that there is a map from the base of a versal deformation of the singularity to the corresponding cluster variety. For this purpose we generalize a construction by R.Nevanlinna [4] brought to our attention by B.Shapiro [5]. R.Nevanlinna studied the map from the space of the differential operators of the form \(D=\partial^{2}/\partial z^{2}+P(x)\), where \(P\) is a polynomial, to the collections of points in a projective line \(P^{1}\). These points just correspond to solutions of the equation \(D\psi=0\) vanishing at infinity along different rays on the complex plane and viewed as lines in the two-dimensional space of global solutions. On the other hand, the symbol of this operator \(p^{2}+P(x)\) is just an equation for the versal deformation of a plane curve singularity of type \(A\).
Our construction is in a sense a generalization of this one for singularities of other types. Namely, a versal family of a planar singularity can be considered as a family of Lagrangian curves in the plane \((\mathbb{C}^{2},dp\,dx)\). Observe that this family can be locally parameterized by the cohomology class of the form \(pdx\). On the other hand the family of equations defining these curves can be transformed into a family of differential operators having the equations as their symbol. The space of Stokes data at infinity of the differential operator is the corresponding cluster variety.
The correspondence between symbols and operators is of course not canonical. First of all, we can change a representation of the symplectic plane \((\mathbb{C}^{2},dp\,dx)\) as a cotangent bundle to a complex line \(\mathbb{C}\) in different ways. All these ways give different Stokes data, but, as one can verify in the examples, they are equivalent as cluster varieties. Sometimes the birational equivalence of the corresponding configuration space is not so trivial and can be considered as a generalization of the Gale duality.
On the other hand the correspondence between the space of differential operators and its symbol is not canonical either. We conjecture that this map becomes canonical in the tropical limit and the cluster coordinates in this limit are periods of the Lagrangian curves given by the versal family of the singular one.
I am strongly indebted to F.Chapoton for reading the paper and making very crucial remarks.
## 1. Recall: Configurations corresponding to planar bipartite graphs
Let \(a\) be a collection of points of the projective space \(P(V)\) of a vector space \(V\). Denote by \(\langle a\rangle\) the projective subspace generated by the points from \(a\). Recall that a collection \(a\) of \(k\) points is called _free_ if \(\dim\left\langle a\right\rangle=k-1\) and a _circuit_ if \(\dim\left\langle a\right\rangle=k-2\). For example, two points form a circuit if they coincide, three points form a circuit if they are collinear, four points form a circuit if they are coplanar, _etc._
Let \(\Gamma\) be a bipartite graph with the set of white vertices \(W\), black vertices \(B\) and edges \(E\). For simplicity we assume here that any two vertices are connected by no more than one edge. We say that an association of a point \(p_{w}\) in a projective space to every white vertex \(w\in W\)_corresponds_ to \(\Gamma\) if for every black vertex \(b\in B\) the points corresponding to its white neighbors form a circuit. We also require that the collection of points does not belong to a proper projective subspace.
Denote by \(\mathcal{X}_{\Gamma}\) the set of configurations corresponding to \(\Gamma\) considered up to the action of the projective group \(PGL(V)\). We call the _dimension of the configuration_ the dimension of the projective space \(P(V)\). We say that the graph \(\Gamma\) is _minimal_ if removal of any black vertex increases the dimension of the corresponding configuration.
A discrete connection on a graph \(\Gamma\) is an association of one-dimensional vector spaces to vertices and an association to each edge of an isomorphism of the spaces corresponding to its endpoints. Given a basis in each of the one-dimensional spaces, the discrete connection becomes an association of nonzero numbers to edges. These numbers can be organized into a matrix \(M_{b}^{w}\) with columns and rows enumerated by the black and white vertices, respectively, and with zeroes for pairs of vertices not connected by an edge. Changing the bases amounts to the multiplication of this matrix by invertible diagonal matrices from the left and from the right. Given a closed path of the graph, the monodromy of the connection around this path is the composition of the maps corresponding to its edges. In terms of the connection matrix, if the path passes consecutively through the vertices \(b_{0},w_{0},\dots,b_{k},w_{k},b_{0}\) then the monodromy is given by a Laurent monomial in the matrix entries \(M_{b_{1}}^{w_{1}}(M_{b_{2}}^{w_{1}})^{-1}\cdots(M_{b_{0}}^{w_{k}})^{-1}\). If the graph is planar, a graph connection is uniquely determined by the monodromies around its faces.
The set of graph connections on a graph \(\Gamma\) is in a bijection with the set of configurations corresponding to \(\Gamma\). Indeed, given the matrix \(M_{b}^{w}\) representing the connection, we can consider it as a map \(M:\mathbb{C}^{B}\to\mathbb{C}^{W}\). The image of the standard basis of \(\mathbb{C}^{W}\) in the projectivized cokernel \(P(\mathbb{C}^{W}/\text{Im }M)\) forms the desired configuration. Conversely, given a configuration of points in a projective space \(P(V)\), for every white vertex \(w\) choose a vector \(\tilde{p}_{w}\in V\) representing \(p_{w}\). Given a black vertex \(b\), the chosen vectors corresponding to its white neighbors satisfy a nontrivial linear relation \(\sum M_{b}^{w}\tilde{p}_{w}=0\). The matrix \(M_{b}^{w}\) forms the desired graph connection.
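As a concrete illustration of this construction, here is a small numerical sketch (the connection matrix below is a made-up example); the cokernel \(\mathbb{C}^{W}/\mathrm{Im}\,M\) is realized as the orthogonal complement of \(\mathrm{Im}\,M\).

```python
import numpy as np

# Made-up connection matrix of a bipartite graph: rows = white vertices,
# columns = black vertices; M[w, b] is the coefficient on the edge (w, b).
M = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [1.0, 3.0],
              [0.0, 1.0]])          # 4 white and 2 black vertices

# Basis of the cokernel C^W / Im M, realized as the orthogonal complement of Im M.
U, s, _ = np.linalg.svd(M)
rank = int(np.sum(s > 1e-10))
coker = U[:, rank:]                  # shape (|W|, |W| - rank)

# Homogeneous coordinates of the point p_w: the image of the basis vector e_w.
points = coker                       # row w gives the coordinates of p_w

# Circuit check: for every black vertex b, sum_w M[w, b] * p_w = 0,
# since the column M[:, b] lies in Im M.
for b in range(M.shape[1]):
    residual = points.T @ M[:, b]
    print(f"black vertex {b}: |relation| = {np.linalg.norm(residual):.2e}")
```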
Call two configurations _equivalent_ if one can construct one from another by adding and removing points which can be constructed from the remaining points. The graphs corresponding to equivalent configurations are also called equivalent.
There are four equivalences generating many (and conjecturally all) others. Namely
## 3. Stokes data.
Let \(D(z,\frac{\partial}{\partial z})\) be a differential operator with polynomial coefficients of order \(n\). Let \(D(x,p)\) be its symbol and \(\Delta\) be its Newton polygon, namely the convex hull of the points with coordinates \((a,b)\in\mathbb{Z}^{2}\) for each monomial term \(x^{a}p^{b}\) of the polynomial \(D(x,p)\).
Consider the algebraic curve \(C=\{(x,p)\in(\mathbb{C}^{*})^{2}|D(x,p)=0\}\). This curve has genus equal to the number of integer points strictly inside the polygon \(\Delta\) and can be compactified by adding points, called _compactification points_, corresponding to sides of the polygon. By a side we mean a segment of the boundary of \(\Delta\) between two adjacent points of \(\mathbb{Z}^{2}\). If we orient the boundary of the polygon counterclockwise, each side \(s\) corresponds to an indivisible vector \((a,b)\in\mathbb{Z}^{2}\). The sum of such vectors obviously vanishes. At the compactification points corresponding to \(s\) the functions \(x\) and \(p\) have a zero of order \(-b_{s}\) and \(a_{s}\), respectively. Therefore in the vicinity of such a point we have \(x^{a_{s}}p^{b_{s}}=O(1)\) and hence \(p\sim A_{s}x^{-a_{s}/b_{s}}\) for some constants \(A_{s}\).
One should however realize that the correspondence between sides and compactification points is not canonical but is defined up to permutations of sides corresponding to equal vectors.
The same polygon defines a curve in \(\mathbb{C}^{2}\). The sides of the polygon with \(a>0\) and \(b\leq 0\) correspond to compactification points with both coordinates tending to a finite constant and thus belonging to the curve in \(\mathbb{C}^{2}\). The remaining compactification points are called _points at infinity_.
Consider now the equation \(D(x,p)=0\) as a family of equations for the indeterminate \(p\) depending on \(x\) as a parameter. Our aim is to determine the asymptotic behavior of its roots when \(x\to\infty\). This regime corresponds to the sides of the polygon with \(b_{s}>0\) (and thus going upward in a picture where the \(p\)-axis points up). Each such side corresponds to \(b_{s}\) roots with asymptotics \(p\sim A_{s}x^{-a_{s}/b_{s}}\). The total number of roots is thus equal to the height of the polygon, which is just the degree of the polynomial \(D(x,p)\) with respect to \(p\).
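A small computational sketch of this recipe (the symbol below, \(p^{2}-x^{3}-1\), is our illustrative choice of a deformed \(A_{2}\) equation): it extracts the upward sides of the Newton polygon and prints the exponents \(-a_{s}/b_{s}\) of the root asymptotics.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Exponents (a, b) of the monomials x^a p^b of the illustrative symbol
# D(x, p) = p^2 - x^3 - 1.
monomials = np.array([[0, 2], [3, 0], [0, 0]])

hull = ConvexHull(monomials)
verts = monomials[hull.vertices]          # vertices in counterclockwise order (2-D hull)

print("Asymptotics of the roots p(x) as x -> infinity:")
n = len(verts)
for i in range(n):
    a_s, b_s = verts[(i + 1) % n] - verts[i]        # edge vector, ccw orientation
    g = int(np.gcd(abs(int(a_s)), abs(int(b_s))))   # lattice length of the edge
    a_s, b_s = a_s // g, b_s // g                   # indivisible side vector
    if b_s > 0:                                      # upward sides govern x -> infinity
        print(f"  {g * b_s} root(s) with p ~ A * x^({-a_s}/{b_s})")
```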
Now proceed to the study of the asymptotic behaviour of the differential equation \(D(x,\frac{\partial}{\partial x})\psi=0\). In a simply connected domain where the coefficient of the \(n\)-th derivative vanishes nowhere, it has an \(n\)-dimensional space of solutions. However in a sufficiently small vicinity of infinity the differential equation defines an \(n\)-dimensional local system. Indeed, fix a number \(R\) such that the coefficient of the highest derivative of \(D\) does not vanish for \(|x|>R\), a number \(\varepsilon\in]0,\pi/2[\) and an angle \(\alpha\in\mathbb{R}/2\pi\mathbb{Z}\). The domain \(\{x\in\mathbb{C}|\ |\arg x-\alpha|<\varepsilon,\ |x|>R\}\) satisfies the above conditions. Define \(V_{\alpha}\) as the space of solutions of the differential equation in this domain. The space \(V_{\alpha}\) does depend neither of \(\varepsilon\) nor of \(R\) and does not depend on \(\alpha\) locally. Namely if \(|\alpha_{1}-\alpha_{2}|<\varepsilon\) the corresponding domains intersect, their intersection is simply connected and therefore the spaces \(V_{\alpha_{1}}\) and \(V_{\alpha_{2}}\) can be identified. Hence the family \(V_{\alpha}\) form an \(n\)-dimensional local system over the circle \(\mathbb{R}/2\pi\mathbb{Z}\).
Every solution of the equation has asymptotic behavior \(\psi(x)\sim e^{\int p(x)dx}\) for a root \(p(x)\) and therefore for generic angle \(\alpha\) the space \(V_{\alpha}\) is filtered by the rates of growth of the functions \(\Re\int^{re^{i\alpha}}p(x)\,dx\) with \(r\to+\infty\).
Consider a sufficiently large \(R\) and mark the points \(A_{s}(Re^{i\alpha})^{(b_{s}-a_{s})/b_{s}}\) on the complex plane. (For a given \(s\) there are \(b_{s}\) such points.) These points are ordered according to their projection on the real axis and the interval between the \(i\)-th
and the \(i+1\)-st points correspond to the subspace of \(V_{\alpha}\) of dimension \(i\). As \(\alpha\) runs around the circle, the points rotate around the origin with the angular speed \((b_{s}-a_{s})/b_{s}\) and the order of their projections changes. We call the collection of points corresponding to generic \(\alpha\) with their angular speeds indicated for each point the _growth diagram_. When the \(i\)-th and the \(i+1\)-th projections pass through each other the \(i\)-th dimensional subspace of \(V_{\alpha}\) changes.
The local system and the sequence of flags in its fibers constitute the Stokes data of the differential operator \(D\) at infinity. If the coefficient of the highest derivative in the operator \(D\) is constant, the local system is trivial and the Stokes data at infinity amounts to the collection of flags in a fixed vector space.
For nontrivial local systems, taking the universal cover of the circle we can consider a finite sequence of flags in a local system as an infinite quasi-periodic one in a fixed vector space. Recall that a sequence is called quasi-periodic if its shift by a period coincides with the action of an element of \(GL(n)\).
Recall that pairs of complete flags in an \(n\)-dimensional space up to the diagonal action of \(GL(n)\) are in bijection with the permutation group \(\mathfrak{S}_{n}\). The standard generators \(s_{i}\) of this group correspond to pairs of flags differing only in the subspace of dimension \(i\).
Therefore a sequence of flags in a local system on a circle such that adjacent flags differ in one subspace can be encoded by an infinite periodic word in the same generators \(s_{1},\dots,s_{n-1}\). We will denote such words by \([w]\), where \(w\) is a period of the infinite word. For example the word \([(s_{2}s_{1})^{m}]\) corresponds to a 3-dimensional local system with \(2m\) flags such that the subspaces of dimension 1 and 2 change alternately. Such a sequence is equivalent to the quasi-periodic sequence of 1-dimensional subspaces, since the 2-dimensional subspaces can be restored from the 1-dimensional ones. In the projective space this sequence corresponds to a quasi-periodic set of points with a period of length \(m\).
Observe that if two words are related by braid relations they correspond to equivalent sequences of flags.
The word \([(s_{2}s_{1}^{2})^{m}]\) corresponds to a sequence of flags where one dimensional subspace changes twice after each change of the 2-dimensional subspace. In the projective space it corresponds to a quasi-periodic broken line with a marked point on each side.
As another example consider a growth diagram given by a regular \(n\)-gon rotating around its center with angular velocity \(m/n\). We will show that it corresponds to an \(m\)-periodic sequence of points in \(P^{n-1}\). Indeed, in this case all even subspaces change at once and then all odd subspaces change at once. The corresponding word is therefore \(w=[(w_{\text{odd}}w_{\text{even}})^{m}]\), where \(w_{\text{odd}}=s_{1}s_{3}\cdots\) and \(w_{\text{even}}=s_{2}s_{4}\cdots\) are the products of the odd and even generators, respectively. We claim that such a configuration is equivalent just to the sequence of 1-dimensional subspaces with quasi-period \(m\). Indeed, given a sequence of flags corresponding to the word \(w\) one can construct the sequence of 1-dimensional subspaces just by discarding all subspaces of higher dimension. On the other hand, given a sequence of 1-dimensional subspaces \(\{\tilde{p}_{i}\,|\,i\in\mathbb{Z}/n\mathbb{Z}\}\), one can construct a sequence of \(2m\) flags
\[F_{2i+1}=\{\tilde{p}_{i}\subset\tilde{p}_{i}+\tilde{p}_{i+1}\subset\tilde{p}_ {i-1}+\tilde{p}_{i}+\tilde{p}_{i+1}\subset\tilde{p}_{i-1}+\tilde{p}_{i}+ \tilde{p}_{i+1}+\tilde{p}_{i+2}\subset\cdots\}\]
and
\[F_{2i}=\{\tilde{p}_{i}\subset\tilde{p}_{i-1}+\tilde{p}_{i}\subset\tilde{p}_{i-1}+ \tilde{p}_{i}+\tilde{p}_{i+1}\subset\tilde{p}_{i-2}+\tilde{p}_{i-1}+\tilde{p}_{i }+\tilde{p}_{i+1}\subset\cdots\}\]
and observe that \(F_{2i}\) differs from \(F_{2i+1}\) in the odd-dimensional terms and from \(F_{2i-1}\) in the even-dimensional terms.
## 4. Example: \(A_{n}\).
The Dynkin diagram of type \(A_{n}\) is just a chain of \(n\) vertices and the corresponding bipartite graph shown on Fig.1a consists of \(n+3\) white and \(n+1\) black vertices.
It corresponds just to \(n+3\) collinear points.
The versal deformation of a singularity of type \(A_{n}\) is represented by the polynomial
\[D(x,p)=p^{2}+x^{n+1}+a_{1}+a_{2}x+\cdots+a_{n}x^{n-1}.\]
The corresponding Newton polygon is a right triangle with sides \(2\) and \(n\) (shown on Fig.1b for \(n=4\)), with one or two sides for \(n\) odd or even, respectively, going upwards. It corresponds to a curve of genus \(\lfloor(n-1)/2\rfloor\) with one or two points at infinity, respectively, with homology of rank \(n\). The growth diagram (shown on Fig.1c) consists of two points, which are opposite if \(n\) is even, rotating with the angular speed \((n+3)/2\). It corresponds to the word \([s_{1}^{n+3}]\) and thus to configurations of \(n+3\) points in \(P^{1}\).
Interchanging \(p\) and \(x\) we get a differential equation of order \(n+1\); the growth diagram consists of points forming a regular \((n+1)\)-gon rotating about its center with angular velocity \((n+3)/(n+1)\), which corresponds to the cyclic word \([(w_{\rm odd}w_{\rm even})^{n+3}]\) and thus to \(n+3\) points in \(P^{n}\). These two configuration spaces are known to be isomorphic, since one can trace a unique rational normal curve through these points, thus obtaining \(n+3\) points in \(P^{1}\).
## 5. Example: \(D_{4}\).
Consider the Dynkin diagram of the type \(D_{4}\) and construct a bipartite graph shown on Fig. 2a. It corresponds to three groups of collinear points \(C,1,2,A\), \(A,3,4,B\) and \(B,5,6,C\), and the dimension of the configuration is \(2\). The configuration is obviously equivalent to the configuration of \(6\) points \(1,2,3,4,5,6\in P^{2}\) as shown on Fig.2b. This configuration of points corresponds to the cyclic word \([(s_{2}s_{1}^{3})^{3}]\).
The versal deformation of the singularity \(D_{4}\) reads as
\[D(p,x)=p^{3}+x^{2}p+a_{1}+a_{2}p+a_{3}p^{2}+a_{4}x\]
Figure 1.
with the Newton polygon being the quadrilateral shown on Fig.2c. There are three sides of the polygon directed upward: \((-1,1)\) with multiplicity \(2\) and \((1,1)\) with multiplicity \(1\). It corresponds to a curve of genus \(1\) with three points at infinity, and with homology of rank \(4\). The growth diagram consists of two points rotating with angular speed \(2\) and one point closer to the center which does not move. It gives the word \([(s_{2}s_{1}^{2})^{4}]\). Therefore the space of Stokes data can be considered as the configuration space of quadrilaterals with a marked point on each side. Such configurations are equivalent to configurations of six points -- four points on the sides and two opposite vertices of the quadrilateral. On the other hand one can deduce this equivalence from the equality of the words \([(s_{2}s_{1}^{2})^{4}]\) and \([(s_{2}s_{1}^{3})^{3}]\) in the braid group.
Consider another form of the same singularity just with the variables \(p\) and \(x\) interchanged.
\[D(p,x)=x^{3}+xp^{2}+a_{1}+a_{2}x+a_{3}x^{2}+a_{4}p\]
The Newton polygon is just the reflection of the original one, but the corresponding differential equation is of order \(2\) and the local system is nontrivial since the coefficient at the highest derivative vanishes at the origin. The growth diagram consists of two points rotating with the angular speed \(2\), and thus the Stokes data amounts to a configuration of four lines in a two-dimensional local system on a circle.
Remarkably, these two configuration spaces turn out to be birationally isomorphic. The only isomorphism I know is given by describing both as cluster varieties, and I don't know any geometric way to describe it.
## 6. Example: \(E_{8}\).
Consider the Dynkin diagram of the type \(E_{8}\), shown on Fig. 3a; on the same picture we show the corresponding bipartite graph. This diagram corresponds to configurations of \(13\) points corresponding to white vertices, with black vertices corresponding to collinear triples of points. It implies that there are three groups of collinear points \(C,1,2,3,4,5,A\), \(A,6,7,8,B\) and \(B,9,10,C\). Such a configuration can be realized in the two-dimensional projective space \(P^{2}\) as a triangle with \(2\), \(3\) and \(5\) points on its sides, respectively, as shown on Fig. 3b. Observe that this configuration space is birationally isomorphic to the space of unrestricted \(8\)-tuples of points \(1,2,7,8,9,10,X,Y\) in \(P^{2}\). Indeed, as it
Figure 2.
is clear from the picture, the points \(A,B,C,3,4,5,6,7\) can be reconstructed out of \(1,2,7,8,9,10,X,Y\) and vice versa.
The versal deformation of the singularity \(E_{8}\) is
\[D(x,p)=x^{5}+p^{3}+a_{1}+a_{2}x+a_{3}x^{2}+a_{4}x^{3}+a_{5}p+a_{6}xp+a_{7}x^{2}p +a_{8}x^{3}p\]
the Newton polygon is shown on Fig.3c. It has one side \((-5,3)\) directed upward with multiplicity \(1\). Thus the curve has genus \(4\), with one point at infinity and homology group of rank \(8\). The growth diagram shown on Fig.3d consists of three points with angle \(2\pi/3\) between them, rotating with the angular speed \(8/3\). It corresponds to the periodic word \([(s_{1}s_{2})^{8}]\) and thus the Stokes data amounts to a configuration of \(8\) points in \(P^{2}\).
Exchanging \(x\) and \(p\) we get on the growth diagram \(5\) points at the vertices of a regular pentagon rotating about its center with the angular speed \(8/5\). It corresponds to a sequence of \(8\) points in \(P^{4}\). The two configuration spaces are birationally isomorphic via Gale duality.
## 7. Other cases.
We leave the detailed consideration of singularities of other types as an exercise.
* The Dynkin diagram corresponds to configurations of triangles in the projective plane \(P^{2}\) with \(2\), \(2\) and \(n-2\) points on their respective sides. The corresponding singularity \(xp^{2}+x^{n+1}\) corresponds to a configuration of \(n\) lines in a two-dimensional local system on a circle. The differential operator corresponding to \(p^{n+1}+x^{2}p\) corresponds to configurations of flags in \(P^{n}\).
Figure 3.
* The Dynkin diagram as well as the singularity \(p^{3}+x^{4}\) correspond to a configuration of triangles in \(P^{2}\) with 2, 3 and 3 points on the sides, respectively. The singularity \(p^{3}+x^{4}\) corresponds to the word \([(s_{1}s_{2})^{7}]\), i.e., to configurations of 7-tuples of points of \(P^{2}\). It is easy to see that the two configuration spaces are equivalent. The singularity \(p^{4}+x^{3}\) corresponds to configurations of 7-tuples of points in \(P^{3}\).
* The Dynkin diagram corresponds to a configuration of triangles in \(P^{2}\) with 2, 3 and 4 points on the sides, respectively. The singularity \(p^{3}+x^{3}p\) corresponds to configurations of pentagons in \(P^{2}\) with one marked point on each side, which is equivalent to the space of configurations given by the Dynkin diagram. The singularity \(xp^{3}+x^{3}\) corresponds to a configuration space of 21-periodic sequences of flags in \(P^{3}\), which is too complicated to be described here.
|
2309.14353 | Limited Communications Distributed Optimization via Deep Unfolded
Distributed ADMM | Distributed optimization is a fundamental framework for collaborative
inference and decision making in decentralized multi-agent systems. The
operation is modeled as the joint minimization of a shared objective which
typically depends on observations gathered locally by each agent. Distributed
optimization algorithms, such as the common D-ADMM, tackle this task by
iteratively combining local computations and message exchanges. One of the main
challenges associated with distributed optimization, and particularly with
D-ADMM, is that it requires a large number of communications, i.e., messages
exchanged between the agents, to reach consensus. This can make D-ADMM costly
in power, latency, and channel resources. In this work we propose unfolded
D-ADMM, which follows the emerging deep unfolding methodology to enable D-ADMM
to operate reliably with a predefined and small number of messages exchanged by
each agent. Unfolded D-ADMM fully preserves the operation of D-ADMM, while
leveraging data to tune the hyperparameters of each iteration of the algorithm.
These hyperparameters can either be agent-specific, aiming at achieving the
best performance within a fixed number of iterations over a given network, or
shared among the agents, allowing to learn to distributedly optimize over
different networks. For both settings, our unfolded D-ADMM operates with
limited communications, while preserving the interpretability and flexibility
of the original D-ADMM algorithm. We specialize unfolded D-ADMM for two
representative settings: a distributed estimation task, considering a sparse
recovery setup, and a distributed learning scenario, where multiple agents
collaborate in learning a machine learning model. Our numerical results
demonstrate that the proposed approach dramatically reduces the number of
communications utilized by D-ADMM, without compromising on its performance. | Yoav Noah, Nir Shlezinger | 2023-09-21T08:05:28Z | http://arxiv.org/abs/2309.14353v2 | # Limited Communications Distributed Optimization via Deep Unfolded Distributed ADMM
###### Abstract
Distributed optimization is a fundamental framework for collaborative inference and decision making in decentralized multi-agent systems. The operation is modeled as the joint minimization of a shared objective which typically depends on observations gathered locally by each agent. Distributed optimization algorithms, such as the common distributed alternating direction method of multipliers (D-ADMM), tackle this task by iteratively combining local computations and message exchanges. One of the main challenges associated with distributed optimization, and particularly with D-ADMM, is that it requires a large number of communications, i.e., messages exchanged between the agents, to reach consensus. This can make D-ADMM costly in power, latency, and channel resources. In this work we propose _unfolded D-ADMM_, which follows the emerging deep unfolding methodology to enable D-ADMM to operate reliably with a predefined and small number of messages exchanged by each agent. Unfolded D-ADMM fully preserves the operation of D-ADMM, while leveraging data to tune the hyperparameters of each iteration of the algorithm. These hyperparameters can either be agent-specific, aiming at achieving the best performance within a fixed number of iterations over a given network, or shared among the agents, allowing to learn to distributedly optimize over different networks. For both settings, our unfolded D-ADMM operates with limited communications, while preserving the interpretability and flexibility of the original D-ADMM algorithm. We specialize unfolded D-ADMM for two representative settings: a distributed estimation task, considering a sparse recovery setup, and a distributed learning scenario, where multiple agents collaborate in learning a machine learning model. Our numerical results demonstrate that the proposed approach dramatically reduces the number of communications utilized by D-ADMM, without compromising on its performance.
## I Introduction
The proliferation of sophisticated electronic devices results in data, e.g., locally sensed signals, being divided among many different agents. By collaborating with each other, the users can jointly extract desired information or learn machine learning models from the divided data in a decentralized fashion. This is achieved by _distributed optimization_[2], leveraging the communications capabilities of the agents, without relying on the centralized data pooling implemented by traditional centralized systems. A shared objective is iteratively optimized in a distributed manner by having each agent process the data locally to solve a local optimization problem, and then update its neighbors, repeating until consensus is achieved [3]. Various different algorithms implement distributed optimization, see the survey in [4]. A popular and common distributed optimization algorithm is the distributed alternating direction method of multipliers (D-ADMM) [5, 6], which can often be guaranteed to converge to an optimal consensus [7, 8, 9].
Distributed optimization algorithms, including D-ADMM, typically require multiple iterations to reach consensus. Each iteration involves not only local computations, but also message exchanges between the participating agents. The latter implies excessive communications, possibly inducing notable delays on the overall optimization procedure, limiting its real-time applicability, and being costly in power and spectral resources [10, 11]. This raises a notable challenge that is not encountered in centralized optimization, and can be a limiting factor in multi-agent systems that are inherently limited in power and communications, such as Internet of Things (IoT) and sensor networks [12, 13].
Two leading approaches are proposed in the literature to carry out distributed optimization with reduced communications. The first approach aims at deriving alternative optimization algorithms with improved convergence rates [14, 15, 16, 17, 18]. For instance, the primal dual method of multipliers [19] is closely related to D-ADMM, and can be shown to converge more rapidly (assuming it converges) [20]. Alternatively, one can reduce the communication overhead by quantizing the exchanged messages, such that each message is constrained to a fixed and small number of bits [21, 22, 23, 24, 25]. However, these conventional approaches are typically studied in the context of convergence rate, which is an asymptotic property. In practice, one is often interested in operating with a fixed latency, and there is a concrete need to enable reliable low-latency distributed optimization techniques to be carried out with a small and fixed number of communication rounds.
We are witnessing a growing interest in machine learning, and particularly in deep learning, beyond its traditional domains of computer vision and natural language processing. Deep learning systems employ data-driven parameterized deep neural networks (DNNs) that learn their mapping from data [26]. Distributed optimization tasks, involving iterative message exchange between agents and local processing, can be carried out using graph neural networks (GNNs) [27], where each GNN layer implements a single message exchange round. Accordingly, various GNN architectures were employed for tasks involving distributed optimization [28, 29] and learning [30]. Although GNNs can learn to implement distributed optimization with a fixed and limited number of communication rounds, they lack the interpretability of conventional distributed optimization, and often require large volumes of data for learning purposes.
While deep learning methods are traditionally considered to
replace principled inference based on mathematical modelling, they can also be integrated with classical inference [31, 32, 33]. In particular, deep learning tools can be employed to _learn-to-optimize_[32], i.e., improve iterative optimization in performance and run-time [34]. This can be realized using the deep unfolding methodology [35], that leverages data to optimize the performance of an iterative inference rule with a fixed number of iterations. Recent works have considered the distributed optimization of centralized deep unfolded mappings [36] and the integration of deep unfolding for centralized aggregation in distributed learning [37]. Yet, deep unfolding is currently considered in the context of centralized optimization, where its gains are mostly in run-time and in possibly increased abstractness of the resulting inference rule [34], motivating its exploration for facilitating distributed optimization with a fixed and small number of communication rounds.
In this work we explore the usage of deep unfolding to facilitate distributed optimization with a fixed and limited communication budget. We focus on the D-ADMM algorithm, being a popular and common distributed optimization algorithm, aiming to show the usefulness of the combination of deep unfolding with distributed optimization. To that aim, we unfold D-ADMM to operate with a predefined fixed number of communication rounds. Then, we leverage data to tune the optimization hyperparameters, converting D-ADMM into a trainable discriminative machine learning model [38] such that the resulting distributed mapping fits the training data. Our proposed _unfolded D-ADMM_ thus learns from data to achieve improved performance within the fixed number of iterations.
We consider two parameterizations for the unfolded architecture. The first learns _agent-specific hyperparameters_, i.e., allowing them to vary not only between iterations, but also between agents. The second learns _shared hyperparameters_, where all agents use the same hyperparameters that vary between iterations. As each setting learned with shared hyperparameters is a special case of the agent-specific case, the latter allows achieving improved performance. However, learning shared hyperparameters is not limited to a given set of agents, and thus a mapping learned for a given network can generalize to different networks. In both cases, the resulting unfolded D-ADMM completely preserves the operation of the conventional D-ADMM, while allowing it to achieve improved performance within a predefined and small communication budget.
We showcase the application of unfolded D-ADMM in two representative case studies: \((i)\) distributed sparse recovery; and \((ii)\) the distributed learning of a linear regression model. For each setting, we specialize the formulation of D-ADMM and its unfolded architecture. Our experimental studies show that the application of deep unfolding yields notable improvements in both case studies. In particular, we demonstrate reductions by factors varying between \(\times 8\) and up to \(\times 154\) in the amount of communications compared with the conventional D-ADMM, without degrading its performance and while completely preserving its interpretable operation. We also show that by following a principled optimization algorithm, our unfolded D-ADMM notably outperforms GNN architectures trained for the same task.
The rest of this work is organized as follows: Section II formulates the generic distributed optimization setup and recalls the conventional D-ADMM. The proposed unfolded D-ADMM algorithm is derived in Section III. The considered case studies of distributed sparse recovery and distributed linear regression learning are reported in Sections IV-V, respectively, along with their corresponding experimental study. Finally, Section VI provides concluding remarks.
## II System Model
This section provides the necessary background needed to derive our proposed unfolded D-ADMM in Section III. In particular, we first formulate the generic distributed optimization problem in Subsection II-A, after which we recall D-ADMM in Subsection II-B.
### _Generic Distributed Optimization Problem Formulation_
We consider a set of \(P\) agents indexed by \(1,\ldots,P\). Each agent of index \(p\in\{1,\ldots,P\}\) has access to local data \(\mathbf{b}_{p}\in\mathbb{R}^{m}\), representing, e.g., locally acquired observations. Based on the local data, the agents aim at jointly solving an optimization problem of the form
\[\operatorname*{arg\,min}_{\mathbf{\bar{y}}}\quad\sum_{p=1}^{P}f_{p}(\mathbf{\bar{y}}; \mathbf{b}_{p}). \tag{1}\]
In (1), the vector \(\mathbf{\bar{y}}\in\mathbb{R}^{n}\) is the optimized variable, and \(f_{p}:\mathbb{R}^{n}\times\mathbb{R}^{m}\mapsto\mathbb{R}^{+}\) is the \(p\)th objective.
The generic formulation in (1) can be viewed as a centralized optimization problem with a decomposable objective. Yet, the distributed nature of (1) stems from the fact that each observation \(\mathbf{b}_{p}\) is available solely to the \(p\)th agent, and there is no data pooling, i.e., the gathering of \(\{\mathbf{b}_{p}\}_{p=1}^{P}\) by a centralized data fusion entity.
Each agent can communicate with its neighbours. The resulting communication network is modeled as an undirected connected graph \(G(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{1,\ldots,P\}\) is the set of nodes and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges. The graph is properly colored, i.e., there exist non-overlapping sets \(\{\mathcal{V}_{c}\}_{c=1}^{C}\) where \(\cup_{c=1}^{C}\mathcal{V}_{c}=\mathcal{V}\) and no two vertices in each \(\mathcal{V}_{c}\) share an edge. This definition specializes to uncolored graphs for \(C=1\).
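As a small illustration (not taken from the paper), such a proper coloring \(\{\mathcal{V}_{c}\}\) can be obtained, e.g., greedily; the following Python sketch uses networkx, and the graph parameters are arbitrary placeholders:

```python
# Hypothetical sketch: build an example communication graph and a proper
# coloring {V_c}, i.e., groups of nodes such that no two nodes in the same
# group share an edge. Greedy coloring is not minimal in general, but any
# proper coloring suffices for the parallel updates described in the sequel.
import networkx as nx

G = nx.erdos_renyi_graph(n=20, p=0.3, seed=0)            # example graph, arbitrary parameters
coloring = nx.coloring.greedy_color(G, strategy="largest_first")
C = max(coloring.values()) + 1
color_groups = [[v for v, c in coloring.items() if c == cc] for cc in range(C)]
```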
Our goal in this paper is to tackle the optimization problem formulated in (3) under a scenario with a limited, predefined communication budget and maximal latency. Here, each node is allowed to transmit up to \(T\) messages to each of its neighbours. Furthermore, to enable deriving rapid communications-limited optimizers, we assume access to a labeled data set comprised of \(L\) pairs of local data sets along with the corresponding desired optimization variable. The data set is given by
\[\mathcal{D}=\left\{\left(\left\{\mathbf{b}_{l,p}\right\}_{p=1}^{P},\mathbf{\bar{y}}_{l}\right)\right\}_{l=1}^{L}, \tag{2}\]
where for each sample of index \(l\), \(\mathbf{b}_{l,p}\in\mathbb{R}^{m}\) is the observations vector at the \(p\)th node, and \(\mathbf{\bar{y}}_{l}\in\mathbb{R}^{n}\) is the desired optimization outcome.
### D-ADMM Algorithm
A popular method for tackling distributed optimization problems as formulated in (1) is the D-ADMM algorithm [6]. To formulate D-ADMM, we first cast (1) as a consensus setup. This is done by assigning copies of the global parameter \(\mathbf{\bar{y}}\) to each node while constraining all copies to be equal. Specifically, since the network is connected, the constraint is equivalently imposed for each pair of neighbouring agents [39]. The resulting distributed optimization problem is reformulated as
\[\operatorname*{arg\,min}_{\mathbf{y}_{1},\ldots,\mathbf{y}_{P}} \sum_{p=1}^{P}f_{p}(\mathbf{y}_{p};\mathbf{b}_{p}) \tag{3}\] \[\text{subject to}\quad\mathbf{y}_{p}=\mathbf{y}_{j},\forall j\in\mathcal{N }_{p}.\]
In (3), \(\mathcal{N}_{p}\) denotes the set of all the neighbors of node \(p\) in \(G(\mathcal{V},\mathcal{E})\), and \(\mathbf{\bar{y}}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{P})\in(\mathbb{R}^{n})^{P}\) is the optimization variable. The equivalent formulation in (3) represents (1) as a set of \(P\) local optimization problems, constrained to have identical solutions among the different agents.
D-ADMM tackles the constrained optimization in (3) by formulating the augmented Lagrangian for each agent. For the \(p\)th agent, the augmented Lagrangian is given by
\[\mathcal{L}_{p}(\mathbf{y}_{p},\{\mathbf{y}_{j}\}_{j\in\mathcal{N}_{p}},\mathbf{\lambda}_{p})\triangleq f_{p}(\mathbf{y}_{p};\mathbf{b}_{p})\] \[\qquad\qquad+\sum_{j\in\mathcal{N}_{p}}\mathbf{\lambda}_{p}^{T}(\mathbf{y }_{p}-\mathbf{y}_{j})+\frac{\rho}{2}\|\mathbf{y}_{p}-\mathbf{y}_{j}\|_{2}^{2}, \tag{4}\]
where \(\rho>0\) is a fixed hyperparameter and \(\mathbf{\lambda}_{p}\) is the dual variable of node \(p\). D-ADMM then has each agent alternate between \((i)\) minimizing (4) with respect to the local optimization variable \(\mathbf{y}_{p}\); \((ii)\) sharing this update with its neighbours; and \((iii)\) maximizing (4) with respect to the local dual variable \(\mathbf{\lambda}_{p}\).
The implementation of the minimization and maximization steps typically depends on the specific objective functions. Following [6], we focus on the generic implementation of these operations by gradient descent and gradient ascent steps, respectively. By letting \(\{\mathbf{y}_{j,p}\}_{j\in\mathcal{N}_{p}}\) denote the copies of \(\{\mathbf{y}_{j}\}\) available to the \(p\)th agent, the \(k\)th iteration computes
\[\mathbf{y}_{p}^{(k+1)}=\mathbf{y}_{p}^{(k)}-\alpha\nabla_{\mathbf{y}_{p}} \mathcal{L}_{p}\left(\mathbf{y}_{p}^{(k)},\{\mathbf{y}_{j,p}\}_{j\in\mathcal{N}_{p}}, \mathbf{\lambda}_{p}^{(k)}\right), \tag{5}\]
by each agent \(p\), with \(\alpha>0\) being a step-size. The agent then shares \(\mathbf{y}_{p}^{(k+1)}\) with its neighbours, who update their local copies. The iteration is concluded by having all agents update their dual variable with step-size \(\eta>0\) via
\[\mathbf{\lambda}_{p}^{(k+1)}=\mathbf{\lambda}_{p}^{(k)}+\eta\nabla_{\mathbf{\lambda}_{p}} \mathcal{L}_{p}\left(\mathbf{y}_{p}^{(k+1)},\{\mathbf{y}_{j,p}\}_{j\in\mathcal{N}_{p}}, \mathbf{\lambda}_{p}^{(k)}\right). \tag{6}\]
The fact that the graph is colored allows some of the computations in the above iterations to be carried out in parallel. The resulting procedure is summarized as Algorithm 1, where all nodes in the same color group simultaneously update (5), and the operation is repeated until a stopping criterion is reached, e.g., convergence is achieved or a maximal number of iterations is exhausted. While this reduces the run-time of each iteration, it does not reduce the amount of message exchanges, which is dictated by the number of iterations required to reach the stopping criterion, and is often very large (in the order of hundreds or thousands of iterations to converge).
```
Init: \(\forall p\in\mathcal{V}\), set \(\mathbf{\lambda}_{p}^{(1)}\), \(\mathbf{y}_{p}^{(1)}\), \(\{\mathbf{y}_{j,p}\}\) to zero; \(k=1\)
repeat
    for \(c=1,2,\ldots,C\) do
        for all \(p\in\mathcal{V}_{c}\) [in parallel] do
            Update \(\mathbf{y}_{p}^{(k+1)}\) via (5);
            Send \(\mathbf{y}_{p}^{(k+1)}\) to \(\mathcal{N}_{p}\);
    for all \(p\in\mathcal{V}\) [in parallel] do
        Update \(\mathbf{\lambda}_{p}^{(k+1)}\) via (6);
    \(k\gets k+1\)
until stopping criterion is reached;
```
**Algorithm 1** D-ADMM
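To make the message-passing structure of Algorithm 1 concrete, the following Python sketch implements the generic updates (5)-(6) with NumPy; the names (`grad_f`, `neighbors`, `color_groups`) are illustrative placeholders, not the authors' implementation:

```python
# Minimal sketch of Algorithm 1 for a generic differentiable objective.
# grad_f[p](y) is assumed to return the gradient of f_p at y; neighbors[p]
# lists the neighbors of node p; color_groups partitions the nodes by color.
import numpy as np

def d_admm(grad_f, neighbors, color_groups, n, T, alpha=0.01, eta=0.01, rho=1.0):
    P = len(neighbors)
    y = [np.zeros(n) for _ in range(P)]        # local primal variables y_p
    lam = [np.zeros(n) for _ in range(P)]      # local dual variables lambda_p
    y_copies = [{j: np.zeros(n) for j in neighbors[p]} for p in range(P)]

    for k in range(T):                         # T communication rounds
        for group in color_groups:             # nodes of one color update "in parallel"
            for p in group:
                # primal gradient step on the augmented Lagrangian, cf. (5)
                grad = grad_f[p](y[p]) + sum(
                    lam[p] + rho * (y[p] - y_copies[p][j]) for j in neighbors[p])
                y[p] = y[p] - alpha * grad
                # "send" y_p to the neighbors, which store a local copy
                for j in neighbors[p]:
                    y_copies[j][p] = y[p].copy()
        for p in range(P):
            # dual gradient ascent step, cf. (6)
            lam[p] = lam[p] + eta * sum(y[p] - y_copies[p][j] for j in neighbors[p])
    return y
```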
The above formulation of D-ADMM is given for a generic problem, i.e., without explicitly stating the local objective functions \(\{f_{p}(\cdot)\}\). Two special cases of the application of D-ADMM, for distributed sparse recovery and for distributed learning of a linear regression model, are detailed in Sections IV and V, respectively.
## III Deep Unfolded D-ADMM
Although D-ADMM is a suitable algorithm for tackling (1) with the objective (4), its direct application is likely to yield inaccurate estimates when applied with a fixed and small communication budget \(T\), i.e., when allowed to run up to \(T\) iterations. In this section we describe how we leverage the available data set in (2) to enable D-ADMM to operate reliably and rapidly via deep unfolding. As deep unfolding converts an iterative algorithm into a sequential discriminative model [38], we begin by describing the trainable architecture in Subsection III-A. Then, we detail the training procedure in Subsection III-B, after which we provide a discussion in Subsection III-C.
### _Trainable Architecture_
We unfold D-ADMM by fixing its number of iterations to be \(T\), thus meeting the communication budget imposed in Section II. In order to enable the resulting algorithm to infer reliably, we note that D-ADMM tackles the objective in (3) by introducing three hyperparameters: the regularization coefficient \(\rho\), and the primal-dual step-sizes \(\alpha\) and \(\eta\). While the exact setting of \((\rho,\alpha,\eta)\) typically does not affect the algorithm outcome when allowed to run until convergence (under some conditions [5, 7]), they have notable effect when using a fixed number of iterations [34].
Our proposed unfolded D-ADMM builds upon the insight that a proper setting of \((\rho,\alpha,\eta)\) can notably improve the performance of D-ADMM when constrained to use \(T\) iterations. We thus treat D-ADMM with \(T\) iterations as a discriminative machine learning model by considering two possible settings of trainable parameters: _agent-specific hyperparameters_ and _shared hyperparameters_.
#### III-A1 Agent-Specific Hyperparameters
Here, we allow \((\rho,\alpha,\eta)\) to _vary between agents_ and between iterations, while treating them as trainable parameters. By doing so, each iteration of D-ADMM can be viewed as a layer of a \(T\)-layered trainable model; the parameters of the \(k\)th layer are given by
\[\mathbf{\theta}_{k}=\{\rho_{p}^{(k)},\alpha_{p}^{(k)},\eta_{p}^{(k)}\}_{p=1}^{P}, \tag{7}\]
where \(\rho_{p}^{(k)}\), \(\alpha_{p}^{(k)}\), and \(\eta_{p}^{(k)}\) are used as the setting of \(\rho\), \(\alpha\) and \(\eta\), by the \(p\)th agent at the \(k\)th iteration of Algorithm 1.
The agent-specific parameterization allows each agent to use different hyperparameters that also change between iterations. As such, it provides increased flexibility to the operation of D-ADMM with \(T\) iterations for a given graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), which is exploited by learning these hyperparameters from data (see Subsection III-B). However, the fact that this form of unfolded D-ADMM assigns a specific set of hyperparameters to each individual agent implies that it should be trained and applied on the same set of agents. Furthermore, the number of trainable parameters grows with the number of agents, which makes training more complicated and computationally complex, particularly when dealing with massive networks.
#### III-A2 Shared Hyperparameters
In order to decouple the parameterization of the unfolded D-ADMM from the graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), we also consider a setting with shared hyperparameters. Here, the hyperparameters vary between iterations but are _shared between agents_. Accordingly, each D-ADMM iteration is viewed as a layer of a \(T\)-layered trainable model, where the parameters of the \(k\)th layer are given by
\[\mathbf{\theta}_{k}=\{\rho^{(k)},\alpha^{(k)},\eta^{(k)}\}, \tag{8}\]
with \(\rho^{(k)}\), \(\alpha^{(k)}\), and \(\eta^{(k)}\) being the setting of \(\rho\), \(\alpha\) and \(\eta\) employed at the \(k\)th iteration of Algorithm 1.
The parameterization in (8) is clearly a constrained case of that allowed in (7), and thus every configuration learned with shared hyperparameters can also be learned with agent-specific hyperparameters. This indicates that this conversion of D-ADMM into a trainable machine learning model is not expected to achieve improved performance compared with the agent-specific setting, when training and inference are carried out on the same graph. However, the main benefit of the shared hyperparameters approach is that it is not sensitive to the graph specificities. As such, the same learned hyperparameters can be employed on different graphs, as we numerically demonstrate in Section IV. Furthermore, it allows our unfolded D-ADMM to be trained end-to-end on a small graph and then have the trained hyperparameters used on a bigger graph, an approach which is known to facilitate learning on large graphs in the context of GNNs [40].
### _Training Procedure_
The unfolding of D-ADMM into a \(T\)-layered model yields an architecture with trainable parameters \(\mathbf{\theta}=\{\mathbf{\theta}_{k}\}_{k=1}^{T}\). In particular, \(\mathbf{\theta}\) is comprised of \(3\cdot P\cdot T\) trainable parameters for agent-specific hyperparameters, or of \(3\cdot T\) parameters for shared hyperparameters (though one can possibly introduce additional trainable parameters arising, e.g., from the objective, as we do in Section IV). We use the data set \(\mathcal{D}\) in (2) to tune \(\mathbf{\theta}\) such that the local decisions obtained by the agents, i.e., \(\{\mathbf{y}_{p}\}\), accurately match the desired consensus output \(\bar{\mathbf{y}}\).
In particular, we train unfolded D-ADMM using the mean-squared error (MSE) loss. Let \(\mathbf{y}_{p}^{(k)}(\{\mathbf{b}_{i}\}_{i=1}^{P};\mathbf{\theta})\) be the estimate produced at the \(k\)th iteration at node \(p\) when applying D-ADMM with parameters \(\mathbf{\theta}\) and observations \(\{\mathbf{b}_{i}\}_{i=1}^{P}\). The resulting training loss is
\[\mathcal{L}_{\mathcal{D}}(\mathbf{\theta})=\frac{1}{|\mathcal{D}|P}\sum_{(\{\mathbf{b}_{l,i}\},\bar{\mathbf{y}}_{l})\in\mathcal{D}}\;\sum_{p=1}^{P}\left\|\mathbf{y}_{p}^{(T)}\big{(}\{\mathbf{b}_{l,i}\};\mathbf{\theta}\big{)}-\bar{\mathbf{y}}_{l}\right\|_{2}^{2}. \tag{9}\]
We optimize the parameters \(\mathbf{\theta}\) based on (9) using deep learning optimization techniques based on, e.g., mini-batch stochastic gradient descent.
When training on a large number of agents, one is likely to encounter hardware limitations; hence the training process is carried out using a sequential training method [41]. Unlike conventional batch training, which requires processing large volumes of data all at once, sequential training takes a more incremental approach. It divides the training process into smaller segments, allowing the model to be updated progressively. By breaking down the training process into segments of \(t\) layers, where \(t<T\), we mitigate the memory and computation bottlenecks and reduce the immediate strain on the hardware. The output of the \(t\)th layer is a soft estimate, whose MSE loss can be evaluated for each \(t\leq T\). Therefore, each building block of unfolded D-ADMM can be trained individually by minimizing the MSE loss. To formulate this objective, let \(\mathbf{y}_{p}^{(t)}\big{(}\{\mathbf{b}_{i}\}_{i=1}^{P};\mathbf{\theta}_{t}\big{)}\) be the estimate produced at the \(t\)th iteration at node \(p\) when applying D-ADMM with parameters \(\mathbf{\theta}_{t}\) and observations \(\{\mathbf{b}_{i}\}_{i=1}^{P}\), where \(\mathbf{\theta}_{t}\) represents the hyperparameters at the \(t\)th layer. The resulting training loss is
\[\mathcal{L}_{\mathcal{D}}(\mathbf{\theta}_{t})=\frac{1}{|\mathcal{D}|P}\sum_{(\{\mathbf{b}_{l,i}\},\bar{\mathbf{y}}_{l})\in\mathcal{D}}\;\sum_{p=1}^{P}\left\|\mathbf{y}_{p}^{(t)}\big{(}\{\mathbf{b}_{l,i}\};\mathbf{\theta}_{t}\big{)}-\bar{\mathbf{y}}_{l}\right\|_{2}^{2}. \tag{10}\]
We optimize the hyperparameters \(\mathbf{\theta}_{t}\) based on (10) using deep learning optimization techniques based on, e.g., mini-batch stochastic gradient descent. In general, this form of learning based on first-order stochastic optimization requires one to be able to compute the gradient of (9) and (10) with respect to the trainable parameters \(\mathbf{\theta}\) and \(\mathbf{\theta}_{t}\), respectively. By the formulation of D-ADMM, and particularly, (5)-(6), we note that the output of D-ADMM with \(T\) iterations is indeed differentiable with respect to these hyperparameters, as long as the local objectives \(\{f_{p}(\cdot)\}\) are differentiable (which is implicitly assumed when formulating D-ADMM with gradient steps, as in Algorithm 1). The resulting procedure is summarized as Algorithm 2 assuming mini-batch stochastic gradient descent, which can naturally be extended to alternative learning mechanisms based on momentum and/or adaptive step-sizes [42]. We initialize \(\mathbf{\theta}\) before the training process with fixed hyperparameters with which D-ADMM converges (without a limited communications budget). After training, the learned parameters \(\mathbf{\theta}\) are used as hyperparameters for applying D-ADMM with \(T\) iterations.
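To illustrate how the end-to-end loss (9) could be minimized in practice, the following hedged PyTorch-style sketch backpropagates through the \(T\) unfolded iterations; `unfolded_dadmm` denotes a differentiable implementation of D-ADMM with \(T\) layers, and all names are assumed placeholders rather than the authors' released code:

```python
# Hedged sketch of training the unfolded hyperparameters with the loss in (9).
# unfolded_dadmm(b, theta) is assumed to run T differentiable D-ADMM layers
# and return the P local estimates stacked as a [P, n] tensor.
import torch

def train_unfolded(unfolded_dadmm, theta, loader, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(theta, lr=lr)          # theta: list of trainable tensors
    for _ in range(epochs):
        for b, y_true in loader:                  # b: local observations, y_true: desired consensus
            y_hat = unfolded_dadmm(b, theta)      # [P, n] estimates after T layers
            loss = ((y_hat - y_true.unsqueeze(0)) ** 2).sum(dim=-1).mean()
            opt.zero_grad()
            loss.backward()                       # gradients flow through all T iterations
            opt.step()
    return theta
```

The same loop applies to the per-segment loss (10) by replacing the full unfolded mapping with the first \(t\) layers and optimizing \(\mathbf{\theta}_{t}\) only.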
### _Discussion_
The proposed unfolded D-ADMM leverages data, obtained from simulations or measurements, to enable the D-ADMM algorithm to operate reliably and rapidly with a fixed number of communication rounds. Distributed optimization algorithms are typically evaluated and analyzed in terms of their convergence, which is an asymptotic property. Here, we are particularly interested in operation with a fixed and typically small number of communication rounds (e.g., in our numerical studies we use \(T=20\)), acknowledging that often in practice it is highly beneficial to impose such limitations on distributed systems. For this reason, we design unfolded D-ADMM to optimize the D-ADMM distributed optimizer within this predefined finite horizon.
While we use deep learning techniques to optimize the optimizer, the resulting distributed inference rule is not a black-box DNN, and maintains the interpretable operation of the classic D-ADMM algorithm. This follows since deep learning automated training machinery is utilized to adapt the hyperparameters of the algorithm and the regularization coefficient of the objective, which are typically tuned by hand, while allowing these parameters to vary between iterations. By doing so, we remove the need to adapt these hyperparameters manually or via additional lengthy computations during inference (via e.g., backtracking or line-search [43, Ch. 9.2]), while notably improving both accuracy and run-time, as experimentally demonstrated in Subsections IV-C and V-C for the case studies of distributed sparse recovery and of distributed linear regression learning, respectively.
The unfolding of a distributed optimizer unveils a key benefit of deep unfolding in its ability to reduce not only the run-time of iterative optimizers, but also to yield savings in communications. Our formulation here considers the unfolding of a specific distributed optimization algorithm, D-ADMM, and specializes it for the representative case studies of distributed sparse recovery and linear regression. However, the consistent improvements in performance and run-time of our unfolded D-ADMM reported in the sequel indicate the potential of this approach for other optimization problems and alternative distributed algorithms, which can possibly operate with fewer communications when combined with our considered form of deep unfolding. We leave the generalization of this study to other forms of distributed optimization for future work. Finally, while our derivation does not distinguish between different links, one can extend our approach to settings where some links are more constrained or costly by using weighted graphs, as well as combine it with distributed optimization with quantized messages. These extensions are all left for future study.
## IV Case Study 1: Distributed Sparse Recovery
The formulation of our proposed unfolded D-ADMM in Section III derives the algorithm for a generic distributed optimization setup. However, its conversion of D-ADMM into a trainable machine learning model highly depends on the specific problem considered. Therefore, we next provide two case studies for which we specialize and evaluate unfolded D-ADMM: a distributed sparse recovery task, detailed in this section, and distributed linear regression, presented in Section V.
We particularly focus on the application of unfolded D-ADMM for distributed sparse recovery based on the distributed least absolute shrinkage and selection operator (D-LASSO) objective, which is a convex relaxation based problem formulation commonly employed in compressed sensing setups [44, Ch. 1]. In Subsection IV-A we describe the D-LASSO problem formulation. Then, we specialize unfolded D-ADMM for solving this sparse recovery problem in Subsection IV-B and experimentally evaluate it in Subsection IV-C.
### _D-LASSO Problem Formulation_
We consider the recovery of a sparse signal from a set of compressed measurements observed by the agents. Accordingly, the local measurements represent compressed versions of some sparse high dimensional vector obtained using a set of sensing matrices. The distributed nature of the problem arises from the decentralized data residing on the computing agents.
The D-LASSO problem formulates this task by relaxing it into a \(\ell_{1}\) regularized recovery objective. Here, the functions \(\{f_{p}(\cdot)\}\) in (1) are given by
\[f_{p}(\mathbf{\bar{y}};\mathbf{b}_{p})=\frac{1}{2}\|\mathbf{A}_{p}\mathbf{\bar{y}}-\mathbf{b}_{p} \|_{2}^{2}+\tau\|\mathbf{\bar{y}}\|_{1}, \tag{11}\]
for each \(p\in\mathcal{V}\). In (11), \(\mathbf{A}_{p}\in\mathbb{R}^{m\times n}\) is the sensing matrix with which the \(p\)th agent acquires its local observation \(\mathbf{b}_{p}\in\mathbb{R}^{m}\), with \(m<n\). The regularization coefficient \(\tau>0\) balances the sparsity of the solution and its matching of the data, and is a hyperparameter originating from the representation of the sparse recovery task as a relaxed \(\ell_{1}\) regularized objective. This objective hyperparameter is often tuned manually.
### _Unfolded D-ADMM for the D-LASSO_
D-ADMM can be utilized for tackling the D-LASSO problem in (11), which we in turn unfold following the methodology detailed in Section III. Therefore, in the following we first specialize D-ADMM for the D-LASSO problem, after which we formulate its associated unfolded D-ADMM machine learning model.
#### IV-B1 D-ADMM for the D-LASSO
As described in Subsection II-B, D-ADMM converts the objective via variable splitting into
\[\operatorname*{arg\,min}_{\mathbf{y}_{1},\dots,\mathbf{y}_{P}} \sum_{p=1}^{P}\frac{1}{2}\|\mathbf{A}_{p}\mathbf{y}_{p}-\mathbf{b}_{p}\|_{2}^{2 }+\tau\|\mathbf{y}_{p}\|_{1},\] (12) subject to \[\mathbf{y}_{p}=\mathbf{y}_{j},\forall j\in\mathcal{N}_{p},\]
where \(\mathcal{N}_{p}\) is the set of neighbors of node \(p\) in \(G(\mathcal{V},\mathcal{E})\). The augmented Lagrangian based on (12) for the \(p\)th agent is formulated as
\[\mathcal{L}_{p}(\mathbf{y}_{p},\{\mathbf{y}_{j}\}_{j\in\mathcal{N}_{p}}, \mathbf{\lambda}_{p})\triangleq\frac{1}{2}\|\mathbf{A}_{p}\mathbf{y}_{p}-\mathbf{b}_{p}\|_{2}^ {2}+\tau\|\mathbf{y}_{p}\|_{1}\] \[\qquad\qquad+\sum_{j\in\mathcal{N}_{p}}\mathbf{\lambda}_{p}^{T}(\mathbf{ y}_{p}-\mathbf{y}_{j})+\frac{\rho}{2}\|\mathbf{y}_{p}-\mathbf{y}_{j}\|_{2}^{2}, \tag{13}\]
where \(\rho>0\) is a fixed hyperparameter and \(\mathbf{\lambda}_{p}\) is the dual variable of node \(p\).
Accordingly, D-ADMM (Algorithm 1) can be applied to tackling the D-LASSO problem in (11) with (5) becoming
\[\mathbf{y}_{p}^{(k+1)} =\mathbf{y}_{p}^{(k)}-\alpha\nabla_{\mathbf{y}_{p}}\mathcal{L}_{p}(\mathbf{y}_{p}^{(k)},\{\mathbf{y}_{j,p}\}_{j\in\mathcal{N}_{p}},\mathbf{\lambda}_{p}^{(k)})\] \[=\mathbf{y}_{p}^{(k)}-\alpha\Big{(}\mathbf{A}_{p}^{T}\mathbf{A}_{p}\mathbf{y}_{p}^{(k)}-\mathbf{A}_{p}^{T}\mathbf{b}_{p}+\tau\cdot\operatorname{sign}\bigl{(}\mathbf{y}_{p}^{(k)}\bigr{)}\] \[\qquad\qquad+\sum_{j\in\mathcal{N}_{p}}\mathbf{\lambda}_{p}^{(k)}+\rho\bigl{(}\mathbf{y}_{p}^{(k)}-\mathbf{y}_{j,p}\bigr{)}\Big{)}, \tag{14}\]
by each agent \(p\), with \(\alpha>0\) being a step-size. Similarly, (6) is specialized into
\[\mathbf{\lambda}_{p}^{(k+1)} =\mathbf{\lambda}_{p}^{(k)}+\eta\nabla_{\mathbf{\lambda}_{p}}\mathcal{L}_ {p}(\mathbf{y}_{p}^{(k+1)},\{\mathbf{y}_{j,p}\}_{j\in\mathcal{N}_{p}},\mathbf{\lambda}_{ p}^{(k)})\] \[=\mathbf{\lambda}_{p}^{(k)}+\eta\sum_{j\in\mathcal{N}_{p}}\bigl{(}\bm {y}_{p}^{(k+1)}-\mathbf{y}_{j,p}\bigr{)}, \tag{15}\]
with \(\eta>0\) being the dual step-size.
#### IV-B2 Unfolding D-ADMM
Following the description in Section III, we unfold D-ADMM into a machine learning model by fixing its number of iterations to be \(T\) and treating the hyperparameters of D-ADMM, i.e., \(\alpha,\rho,\) and \(\eta\) as trainable parameters. Moreover, we note that the formulation of the D-LASSO objective introduces an additional hyperparameter, \(\tau\), which is not unique to its tackling by D-ADMM. Since the tuning of this parameter can largely affect the performance of iterative optimizers applied to such objectives, we leverage our available data to also treat \(\tau\) as a trainable parameter.
Accordingly, the trainable parameters of unfolded D-ADMM are given by the set of iteration-specific hyperparameters, i.e., \(\mathbf{\theta}=\{\mathbf{\theta}_{k}\}_{k=1}^{T}\). The resulting trainable architecture is illustrated in Fig. 1. When applying agent-specific hyperparameters, the trainable parameters are \(\mathbf{\theta}_{k}=\{\rho_{p}^{(k)},\alpha_{p}^{(k)},\eta_{p}^{(k)},\tau_{p}^{(k)}\}_{p=1}^{P}\), i.e., \(4\cdot P\cdot T\) parameters in total.
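For concreteness, one layer of this architecture could be realized along the following lines; the sketch implements (14)-(15) in PyTorch with agent- and iteration-specific parameters, ignores the color-group ordering for brevity, and all names are illustrative rather than taken from the released code:

```python
# Hedged sketch of the k-th unfolded D-ADMM layer for the D-LASSO, cf. (14)-(15).
# y, lam: lists of per-agent tensors; y_copies[p][j]: agent p's copy of y_j;
# rho, alpha, eta, tau: this layer's per-agent trainable parameters.
import torch

def lasso_layer(y, lam, y_copies, A, b, neighbors, rho, alpha, eta, tau):
    P = len(y)
    new_y = []
    for p in range(P):
        grad = A[p].T @ (A[p] @ y[p] - b[p]) + tau[p] * torch.sign(y[p])
        for j in neighbors[p]:
            grad = grad + lam[p] + rho[p] * (y[p] - y_copies[p][j])
        new_y.append(y[p] - alpha[p] * grad)                 # primal update (14)
    new_lam = []
    for p in range(P):
        consensus = sum(new_y[p] - y_copies[p][j] for j in neighbors[p])
        new_lam.append(lam[p] + eta[p] * consensus)          # dual update (15)
    return new_y, new_lam
```

Since the layer is differentiable in \((\rho,\alpha,\eta,\tau)\), gradients of the loss (9) can flow through a stack of \(T\) such layers.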
### _Numerical Evaluation_
We numerically evaluate the proposed unfolded D-ADMM algorithm1, comparing it to the D-ADMM algorithm [6], where we used fixed hyperparameters manually tuned based on empirical trials to systematically achieve convergence. We also compare unfolded D-ADMM with a data-driven GNN, trained with the same dataset, where we employ a GNN based on the popular GraphSage architecture [45].
Footnote 1: The source code and hyperparameters used in this experimental study as well as the one reported in Subsection V-C are available at [https://github.com/yoav1131/Deep-Unfolded-D-ADMM.git](https://github.com/yoav1131/Deep-Unfolded-D-ADMM.git).
We simulate a communication network using the Erdos-Renyi graph model with \(P\in\{5,20,50\}\) nodes and proper coloring. An example of a graph generated with \(P=50\) vertices is illustrated in Fig. 2. We generate observations for each node as \(\mathbf{b}_{p}=\mathbf{A}_{p}\mathbf{\bar{y}}+\mathbf{n}_{p}\) with \(n=2000\) and \(m=\frac{500}{P}\), where \(\mathbf{n}_{p}\) is Gaussian with zero-mean i.i.d. entries of variance \(\sigma^{2}\). The desired \(\mathbf{\bar{y}}\) has \(25\%\) non-zero entries, and the sensing matrices \(\{\mathbf{A}_{p}\}\) are taken to be the block sub-matrices of the full sensing matrix from Problem 902 of the Sparco toolbox [46]. The signal-to-noise ratio (SNR), defined as \(1/\sigma^{2}\), is set to \(\operatorname{SNR}\in\{-2,0,2,4\}\) [dB].
For each simulated setting, we apply Algorithm 2 to optimize \(\mathbf{\theta}\) with \(T=25\) iterations based on a labeled dataset as in (2) comprised of \(L=1200\) training samples. Both unfolded D-ADMM and the GNN are trained using \(100\) epochs of Adam [42] with a batch size of \(100\). All considered algorithms are evaluated over \(200\) test observations.
Fig. 1: Unfolded D-ADMM for LASSO at agent \(p\) in iteration \(k\). Dashed green and blue blocks are the primal and dual updates, respectively. Red fonts represent trainable parameters.
Fig. 2: Proper colored network with \(P=50\) nodes example.
Fig. 4: Final loss versus SNR, distributed LASSO.
Fig. 5: Loss versus SNR after 10 communication rounds, distributed LASSO.
Fig. 3: Loss per iteration, distributed LASSO.
We first evaluate the loss versus iteration achieved by unfolded D-ADMM with agent-specific hyperparameters compared with conventional D-ADMM. The results are depicted in Fig. 3. It is observed in Fig. 3 that the proposed unfolded D-ADMM improves upon D-ADMM with fixed hyperparameters and D-ADMM with line-search optimization, by requiring much fewer iterations (hence communications) to converge. In particular, for \(P=5\) we observe in Figs. 3(a) and 3(d) an average communications reduction by factors of \(\times 8\) and \(\times 4\) (\(25\) iterations vs. \(211\) iterations and \(101\) iterations, respectively), depending on the SNR. The corresponding reduction for \(P=20\) is by \(\times 13\) and \(\times 6\) (\(25\) iterations vs. \(341\) iterations in Fig. 3(b) and \(151\) iterations in Fig. 3(e)), while for \(P=50\) we observe a reduction by \(\times 11\) and \(\times 6\) (\(25\) iterations vs. \(285\) iterations in Fig. 3(c) and \(168\) iterations in Fig. 3(f)).
Next, we compare the performance achieved by unfolded D-ADMM after its last iteration (\(T=25\)) with the GNN with \(10\) layers trained for the same task, as well as with the conventional D-ADMM when allowed to run until convergence. We observe in Fig. 4 that for all considered graph sizes, our unfolded D-ADMM effectively coincides with the converged D-ADMM, while using only \(T=25\) iterations. The GNN, which is agnostic to the optimizer operation and just learns abstract message passing as layers of a DNN, is outperformed by our unfolded D-ADMM. Since the GNN only employs \(10\) communication rounds (being comprised of \(10\) layers), in Fig. 5 we compare the performance of all optimizers after \(T=10\) communication rounds. There we systematically observe the improved performance achieved by the unfolded optimizer. These results demonstrate the benefits of the proposed approach in leveraging data to improve both performance and convergence speed while preserving the interpretability and suitability of conventional iterative optimizers.
## V Case Study 2: Distributed Linear Regression
Our second case study considers a distributed machine learning task. Collaborative learning of machine learning models is becoming increasingly popular, particularly due to its ability to alleviate privacy considerations associated with data sharing [47]. A common framework for distributed machine learning is federated learning (FL), which typically deals with distributed learning with centralized orchestration, i.e., where there is a central server with which all agents communicate directly and that enforces consensus on each communication round. Nonetheless, recent explorations have also considered its extensions to purely decentralized networks [48].
We particularly focus on learning a linear model based on the distributed linear regression (D-LR) objective, as formulated in Subsection V-A. The application of unfolded D-ADMM for this learning task is then presented in Subsection V-B and numerically evaluated in Subsection V-C.
### _D-LR Problem Formulation_
We consider the learning of a linear model from a dataset that is partitioned into subsets and distributed across multiple computing agents. The local data here represents labeled sets used for learning purposes. Accordingly, the D-LR problem specializes the generic formulation in (1) by setting the objectives \(\{f_{p}(\cdot)\}\) to be
\[f_{p}(\bar{\mathbf{y}};\mathbf{b}_{p})=\frac{1}{2\cdot L_{p}}\sum_{(\mathbf{x}_{i,p},s_{i,p})\in\mathbf{b}_{p}}(\bar{\mathbf{a}}^{T}\mathbf{x}_{i,p}+\bar{\omega}-s_{i,p})^{2}. \tag{16}\]
In (16), the local data \(\mathbf{b}_{p}\) for agent \(p\in\mathcal{V}\) is the set of \(L_{p}\) labeled pairs written as \(\mathbf{b}_{p}=\{\mathbf{x}_{i,p},s_{i,p}\}_{i=1}^{L_{p}}\), where \(s_{i,p}\) is the scalar label associated with the \(d\times 1\) input vector \(\mathbf{x}_{i,p}\). The optimization variable \(\bar{\mathbf{y}}\) is a linear regression model written as \(\bar{\mathbf{y}}=\{\bar{\mathbf{a}},\bar{\omega}\}\) representing an affine transformation with parameters \(\bar{\mathbf{a}}\in\mathbb{R}^{d}\) and \(\bar{\omega}\in\mathbb{R}\) denoting the regression vector and the bias, respectively.
### _Unfolded D-ADMM for the D-LR_
D-ADMM can be utilized for tackling the D-LR problem in (16), which we in turn unfold following the methodology detailed in Section III. Therefore, in the following we first specialize D-ADMM for the D-LR problem, after which we formulate the unfolded D-ADMM machine learning model.
#### V-B1 D-ADMM for the D-LR
Following Subsection II-B, D-ADMM converts the objective via variable splitting into
\[\operatorname*{arg\,min}_{\mathbf{y}_{1},\dots,\mathbf{y}_{P}} \sum_{p=1}^{P}\sum_{(\mathbf{x}_{i,p},s_{i,p})\in\mathbf{b}_{p}}\frac{1}{2 \cdot L_{p}}(\mathbf{a}_{p}^{T}\mathbf{x}_{i,p}+\omega_{p}-s_{i,p})^{2},\] (17) subject to \[\mathbf{a}_{p}=\mathbf{a}_{j},\forall j\in\mathcal{N}_{p},\] \[\omega_{p}=\omega_{j},\forall j\in\mathcal{N}_{p},\]
where the local models \(\mathbf{y}_{p}\) have two components, \(\{\mathbf{a}_{p},\omega_{p}\}\), hence (17) has two constraints. This leads to two primal and two dual updates, one for each component. The augmented Lagrangian based on (17) for the \(p\)th agent is formulated as
\[\mathcal{L}_{p}(\mathbf{y}_{p},\{\mathbf{y}_{j}\}_{j\in\mathcal{N}_{p}},\mathbf{\mu}_{p},\mathbf{\lambda}_{p})\triangleq\frac{1}{2\cdot L_{p}}\sum_{(\mathbf{x}_{i,p},s_{i,p})\in\mathbf{b}_{p}}(\mathbf{a}_{p}^{T}\mathbf{x}_{i,p}+\omega_{p}-s_{i,p})^{2}\] \[\quad\quad\quad\quad+\sum_{j\in\mathcal{N}_{p}}\mathbf{\mu}_{p}^{T}(\mathbf{a}_{p}-\mathbf{a}_{j})+\frac{\rho_{p}}{2}\|\mathbf{a}_{p}-\mathbf{a}_{j}\|_{2}^{2}\] \[\quad\quad\quad\quad+\sum_{j\in\mathcal{N}_{p}}\lambda_{p}(\omega_{p}-\omega_{j})+\frac{\beta_{p}}{2}(\omega_{p}-\omega_{j})^{2}, \tag{18}\]
where \(\rho_{p}>0\) and \(\beta_{p}>0\) are fixed hyperparameters and \(\mathbf{\mu}_{p},\lambda_{p}\) are the dual variables of node \(p\).
Accordingly, D-ADMM (Algorithm 1) can be applied to tackling the D-LR problem in (16) with (5) becoming
\[\mathbf{a}_{p}^{(k+1)} =\mathbf{a}_{p}^{(k)}-\alpha_{p}^{(k)}\nabla_{\mathbf{a}_{p}}\mathcal{L}_{p}(\mathbf{y}_{p}^{(k)},\{\mathbf{y}_{j,p}\}_{j\in\mathcal{N}_{p}},\mathbf{\mu}_{p}^{(k)})\] \[=\mathbf{a}_{p}^{(k)}-\alpha_{p}^{(k)}\Big{(}\frac{1}{L_{p}}\sum_{(\mathbf{x}_{i,p},s_{i,p})\in\mathbf{b}_{p}}\mathbf{x}_{i,p}\mathbf{x}_{i,p}^{T}\mathbf{a}_{p}^{(k)}+\mathbf{x}_{i,p}\omega_{p}^{(k)}\] \[\quad-\mathbf{x}_{i,p}s_{i,p}+\sum_{j\in\mathcal{N}_{p}}\mathbf{\mu}_{p}^{(k)}+\rho_{p}^{(k)}\big{(}\mathbf{a}_{p}^{(k)}-\mathbf{a}_{j,p}\big{)}\Big{)}, \tag{19}\]
and
\[\omega_{p}^{(k+1)} =\omega_{p}^{(k)}-\delta_{p}^{(k)}\nabla_{\omega_{p}}\mathcal{L}_{p}(\mathbf{y}_{p}^{(k)},\{\mathbf{y}_{j,p}\}_{j\in\mathcal{N}_{p}},\lambda_{p}^{(k)})\] \[=\omega_{p}^{(k)}-\delta_{p}^{(k)}\Big{(}\frac{1}{L_{p}}\sum_{(\mathbf{x}_{i,p},s_{i,p})\in\mathbf{b}_{p}}\mathbf{a}_{p}^{(k)T}\mathbf{x}_{i,p}+\omega_{p}^{(k)}-s_{i,p}\] \[\quad\quad+\sum_{j\in\mathcal{N}_{p}}\lambda_{p}^{(k)}+\beta_{p}^{(k)}\big{(}\omega_{p}^{(k)}-\omega_{j,p}\big{)}\Big{)}, \tag{20}\]
by each agent \(p\), with \(\alpha>0\) and \(\delta>0\) being step-sizes. The agent then shares \(\mathbf{y}_{p}^{(k+1)}=\{\mathbf{a}^{(k+1)},\omega^{(k+1)}\}\) with its neighbours, who update their local copies. The iteration is concluded by having all agents update their dual variables with step-sizes \(\eta>0\) and \(\gamma>0\) via
\[\mathbf{\mu}_{p}^{(k+1)} =\mathbf{\mu}_{p}^{(k)}+\eta_{p}^{(k)}\nabla_{\mathbf{\mu}_{p}}\mathcal{ L}_{p}(\mathbf{y}_{p}^{(k+1)},\{\mathbf{y}_{j,p}\}_{j\in\mathcal{N}_{p}},\mathbf{\mu}_{p}^{(k)})\] \[=\mathbf{\mu}_{p}^{(k)}+\eta_{p}^{(k)}\sum_{j\in\mathcal{N}_{p}}\big{(} \mathbf{a}_{p}^{(k+1)}-\mathbf{a}_{j,p}\big{)}. \tag{21}\]
and
\[\mathbf{\lambda}_{p}^{(k+1)} =\mathbf{\lambda}_{p}^{(k)}+\gamma_{p}^{(k)}\nabla_{\mathbf{\lambda}_{p} }\mathcal{L}_{p}(\mathbf{y}_{p}^{(k+1)},\{\mathbf{y}_{j,p}\}_{j\in\mathcal{N}_{p}}, \mathbf{\lambda}_{p}^{(k)})\] \[=\mathbf{\lambda}_{p}^{(k)}+\gamma_{p}^{(k)}\sum_{j\in\mathcal{N}_{p} }\big{(}\omega_{p}^{(k+1)}-\omega_{j,p}\big{)}. \tag{22}\]
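To make the per-agent computation concrete, the following is a minimal NumPy sketch of one D-ADMM iteration, Eqs. (19)-(22), for the linear-regression case. All function and variable names are ours and purely illustrative; this is not the authors' implementation, and the handling of the neighbors' local copies is simplified.

```python
import numpy as np

def dadmm_iteration(a_p, w_p, mu_p, lam_p, X_p, s_p, neighbors,
                    alpha, rho, delta, beta, eta, gamma):
    """One D-ADMM iteration at agent p for distributed linear regression.

    X_p: (L_p, d) local features; s_p: (L_p,) local targets.
    neighbors: list of (a_j, w_j) local copies of the neighboring agents' models.
    The step sizes / penalty coefficients are the (possibly learned) hyperparameters
    of this iteration.
    """
    L_p = X_p.shape[0]
    residual = X_p @ a_p + w_p - s_p                       # (L_p,)
    # Primal update of the weights a_p, Eq. (19)
    grad_a = X_p.T @ residual / L_p
    grad_a += sum(mu_p + rho * (a_p - a_j) for a_j, _ in neighbors)
    a_new = a_p - alpha * grad_a
    # Primal update of the bias w_p, Eq. (20)
    grad_w = residual.sum() / L_p
    grad_w += sum(lam_p + beta * (w_p - w_j) for _, w_j in neighbors)
    w_new = w_p - delta * grad_w
    # Dual updates, Eqs. (21)-(22); a_j, w_j are the stored local copies
    mu_new = mu_p + eta * sum(a_new - a_j for a_j, _ in neighbors)
    lam_new = lam_p + gamma * sum(w_new - w_j for _, w_j in neighbors)
    return a_new, w_new, mu_new, lam_new
```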
#### V-B2 Unfolding D-ADMM
By unfolding the above D-ADMM steps following the method described in Section III, the iterative steps are viewed as layers of a DNN. For this problem we consider two parameterizations of the learnable hyperparameters: in the first case the graph is fixed in time and each agent has its own hyperparameters, while in the second case all the agents share the same learnable hyperparameters.
1. _Agent-Specific Hyperparameters:_ In this case the set of learnable hyperparameters of agent \(p\) is \(\mathbf{\theta}_{p}=\{\alpha_{p}^{k},\rho_{p}^{k},\delta_{p}^{k},\beta_{p}^{k},\eta_{p}^{k},\gamma_{p}^{k}\}_{k=1}^{T}\) (resulting in \(6\cdot P\cdot T\) trainable parameters).
2. _Shared Hyperparameters:_ In this case we neglect the dependency of our model on the number of agents \(P\). Therefore, the set of learnable hyperparameters \(\mathbf{\theta}=\{\alpha^{k},\rho^{k},\delta^{k},\beta^{k},\eta^{k},\gamma^{k}\}_{k=1}^{T}\) will be the same for all agents (i.e., \(6\cdot T\) trainable parameters).
The main difference between these two cases is the dependency of the learnable hyperparameter set on \(P\). The resulting trainable architecture (for an agent-specific parameterization) is illustrated in Fig. 6.
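A minimal PyTorch sketch of how these two parameterizations could be declared as trainable parameters is shown below; the module and method names are ours and not taken from the paper, and the initialization value is an assumption.

```python
import torch
import torch.nn as nn

class UnfoldedDADMMHyperparams(nn.Module):
    """Learnable hyperparameters for T unfolded D-ADMM iterations.

    agent_specific=True  -> one set per agent and iteration (6*P*T parameters).
    agent_specific=False -> one shared set per iteration     (6*T parameters).
    """
    def __init__(self, T: int, P: int, agent_specific: bool, init: float = 0.01):
        super().__init__()
        n_agents = P if agent_specific else 1
        # Order along the last dimension: alpha, rho, delta, beta, eta, gamma
        self.params = nn.Parameter(init * torch.ones(T, n_agents, 6))

    def get(self, k: int, p: int):
        """Hyperparameters used by agent p at unfolded iteration k."""
        row = self.params[k, p] if self.params.shape[1] > 1 else self.params[k, 0]
        alpha, rho, delta, beta, eta, gamma = row.unbind()
        return alpha, rho, delta, beta, eta, gamma
```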
### _Numerical Evaluation_
We numerically evaluate the proposed unfolded D-ADMM algorithm with the two configurations discussed above, i.e., \((i)\) agent-specific hyperparameters and \((ii)\) shared hyperparameters. For both cases we consider the learning of a handwritten digit classifier, with the observations of each agent taken from the MNIST dataset as \(\mathbf{b}_{p}=\{\mathbf{x}_{i,p},s_{i,p}\}_{i=1}^{L_{p}}\). The desired output \(\mathbf{\tilde{y}}\) is a linear regression model suitable for MNIST.
Fig. 8: Loss per iteration; \(P=12\)
Fig. 6: Unfolded D-ADMM for linear regression model illustration at agent \(p\) in iteration \(k\). Dashed green and blue blocks are the primal update and the dual update, respectively. Red fonts represent trainable parameters.
Fig. 7: Loss per iteration; \(P=5\)
#### V-C1 Agent-Specific Hyperparameters
Here, we compare unfolded D-ADMM with conventional D-ADMM, for which we used fixed hyperparameters manually tuned through empirical trials to systematically achieve convergence. We also compare these distributed optimizers with training the same model using conventional federated learning (FL), a common framework for studying distributed machine learning. Note that FL assumes a centralized server that has direct links to each of the \(P\) agents. Here, each agent implements \(20\) local training iterations using Adam before communicating with the server for synchronization.
We simulate a communication network using the Erdos-Renyi graph model with proper coloring. The labeled dataset as in (2) is comprised of \(L=L_{p}\cdot P\) training samples, where each agent has local data of size \(L_{p}=200\). We set the graph size to be \(P\in\{5,12,20\}\). For each configuration, we apply Algorithm 2 to optimize \(\mathbf{\theta}\) with \(T=20\) iterations using Adam. All considered algorithms are evaluated over \(200\) test observations.
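The communication topology could be generated, for instance, with the standard networkx generators; the sketch below is illustrative, and the edge probability `p_edge = 0.5` and the greedy coloring strategy are our assumptions, since they are not specified in the text.

```python
import networkx as nx

def make_comm_network(P: int, p_edge: float = 0.5, seed: int = 0):
    """Sample a connected Erdos-Renyi communication graph and a proper coloring.

    The coloring groups agents that may transmit in the same communication slot
    (no two adjacent agents share a color).
    """
    attempt = seed
    while True:
        G = nx.erdos_renyi_graph(P, p_edge, seed=attempt)
        if nx.is_connected(G):
            break
        attempt += 1  # resample until the graph is connected
    coloring = nx.greedy_color(G, strategy="largest_first")   # node -> color id
    neighbors = {v: list(G.neighbors(v)) for v in G.nodes}
    return G, coloring, neighbors

# Example: a 12-agent network as in the P = 12 experiment
G, coloring, neighbors = make_comm_network(P=12)
```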
The results for the settings of \(P\in\{5,12,20\}\) are depicted in Figs. 7-9, respectively. It is observed that the proposed unfolded D-ADMM improves not only upon fixed hyperparameters D-ADMM, but also over FL (which has an additional centralized orchestration), by requiring much fewer communication iterations to converge. In particular, Fig. 7 shows a communications reduction by a factor of \(\times 154\) and \(\times 38\) (\(20\) iterations vs. \(3080\) and \(760\) iterations respectively), while in Fig. 8 the corresponding reduction is by \(\times 142\) and \(\times 50\) (\(20\) iterations vs. \(2840\) and \(1000\) iterations respectively). In Fig. 9, we observe a reduction in communication rounds by \(\times 120\) and \(\times 85\) (\(20\) iterations vs. \(2400\) and \(1700\) iterations respectively).
#### V-C2 Shared Hyperparameters
Next, we evaluate unfolded D-ADMM with shared hyperparameters. Our main aim here is to evaluate the transferability induced by unfolding with shared hyperparameters, and particularly the ability to train with one graph and operate reliably on another graph. We simulate a communication network using the Erdos-Renyi graph model with proper coloring, and set the graph during training to be comprised of merely \(P=5\) nodes. We apply Algorithm 2 to optimize \(\mathbf{\theta}\) with \(T=20\) iterations based on a labeled dataset as in (2), where each agent has data of size \(L_{p}=200\), and all agents share the same hyperparameter set.
While we train on a small network, we evaluate the learned hyperparameter set on bigger networks with \(P\in\{12,20,40,80,200\}\), where all agents use the same learned hyperparameter set. The resulting performance achieved per iteration is depicted in Fig. 10. It is observed that, when all the agents share the same hyperparameters, our proposed method is robust to changes in the number of agents over time, and that one can successfully apply an unfolded D-ADMM trained on small graphs to larger graphs.
We next show that the usage of shared hyperparameters does not lead to a notable performance degradation compared to agent-specific hyperparameters, when evaluated and trained on the same graph. To that aim, in Fig. 11 we compare the two approaches over the graph used for training, i.e., \(P=5\). The figure shows that reusing hyperparameters, which enables transferability to larger graphs, comes at the cost of only a minor degradation compared with having each agent learn its own hyperparameters.
## VI Conclusions
In this work, we proposed a data-aided method for rapid and interpretable distributed optimization. Our approach first unfolds the D-ADMM optimizer of each agent to reach consensus using a fixed small number of iterations. Then, we use the data to tune the hyperparameters of each agent at each iteration, which can either be shared between the agents or learned for each agent separately. We specialized our unfolded D-ADMM for distributed sparse recovery and for distributed machine learning, where we showed the notable gains of the proposed methodology in enabling high performance distributed optimization with few communication rounds.
|
2309.16544 | The DEVStone Metric: Performance Analysis of DEVS Simulation Engines | The DEVStone benchmark allows us to evaluate the performance of
discrete-event simulators based on the DEVS formalism. It provides model sets
with different characteristics, enabling the analysis of specific issues of
simulation engines. However, this heterogeneity hinders the comparison of the
results among studies, as the results obtained on each research work depend on
the chosen subset of DEVStone models. We define the DEVStone metric based on
the DEVStone synthetic benchmark and provide a mechanism for specifying
objective ratings for DEVS-based simulators. This metric corresponds to the
average number of times that a simulator can execute a selection of 12 DEVStone
models in one minute. The variety of the chosen models ensures we measure
different particularities provided by DEVStone. The proposed metric allows us
to compare various simulators and to assess the impact of new features on their
performance. We use the DEVStone metric to compare some popular DEVS-based
simulators. | Román Cárdenas, Kevin Henares, Patricia Arroba, José L. Risco-Martín, Gabriel A. Wainer | 2023-09-28T15:56:05Z | http://arxiv.org/abs/2309.16544v1 | The DEVStone Metric: Performance Analysis of DEVS Simulation Engines
###### Abstract
The DEVStone benchmark allows us to evaluate the performance of discrete-event simulators based on the DEVS formalism. It provides model sets with different characteristics, enabling the analysis of specific issues of simulation engines. However, this heterogeneity hinders the comparison of the results among studies, as the results obtained on each research work depend on the chosen subset of DEVStone models. We define the DEVStone metric based on the DEVStone synthetic benchmark and provide a mechanism for specifying objective ratings for DEVS-based simulators. This metric corresponds to the average number of times that a simulator can execute a selection of 12 DEVStone models in one minute. The variety of the chosen models ensures we measure different particularities provided by DEVStone. The proposed metric allows us to compare various simulators and to assess the impact of new features on their performance. We use the DEVStone metric to compare some popular DEVS-based simulators.
Discrete-Event Simulation, Performance, Benchmarking
## 1 Introduction
Different Modeling and Simulation (M&S) techniques and tools have been proposed for studying and analyzing human-made or natural systems, each with different options and specific formalism support [1]. Despite efforts to provide compatibility and reusability of models among different M&S tools [2, 3, 4], porting models from one tool to another is still a major challenge that usually implies the complete reimplementation of the models in the new tool.
Among different M&S formalisms, Discrete Event Systems (DESs) are widely used due to their intuitive yet powerful nature [5]. For a given model, these formalisms define a discrete set of states \(S\), and how the state of the model changes from \(s\in S\) to \(s^{\prime}\in S\) with the occurrence of events. Even though there is a wide variety of DES approaches (e.g., Markov chains or Petri nets), the Discrete Event System Specification (DEVS) formalism [1] stands out as a common denominator for multi-formalism hybrid systems modeling [6]. This feature enables the encapsulation of models described in other formalisms as DEVS models.
A DEVS model is a set of _atomic_ and _coupled_ models that represent a system hierarchically and modularly. An atomic model specifies the behavior of a system component. The state of an atomic model depends on its previous state and any input event. When an atomic model transitions from one state to another, output events may occur. In contrast, coupled models define how the system components are interconnected. By coupling one model to another, coupled models specify which output events of the former correspond to input events of the latter. DEVS has been applied in a variety of application domains, including decision support systems [7], disease prediction [8], logistic of maintenance operations [9], smart grid infrastructures [10], and traffic analysis [11].
There are multiple DEVS-compliant simulation engines that provide different features to the modelers. Therefore, we need to define comparative methods to decide which of these tools is best suited for our needs. One way to do so is to use benchmarking software that measures features of the tools under study (e.g., power consumption or performance) and assigns them an unbiased score [12]. The DEVStone benchmarking toolset [13] was introduced with the aim of providing a common method to compare the simulation performance of DEVS simulators.
DEVStone presents four model topologies. Depending on the topology and other configuration parameters, DEVStone generates synthetic DEVS models with different degrees of complexity. The performance of a simulation tool is then measured as the time required to simulate the synthetic DEVStone model. However, state-of-the-art studies select a set of DEVStone models in an arbitrary fashion, making it difficult to compare results of different research works.
Here we present a common evaluation method for DEVS-based simulator performance, a key aspect when deciding on the most convenient modeling environment. We extend DEVStone and introduce a common model set to define the _DEVStone metric_, a basic performance metric to be used as a complete benchmarking technique. It allows the generation of objective and shared ratings reflecting the performance of DEVS-based simulators, providing a common reference for comparing them. The contributions of this work are:
* We revisit the topologies of the original DEVStone benchmarking tool and provide equations to compute model-related parameters (e.g., number of couplings or number of events triggered).
* We define the DEVStone metric as the average number of DEVStone units that a DEVS-compliant simulation engine can simulate in one minute. A DEVStone unit corresponds to a set of 12 DEVStone models with different characteristics (e.g., number of components, interconnections, or simulation events). These models are intended to stress-test the simulators under study. The selected models represent four DEVStone model types with three different topologies each. To avoid excessively long execution times, we ran multiple models in multiple simulators before selecting the presented DEVStone model set to ensure that the benchmark metric was in the order of minutes in most simulators.
* The DEVStone metric can be represented as a single number that corresponds to the total execution time, or as a matrix of the time spent on each of these models. While the former gives us a general idea of the performance of the simulation engine, the latter provides insights into how the structure of the model may affect the performance of the tool.
* We compute this new DEVStone metric for some of the most popular DEVS simulation engines. We run the benchmark on these tools and discuss the obtained results.
The proposed benchmark metric allows DEVS modelers to compare different available simulation tools and decide which tool is more suitable for them. Furthermore, software developers can integrate this metric into the development process to assess how new features and updates of a simulation tool impact its simulation performance.
It is worth mentioning that our work focuses on sequential simulation exclusively. Furthermore, we only compare simulators according to the time required to execute the simulations under study. Therefore, we do not consider additional features (e.g., integration with other frameworks or unit testing tools) that modelers might find valuable when deciding which simulator to use. Finally, note that execution times depend on the software and the machine used to run the simulations. Thus, the hardware platform used to replicate the experiments may impact the results, and comparisons between simulators should be performed on the same hardware.
The structure of the paper is organized as follows. First, we introduce some related work in Section 2. In Section 3, we present a detailed description of DEVStone and the proposed benchmarking technique. In Section 4, we use this technique to compare some popular DEVS-based simulators and discuss the results. Finally, we present conclusions in Section 5.
## 2 Related Work
This section first provides a brief introduction to the DEVS formalism and different state-of-the-art DEVS-compliant simulation tools. Finally, we present previous approaches for comparing the performance of these tools.
### The DEVS Formalism
DEVS is a formalism for DESs based on set theory [1]. It presents several advantages for analyzing and designing complex systems (e.g., completeness, verifiability, extensibility, and maintainability). Systems defined using the DEVS theory can be easily implemented using any existing DEVS-compliant computational library. DEVS provides two types of models for defining a system: atomic models, which specify how a system behaves according to an internal state and the occurrence of external events, and coupled models, which define the structure of the model (i.e., how DEVS subcomponents are interconnected with each other). Chow and Zeigler [14] proposed the Parallel DEVS (PDEVS) formalism, a revision of the original DEVS formalism that enabled the modeling of collisions between internal and external events. It also introduced the concept of message bags, which significantly improved the way modelers could define external transitions when more than one input event happened simultaneously. Currently, PDEVS is the prevalent DEVS variant. In the following, unless it is explicitly noted, the use of DEVS implies PDEVS. Appendix A contains an in-depth description of the DEVS formalism.
### DEVS Simulation Engines
Along with PDEVS, Chow and Zeigler defined a formal construct to enable the parallel and distributed execution of PDEVS models. Nowadays, PDEVS is implemented in numerous DEVS simulation engines based on this algorithm. Some of the most popular DEVS simulators are described in the next paragraph.
The adevs simulator [15] is a C\(++\) library for developing models based on the PDEVS and Dynamic DEVS (dynDEVS) formalisms. In terms of performance, adevs is a useful reference, as it usually presents the best results. Cadmium [16] is the latest simulator presented by the ARSLab research group, after CD\(++\) and CDBoost. It is a strongly typed DEVS simulator written in C\(++\) that focuses on ensuring the described model's validity according to the DEVS formalism. It includes a DEVS-based kernel for embedded systems. PyPDEVS [17] is a popular DEVS simulator developed in Python. It has two simulator implementations: a main simulator that allows more configurations, and a minimal version that restricts the simulation functioning to the basics and presents a higher performance. Finally, the xDEVS simulator [18] supports different programming languages: C\(++\), Java, and Python. While all the implementations present a similar API, each version presents different features (e.g., parallel and/or distributed simulation, model flattening, or transducer modules).
### Measuring the Performance of DEVS Simulators
Performance of DEVS-compliant simulation engines has been a research topic since the definition of the DEVS formalism. However, the methodologies used to measure the performance speedup of new proposals differ depending on the research. For instance, DEVS-C\(++\)[19] is a high-performance environment that focuses on modeling large-scale systems with high resolution. The authors simulate a watershed with different degrees of detail to compare the speedup of using this tool on different High-Performance Computing (HPC) clusters. Hu and Zeigler [20] proposed an alternative simulation engine for improving the performance of large-scale cellular DEVS models by implementing a data structure that allows less time-consuming searches of active models. The performance analysis consists of a comparison between state-of-the-art simulation engines and the new approach when simulating two example models. Muzy and Nutaro [21] detected that the classical implementation of DEVS simulators could lead to memory inefficiencies resulting from an excessive number of nodes. The authors proposed different simulation algorithms to overcome these deficiencies.
To illustrate the obtained speedup, each of these contributions included a performance comparison between previous simulation engines and simulation engines that implemented the proposed algorithms. These comparisons measured the time required to simulate an arbitrary model with a considerable degree of complexity. However, each contribution used a different model to illustrate this performance enhancement. Depending on the model under study, the number of events, couplings between components, and processing time of the transition functions can vary significantly. Thus, a performance comparative considering only one model is not enough to evaluate a DEVS simulation engine. Additionally, as each contribution used a completely different model to illustrate its performance enhancement, it is not possible to compare the performance enhancement achieved by these contributions.
The DEVStone benchmark [13] was introduced to overcome these limitations. DEVStone describes four types of model structures whose complexity can be adjusted through four configuration parameters. Section 3 provides an in-depth description of the DEVStone benchmark. Since its introduction, DEVStone has become a 'de-facto' standard for analyzing the performance of DEVS-based simulators. Some authors have used it as a metric to evaluate their DEVS implementations [18, 22]. Others used it for measuring the impact of original proposals for performance improvement [23, 24]. Several works have compared some of the most relevant DEVS-based simulators of the state of the art using this benchmark [25, 26]. Therefore, the DEVStone benchmark can be useful for different purposes. It can help new modelers by giving
them a performance insight of the most popular DEVS simulators, facilitating the task of selecting the simulator that fits best for their interests. Also, it can be used for developers to compare their implementations and to evaluate the performance of new proposals and methods for lowering the overhead introduced by the simulation tools.
However, despite the growing popularity of this benchmark, a common metric has not been proposed yet. As DEVStone defines different models and allows to parametrize them to vary their size and complexity, authors must select specific combinations of models and parameters for comparing their implementations. In this way, some of them opt to explore all the combinations in a predefined range. Others establish the reference using a small set of heterogeneous models.
Different DEVStone model variations have also been presented to explore specific simulation aspects. Risco et al. [27] proposed an HOmod model variation called HOmem, which presents a straightforward mathematical way of incrementing the traffic of events with respect to the three simpler DEVStone models. Van Tendeloo and Vangheluwe [26] introduced an HI model variation that removes the recursiveness present in the DEVStone definitions. This alternative model consists of four fully connected atomic models. Thus, it increments the number of interconnections and shows a quadratic growth in the number of events. In this model, a single parameter defines whether collisions should happen, which makes it possible to evaluate the bag-merging algorithms when it is activated.
This variety of references makes the comparison between works difficult and runs contrary to the very concept of a benchmark. This paper aims to resolve this heterogeneity of references by defining a common metric for evaluating DEVS-based simulators.
## 3 The DEVStone Metric
In this section, we first introduce the classic DEVStone models. We describe their components and parameters, and their differences are discussed. Then, we select a diverse set of these models to compose the DEVStone metric. This metric is used later in Section 4 as a standard point of reference to compare the most popular state-of-the-art DEVS-compliant simulators.
### DEVStone Models
DEVStone [13] enables the generation of multiple synthetic DEVS models with varying shapes and sizes. DEVStone models are simulated to test different features of the simulator under study. DEVStone models are uniquely defined with five parameters:
* _Type_: it defines the number of input/output ports of the model, its structure, and how the model's subcomponents are interconnected with each other.
* _Depth_: it specifies the number of nested models (i.e., levels) in the model hierarchy. The same DEVStone model is repeated iteratively in each of these levels, except for the last one. This inner-most coupled model consists of a single atomic model.
* _Width_: it determines the number of components in each layer of the DEVStone model.
* _Internal transition delay_: it specifies the wall-clock time (in milliseconds) that an atomic model must spend computing its next state when its internal transition function is triggered. When an atomic model's internal transition function is activated, the simulator will run CPU-intensive arithmetic computations for as much time as defined by the internal transition delay, regardless of the DEVS simulator under use. The Dhrystone benchmark [28] is used to emulate CPU-intensive integer arithmetic operations.
* _External transition delay_: this parameter is equivalent to the internal transition delay, but for atomic models' external transition functions. It specifies the wall-clock time (in milliseconds)
that an atomic model must spend computing its next state when its external transition function is triggered.
Atomic models have one input port and one output port. When triggered, their output function \(\lambda\) produces a message bag with a single integer value, regardless of the number of messages present in their input bag. This measure avoids excessive memory consumption, resulting from the accumulation of the outputs of different atomic models at different coupled models.
At the beginning of the simulation, a single integer value is injected into all the input ports of the topmost coupled model. The simulation time of the model is measured from the introduction of this stimulus until there are no pending events in any atomic model.
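To make the behavior of a DEVStone atomic model concrete, below is a minimal, framework-agnostic Python sketch. It does not follow the API of any particular simulator discussed later; the class name, the DEVS-style method names, and the busy-wait used in place of the Dhrystone workload are all illustrative.

```python
import time

class DEVStoneAtomic:
    """Simplified DEVStone atomic model: one input port, one output port."""

    def __init__(self, int_delay_ms: float = 0.0, ext_delay_ms: float = 0.0):
        self.int_delay_ms = int_delay_ms   # emulated cost of the internal transition
        self.ext_delay_ms = ext_delay_ms   # emulated cost of the external transition
        self.sigma = float("inf")          # time until the next internal event
        self.input_bag = []

    @staticmethod
    def _busy_wait(ms: float):
        """Stand-in for the Dhrystone integer workload used by DEVStone."""
        end = time.perf_counter() + ms / 1000.0
        while time.perf_counter() < end:
            pass

    def delta_ext(self, bag):
        self._busy_wait(self.ext_delay_ms)
        self.input_bag = list(bag)
        self.sigma = 0.0                   # schedule an immediate internal event

    def lambda_out(self):
        return [1]                         # a single integer, regardless of the input bag

    def delta_int(self):
        self._busy_wait(self.int_delay_ms)
        self.input_bag = []
        self.sigma = float("inf")          # passivate until the next input
```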
The particularities of each DEVStone type are presented below. For each of them, some equations are shown to describe their structure and behavior. Considering the different parameters, they allow us to calculate the number of atomic models (N\({}_{\mathrm{ATOMIC}}\)), events (N\({}_{\mathrm{EVENTS}}\)), External Input Couplings (N\({}_{\mathrm{EIC}}\)), External Output Couplings (N\({}_{\mathrm{EOC}}\)), and Internal Couplings (N\({}_{\mathrm{IC}}\)).
#### 3.1.1 Li models
Low Interconnectivity (LI) models present the simplest DEVStone structure, where the only couplings of each depth level are the ones that connect the parent input port with both the coupled and the atomic input ports. There is a single parent output port, connected to the output of the internal coupled model.
This composition is depicted in Figure 1.
LI models have \(d-1\) layers containing a coupled model and \(w-1\) atomic models, where \(d\) is the depth and \(w\) is the width of the model. Also, the innermost level contains a single atomic model, as shown in Figure 1(a). Due to the configuration of the couplings, only one internal event and one external event are produced in each atomic model. Among the couplings, the major part corresponds to EIC couplings (there is one for each component, both atomic and coupled). The rest of them correspond to EOC couplings (one per depth level). This model does not present IC couplings. Equation (1) summarizes the LI model's characteristics.
\[\begin{split} N_{ATOMIC}&=(w-1)\times(d-1)+1\\ N_{EIC}&=w\times(d-1)+1\\ N_{EOC}&=d\\ N_{IC}&=0\\ N_{EVENTS}&=(w-1)\times(d-1)+1\end{split} \tag{1}\]
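As a cross-check of Equation (1), the recursive LI structure can also be enumerated directly. The sketch below is illustrative code (not tied to any of the simulators discussed later) that counts the atomic models and couplings of an LI model by explicit recursion over the depth levels.

```python
def li_counts(depth: int, width: int):
    """Count atomics and couplings of an LI DEVStone model by explicit recursion."""
    if depth == 1:                        # innermost coupled model: a single atomic
        return {"atomic": 1, "eic": 1, "eoc": 1, "ic": 0}
    inner = li_counts(depth - 1, width)
    return {
        "atomic": inner["atomic"] + (width - 1),
        "eic": inner["eic"] + width,      # 1 coupling to the child coupled + (w-1) atomics
        "eoc": inner["eoc"] + 1,          # child coupled output -> parent output
        "ic": inner["ic"],                # LI models have no internal couplings
    }

# Example: matches Eq. (1) for d = 40, w = 200
print(li_counts(40, 200))  # {'atomic': 7762, 'eic': 7801, 'eoc': 40, 'ic': 0}
```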
Figure 1: Structure of the LI DEVStone model.
#### 3.1.2 HI models
High Interconnectivity (HI) models have the structure shown in Figure 2. They extend the LI model definition adding additional internal couplings between each pair of adjacent atomic models. Figure 2(b) depicts these new internal couplings in gray.
Hence, the number of atomic components, EIC, and EOC couplings remains the same. However, these extensions alter the number of events and IC couplings. In each depth level (except for the last one, that only has an atomic model), \(\sum_{i=1}^{w-1}i\) events are produced due to the chaining of the atomic models, as shown in Equation (2). Moreover, there are \(w-2\) IC couplings for each remaining level if \(w>2\). If \(w\leq 2\), there are no IC couplings.
\[\begin{array}{l}N_{ATOMIC}=(w-1)\times(d-1)+1\\ N_{EIC}=w\times(d-1)+1\\ N_{EOC}=d\\ N_{IC}=\left\{\begin{array}{ll}(w-2)\times(d-1),&\mbox{if }w>2\\ 0&\mbox{otherwise}\end{array}\right.\\ N_{EVENTS}=1+(d-1)\times\frac{(w-1)\times w}{2}\end{array} \tag{2}\]
#### 3.1.3 HO models
High Output (HO) models (depicted in Figure 3) extend HI models by adding one output coupling for every atomic model. Also, coupled models have two input and two output ports instead of one.
For a given coupled model, there are two external input couplings connecting its two input ports with the matching input ports of its child coupled model. Additionally, there is one external input coupling between the second input port of the parent coupled model and each of its \(w-1\) child atomic models. There is one external output coupling that interconnects the first output of the child coupled model with the first output of its parent coupled model. On the other hand, the second output port of coupled models remains unconnected. This can help us to detect memory leakage issues of simulation engines that do not clean events of unconnected ports. The overall number of internal couplings, atomic models, and events remains the same as in HI models. Equation (3) shows the characteristics of the HO models.
Figure 2: Structure of the HI DEVStone model.
\[N_{ATOMIC} =(w-1)\times(d-1)+1 \tag{3}\] \[N_{EIC} =(w+1)\times(d-1)+1\] \[N_{EOC} =w\times(d-1)+1\] \[N_{IC} =\left\{\begin{array}{ll}(w-2)\times(d-1),&\text{if }w>2\\ 0&,&\text{otherwise}\end{array}\right.\] \[N_{EVENTS} =1+(d-1)\times\frac{(w-1)\times w}{2}\]
#### 3.1.4 HOmod models
The structure of modified HO models (HOmod) is shown in Figure 4.
Figure 4: Structure of the HOmod DEVStone model.
Figure 3: Structure of the HO DEVStone model.
In HOmod models, coupled components have two input ports and one output port. As depicted in Figure 4(a), the innermost coupled model only contains one atomic model. However, the remaining \(d-1\) coupled components arrange their atomic subcomponents in \(w\) rows and \(w-1\) columns, as shown in Figure 4(b). The first row has \(w-1\) atomic models (i.e., atomic models \((1,1)\) to \((1,w-1)\) in Figure 4(b)). On the other hand, the \(w-1\) remaining rows resemble an upper triangular matrix, in which all the elements below the main diagonal are empty and the other elements contain an atomic model. For instance, the second row contains \(w-1\) elements (atomic models \((2,1)\) to \((2,w-1)\)), while the third row contains \(w-2\) elements (atomic models \((3,2)\) to \((3,w-1)\)). The number of atomic models per row decreases one by one until row \(w\), which has only one atomic model (in Figure 4(b), the atomic model \((w,w-1)\)).
The first input port of coupled models is only connected to the first input port of their child coupled model. On the other hand, the second input port is connected to the \(w-1\) atomic models of the first row and to the atomic models placed at the main diagonal of the upper triangular matrix comprised by the remaining \(w-1\) rows (e.g., atomic models \((2,1)\), \((3,2)\), or \((w,w-1)\)).
The output of all the atomic models in the first row is connected to the second input port of the child coupled component. Additionally, the output of every atomic model in the second row is connected to the input of all atomic models in the first row. The output of each atomic model in the remaining \(w-2\) rows is only coupled to the input of the atomic model placed in the row above and the same column. For instance, the output of atomic model \((4,w-1)\) is coupled to the input of atomic model \((3,w-1)\), whose output is coupled to atomic model \((2,w-1)\). However, as the atomic model \((2,w-1)\) belongs to the second row, its output is coupled not only to the input of the atomic model \((1,w-1)\), but to all the atomic models in the first row.
The output port of all the coupled models is connected to the output port of their parent coupled model. This configuration results in the following number of atomic models, events and couplings:
\[\begin{split} N_{ATOMIC}&=\left[w-1+\frac{(w-1) \times w}{2}\right]\times(d-1)+1\\ N_{EIC}&=\left[2\times(w-1)+1\right]\times(d-1)+1\\ N_{EOC}&=d\\ N_{IC}&=\left[(w-1)^{2}+\frac{(w-1)\times w}{2} \right]\times(d-1)\\ N_{EVENTS}&=1+\sum_{i=1}^{d-1}\left[(1+(i-1)\times(w- 1))\times\frac{(w-1)\times w}{2}+(w-1)\times\left(w+(i-1)\times(w-1)\right) \right]\end{split} \tag{4}\]
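The closed-form expressions in Equations (1)-(4) are easy to tabulate programmatically, which is convenient when sizing a benchmark run. The following is a possible sketch; the function name and the returned dictionary keys are ours.

```python
def devstone_counts(model_type: str, d: int, w: int):
    """Numbers of atomic models, couplings, and events from Eqs. (1)-(4)."""
    t = model_type.upper()
    if t in ("LI", "HI", "HO"):
        atomic = (w - 1) * (d - 1) + 1
        ic = (w - 2) * (d - 1) if (t != "LI" and w > 2) else 0
        events = atomic if t == "LI" else 1 + (d - 1) * (w - 1) * w // 2
        eic = (w + 1) * (d - 1) + 1 if t == "HO" else w * (d - 1) + 1
        eoc = w * (d - 1) + 1 if t == "HO" else d
    elif t == "HOMOD":
        atomic = (w - 1 + (w - 1) * w // 2) * (d - 1) + 1
        eic = (2 * (w - 1) + 1) * (d - 1) + 1
        eoc = d
        ic = ((w - 1) ** 2 + (w - 1) * w // 2) * (d - 1)
        events = 1 + sum(
            (1 + (i - 1) * (w - 1)) * (w - 1) * w // 2
            + (w - 1) * (w + (i - 1) * (w - 1))
            for i in range(1, d)
        )
    else:
        raise ValueError(f"unknown DEVStone type: {model_type}")
    return {"atomic": atomic, "eic": eic, "eoc": eoc, "ic": ic, "events": events}

# Example: the 'wide' HI model of the benchmark set (depth 40, width 200)
print(devstone_counts("HI", 40, 200))
```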
### The DEVStone Benchmarking Metric
The _DEVStone benchmarking metric_, \(D_{i,j}\), is defined as the average number of _DEVStone units_ that a machine \(i\) can execute in one minute using the DEVS simulator \(j\). A DEVStone unit is defined as the 12 DEVStone models that comprise the _DEVStone benchmark set_, \(\mathcal{B}\).
The benchmark set \(\mathcal{B}\) intends to stress-test the simulation engines under study using DEVStone models with different characteristics (e.g., couplings, subcomponents, or events). On the other hand, we also wanted the DEVStone metric to be in the order of minutes in most simulators, so running the benchmark would not take too long. Thus, we ran multiple DEVStone models in different simulators before deciding which models would be part of the benchmark set. The selected models are presented in Table 1.
We use three models per DEVStone model type. For the LI, HI, and HO models we use one "_balanced_" model configured to have depth of 200 levels and a width of 200 components per level. Then, we define two additional models: (i) a "_deep_" model (i.e., a model with a depth of 200 levels but a width of only 40 components per level), and (ii) a "_wide_" model whose depth is reduced to 40 levels, but the width remains in 200 components per level. Using models with different shapes allows a detailed study of how the structure of a model may affect the performance of the simulation engine. The three HOmod models follow a similar pattern (i.e., they represent a balanced, a deep, and a wide version of this DEVStone type). However, due to the complexity of these models, using the same configuration for the width and the depth would imply unbearable simulation times even for the fastest simulators. Thus, the depth and width are set to 20 and 20 for the balanced model, 20 and 4 for the deep model, and 4 and 20 for the wide model.
The execution of each one of the models is triggered by inserting a single integer value in all the input ports of their top-most coupled model. It is worth noting that only the simulation time is considered, ignoring the time required for creating the DEVS model and building its corresponding simulators/coordinators hierarchy tree.
In this benchmark, the internal and external transition delays are set to 0. These delays emulate the complexity of computing the next state of the atomic models, which depends strongly on particular use cases, and is not related to the performance of the underlying simulation engine. The proposed DEVStone metric captures the maximal performance differences between the simulators under study. With zero delays, we only measure simulator-related execution times. Otherwise, non-zero transition delays would disguise performance differences between the simulators under study. Non-zero delays might be of interest when working with parallel simulators. These use thread-safe operations, leading to higher execution overheads than sequential simulators. However, parallel simulators can compute state transitions in parallel, outweighing this difference and outperforming sequential simulators. As future work, we will develop an alternative benchmark with non-zero transition delays to compare parallel simulation engines.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Model Type** & **Depth** & **Width** \\ \hline \multirow{3}{*}{LI} & 200 & 200 \\ & 200 & 40 \\ & 40 & 200 \\ \hline \multirow{3}{*}{HI} & 200 & 200 \\ & 200 & 40 \\ & 40 & 200 \\ \hline \multirow{3}{*}{HO} & 200 & 200 \\ & 200 & 40 \\ & 40 & 200 \\ \hline \multirow{3}{*}{HOmod} & 20 & 20 \\ & 20 & 4 \\ \cline{1-1} & 4 & 20 \\ \hline \hline \end{tabular}
\end{table}
Table 1: DEVStone metric model set. All the models are configured to have 0 internal and external transition delay.
The execution time for each model \(m\in\mathcal{B}\) using the machine \(i\) and the DEVS simulator \(j\), \(T_{i,j}^{m}\), is computed as an average over N simulation replications:
\[T_{i,j}^{m}=\frac{1}{N}\sum_{n=1}^{N}T_{i,j}^{m,n}, \tag{5}\]
where \(T_{i,j}^{m,n}\) is the simulation time (in seconds) for the model \(m\in\mathcal{B}\) during the n\({}^{\text{th}}\) replication using the machine \(i\) and the DEVS simulator \(j\). Every model must be executed enough times to provide acceptable confidence bounds. Thus, we propose that N must be greater than or equal to 30 [29]. The _DEVStone metric_ is then defined as the number of DEVStone units that a given computer \(i\) can execute per minute using the DEVS simulator \(j\):
\[D_{i,j}=\frac{60}{\sum_{m\in\mathcal{B}}T_{i,j}^{m}}\ \ [\text{DEVStones / minute}]. \tag{6}\]
The resulting DEVStone execution times depend on both the implementation details of the DEVS simulation engine and the architecture of the workstation that executes the simulations. Thus, to compare the performance of different DEVS simulators, this metric must be measured in the same machine. Note that \(D_{i,j}\) only considers the execution time, ignoring load times.
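In practice, computing the metric from measured wall-clock times is straightforward. The sketch below assumes that `times[m]` holds the \(N\geq 30\) per-replication simulation times (in seconds) of model \(m\); the function and variable names are illustrative.

```python
import statistics

def devstone_metric(times: dict):
    """DEVStone metric, Eqs. (5)-(6).

    times maps each of the 12 benchmark models to a list of per-replication
    simulation times in seconds (at least 30 replications each).
    """
    per_model = {m: statistics.fmean(ts) for m, ts in times.items()}   # Eq. (5)
    seconds_per_devstone = sum(per_model.values())
    devstones_per_minute = 60.0 / seconds_per_devstone                 # Eq. (6)
    return devstones_per_minute, per_model
```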
## 4 Comparison of DEVS Simulators
In this section, we use the benchmarking definition to evaluate the performance of some popular DEVS simulators. For each of them, a DEVStone implementation has been developed (available in a public repository [30]). Specifically, the following simulators are considered for this comparison: adevs [15], Cadmium [16], PyPDEVS [17] (for both the main and minimal versions using the PyPy Python implementation, as suggested by the authors), and the C\({}^{++}\), Java, and Python implementations of xDEVS [18]. For the Python implementation, we show the simulation times for the basic simulation mode and with a recent feature for applying shared-memory techniques in the model ports enabled (the so-called chained simulation algorithm [24]).
All these simulators present a port-based implementation. Therefore, the models include message entry/exit points (ports) that are linked by specifying source-destination links (couplings). Some additional details about the simulators and interpreters/compilers used for executing the simulations are shown in Table 2. The _Events container_ column refers to the data structure used by the different simulators to store the set of new messages in the ports (message bag). The _Components container_ column refers to the data structure used to store the different child components in the coupled models.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Engine** & **Version** & **Language** & \begin{tabular}{c} **Interpreter /** \\ **Compiler** \\ \end{tabular} & \begin{tabular}{c} **Events** \\ **container** \\ \end{tabular} &
\begin{tabular}{c} **Components** \\ **container** \\ \end{tabular} \\ \hline adevs & 3.3 & C\({}^{++}\) 17 & g\({}^{++}\) 7.5.0 -O3 & array & std::set \\ Cadmium & 0.2.5 & C\({}^{++}\) 17 & g\({}^{++}\) 7.5.0 -O3 & std::vector & std::vector \\ PyPDEVS & 2.4.1 & Python 3 & Pypy 7.3.1 & dict & list \\ xDEVS (1) & 1.0.0 & C\({}^{++}\) 11 & g\({}^{++}\) 7.5.0 -O3 & std::list & std::list \\ xDEVS (2) & 1.0.0 & Java 11 & OpenJDK 11.0.7 & LinkedList & LinkedList \\ xDEVS (3) & 1.1 & Python 3 & CPython 3.6.9 & deque & list \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of simulator versions and main data types and used interpreters / compilers.
The DEVStone implementations for all these simulators were run on an Ubuntu 18.04 workstation with an Intel Core i7-9700 processor and 64 GB of RAM. The benchmark was executed sequentially (i.e., using a single processor). The results are shown in Table 3. The second column shows the accumulated simulation time for all the models in the model set. The third column shows the corresponding DEVStones per minute considering the previous average times. Results are shown with a confidence interval of 95% considering that samples follow a T-distribution for 30 samples [29].
The adevs simulator obtained the best results, being able to perform more than 21 DEVStones per minute. The Java and C++ versions of xDEVS were the next fastest simulators, with 9.879 and 7.729 DEVStones per minute, respectively. On the other hand, Cadmium and all the Python simulators with no optimizations ran less than 1 DEVStone per minute. While the performance of the Python simulators seems reasonable (Python is an interpreted language with higher execution overheads compared to C++ or Java), the results for Cadmium may appear surprising. Cadmium was conceived as an educational tool for learning the DEVS formalism. It performs multiple checks throughout the execution of the simulation to ensure that the model strictly follows the DEVS specification (e.g., events are only generated when the \(\lambda\) function is triggered). All these checks imply execution overheads that are higher as the complexity of the model increases. The DEVStone times allow us to measure the effect of any optimization in the simulation engine. For example, the minimal version of PyPDEVS was able to run 1.978 DEVStones per minute (i.e., 4.172 times faster than its standard version).
The DEVStone benchmark presented in this research can be used to compare how different model complexity aspects affect the performance of the simulator under study. Figure 5 shows the percentage of execution time spent on each DEVStone model type for all the selected simulators.
The time required for executing the LI models is negligible compared to the other three types for all the simulators except PyPDEVS and the Java implementation of xDEVS. This implies that the simulation algorithms of these simulators potentially present a higher overhead compared to others, reducing the impact of the model complexity on the overall simulation time. In contrast, Cadmium spends 68.16% of the total execution time running the HO and HOmod models (more than any of the other simulators). This indicates that Cadmium's performance is more sensitive to the complexity of the model under study than the rest.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Engine** & **Seconds / DEVStone** & **DEVStones / minute** \\ \hline adevs & \(2.831\pm 0.002\) & \(21.194\pm 0.014\) \\ Cadmium & \(76.359\pm 0.081\) & \(0.786\pm 0.001\) \\ PyPDEVS & \(126.534\pm 0.147\) & \(0.474\pm 0.001\) \\ PyPDEVS (min) & \(30.330\pm 0.031\) & \(1.978\pm 0.002\) \\ xDEVS (C++) & \(7.763\pm 0.002\) & \(7.729\pm 0.002\) \\ xDEVS (Java) & \(6.074\pm 0.020\) & \(9.879\pm 0.032\) \\ xDEVS (Python) & \(72.720\pm 0.127\) & \(0.825\pm 0.001\) \\ xDEVS (Python, chained) & \(46.838\pm 0.106\) & \(1.281\pm 0.003\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: DEVStone times for several popular DEVS M&S simulators.
It is possible to expand even more the results obtained running the DEVStone benchmark to see how the width and the depth of the DEVS model impacts on the performance of the simulator. Table 4 displays the execution time for each LI and HI model. There, we can see how the simulators perform for the different model configurations of these DEVStone model types.
The adevs simulator showed the best performance in all the included model configurations, followed by the C++ implementation of xDEVS in most of the DEVStone configurations. On the other hand, the base PyPDEVS simulator was the simulation engine that required more time for
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Engine** & \begin{tabular}{c} **LI 200-** \\ **200** \\ \end{tabular} & **LI 200-40** & **LI 40-200** &
\begin{tabular}{c} **HI 200-** \\ **200** \\ \end{tabular} & **HI 200-40** & **HI 40-200** \\ \hline adevs & \(0.010\pm\) & \(0.001\pm\) & \(0.001\pm\) & \(1.133\pm\) & \(0.019\pm\) & \(0.099\pm\) \\ & \(0.000\) & \(0.000\) & \(0.000\) & \(0.001\) & \(0.000\) & \(0.000\) \\ Cadmium & \(0.149\pm\) & \(0.029\pm\) & \(0.029\pm\) & \(19.843\pm\) & \(0.709\pm\) & \(3.552\pm\) \\ & \(0.001\) & \(0.000\) & \(0.000\) & \(0.034\) & \(0.001\) & \(0.019\) \\ PyPDEVS & \(17.859\pm\) & \(0.749\pm\) & \(0.730\pm\) & \(45.809\pm\) & \(1.499\pm\) & \(4.263\pm\) \\ & \(0.017\) & \(0.002\) & \(0.001\) & \(0.100\) & \(0.006\) & \(0.008\) \\ PyPDEVS & \(0.148\pm\) & \(0.020\pm\) & \(0.020\pm\) & \(11.958\pm\) & \(0.330\pm\) & \(1.595\pm\) \\ (min) & \(0.000\) & \(0.000\) & \(0.000\) & \(0.015\) & \(0.001\) & \(0.005\) \\ xDEVS & \(0.020\pm\) & \(0.003\pm\) & \(0.003\pm\) & \(2.804\pm\) & \(0.055\pm\) & \(0.297\pm\) \\ (C++) & \(0.000\) & \(0.000\) & \(0.000\) & \(0.004\) & \(0.000\) & \(0.001\) \\ xDEVS & \(0.064\pm\) & \(0.023\pm\) & \(0.022\pm\) & \(2.308\pm\) & \(0.098\pm\) & \(0.260\pm\) \\ (Java) & \(0.002\) & \(0.000\) & \(0.000\) & \(0.015\) & \(0.002\) & \(0.002\) \\ xDEVS & \(0.197\pm\) & \(0.038\pm\) & \(0.036\pm\) & \(25.428\pm\) & \(0.954\pm\) & \(4.561\pm\) \\ (Python) & \(0.001\) & \(0.000\) & \(0.000\) & \(0.113\) & \(0.002\) & \(0.013\) \\ xDEVS & \(0.159\pm\) & \(0.029\pm\) & \(0.029\pm\) & \(17.877\pm\) & \(0.644\pm\) & \(3.184\pm\) \\ (Python, & \(0.001\) & \(0.000\) & \(0.000\) & \(0.067\) & \(0.003\) & \(0.013\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Simulation times (in seconds) the LI and HI combinations in the DEVStone benchmark model set. Shapes are provided in the format depth-width.
Figure 5: Percentage of the execution time spent on each DEVStone model type.
simulating the LI models, being up to 90.655 times slower than the Python implementation of xDEVS (second slowest simulator for LI models) and up to 120.669 times slower than the minimal implementation of PyPDEVS. Also, due to the simple structure of LI models, all the simulators listed here obtained similar times for the deep and wide models.
In contrast, the increased complexity of the HI DEVStone significantly impacts the simulation time. In HI models, wide models require higher simulation times than deep models due to the additional internal couplings (and, therefore, event propagations). In the balanced configuration of HI models, the Java implementation of xDEVS can outperform its C\(++\) counterpart. At the other end, the simulators with the worst performance are again the base Python implementations (PyPDEVS and Python xDEVS), with a reduced time difference for this model. Cadmium is the simulator with the most significant simulation time increase when comparing LI and HI models. This confirms that Cadmium is the most sensitive to the complexity of the running model.
Table 5 shows the execution time for the models with HO and HOmod structures. There, we can see how the simulators perform for the most complex model configurations that conform the DEVStone benchmark.
Again, the adevs simulator outperformed the other simulators in all the included model configurations. However, the Java version of xDEVS obtained better results than its C\(++\) counterpart for all the configurations except the deep configurations of the HO and HOmod model types (i.e., the models with fewer internal couplings). In HO and HOmod models, consistently with the specification, wide models also require higher simulation times compared to their deep counterparts.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Engine** & \begin{tabular}{c} **HO** \\ **200-200** \\ \end{tabular} & \begin{tabular}{c} **HO** \\ **200-40** \\ \end{tabular} & \begin{tabular}{c} **HO** \\ **40-200** \\ \end{tabular} & \begin{tabular}{c} **HOmod** \\ **20-20** \\ \end{tabular} & \begin{tabular}{c} **HOmod** \\ **20-4** \\ \end{tabular} &
\begin{tabular}{c} **HOmod** \\ **4-20** \\ \end{tabular} \\ \hline \multirow{3}{*}{adevs} & 1.288 \(\pm\) & 0.022 \(\pm\) & 0.116 \(\pm\) & 0.138 \(\pm\) & 0.000 \(\pm\) & 0.002 \(\pm\) \\ & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 \\ \cline{1-1} & 38.250 \(\pm\) & 1.023 \(\pm\) & 7.077 \(\pm\) & 5.548 \(\pm\) & 0.034 \(\pm\) & 0.114 \(\pm\) \\ & 0.055 & 0.003 & 0.014 & 0.008 & 0.000 & 0.000 \\ \cline{1-1} & 45.679 \(\pm\) & 1.500 \(\pm\) & 4.276 \(\pm\) & 4.063 \(\pm\) & 0.027 \(\pm\) & 0.079 \(\pm\) \\ \cline{1-1} & 0.107 & 0.003 & 0.013 & 0.011 & 0.000 & 0.000 \\ \cline{1-1} & 12.115 \(\pm\) & 0.330 \(\pm\) & 1.598 \(\pm\) & 2.164 \(\pm\) & 0.012 \(\pm\) & 0.040 \(\pm\) \\ \cline{1-1} & 0.025 & 0.001 & 0.005 & 0.009 & 0.000 & 0.000 \\ \cline{1-1} & xDEVS & 3.547 \(\pm\) & 0.066 \(\pm\) & 0.370 \(\pm\) & 0.584 \(\pm\) & 0.003 \(\pm\) & 0.011 \(\pm\) \\ \cline{1-1} & 0.005 & 0.000 & 0.001 & 0.001 & 0.000 & 0.000 \\ \cline{1-1} & xDEVS & 2.532 \(\pm\) & 0.104 \(\pm\) & 0.298 \(\pm\) & 0.313 \(\pm\) & 0.021 \(\pm\) & 0.031 \(\pm\) \\ \cline{1-1} & 0.015 & 0.001 & 0.002 & 0.002 & 0.000 & 0.000 \\ \cline{1-1} & xDEVS & 29.609 \(\pm\) & 1.129 \(\pm\) & 5.417 \(\pm\) & 5.184 \(\pm\) & 0.045 \(\pm\) & 0.121 \(\pm\) \\ \cline{1-1} & 0.066 & 0.003 & 0.015 & 0.021 & 0.000 & 0.000 \\ \cline{1-1} & xDEVS & 17.906 \(\pm\) & 0.642 \(\pm\) & 3.207 \(\pm\) & 3.063 \(\pm\) & 0.025 \(\pm\) & 0.071 \(\pm\) \\ \cline{1-1} & 0.067 & 0.003 & 0.020 & 0.012 & 0.000 & 0.000 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Simulation times (in seconds) for the HO and HOmod combinations in the DEVStone benchmark model set. Shapes are provided in the format depth-width.
The increase in complexity of these models also leads to poorer performance results for Cadmium, getting times comparable with the ones obtained by PyPDEVS and the Python versions of xDEVS.
Figure 6 compares the execution time of the selected DEVS-compliant simulation engines for the balanced models that comprise the DEVStone benchmark. For the sake of simplicity, we only show the results of the optimized versions of the PyPDEVS and the Python xDEVS simulators. The adevs simulator obtained the best results for all these configurations.
Figure 6(a) depicts the execution time for the LI balanced model. Cadmium and the minimal version of PyPDEVS obtained similar results. However, as the complexity of the simulated model increases, the performance of Cadmium decreases. For example, for simulating the balanced HI model (see Figure 6(b)), Cadmium required 19.843 seconds, whereas the minimal version of PyPDEVS finished in 11.958 seconds. In fact, Cadmium ends up being the slowest simulator for the HO and HOmod balanced models (Figure 6(c) and Figure 6(d), respectively). Note that this performance downgrade is mainly due to the multiple runtime checks performed by Cadmium to ensure that the model strictly follows the DEVS formalism. The C\(++\) version of xDEVS obtained the second-best result for the LI balanced model. In contrast, for the rest of the balanced configurations, its Java counterpart outperforms it, showing a better resilience to model complexity.
Figure 7 depicts the mean wall-clock time required by the simulators under study to execute the deep models comprising the DEVStone model set.
Figure 6: Simulation time of DEVS simulators for the balanced models that comprise the DEVStone benchmark.
Once more, adevs outperforms the rest of the simulators in all the deep models. As a rule, the C\(++\) and Java implementations of xDEVS show the second and third best results, respectively. The minimal implementation of PyPDEVS, Python xDEVS, and Cadmium obtained the worst results. However, it is worth mentioning that, for the LI and HOmod models (see Figure 7(a) and Figure 7(d), respectively), the minimal implementation of PyPDEVS achieves better results than Java xDEVS. Note that these are the simplest models in the DEVStone benchmark model set, which indicates a slightly higher computation overhead in the Java version of xDEVS. This overhead becomes negligible as the model complexity increases, requiring more event propagation and model synchronization actions.
Figure 8 displays the execution time of the wide models in the DEVStone model set. If we compare the results of executing the deep and wide LI models (i.e., Figure 7(a) and Figure 8(a)), the results are practically identical. In contrast, for the HI, HO, and HOmod types, wide models take longer to execute than the deep ones, regardless of the simulator under study.
As deep models are more complex than wide models, besides Cadmium, the minimal implementation of PyPDEVS manages to outperform only the Java version of xDEVS (see Figure 8(a)). Furthermore, xDEVS Java shows better results than its C\(++\) analogous for the HI and HOmod deep models (see Figures 8(b) and 8(c), respectively). Again, this proves that the Java implementation shows greater resilience than xDEVS C\(++\).
Figure 7: Simulation time of DEVS simulators for the deep models that comprise the DEVStone benchmark.
It is important to remark that a simulation performance comparison is not enough to decide which DEVS simulation tool suits better for a new project. We also must consider additional features provided by each simulator, as these can be very convenient depending on the use case. For example, adevs supports using QEMU [31], Cadmium integrates a DEVS-based Real-Time (RT) kernel for embedded systems [32], and xDEVS provides tools for model unit testing [33].
## 5 Conclusion
The DEVStone benchmark facilitates the process of evaluating the performance of DEVS-based simulators. It allows defining four customizable models, each of them with valuable particularities that allow us to measure specific issues of the simulation. However, this non-homogeneous selection made it difficult to compare the results among studies and conditioned the results according to the chosen models.
The DEVStone metric allows us to generate individual and objective ratings for specific pairs of DEVS-based simulators and workstations. The DEVStone metric adds up the simulation times of a fixed mix of synthetic models, which includes a variety of DEVStone models composed of a selection of balanced, wide, and deep models. This metric was used to compare some popular DEVS-based simulators. Both the global DEVStones/minute and the specific simulation times for each model have been shown, highlighting the strengths and weaknesses of the different simulators. These results can be used by the authors to enhance the performance of their implementations.
Figure 8: Simulation time of DEVS simulators for the wide models that comprise the DEVStone benchmark.
All the DEVStone implementations, for each one of the simulators considered in this article, are available in a public repository [30]. This repository is prepared so that all the simulations can be executed easily from a main script. As future work, we will extend the set of supported simulators present in this repository to cover more popular DEVS-based simulators. Additionally, we will define an alternative DEVStone benchmark to compare parallel and distributed DEVS simulation engines considering the internal and external transition delays of the DEVStone models that comprise this benchmark.
## Acknowledgments
This project has been partially supported by the Spanish Ministry of Science and Innovation under research grant PID2019-110866RB-I00.
|
2309.06757 | Novel relations for twist-3 tensor-polarized fragmentation functions in
spin-1 hadrons | There are three types of fragmentation functions (FFs) which are used to
describe the twist-3 cross sections of the hard semi-inclusive processes under
QCD collinear factorization, and they are called intrinsic, kinematical, and
dynamical FFs. In this work, we investigate the theoretical relations among
these FFs for a tensor-polarized spin-1 hadron. Three Lorentz-invariance
relations are derived by using the identities between the nonlocal quark-quark
and quark-gluon-quark operators, which guarantee the frame independence of the
twist-3 spin observables. The QCD equation of motion relations are also
presented for the tensor-polarized FFs. In addition, we show that the intrinsic
and kinematical twist-3 FFs can be decomposed into the contributions of twist-2
FFs and twist-3 three-parton FFs, and the latter are also called dynamical FFs.
If one neglects the dynamical FFs, we can obtain relations which are analogous
to the Wandzura-Wilczek relation. Then, the intrinsic and kinematical twist-3
FFs are expressed in terms of the leading-twist ones. Since the FFs of a spin-1
hadron can be measured at various experimental facilities in the near future,
these theoretical relations will play an important role in the analysis of the
collinear tensor-polarized FFs. | Qin-Tao Song | 2023-09-13T07:07:09Z | http://arxiv.org/abs/2309.06757v2 | # Novel relations for twist-3 tensor-polarized fragmentation functions in spin-1 hadrons
###### Abstract
There are three types of fragmentation functions (FFs) which are used to describe the twist-3 cross sections of the hard semi-inclusive processes under QCD collinear factorization, and they are called intrinsic, kinematical, and dynamical FFs. In this work, we investigate the theoretical relations among these FFs for a tensor-polarized spin-1 hadron. Three Lorentz-invariance relations are derived by using the identities between the nonlocal quark-quark and quark-gluon-quark operators, which guarantee the frame independence of the twist-3 spin observables. The QCD equation of motion relations are also presented for the tensor-polarized FFs. In addition, we show that the intrinsic and kinematical twist-3 FFs can be decomposed into the contributions of twist-2 FFs and twist-3 three-parton FFs, and the latter are also called dynamical FFs. If one neglects the dynamical FFs, we can obtain relations which are analogous to the Wandzura-Wilczek relation. Then, the intrinsic and kinematical twist-3 FFs are expressed in terms of the leading-twist ones. Since the FFs of a spin-1 hadron can be measured at various experimental facilities in the near future, these theoretical relations will play an important role in the analysis of the collinear tensor-polarized FFs.
## I Introduction
Parton distribution functions (PDFs) are key physical quantities in hadron spin physics, since they are used to solve the proton spin puzzle and to understand the inner structure of hadrons. For a spin-1/2 hadron, the theoretical relations of PDFs and fragmentation functions (FFs) have been well studied. Starting with the Wandzura-Wilczek (WW) relation, it is known that if one neglects the three-parton PDFs, the twist-3 PDF \(g_{2}\) can be expressed in terms of the leading-twist one \(g_{1}\), which has been well measured [1]. The violation of the WW relation comes from the three-parton PDFs, and it was shown that this violation can be as large as 15%-40% of the size of \(g_{2}\) [2]. There also exist the so-called Lorentz-invariance relations (LIRs) for the PDFs in a spin-1/2 hadron, which were investigated in Refs. [2; 3; 4; 5; 6; 7; 8; 9]. In addition to PDFs, LIRs were also derived for the quark FFs [9]. Recently, the authors of Ref. [10] performed a systematic study on the gluon PDFs and FFs, where the intrinsic and kinematical twist-3 gluon distributions are written in terms of the twist-2 distributions and the twist-3 dynamical distributions, and the latter are actually three-parton distributions; moreover, the complete LIRs are also listed for the gluon part. On the one hand, these interesting relations can be used as constraints for the analysis of twist-3 distributions. On the other hand, they are also crucial to describe the spin observables; for example, the LIRs can be used to guarantee the frame independence of the twist-3 cross sections, such as the single-spin asymmetries (SSAs) in the hadron production of lepton-nucleon collisions and the hadron production of hadronic collisions (\(pp\to\Lambda^{\uparrow}X\)) [9; 11; 12].
For a spin-1 hadron, there are unpolarized, vector-polarized and tensor-polarized distributions. The former two also exist for a spin-1/2 hadron, while the tensor-polarized distributions are the new ones. Among the tensor-polarized PDFs, \(b_{1}(x)\) [or \(f_{1LL}(x)\)] [13; 14] and the gluon transversity \(\Delta_{T}g(x)\) [15; 16] are the most interesting ones. The sum rule \(\int dx\,b_{1}(x)=0\) was derived for an isoscalar object such as the deuteron, and the breaking of this sum rule is related to the contribution of a tensor-polarized component of the sea quarks and antiquarks [17]. In 2005, the HERMES collaboration performed the first measurement of \(b_{1}(x)\) for the deuteron with rather large uncertainties [18], and it indicates that \(b_{1}(x)\) is much larger than the theoretical prediction [19]. Since the theoretical estimate of \(b_{1}(x)\) was given by considering the deuteron as a weakly bound state of a proton and a neutron, the large \(b_{1}(x)\) could indicate exotic components of the deuteron such as a six-quark state and a hidden-color state [20]. As for the gluon transversity \(\Delta_{T}g(x)\), it is related to the helicity-flip amplitude, so it only exists in a hadron with spin greater than or equal to 1 due to angular momentum conservation. In this case, one can infer that there are nonnucleonic components in the deuteron from a nonzero \(\Delta_{T}g(x)\), which means that it is interesting to investigate the gluon transversity experimentally; for example, it can be extracted from the cross sections of deep-inelastic scattering (DIS) [21; 15] and the Drell-Yan process [22; 23] with a tensor-polarized deuteron target. In the near future, \(b_{1}(x)\) and \(\Delta_{T}g(x)\) will be measured at the Thomas Jefferson National Accelerator Facility (JLab) [24; 25], Fermilab (Fermi National Accelerator Laboratory) [26; 27; 28], and the Nuclotron-based Ion Collider fAcility (NICA) [29]. There are also interesting theoretical relations for
the tensor-polarized PDFs; in Ref. [30] the twist-3 PDF \(f_{LT}(x)\) was decomposed into the contributions of a twist-2 PDF \(b_{1}(x)\) [\(f_{1LL}(x)\)] and the three-parton PDFs. Moreover, the WW-type relation was obtained by dropping the latter. The QCD equation of motion (e.o.m.) relations and LIR were derived in Ref. [31] for tensor-polarized PDFs. Recently, the gluon transversity generalized parton distribution was also investigated for a spin-1 hadron [32], which becomes the gluon transversity \(\Delta_{T}g(x)\) in the forward limit. In addition to the collinear PDFs, one can find the tensor-polarized transverse-momentum dependent (TMD) PDFs up to twist 4 for a spin-1 hadron in Refs. [33; 34; 35; 36].
Spin-1 hadrons, such as \(\rho\), \(\phi\), \(K^{*}\) and the deuteron, are produced in hard semi-inclusive processes. In order to describe those processes, the tensor-polarized FFs are needed. The quark collinear FFs are defined in Ref. [37] up to twist 4 for a spin-1 hadron, and the tensor-polarized TMD FFs can also be found in Refs. [38; 33]. In the future, the tensor-polarized FFs can be measured at BESIII and Belle II. In fact, such a measurement is already in progress, for example, for the FFs of \(\phi\) in the process \(e^{+}e^{-}\to\phi X\) by the BESIII Collaboration [39]. However, the theoretical relations of tensor-polarized FFs have not been completely investigated. In this work, we intend to derive the LIRs, QCD e.o.m., and WW-type relations for the tensor-polarized FFs of a spin-1 hadron, which can provide constraints for future experimental and theoretical studies of these FFs.
This paper is organized as follows. In Sec. II, we define the intrinsic, kinematical, and dynamical twist-3 FFs, and general properties of them are discussed. We derive the theoretical relations among tensor-polarized FFs using QCD e.o.m. for quarks in Sec. III. The operator identities are obtained for the nonlocal quark-quark and quark-gluon-quark operators, then LIRs and WW-type relations are also given based on the matrix elements of the operator identities in Sec. IV. A brief summary of this work is presented in Sec. V.
## II Tensor-polarized fragmentation functions
The tensor polarization is often indicated by the matrix \(T\) for a spin-1 hadron, and the covariant form of \(T^{\mu\nu}\) is expressed as [33; 34]
\[T^{\mu\nu}=\frac{1}{2}\left[\frac{4}{3}S_{LL}\frac{(P_{h}^{-})^{2}}{M^{2}}n^{ \mu}n^{\nu}-\frac{2}{3}S_{LL}(n^{\{\mu}\bar{n}^{\nu\}}-g_{T}^{\mu\nu})+\frac{ 1}{3}S_{LL}\frac{M^{2}}{(P_{h}^{-})^{2}}\bar{n}^{\mu}\bar{n}^{\nu}+\frac{P_{h }^{-}}{M}n^{\{\mu}S_{LT}^{\nu\}}-\frac{M}{2P_{h}^{-}}\bar{n}^{\{\mu}S_{LT}^{ \nu\}}+S_{TT}^{\mu\nu}\right], \tag{1}\]
where \(P_{h}\) and \(M\) are the momentum and mass of the produced hadron, respectively, and \(a^{\{\mu}b^{\nu\}}=a^{\mu}b^{\nu}+a^{\nu}b^{\mu}\) denotes symmetrization of the indices. The lightcone vectors \(n\) and \(\bar{n}\) are given by
\[n^{\mu}=\frac{1}{\sqrt{2}}(\,1,\,0,\,0,\,-1\,),\,\,\,\,\bar{n}^{\mu}=\frac{1} {\sqrt{2}}(\,1,\,0,\,0,\,1\,), \tag{2}\]
and \(P_{h}\) can be written as \(P_{h}=P_{h}^{-}n+\frac{M^{2}}{2P_{h}^{-}}\bar{n}\). For a Lorentz vector \(a^{\mu}\), the lightcone components \(a^{\pm}\) and transverse component \(a_{T}\) are defined by
\[a^{+}=a\cdot n,\;a^{-}=a\cdot\bar{n},\;a_{T}^{\mu}=g_{T}^{\mu\nu}a_{\nu} \tag{3}\]
with
\[g_{T}^{\mu\nu}=g^{\mu\nu}-n^{\mu}\bar{n}^{\nu}-n^{\nu}\bar{n}^{\mu}. \tag{4}\]
In Eq. (1), \(S_{LL}\), \(S_{LT}^{\mu}\) and \(S_{TT}^{\mu\nu}\) are the parameters which indicate different types of tensor polarization.
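As a quick numerical check of the lightcone conventions in Eqs. (2)-(4) (an illustrative aside added here, not part of the derivations; the function name is ours), one can verify that \(n^{2}=\bar{n}^{2}=0\), \(n\cdot\bar{n}=1\), and that \(g_{T}^{\mu\nu}\) keeps only the two transverse directions:

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -)
g = np.diag([1.0, -1.0, -1.0, -1.0])

# Lightcone vectors of Eq. (2)
n    = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2.0)
nbar = np.array([1.0, 0.0, 0.0,  1.0]) / np.sqrt(2.0)

def mdot(a, b):
    """Minkowski scalar product a.b = a^mu g_{mu nu} b^nu."""
    return a @ g @ b

print(mdot(n, n), mdot(nbar, nbar), mdot(n, nbar))   # 0, 0, 1

# Transverse projector of Eq. (4): g_T^{mu nu} = g^{mu nu} - n^mu nbar^nu - n^nu nbar^mu
gT = np.linalg.inv(g) - np.outer(n, nbar) - np.outer(nbar, n)
print(np.round(gT, 12))   # diag(0, -1, -1, 0): only the transverse components survive
```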
For a spin-1 hadron, the fragmentation correlator is defined as [33; 36; 37; 38]
\[\Delta_{ij}(z)= \frac{1}{N_{c}}\int\!\frac{d\xi^{+}}{2\pi}\,e^{i\frac{P_{h}^{-}\xi^{+}}{z}}\langle 0\left|\mathcal{W}\left[\infty^{+};\xi^{+}\right]q_{i}(\xi^{+})\right|P_{h},T;X\rangle\langle\,P_{h},T;X\left|\,\bar{q}_{j}(0)\,\mathcal{W}\left[0^{+};\infty^{+}\right]\right|0\rangle,\] \[= \frac{1}{z}\Big{\{}S_{LL}\not{n}F_{1LL}(z)+\frac{M}{P_{h}^{-}}\left[\not{S}_{LT}F_{LT}(z)+S_{LL}E_{LL}(z)\right]+\sigma^{i+}S_{LT,i}H_{1LT}(z)\] \[+\frac{M}{P_{h}^{-}}\left[S_{LL}\sigma^{-+}H_{LL}(z)+\gamma_{5}\gamma_{i}\epsilon_{T}^{ij}S_{LT,j}G_{LT}(z)\right]\Big{\}} \tag{5}\]
where \(z\) is the longitudinal momentum fraction carried by the produced hadron, \(N_{c}\) is the number of colors, and \(\mathcal{W}\) is a Wilson line which ensures color gauge invariance. The transverse tensor \(\epsilon_{T}^{\alpha\beta}\) is given by
\[\epsilon_{T}^{\alpha\beta}=\epsilon^{\alpha\beta\mu\nu}n_{\mu}\bar{n}_{\nu} \tag{6}\]
with the convention \(\epsilon^{0123}=1\). In Eq. (5), the correlator is expressed in terms of six tensor-polarized FFs up to twist 3, and the FFs are real functions with the support region of \(0<z<1\). \(F_{1LL}(z)\) and \(H_{1LT}(z)\) are leading-twist FFs, and the rest are also called intrinsic twist-3 FFs [9]. Since time-reversal invariance is not a necessary constraint for the fragmentation correlator, the last three FFs are actually time-reversal odd FFs. Note that there are also unpolarized and vector-polarized FFs in the correlator, which are neglected here since we are interested in the tensor-polarized ones.
The kinematical twist-3 FFs are related to the TMD FFs. In case of a tensor-polarized hadron, the TMD fragmentation correlator reads [40; 41; 42; 43; 44]
\[\Delta_{ij}(z,k_{T})= \frac{1}{N_{c}}\int\!\frac{d\xi^{+}d^{2}\xi_{T}}{(2\pi)^{3}}\,e^{ i(k^{-}\xi^{+}+k_{T}\cdot\xi_{T})}\langle 0\left|\mathcal{W}_{1}\left[\infty; \xi\right]q_{i}(\xi)\right|P_{h},T;X\rangle\langle\,P_{h},T;X\left|\,\bar{q}_ {j}(0)\,\mathcal{W}_{2}\left[0;\infty\right]\right|0\rangle_{\xi^{-}=0} \tag{7}\]
with
\[\mathcal{W}_{1}\left[\infty;\xi\right] =\mathcal{W}\left[\infty^{+},\infty_{T};\infty^{+},\xi_{T} \right]\mathcal{W}\left[\infty^{+},\xi_{T};\xi^{+},\xi_{T}\right],\] \[\mathcal{W}_{2}\left[0;\infty\right] =\mathcal{W}\left[0^{+},0_{T};\infty^{+},0_{T}\right]\mathcal{W} \left[\infty^{+},0_{T};\infty^{+},\infty_{T}\right], \tag{8}\]
and the correlator can be written in terms of TMD FFs [33; 38; 45]. The \(k_{T}\)-weighted FFs are defined with the help of the TMD fragmentation correlator,
\[\Delta^{\nu}_{0,ij}(z)=\int d^{2}k_{T}k_{T}^{\nu}\Delta_{ij}(z,k_{T}), \tag{9}\]
which is parametrized by four \(k_{T}\)-weighted FFs at twist 3 [33],
\[\Delta^{\nu}_{0}(z)=\frac{M}{z}\left[-S^{\nu}_{LT}\not{n}F^{(1)}_{1LT}(z)-\epsilon^{\nu\rho}_{T}S_{LT\rho}\gamma_{5}\not{n}G^{(1)}_{1LT}(z)+S_{LL}\sigma^{\nu\alpha}n_{\alpha}H^{(1)}_{1LL}(z)-S^{\nu\alpha}_{TT}\sigma_{\alpha\beta}n^{\beta}H^{(1)}_{1TT}(z)\right], \tag{10}\]
and these FFs are also called kinematical twist-3 FFs in Ref. [9]. Due to Eq. (9), the kinematical twist-3 FFs are given by TMD FFs,
\[F^{(1)}(z)=-z^{2}\int d^{2}k_{T}\frac{k_{T}^{2}}{2M^{2}}F(z,z^{2}k_{T}^{2}), \tag{11}\]
where \(F(z,z^{2}k_{T}^{2})\) is a TMD FF.
Similarly, we define the collinear three-parton fragmentation correlator [30],
\[\Delta^{\nu}_{F,ij}(z,z_{1})= \frac{1}{N_{c}}\int\!\frac{d\xi^{+}}{2\pi}\frac{d\xi^{+}_{1}}{2 \pi}\,e^{iP^{-}_{h}\xi^{+}\frac{1}{z_{1}}+iP^{-}_{h}\xi^{+}_{1}(\frac{1}{z}- \frac{1}{z_{1}})}\langle 0\left|\mathcal{W}\left[\infty^{+};\xi^{+}_{1}\right]igF^{- \nu}(\xi^{+}_{1})\mathcal{W}\left[\xi^{+}_{1};\xi^{+}\right]q_{i}(\xi^{+}) \right|P_{h},T;X\rangle\] \[\times\langle\,P_{h},T;X\left|\,\bar{q}_{j}(0)\mathcal{W}\left[0^ {+};\infty^{+}\right]\right|0\rangle. \tag{12}\]
By inserting a complete set of intermediate states, one can prove that
\[\Delta^{\nu}_{F}(z,z)=0,\quad\Delta^{\nu}_{F}(z,0)=0, \tag{13}\]
and this corresponds to the vanishing partonic pole matrix elements which are important to understand the SSAs in the hard semi-inclusive processes [46]. Then, the support region of \(\Delta^{\nu}_{F}(z,z_{1})\) is
\[0\leq z\leq 1,\quad 0<\frac{z}{z_{1}}<1. \tag{14}\]
Taking the derivative of this correlator with respect to \(1/z_{1}\) and then setting \(z_{1}=z\), one can also obtain [9]
\[\frac{\partial\Delta^{\nu}_{F}(z,z_{1})}{\partial(1/z_{1})}|_{z_{1}=z}=0. \tag{15}\]
The parametrization of \(\Delta^{\nu}_{F}(z,z_{1})\) is just a copy of the corresponding three-parton distribution correlator [30], and it can be expressed in terms of four dynamical FFs at twist 3,
\[\Delta^{\nu}_{F,ij}(z,z_{1})=\frac{M}{z}\left[-S^{\nu}_{LT}\not{n}\hat{F}_{LT}(z,z_{1})-i\epsilon^{\nu\rho}_{T}S_{LT\rho}\gamma_{5}\not{n}\hat{G}_{LT}(z,z_{1})-S_{LL}\gamma^{\nu}\not{n}\hat{H}^{\perp}_{LL}(z,z_{1})-S^{\nu\rho}_{TT}\gamma_{\rho}\not{n}\hat{H}_{TT}(z,z_{1})\right]. \tag{16}\]
Note that the dynamical FFs are complex functions which are different from the intrinsic and kinematical ones.
## III Equation of motion relations for FFs
The intrinsic, kinematical and dynamical FFs are not independent functions, since they can be related to each other by the e.o.m. relations. For a spin-1/2 hadron, the e.o.m. relations for FFs were derived in Refs. [9; 47] based on the QCD e.o.m. for quarks, namely, \((i\not{D}-m_{q})q(x)=0\). In the following, we will investigate the e.o.m. relations for tensor-polarized FFs. After some algebra, the QCD e.o.m. for quarks becomes
\[(iD^{\mu}+\sigma^{\mu\nu}D_{\nu}+m_{q}\gamma^{\mu})q(x)=0, \tag{17}\]
where \(m_{q}\) is the mass of the quark. If we set \(\mu=-\) and take the corresponding matrix element for Eq. (17), an e.o.m. relation can be obtained for the intrinsic, kinematical and dynamical FFs,
\[\frac{E_{LL}(z)}{z}+\frac{iH_{LL}(z)}{z}-\frac{m_{q}}{M}F_{1LL}(z)=2\left[-iH_ {1LL}^{(1)}(z)+\mathcal{P}\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{ \hat{H}_{LL}^{\perp}(z,z_{1})}{\frac{1}{z}-\frac{1}{z_{1}}}\right], \tag{18}\]
where \(\mathcal{P}\) stands for the principal-value integral; it can in fact be dropped owing to Eq. (13). All the FFs in Eq. (18) are related to the \(S_{LL}\)-type tensor polarization. Furthermore, this relation can be reexpressed in terms of the real and imaginary parts of the dynamical FF,
\[\frac{E_{LL}(z)}{z}= 2\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{\text{Re} \left[\hat{H}_{LL}^{\perp}(z,z_{1})\right]}{\frac{1}{z}-\frac{1}{z_{1}}}+ \frac{m_{q}}{M}F_{1LL}(z), \tag{19}\] \[\frac{H_{LL}(z)}{z}= 2\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{\text{Im} \left[\hat{H}_{LL}^{\perp}(z,z_{1})\right]}{\frac{1}{z}-\frac{1}{z_{1}}}-2H_{ 1LL}^{(1)}(z). \tag{20}\]
We can see that the time-reversal even and odd intrinsic FFs are related to the real and imaginary parts of the dynamical FFs, respectively. If we neglect the quark mass, the intrinsic twist-3 FFs \(E_{LL}(z)\) and \(H_{LL}(z)\) are given by the kinematical and dynamical twist-3 FFs.
Multiplying \(\gamma^{\nu}\) on the l.h.s. of Eq. (17), then antisymmetrizing \(\mu\) and \(\nu\), we can obtain the identity as
\[\left[i(\gamma^{\mu}D^{\nu}-\gamma^{\nu}D^{\mu})-\epsilon^{\mu\nu\rho\sigma} \gamma_{\sigma}\gamma_{5}D_{\rho}+im_{q}\sigma^{\mu\nu}\right]q(x)=0. \tag{21}\]
Analogously, we set \(\mu=-\) and consider \(\nu\) as a transverse component, then the matrix element of Eq. (21) leads to
\[\frac{F_{LT}(z)}{z}+\frac{iG_{LT}(z)}{z}+\frac{im_{q}}{M}H_{1LT}(z)\] \[= -iG_{1LT}^{(1)}(z)+\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}} \frac{\hat{G}_{LT}(z,z_{1})}{\frac{1}{z}-\frac{1}{z_{1}}}-\left[F_{1LT}^{(1)} (z)+\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{\hat{F}_{LT}(z,z_{1})}{ \frac{1}{z}-\frac{1}{z_{1}}}\right], \tag{22}\]
and it indicates the relation among the intrinsic, kinematical and dynamical FFs for the \(S_{LT}\)-type tensor polarization. Furthermore, Eq. (22) can be divided into two identities,
\[\frac{F_{LT}(z)}{z}= -\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{\text{Re} \left[\hat{F}_{LT}(z,z_{1})-\hat{G}_{LT}(z,z_{1})\right]}{\frac{1}{z}-\frac{1 }{z_{1}}}-F_{1LT}^{(1)}(z), \tag{23}\] \[\frac{G_{LT}(z)}{z}= -\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{\text{Im} \left[\hat{F}_{LT}(z,z_{1})-\hat{G}_{LT}(z,z_{1})\right]}{\frac{1}{z}-\frac{1 }{z_{1}}}-G_{1LT}^{(1)}(z)-\frac{m_{q}}{M}H_{1LT}(z). \tag{24}\]
As indicated by Eq. (5), there are no intrinsic FFs for the \(S_{TT}\)-type tensor polarization. However, we can also derive the following identity using Eq. (21):
\[iH_{1TT}^{(1)}(z)+\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{\hat{H}_{ TT}(z,z_{1})}{\frac{1}{z}-\frac{1}{z_{1}}}=0, \tag{25}\]
and it implies
\[\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{\text{Re}\left[\hat{H}_{TT}(z, z_{1})\right]}{\frac{1}{z}-\frac{1}{z_{1}}}= 0, \tag{26}\]
\[H^{(1)}_{1TT}(z)+\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{ \mathrm{Im}\left[\hat{H}_{TT}(z,z_{1})\right]}{\frac{1}{z}-\frac{1}{z_{1}}}= 0, \tag{27}\]
which complete the derivation of the QCD e.o.m. relations for tensor-polarized FFs.
## IV Lorentz invariance and Wandzura-Wilczek-type relations
Taking the derivative of nonlocal quark-quark operators, one can obtain the identities where the quark-quark operators are expressed in terms of the quark-gluon-quark ones. The theoretical relations have been investigated for PDFs, FFs and distribution amplitudes by using these identities of nonlocal operators, and this method was well explained in Refs. [48; 49; 50; 51; 52; 53; 54; 55; 56]. In this section, we adopt the same method to derive the theoretical relations for twist-3 tensor-polarized FFs such as LIRs and WW-type relations. We first consider the derivative of the nonlocal quark-quark operator [9],
\[\frac{\partial}{\partial\xi_{\alpha}}\langle 0|\mathcal{W} \left[\infty\xi;-\xi\right]q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\, \overline{q}(\xi)\Gamma\mathcal{W}\left[\xi;\infty\xi\right]|0\rangle\] \[= -\langle 0|\mathcal{W}\left[\infty\xi;-\xi\right]\overset{\rightarrow }{D}^{\alpha}(-\xi)q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}( \xi)\Gamma\mathcal{W}\left[\xi;\infty\xi\right]|0\rangle\] \[+\langle 0\,|\mathcal{W}\left[\infty\xi;-\xi\right]q(-\xi)|\,P_{h},T ;X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\overset{\leftarrow}{D}^{ \alpha}(\xi)\Gamma\mathcal{W}\left[\xi;\infty\xi\right]|0\rangle\] \[+i\int_{-1}^{\infty}dt\langle 0\left|\mathcal{W}\left[\infty\xi; t\xi\right]gF^{\alpha\xi}(t\xi)\mathcal{W}\left[t\xi;-\xi\right]q(-\xi)\right|P_{h},T;X \rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\Gamma\mathcal{W}\left[\xi; \infty\xi\right]|0\rangle\] \[+i\int_{\infty}^{1}dtt\langle 0|\mathcal{W}\left[\infty\xi;-\xi \right]q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X\,\big{|}\,\overline{q}(\xi) \Gamma\mathcal{W}\left[\xi;t\xi\right]gF^{\alpha\xi}(t\xi)\mathcal{W}\left[t \xi;\infty\xi\right]\big{|}0\rangle, \tag{28}\]
where \(\xi\) is not necessarily a lightcone vector and \(\Gamma\) is a gamma matrix. In Eq. (28), the terms with the covariant derivative \(D^{\alpha}\) can be replaced by the total derivative of the nonlocal quark-quark operator, which is related to the translation of the operator, and its matrix element can be expressed as [9]
\[\bar{\partial}^{\rho}\langle 0\,|\mathcal{W}\left[\infty\xi;-\xi \right]q(-\xi)|\,P_{h},T;X\rangle\langle\,P_{h},T;X\,|\,\overline{q}(\xi) \Gamma_{1}\mathcal{W}\left[\xi;\infty\xi\right]|0\rangle\] \[= \lim_{x_{\rho}\to 0}\frac{d}{dx_{\rho}}\langle 0\,| \mathcal{W}\left[\infty\xi+x;-\xi+x\right]q(-\xi+x)|\,P_{h},T;X\rangle \langle\,P_{h},T;X\,|\,\overline{q}(\xi+x)\Gamma_{1}\mathcal{W}\left[\xi+x; \infty\xi+x\right]|\,0\rangle\] \[= \langle 0|\mathcal{W}\left[\infty\xi;-\xi\right]\overset{\rightarrow }{D}^{\rho}(-\xi)q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}( \xi)\Gamma_{1}\mathcal{W}\left[\xi;\infty\xi\right]|0\rangle\] \[+\langle 0\,|\mathcal{W}\left[\infty\xi;-\xi\right]q(-\xi)|\,P_{h},T; X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\overset{\leftarrow}{D}^{\rho}(\xi) \Gamma_{1}\mathcal{W}\left[\xi;\infty\xi\right]|0\rangle\] \[+\int_{-1}^{\infty}dt\langle 0\,|\mathcal{W}\left[\infty\xi;t\xi \right]igF^{\rho\xi}(t\xi)\mathcal{W}\left[t\xi;-\xi\right]q(-\xi)\big{|}\,P_{h },T;X\rangle\langle\,P_{h},T;X\,|\,\bar{q}(\xi)\Gamma_{1}\mathcal{W}\left[\xi; \infty\xi\right]|0\rangle\] \[+\int_{\infty}^{1}dt\langle 0\,|\mathcal{W}\left[\infty\xi;-\xi \right]q(-\xi)|\,P_{h},T;X\rangle\langle\,P_{h},T;X\,\big{|}\,\bar{q}(\xi) \Gamma_{1}\mathcal{W}\left[\xi;t\xi\right]igF^{\rho\xi}(t\xi)\mathcal{W}\left[ t\xi;\infty\xi\right]|0\rangle, \tag{29}\]
where \(\Gamma_{1}\) stands for a gamma matrix such as \(\gamma^{\mu}\) or \(\sigma^{\mu\nu}\). Due to translation invariance, the matrix element in Eq. (29) should vanish.
In the following, the Wilson lines are neglected in the operator identities, since this will not cause confusion. We derive a relation between quark-quark and quark-gluon-quark operators by choosing \(\Gamma=(g^{\rho\alpha}g^{\lambda}_{\ \sigma}-g^{\alpha}_{\ \sigma}g^{\rho\lambda})\gamma_{\lambda}\) in Eq. (28) and \(\Gamma_{1}=(\sigma^{\sigma\beta}\gamma^{\rho}+\gamma^{\rho}\sigma^{\sigma\beta})\) in Eq. (29),
\[\xi_{\alpha}\left[\frac{\partial}{\partial\xi_{\alpha}}\langle 0|q(-\xi)|P_{h},T;X \rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\gamma^{\sigma}|0\rangle-\frac{ \partial}{\partial\xi_{\sigma}}\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\, \overline{q}(\xi)\gamma^{\alpha}|0\rangle\right]\] \[= \left[\int_{-1}^{\infty}dt\langle 0\,|gF_{\rho\xi}(t\xi)q(-\xi)|\,P_{h},T;X \rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\gamma_{\tau}\gamma_{5}|0\rangle+ \int_{\infty}^{1}dt\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X\,|\,\overline{q}(\xi) \gamma_{\tau}\gamma_{5}gF_{\rho\xi}(t\xi)|0\rangle\right]\] \[\times\epsilon^{\sigma\xi\rho\tau}-i\int_{-1}^{\infty}dtt\langle 0 \,\big{|}gF^{\sigma\xi}(t\xi)q(-\xi)\big{|}\,P_{h},T;X\rangle\langle\,P_{h},T;X |\,\overline{q}(\xi)\xi|0\rangle-i\int_{\infty}^{1}dtt\langle 0|q(-\xi)|P_{h},T;X\rangle\] \[\times\langle\,P_{h},T;X|\,\overline{q}(\xi)\xi gF^{\sigma\xi}(t \xi)\big{|}\,0\rangle, \tag{30}\]
where the matrix element of the total derivative operator is neglected, and the quark mass terms vanish. The quark-quark operator appears in the l.h.s. of Eq. (30), which can be written in terms of the intrinsic tensor-polarized FFs as shown in Eq. (5). If the vector \(\xi\) is not necessarily on the lightcone, the matrix element of the nonlocal quark-quark operator can be expressed as
\[\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\gamma^{\sigma}|0\rangle\] \[= 8N_{c}M^{2}\int d\left(\frac{1}{z}\right)e^{\frac{2iP_{h}\cdot\xi}{z}}\left[\frac{3}{4}A^{\sigma}\frac{F_{1LL}(z)}{z}+B^{\sigma}\frac{F_{LT}(z)}{z}\right] \tag{31}\]
with
\[A^{\sigma}= \frac{\xi\cdot T\cdot\xi}{(P_{h}\cdot\xi)^{2}}P_{h}^{\sigma}-M^{ 2}\frac{\xi\cdot T\cdot\xi}{(P_{h}\cdot\xi)^{3}}\xi^{\sigma}, \tag{32}\] \[B^{\sigma}= \frac{T^{\sigma\mu}\xi_{\mu}}{P_{h}\cdot\xi}-\frac{\xi\cdot T \cdot\xi}{(P_{h}\cdot\xi)^{2}}P_{h}^{\sigma}+M^{2}\frac{\xi\cdot T\cdot\xi}{( P_{h}\cdot\xi)^{3}}\xi^{\sigma}. \tag{33}\]
Eq. (31) is exact at twist 3 since the twist-4 FFs are not included. We substitute Eq. (31) into Eq. (30) to estimate the derivative, and take the lightcone limit of \(\xi^{2}\to 0\), then, the l.h.s. of Eq. (30) is given by \(F_{1LL}(z)\) and \(F_{LT}(z)\). The r.h.s. of Eq. (30) can be directly calculated with the help of Eq. (16), and we obtain the following relation by combining the l.h.s. and r.h.s.,
\[\frac{3}{2}\tilde{F}_{1LL}(z)+\frac{1}{z}\frac{d\tilde{F}_{LT}(z) }{d(1/z)}\] \[= \int d\left(\frac{1}{z_{1}}\right)\mathcal{P}\left(\frac{1}{ \frac{1}{z}-\frac{1}{z_{1}}}\right)\left\{\Big{(}\frac{\partial}{\partial(1/ z)}+\frac{\partial}{\partial(1/z_{1})}\Big{)}\text{Re}\left[\tilde{G}_{LT}(z,z_{1}) \right]-\Big{(}\frac{\partial}{\partial(1/z)}-\frac{\partial}{\partial(1/z_{1 })}\Big{)}\text{Re}\left[\tilde{F}_{LT}(z,z_{1})\right]\right\}, \tag{34}\]
where the convention \(\tilde{F}(z)=F(z)/z\) is used for an intrinsic or kinematical FF \(F(z)\), and \(\tilde{F}(z,z_{1})=\hat{F}(z,z_{1})/z\) for a dynamical one. Combining Eq. (34) with the e.o.m. relation of Eq. (23), one can obtain
\[\frac{3}{2}\tilde{F}_{1LL}(z)-\tilde{F}_{LT}(z)-(1-z\frac{d}{dz})F_{1LT}^{(1)} (z)=-2\int_{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{\text{Re}\left[\tilde{ F}_{LT}(z,z_{1})\right]}{(\frac{1}{z}-\frac{1}{z_{1}})^{2}}, \tag{35}\]
and this is a new LIR for tensor-polarized FFs. If we integrate Eq. (34) over the momentum fraction \(z\), one can have
\[F_{LT}(z)= -\frac{3z}{2}\int_{z}^{1}dz_{1}\frac{F_{1LL}\left(z_{1}\right)}{( z_{1})^{2}}+z\int_{z}^{1}\frac{dz_{1}}{z_{1}}\int_{z_{1}}^{\infty}\frac{dz_{2}}{(z_{2} )^{2}}\Bigg{\{}\frac{\left[1+\frac{1}{z_{1}}\delta(\frac{1}{z_{1}}-\frac{1}{z })\right]\text{Re}\left[\hat{G}_{LT}(z_{1},z_{2})\right]}{\frac{1}{z_{1}}- \frac{1}{z_{2}}}\] \[-\frac{\left[\frac{3}{z_{1}}-\frac{1}{z_{2}}+\frac{1}{z_{1}}( \frac{1}{z_{1}}-\frac{1}{z_{2}})\delta(\frac{1}{z_{1}}-\frac{1}{z})\right] \text{Re}\left[\hat{F}_{LT}\left(z_{1},z_{2})\right]}{(\frac{1}{z_{1}}-\frac{ 1}{z_{2}})^{2}}\Bigg{\}}, \tag{36}\]
where it should be understood that \(z_{1}\) falls within the range of integration \((z,1)\), namely, \(\int_{z}^{1}dz_{1}F(z_{1})\delta(1/z_{1}-1/z)=z^{2}F(z)\). The intrinsic twist-3 FF \(F_{LT}(z)\) is decomposed into the contributions of a twist-2 FF \(F_{1LL}\) and the dynamical FFs. We can obtain a similar expression for the kinematical twist-3 FF \(F_{1LT}^{(1)}(z)\) by inserting Eq. (36) into the e.o.m. relation of Eq. (23),
\[F_{1LT}^{(1)}(z)=\frac{3}{2}\int_{z}^{1}dz_{1}\frac{F_{1LL}\left(z_{1}\right)} {(z_{1})^{2}}+\int_{z}^{1}\frac{dz_{1}}{z_{1}}\int_{z_{1}}^{\infty}\frac{dz_{2} }{(z_{2})^{2}}\Bigg{\{}\frac{\left(\frac{3}{z_{1}}-\frac{1}{z_{2}}\right) \text{Re}\left[\hat{F}_{LT}\left(z_{1},z_{2}\right)\right]}{\left(\frac{1}{z_ {1}}-\frac{1}{z_{2}}\right)^{2}}-\frac{\text{Re}\left[\hat{G}_{LT}\left(z_{1}, z_{2}\right)\right]}{\frac{1}{z_{1}}-\frac{1}{z_{2}}}\Bigg{\}}. \tag{37}\]
By dropping the contributions of the dynamical FFs in Eqs. (36) and (37), these become WW-type relations. Then, the twist-3 intrinsic and kinematical FFs can be estimated using the twist-2 FF \(F_{1LL}(z)\), which should be much easier to extract from experimental measurements than the twist-3 ones.
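As an illustration of how the WW-type relations can be used, the short script below evaluates Eqs. (36) and (37) with the dynamical-FF terms dropped. The input shape \(F_{1LL}(z)=z(1-z)^{2}\) is a purely hypothetical toy model chosen only for this sketch (it is not a fit to data); in this approximation the two outputs satisfy \(F_{LT}(z)=-zF^{(1)}_{1LT}(z)\), which is just the e.o.m. relation (23) without the dynamical FFs.

```python
from scipy.integrate import quad

def F1LL(z):
    """Toy twist-2 FF F_1LL(z); a hypothetical shape used only for this illustration."""
    return z * (1.0 - z) ** 2

def F_LT_WW(z):
    """Intrinsic twist-3 FF from Eq. (36) with the dynamical-FF terms dropped (WW-type)."""
    integral, _ = quad(lambda z1: F1LL(z1) / z1**2, z, 1.0)
    return -1.5 * z * integral

def F1LT1_WW(z):
    """Kinematical twist-3 FF from Eq. (37) with the dynamical-FF terms dropped (WW-type)."""
    integral, _ = quad(lambda z1: F1LL(z1) / z1**2, z, 1.0)
    return 1.5 * integral

for z in (0.2, 0.4, 0.6, 0.8):
    print(f"z={z:.1f}  F_LT={F_LT_WW(z):+.4f}  F_1LT^(1)={F1LT1_WW(z):+.4f}")
```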
If we choose \(\Gamma=\sigma^{\mu\alpha}\) in Eq. (28) and \(\Gamma_{1}=1\) in Eq. (29), the following identity can be derived [9],
\[\frac{\partial}{\partial\xi_{\alpha}}\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\sigma^{\xi\alpha}|0\rangle\]
\[= \int_{-1}^{\infty}dtt\langle 0\left|igF_{\alpha\xi}(t\xi)q(-\xi) \right|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\sigma^{\xi\alpha} |0\rangle \tag{38}\] \[+\int_{\infty}^{1}dtt\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X\left|\,\overline{q}(\xi)\sigma^{\xi\alpha}igF_{\alpha\xi}(t\xi)\right|0\rangle.\]
Similarly, the matrix element of the nonlocal operator \(\overline{q}(\xi)\sigma^{\xi\sigma}q(-\xi)\) is expressed in terms of the FFs \(\tilde{H}_{1LT}(z)\) and \(\tilde{H}_{LL}(z)\) at twist 3,
\[\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\sigma^{\xi\sigma}|0\rangle \tag{39}\] \[= -4N_{c}M\int d\left(\frac{1}{z}\right)e^{\frac{2iP_{h}\cdot\xi}{z}}\left[2(W^{\sigma}+V^{\sigma})\tilde{H}_{1LT}(z)+\frac{3}{2}V^{\sigma}\tilde{H}_{LL}(z)\right],\]
where \(W^{\sigma}\) and \(V^{\sigma}\) are defined as
\[W^{\sigma}= T^{\sigma\mu}\xi_{\mu}-\frac{\xi\cdot T\cdot\xi}{P_{h}\cdot\xi}P_ {h}^{\sigma}, \tag{40}\] \[V^{\sigma}= M^{2}\frac{\xi\cdot T\cdot\xi}{(P_{h}\cdot\xi)^{2}}\left[\xi^{ \sigma}-\frac{\xi^{2}}{P_{h}\cdot\xi}P_{h}^{\sigma}\right], \tag{41}\]
and they satisfy the relations of \(W\cdot\xi=0\) and \(V\cdot\xi=0\). In the lightcone limit \(\xi^{2}\to 0\), Eq. (39) goes back to Eq. (5). We obtain the following identity by calculating the matrix element in Eq. (38):
\[4\tilde{H}_{1LT}(z)-\frac{1}{z^{2}}\frac{dH_{LL}(z)}{d(1/z)}=-2\int d\left( \frac{1}{z_{1}}\right)\mathcal{P}\left(\frac{1}{\frac{1}{z}-\frac{1}{z_{1}}} \right)\Big{(}\frac{\partial}{\partial(1/z)}-\frac{\partial}{\partial(1/z_{1 })}\Big{)}\text{Im}\left[\tilde{H}_{LL}^{\perp}(z,z_{1})\right]. \tag{42}\]
Moreover, we obtain \(d(\tilde{H}_{LL}(z)/z)/d(1/z)\) by using the expression in Eq. (20), and the sum of \(d(\tilde{H}_{LL}(z)/z)/d(1/z)\) and Eq. (42) leads to
\[\tilde{H}_{LL}(z)+2\tilde{H}_{1LT}(z)+(1-z\frac{d}{dz})H_{1LL}^{(1)}(z)=-2\int _{z}^{\infty}\frac{dz_{1}}{(z_{1})^{2}}\frac{\text{Im}\left[\tilde{H}_{LL}^{ \perp}(z,z_{1})\right]}{(\frac{1}{z}-\frac{1}{z_{1}})^{2}}, \tag{43}\]
which is also a LIR for tensor-polarized FFs. The integration of Eq. (42) gives
\[H_{LL}(z)= 4\int_{z}^{1}dz_{1}\frac{H_{1LT}\left(z_{1}\right)}{z_{1}}+4\int _{z}^{1}dz_{1}\int_{z_{1}}^{\infty}\frac{dz_{2}}{(z_{2})^{2}}\frac{\frac{2}{z _{1}}-\frac{1}{z_{2}}+\frac{1}{2z_{1}}(\frac{1}{z_{1}}-\frac{1}{z_{2}})\delta (\frac{1}{z_{1}}-\frac{1}{z})}{(\frac{1}{z_{1}}-\frac{1}{z_{2}})^{2}}\text{ Im}\left[\hat{H}_{LL}^{\perp}\left(z_{1},z_{2}\right)\right], \tag{44}\]
and the intrinsic twist-3 FF \(H_{LL}(z)\) is expressed in terms of the twist-2 FF \(H_{1LT}(z)\) and the dynamical FF \(\hat{H}_{LL}^{\perp}\left(z_{1},z_{2}\right)\). If we combine Eq. (44) with the e.o.m. relation of Eq. (18),
\[H_{1LL}^{(1)}(z)= -\frac{2}{z}\int_{z}^{1}dz_{1}\frac{H_{1LT}\left(z_{1}\right)}{z_{1 }}-\frac{2}{z}\int_{z}^{1}dz_{1}\int_{z_{1}}^{\infty}\frac{dz_{2}}{(z_{2})^{2} }\frac{\frac{2}{z_{1}}-\frac{1}{z_{2}}}{(\frac{1}{z_{1}}-\frac{1}{z_{2}})^{2 }}\text{Im}\left[\hat{H}_{LL}^{\perp}\left(z_{1},z_{2}\right)\right], \tag{45}\]
which also decomposes the kinematical twist-3 FF \(H_{1LL}^{(1)}(z)\) into the contributions of \(H_{1LT}(z)\) and \(\hat{H}_{LL}^{\perp}\left(z_{1},z_{2}\right)\). We can obtain the WW-type relations for \(H_{LL}(z)\) and \(H_{1LL}^{(1)}(z)\) by dropping the terms of the dynamical FF in Eqs. (44) and (45).
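A parallel sketch for the \(S_{LL}\)-sector WW-type limits of Eqs. (44) and (45) is given below; again the toy input \(H_{1LT}(z)=z^{2}(1-z)\) is hypothetical and serves only to illustrate the integrals. In this limit the outputs obey \(H_{LL}(z)=-2zH^{(1)}_{1LL}(z)\), consistent with the e.o.m. relation (20) once the dynamical FF is dropped.

```python
from scipy.integrate import quad

def H1LT(z):
    """Toy twist-2 FF H_1LT(z); hypothetical shape, for illustration only."""
    return z**2 * (1.0 - z)

def H_LL_WW(z):
    """Intrinsic twist-3 FF from Eq. (44) with the dynamical FF dropped (WW-type)."""
    integral, _ = quad(lambda z1: H1LT(z1) / z1, z, 1.0)
    return 4.0 * integral

def H1LL1_WW(z):
    """Kinematical twist-3 FF from Eq. (45) with the dynamical FF dropped (WW-type)."""
    integral, _ = quad(lambda z1: H1LT(z1) / z1, z, 1.0)
    return -2.0 / z * integral

for z in (0.2, 0.5, 0.8):
    print(f"z={z:.1f}  H_LL={H_LL_WW(z):+.4f}  H_1LL^(1)={H1LL1_WW(z):+.4f}")
```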
If we consider the matrix elements of Eqs. (28) and (29) with \(\Gamma=\epsilon^{\alpha\mu\rho S_{LT}}\gamma_{\mu}\gamma_{5}\) and \(\Gamma_{1}=\frac{i}{2}(\gamma^{\rho}\sigma^{S_{LT}\xi}-\sigma^{S_{LT}\xi} \gamma^{\rho})\), respectively, one can derive
\[\epsilon^{\alpha\mu\rho S_{LT}}\xi_{\rho}\frac{\partial}{\partial \xi^{\alpha}}\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi) \gamma_{\mu}\gamma_{5}|0\rangle\] \[= \int_{-1}^{\infty}dt\langle 0\left|gF_{\xi S_{LT}}(t\xi)q(-\xi) \right|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\xi|0\rangle+ \int_{\infty}^{1}dt\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi) \xi gF_{\xi S_{LT}}(t\xi)|\,0\rangle\] \[+i\epsilon^{\alpha\mu\xi S_{LT}}\Big{[}\int_{-1}^{\infty}dtt \langle 0\left|gF_{\alpha\xi}(t\xi)q(-\xi)\right|P_{h},T;X\rangle\langle\,P_{h },T;X|\,\overline{q}(\xi)\gamma_{\mu}\gamma_{5}|0\rangle+\int_{\infty}^{1}dtt \langle 0|q(-\xi)|P_{h},T;X\rangle\]
\[\times\langle\,P_{h},T;X\,|\,\overline{q}(\xi)\gamma_{\mu}\gamma_{5}gF_{ \alpha\xi}(t\xi)|\,0\rangle\Big{]}-2im_{q}\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\, \overline{q}(\xi)\sigma^{\xi SL}|0\rangle, \tag{46}\]
and the l.h.s. is related to the matrix element of \(\overline{q}(\xi)\gamma^{\mu}\gamma_{5}q(-\xi)\), which is given by
\[\langle 0|q(-\xi)|P_{h},T;X\rangle\langle\,P_{h},T;X|\,\overline{q}(\xi)\gamma^{\mu}\gamma_{5}|0\rangle=4N_{c}M\frac{\epsilon^{\mu\xi\alpha P_{h}}}{P_{h}\cdot\xi}Y_{\alpha}\int d\left(\frac{1}{z}\right)e^{\frac{2iP_{h}\cdot\xi}{z}}\tilde{G}_{LT}(z), \tag{47}\]
and the vector \(Y\) is defined as
\[Y^{\alpha}=\frac{2M}{P_{h}\cdot\xi}\left[T^{\alpha\mu}\xi_{\mu}-\frac{\xi\cdot T \cdot\xi}{P_{h}\cdot\xi}P_{h}^{\alpha}+M^{2}\frac{\xi\cdot T\cdot\xi}{(P_{h} \cdot\xi)^{2}}(\xi^{\alpha}-\frac{\xi^{2}}{P_{h}\cdot\xi}P_{h}^{\alpha}) \right]. \tag{48}\]
If we take the lightcone limit \(\xi^{2}\to 0\), one can obtain \(Y^{\alpha}\to S_{LT}^{\alpha}\). Thus, Eq. (46) leads to the following identity,
\[\frac{1}{z}\frac{d\tilde{G}_{LT}(z)}{d(1/z)}+\frac{m_{q}}{M}\frac {d\tilde{H}_{1LT}(z)}{d(1/z)}\] \[= \int d\left(\frac{1}{z_{1}}\right)\mathcal{P}\left(\frac{1}{ \frac{1}{z}-\frac{1}{z_{1}}}\right)\left\{\left(\frac{\partial}{\partial(1/z) }-\frac{\partial}{\partial(1/z_{1})}\right)\text{Im}\left[\tilde{G}_{LT}(z,z _{1})\right]-\left(\frac{\partial}{\partial(1/z)}+\frac{\partial}{\partial(1/ z_{1})}\right)\text{Im}\left[\tilde{F}_{LT}(z,z_{1})\right]\right\}. \tag{49}\]
Combining Eq. (49) with the e.o.m. relation of Eq. (24), another LIR can be derived for tensor-polarized FFs,
\[\tilde{G}_{LT}(z)+(1-z\frac{d}{dz})G_{1LT}^{(1)}(z)=-2\int_{z}^{\infty}\frac{ dz_{1}}{(z_{1})^{2}}\frac{\text{Im}\left[\tilde{G}_{LT}(z,z_{1})\right]}{( \frac{1}{z}-\frac{1}{z_{1}})^{2}}. \tag{50}\]
The quark mass term in Eq. (49) is canceled in this LIR. From Eqs. (49) and (24), one can also express the twist-3 FFs \(G_{LT}(z)\) and \(G_{1LT}^{(1)}(z)\) in terms of \(H_{1LT}(z)\), \(\hat{F}_{LT}\left(z_{1},z_{2}\right)\) and \(\hat{G}_{LT}\left(z_{1},z_{2}\right)\),
\[G_{LT}(z)= -\frac{m_{q}}{M}\left[zH_{1LT}(z)+z\int_{z}^{1}dz_{1}\frac{H_{1LT }(z_{1})}{z_{1}}\right]-z\int_{z}^{1}\frac{dz_{1}}{z_{1}}\int_{z_{1}}^{\infty} \frac{dz_{2}}{(z_{2})^{2}}\Bigg{\{}\frac{\left[1+\frac{1}{z_{1}}\delta(\frac{1 }{z_{1}}-\frac{1}{z})\right]\text{Im}\left[\hat{F}_{LT}\left(z_{1},z_{2}\right) \right]}{\frac{1}{z_{1}}-\frac{1}{z_{2}}}\] \[-\frac{\left[\frac{3}{z_{1}}-\frac{1}{z_{2}}+\frac{1}{z_{1}}( \frac{1}{z_{1}}-\frac{1}{z_{2}})\delta(\frac{1}{z_{1}}-\frac{1}{z})\right] \text{Im}\left[\hat{G}_{LT}\left(z_{1},z_{2}\right)\right]}{(\frac{1}{z_{1}}- \frac{1}{z_{2}})^{2}}\Bigg{\}}, \tag{51}\]
\[G_{1LT}^{(1)}(z)= \frac{m_{q}}{M}\int_{z}^{1}dz_{1}\frac{H_{1LT}(z_{1})}{z_{1}}+ \int_{z}^{1}\frac{dz_{1}}{z_{1}}\int_{z_{1}}^{\infty}\frac{dz_{2}}{(z_{2})^{2}} \Bigg{\{}\frac{\text{Im}\left[\hat{F}_{LT}(z_{1},z_{2})\right]}{\frac{1}{z_{1} }-\frac{1}{z_{2}}}-\frac{(\frac{3}{z_{1}}-\frac{1}{z_{2}})\text{Im}\left[ \hat{G}_{LT}\left(z_{1},z_{2}\right)\right]}{(\frac{1}{z_{1}}-\frac{1}{z_{2}})^ {2}}\Bigg{\}}. \tag{52}\]
If we consider the production of a tensor-polarized hadron \(h\) in lepton-nucleon collisions, namely \(l+N\to h+X\), the twist-3 cross sections can appear to depend on the chosen frame; this dependence is induced by the arbitrariness in the choice of lightcone vectors for the distribution and fragmentation correlators. The LIRs we derive can be used to remove the frame dependence of the twist-3 observables of this process, such as twist-3 SSAs and double-spin asymmetries.
## V Summary
The tensor-polarized FFs of a spin-1 hadron (\(h\)) can be measured in various hard semi-inclusive processes such as \(e^{+}e^{-}\to hX\) and \(ep\to ehX\) (SIDIS); the former process is accessible at BESIII and Belle II, while the latter is possible at JLab and the Electron-Ion Colliders in the US and China. Inspired by the ongoing measurement of the tensor-polarized FFs of \(\phi\) at BESIII, we investigate in this work the theoretical relations among the tensor-polarized intrinsic, kinematical and dynamical FFs of a spin-1 hadron. First, the QCD e.o.m. relations are obtained for the tensor-polarized FFs. Second, we derive the operator identities where the nonlocal quark-quark operators are expressed in terms of quark-gluon-quark operators. Three new Lorentz-invariance relations (LIRs) are presented for the tensor-polarized FFs, and they can be used to remove the frame dependence of the twist-3 spin observables in hard semi-inclusive reactions, so that Lorentz-invariance properties are satisfied. Finally, we also show that the intrinsic and kinematical twist-3 FFs can be expressed in terms of the twist-2 FFs and the dynamical twist-3 FFs, and Wandzura-Wilczek-type relations are obtained by neglecting the dynamical FFs. Since the twist-2 FFs are much easier to access experimentally than the twist-3 ones, such relations can be used to give a rough estimate of the twist-3 FFs. Our results will be valuable for future experimental measurements and theoretical studies of tensor-polarized FFs.
###### Acknowledgements.
We acknowledge useful discussions with Shunzo Kumano, Bernard Pire, Ji Xu and Ya-Teng Zhang. Qin-Tao Song was supported by the National Natural Science Foundation of China under Grant Number 12005191.
|
2309.06031 | High fidelity macroscopic superposition states via shortcut to
adiabaticity | A shortcut to an adiabatic scheme is proposed for preparing a massive object
in a macroscopic spatial superposition state. In this scheme we propose to
employ counterdiabatic driving to maintain the system in the ground state of
its instantaneous Hamiltonian while the trap potential is tuned from a parabola
to a double well. This, in turn, is performed by properly ramping a control
parameter. We show that a few counterdiabatic drives are enough for most
practical cases. A hybrid electromechanical setup in superconducting circuits
is proposed for the implementation. The efficiency of our scheme is benchmarked
by numerically solving the system dynamics in the presence of noises and
imperfections. The results show that a mechanical resonator with
very-high-fidelity spatially distinguishable cat states can be prepared with
our protocol. Furthermore, the protocol is robust against noises and
imperfections. We also discuss a method for verifying the final state via
spectroscopy of a coupled circuit electrodynamical cavity mode. Our work can
serve as the ground work to feasibly realize and verify macroscopic
superposition states in future experiments. | Mehdi Aslani, Vahid Salari, Mehdi Abdi | 2023-09-12T08:04:57Z | http://arxiv.org/abs/2309.06031v2 | # High fidelity macroscopic superposition states via shortcut to adiabaticity
###### Abstract
A shortcut-to-adiabaticity scheme is proposed for preparing a massive object in a macroscopic spatial superposition state. In this scheme we propose to employ counterdiabatic driving to maintain the system in the groundstate of its instantaneous Hamiltonian while the trap potential is tuned from a parabola to a double well. This, in turn, is performed by properly ramping a control parameter. We show that a few counterdiabatic drives are enough for most practical cases. A hybrid electromechanical setup in superconducting circuits is proposed for the implementation. The efficiency of our scheme is benchmarked by numerically solving the system dynamics in the presence of noises and imperfections. The results show that very high fidelity cat states with distinguishable spatial separations can be prepared with our protocol. Furthermore, the protocol is robust against noises and imperfections. We also discuss a method for verifying the final state via spectroscopy of a coupled circuit electrodynamical cavity mode.
## I Introduction
In recent decades two aspects of quantum mechanics have become increasingly prominent: first, its application in technology and the advantages it brings over classical competitors, e.g. in enhanced sensing schemes and secure communications [1; 2; 3; 4; 5]; and second, the foundations of quantum theory itself, where several questions still need to be addressed. Crucial among them are the questions of where the quantum realm meets classical mechanics and of the nonclassicality of dynamics [6; 7; 8; 9]. In particular, it still remains unclear whether it is the system size, its number of degrees of freedom, or its mass that determines the limit where one _must_ invoke quantum theory for understanding its dynamics [10; 11; 12; 13]. One of the well-established approaches for addressing this issue is provided by the theoretical extensions that predict unconventional mechanisms for decoherence [14]. Such theories usually provide a decoherence rate related to the size, mass, or degrees of freedom of the system, suggesting that quantum states of a larger system lose their coherence faster [15].
Performing experiments is nonetheless necessary for testing the validity of these theories. This usually requires the ability to prepare a massive object in a superposition state [16; 17; 18; 19] or, equivalently, matter-wave interferometry with large objects [20; 21; 22; 23]. Given the sensitivity of such systems, it is necessary to be able to prepare such states with very high fidelity. However, massive objects accessible in current experimental opto- and electromechanical systems are also subject to a tremendous amount of thermal noise. Therefore, one must conceive proper approaches in which nonclassical states can be achieved with high fidelity, thus making the observation of unconventional decoherence effects possible. Massive objects in quantum superposition could also prove useful for enhanced sensitivity in force measurements [24; 25; 26; 27; 28; 29; 30]. Hence, various proposals have been put forth for preparing macroscopic objects in spatially distinguishable superposition states: by dissipative state preparation [18; 31], hybrid system manipulation [32; 33; 34; 35; 36], measurement-induced schemes [37; 38; 39], and adiabatic processing [40; 41; 42].
Here, we investigate a scheme in which a shortcut to adiabaticity is employed for the rapid and high-fidelity preparation of a macroscopic object in a superposition state [43]. The cat state is realized by preparing the massive system in the ground state of a double-well (DW) potential. In the scheme we propose, a mechanical mode is cooled down to its ground state while oscillating in an almost harmonic trap with a weak Duffing nonlinearity. Then the potential is twisted into a DW by applying an external anti-parabola potential. By retaining the system in its ground state during the process, the desired macroscopic quantum state can be achieved. However, this is challenged by two effects: On the one hand, the thermal noise excites the system to other states, producing an incoherent mixed state. Such thermalization effects become increasingly prominent as the lowest energy gap shrinks with the formation of the DW. On the other hand, speeding up the process by employing faster ramps results in diabatic transitions in the system through the Landau-Zener effect, which again prohibit formation of the desired superposition state. To overcome this, one accelerates the procedure by employing counterdiabatic drives [44].
We use an approximate version of _transitionless quantum driving_ [45; 46], where only a few substantial diabatic transitions are compensated for. Therefore, both the energy cost and the experimental requirements are significantly relaxed. We benchmark our protocol by computing the final-state fidelity through our numerical solutions of the quantum optical master equation, considering realistic noise effects. By optimizing the required resources we show that the groundstate of the DW potential is attainable with a high fidelity by employing only a limited number of counter-drive fields. We compare our results
with the states obtained via a simple adiabatic passage protocol under the same conditions and show that the protocol performance is significantly better. Then different protocol scenarios as well as imperfections are studied. The latter include an asymmetry in the potential that breaks the parity symmetry of the states, as well as a finite thermal occupation as the starting point of the protocol. Finally, we propose a readout technique, based on spectroscopy of a coupled cavity mode, for verifying the state of the mechanical resonator.
The paper is organized as follows: In the next section we discuss the preliminary theoretical aspects of our work, including the model, the protocol, and the proposed setup. In Sec. III the numerical results are presented for an adiabatic process and the protocol with shortcut to adiabaticity. Sec. IV puts forward a method for verifying the prepared state. The paper is concluded by Sec. V.
## II Theory
When a control parameter of a quantum system changes over time, it modifies the Hamiltonian and consequently its eigenstates. If the change is performed slowly enough, a system prepared in one of its eigenstates, e.g. the groundstate, remains in that eigenstate without occupying the others. This indeed is the so-called quantum adiabatic theorem, and it is commonly used in quantum information processing [47; 48; 49; 50; 51; 52]. Although in principle one should perfectly achieve the desired state by changing the Hamiltonian through an arbitrarily long process, the environmental noises and dissipations are prohibitive. Therefore, it is necessary to design processes fast enough that the decohering effects are minimal, yet the adiabatic nature is preserved. Shortcut to adiabaticity (STA) techniques are devised for this purpose [43]. Among them, counterdiabatic (CD) driving is a versatile technique in which, by adding auxiliary drives to the system, the diabatic transitions resulting from fast modification of the Hamiltonian are averted and the system can be driven along a specific instantaneous eigenstate, thus giving the outcome of an adiabatic process in much shorter times.
### Counterdiabatic driving
Consider a time-dependent Hamiltonian \(\hat{H}_{0}(t)\) with its instantaneous eigenstates \(|n(t)\rangle\) satisfying the eigenvalue equation \(\hat{H}_{0}(t)|n(t)\rangle=E_{n}(t)|n(t)\rangle\). According to the quantum adiabatic theorem if the system is initially at any eigenstate it will remain in the same eigenstate when changing the Hamiltonian over time, provided that those changes are slow enough. In contrast, when the process is fast diabatic transitions populate other instantaneous eigenstates. In a transitionless process, such undesirable excitations in the system are compensated for by employing an auxiliary Hamiltonian \(\hat{H}_{1}(t)\). Therefore, the system ideally remains in its instantaneous eigenstate even if the adiabatic conditions are not satisfied. It is straightforward to show that [45]
\[\hat{H}_{1}(t)=i\hbar\sum_{m\neq n}\frac{|m(t)\rangle\langle m(t)|\partial_{t }\hat{H}_{0}(t)|n(t)\rangle\langle n(t)|}{E_{n}(t)-E_{m}(t)}. \tag{1}\]
Hence, dynamics of the system under the Hamiltonian \(\hat{H}(t)=\hat{H}_{0}(t)+\hat{H}_{1}(t)\) gives the eigenstates of \(\hat{H}_{0}(t)\) for arbitrary processing times, provided \(\hat{H}_{1}(t_{i})=\hat{H}_{1}(t_{f})=0\). This last condition ensures equality of the \(\hat{H}_{0}(t)\) and \(\hat{H}(t)\) eigenstates at the boundary times.
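For numerical work it is convenient to evaluate Eq. (1) directly from the instantaneous eigendecomposition of \(\hat{H}_{0}(t)\). The following sketch is an illustrative implementation of ours (not the code used for the results below); the finite-difference step and the function name are arbitrary choices:

```python
import numpy as np

def counterdiabatic_term(H0_func, t, dt=1e-6, hbar=1.0):
    """Numerical evaluation of Eq. (1):
    H1(t) = i*hbar * sum_{m != n} |m><m| dH0/dt |n><n| / (E_n - E_m),
    with a central finite difference for dH0/dt.  H0_func(t) must return a
    Hermitian matrix (ndarray); degeneracies are assumed absent for simplicity."""
    H0 = H0_func(t)
    dH0 = (H0_func(t + dt) - H0_func(t - dt)) / (2.0 * dt)
    E, V = np.linalg.eigh(H0)                 # instantaneous eigenvalues/eigenvectors
    dH0_eig = V.conj().T @ dH0 @ V            # matrix elements <m| dH0/dt |n>
    H1_eig = np.zeros_like(dH0_eig, dtype=complex)
    for m in range(len(E)):
        for n in range(len(E)):
            if m != n:
                H1_eig[m, n] = 1j * hbar * dH0_eig[m, n] / (E[n] - E[m])
    return V @ H1_eig @ V.conj().T            # H1 in the original basis
```

Applied to a Landau-Zener-type two-level Hamiltonian \(\hat{H}_{0}(t)=\varepsilon(t)\sigma_{z}+\Delta\sigma_{x}\), for instance, the routine returns a counter-drive proportional to \(\sigma_{y}\), as expected.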
### Model
We consider a double-well potential as the system, whose lowest eigenstates are quantum superpositions of two distinct states. In particular, the groundstate of a symmetric DW potential consists of the symmetric superposition of the groundstates of each well, while its first excited state is an antisymmetric superposition of the same states. Hence, a balanced mixture of these two gives a classical state. The energy difference of the two eigenstates \(\delta_{10}\equiv(E_{1}-E_{0})/\hbar\) is proportional to the probability with which the particle tunnels from one well to the other. That is, a higher barrier energy results in more distinguishability, but at the same time a smaller energy gap between the two lowest states of the DW. Therefore, when approaching a DW potential the thermal excitations become increasingly fast and wash out the superposition features of the state.
For a massive particle in a spatial DW potential, occupation of the groundstate means that a massive spatially symmetric superposition state is realized. Various forms of DW potentials can be envisaged, but in this work we focus on the case where the explicit form of the potential is \(V(z)=-\frac{1}{2}\nu z^{2}+\frac{1}{4}\beta z^{4}\). This, in principle, can be realized by subjecting an intrinsic Duffing resonator to an inverse-parabola external potential, see Ref. [18]. The physical realizations of \(\nu\) and \(\beta\) are discussed in Sec. II.4; here we examine some properties of such a DW potential. The above potential is centered at \(z=0\) and its minima lie at \(z=\pm z_{0}=\pm\sqrt{\nu/\beta}\). In the quantum regime, an eigenstate whose energy is less than that of the central barrier is either a symmetric or an antisymmetric superposition of states almost localized in each well, provided the barrier is high enough. The groundstate of such a potential is a spatially distinguishable superposition state resulting from delocalization of the particle. Roughly speaking, it can be understood as an even cat state resulting from the symmetric superposition of the two harmonic well groundstates. The total potential can be decomposed in two parts \(V(z)=V_{\rm m}(z)+V_{\rm e}(z)\): (i) The intrinsic Duffing oscillator \(V_{\rm m}(z)=\frac{1}{2}m\omega^{2}z^{2}+\frac{1}{4}\tilde{\beta}z^{4}\). (ii) The external potential \(V_{\rm e}(z)=-|\alpha_{2}|z^{2}+\alpha_{4}z^{4}\), where \(\alpha_{2}\) is an external softening force that overcomes the intrinsic Hooke force. A positive \(\alpha_{4}>0\) can strengthen the trap--which is partly weakened by the anti-parabola--and thus enhance the controllability of the system. Nevertheless, for a given \(\nu\) a larger \(\beta=\tilde{\beta}+\alpha_{4}\) will result in a smaller spatial spacing of the DW minima \(2z_{0}\). Therefore, one in principle must engineer the optimal values of \(\alpha_{2}\) and \(\alpha_{4}\) for having both better control over
the system and a spatially distinguishable cat state. Here, for the sake of simplicity, we propose to operate in a regime where \(\alpha_{4}\approx 0\). Hence, \(\alpha_{2}\) is the only control parameter that tunes the potential of the membrane, and it is varied in time as explained in Sec. II.3. That is, by changing the external potential one can tune \(\alpha_{2}(t)\) over time. Since the goal is to undo the intrinsic harmonic stiffness, we introduce the dimensionless fine-tuning parameter \(\zeta\) through the following equation
\[\alpha_{2}(t)=-(1+\zeta(t))\frac{m\omega^{2}}{2}. \tag{2}\]
Then Hamiltonian of the mechanical system reads
\[\hat{H}_{\text{DW}}(t)=\frac{\hat{p}^{2}}{2m}-\frac{1}{2}\zeta(t)m\omega^{2}\hat{z}^{2}+\frac{\beta}{4}\hat{z}^{4}, \tag{3}\]
where \(\hat{z}\) and \(\hat{p}\) are the position and momentum operators for the only degree of freedom of the membrane, satisfying the canonical commutation relation \([\hat{z},\hat{p}]=i\hbar\). Note that \(\zeta=-1\) retrieves the intrinsic elastic Hamiltonian when the external potential is extinguished.
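For the numerics, Eq. (3) can be represented in a truncated Fock basis of a reference harmonic oscillator, with \(\hat{z}\) and \(\hat{p}\) built from ladder operators. The sketch below is illustrative only: it is written in dimensionless units \(\hbar=m=\omega=1\) with placeholder parameter values (not those of the physical setup in Sec. II.4) and simply returns the low-lying spectrum as a function of \(\zeta\):

```python
import numpy as np

def H_dw(zeta, beta, dim=120):
    """Matrix of Eq. (3) in a truncated Fock basis, in units hbar = m = omega = 1.

    `zeta` is the control parameter of Eq. (2) and `beta` a dimensionless Duffing
    coefficient; `dim` must be increased until the low-lying spectrum converges."""
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation operator
    z = (a + a.T) / np.sqrt(2.0)                     # position operator
    p = 1j * (a.T - a) / np.sqrt(2.0)                # momentum operator
    z2 = z @ z
    return p @ p / 2.0 - 0.5 * zeta * z2 + 0.25 * beta * z2 @ z2

# Illustrative (dimensionless) parameters, not those quoted in Sec. II.4
beta = 0.05
for zeta in (-1.0, -0.05, 0.5, 1.0):
    E = np.linalg.eigvalsh(H_dw(zeta, beta))
    print(f"zeta = {zeta:+.2f}   lowest gaps: {np.round(E[1:4] - E[0], 4)}")
```

For \(\zeta>0\) the lowest gap collapses towards the tunneling splitting of the ground doublet, which is precisely the behavior the protocol below has to contend with.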
### Protocol and dynamics
The goal is to achieve the ground state of \(\hat{H}_{\text{DW}}\) with \(\zeta>0\) with a high fidelity despite its exposure to the environmental noises. To this end, we propose a protocol where first the mechanical mode is cooled down to its ground state while it is in an (almost) harmonic trap (\(\zeta(t_{\text{i}})=-1\)). This can be performed by various techniques, e.g. sideband cooling through a coupled cavity mode [53; 54]. Then the external potential is turned on and gradually increased until a DW forms (\(\zeta(t_{\text{f}})>0\)). However, the ramp function that takes the system from \(\zeta(t_{\text{i}})\) to \(\zeta(t_{\text{f}})\) is crucial. Moreover, there is a trade-off between the system thermalization and the diabatic excitations built up in the system during the process. For short protocol durations the Landau-Zener transitions are prominent, while for longer times the destructive effects of the environmental noise reduce the state fidelity. Therefore, we propose to speed up the process and meanwhile compensate for the unwanted transitions in the system by employing an STA protocol based on counterdiabatic driving. A comprehensive CD scheme in a continuous-variable system like the one studied in this work demands an infinite number of drives. Nevertheless, in the next section we show that the purpose can still be fulfilled to a great extent by carefully selecting a few transitions. This is especially crucial for the feasibility of our protocol, as such drives can be experimentally implemented through the cavity modes. Therefore, a limited number of cavity modes is sufficient for attaining the superposition state with high fidelity.
Hence, the protocol for creating the cat state consists of the following three steps (\(t_{\text{i}}<t_{\text{c}}<t_{\text{f}}\)): (i) \(t<t_{\text{i}}\): Cooling the system to its groundstate when \(\zeta(t)=-1\). Indeed, the harmonic nature of the system at this value of \(\zeta\) allows one to employ a standard sideband cooling mechanism. (ii) \(t_{\text{i}}\leq t\leq t_{\text{c}}\): Turning on the external potential and _adiabatically_ approaching the buckling point. That is, tuning the potential to a small but negative \(\zeta(t_{\text{c}})\lesssim 0\). With a proper choice of \(\zeta(t_{\text{c}})\) the thermal excitation rates remain low and the system retains its instantaneous groundstate with a high fidelity for a reasonably slow ramp \(\zeta(t_{\text{i}}\to t_{\text{c}})\). (iii) \(t_{\text{c}}\leq t\leq t_{\text{f}}\): Employing CD drives for a few of the lowest transitions while the potential is quickly modified to \(\zeta(t_{\text{f}})=\zeta_{\text{f}}>0\); see Fig. 1 for a schematic presentation. In fact, the symmetric nature of the potential in our work demands the conservation of parity. Therefore, no diabatic transition occurs from the groundstate to the first excited state or any other eigenstate with odd parity. Hence, one only needs to compensate for the diabatic transitions to the higher symmetric energy levels. Since \(\hat{H}_{1}\propto\hat{z}^{2}\), as one can infer from Eq. (3), the CD Hamiltonian produces exactly the desired transitions.
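The boundary condition \(\hat{H}_{1}(t_{\rm i})=\hat{H}_{1}(t_{\rm f})=0\) of Sec. II.1 translates, for step (iii), into \(\dot{\zeta}=0\) at the beginning and end of the ramp. The text does not prescribe a specific ramp at this point, so the snippet below uses a smooth fifth-order polynomial step purely as an illustrative choice (the default values of \(\zeta_{\rm c}\) and \(\zeta_{\rm f}\) are those quoted in Sec. III):

```python
import numpy as np

def zeta_ramp(t, t_c, t_f, zeta_c=-2.5e-4, zeta_f=3.0e-4):
    """Smooth ramp for step (iii): zeta_c -> zeta_f with vanishing first and second
    derivatives at both ends, so the CD drive switches on and off smoothly.
    (Illustrative choice; any sufficiently smooth monotonic ramp would do.)"""
    s = np.clip((t - t_c) / (t_f - t_c), 0.0, 1.0)
    return zeta_c + (zeta_f - zeta_c) * (10.0 * s**3 - 15.0 * s**4 + 6.0 * s**5)

def zeta_dot(t, t_c, t_f, zeta_c=-2.5e-4, zeta_f=3.0e-4):
    """Time derivative of the ramp; it sets the amplitude of the counterdiabatic drive."""
    s = np.clip((t - t_c) / (t_f - t_c), 0.0, 1.0)
    return (zeta_f - zeta_c) * 30.0 * s**2 * (1.0 - s)**2 / (t_f - t_c)
```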
To study dynamics of our open quantum system, we employ the master equation formalism. However, notice that the extreme nonlinearity of the system when the external softening force is comparable to the intrinsic stiffness demands for a careful open quantum system treatment. Specially when the system enters the DW regime. In Ref. [18] some of us derive the proper dissipators that constitute the master equation describing the interaction of the system with its surrounding environment as the following
\[\dot{\hat{\rho}}=\frac{1}{i\hbar}\big{[}\hat{H},\hat{\rho}\big{]}+\frac{1}{2}\big{[}\hat{z},\hat{\rho}\hat{A}^{\dagger}-\hat{A}\hat{\rho}\big{]}, \tag{4}\]
where \(\hat{H}=\hat{H}_{\text{DW}}+\hat{H}_{\text{drv}}\) includes both the mechanical and drive terms and we have introduced the jump operator \(\hat{A}=\sum_{m>n}\gamma_{mn}\big{(}\bar{N}(\delta_{mn})\ket{m}\bra{n}+[\bar{N}(\delta_{mn})+1]\ket{n}\bra{m}\big{)}\), in which \(\gamma_{mn}=\big{(}2m\omega\delta_{mn}/\hbar Q\big{)}|\bra{m}\hat{z}\ket{n}|^{2}\) is the decay rate from state \(\ket{m}\) to \(\ket{n}\). Here, \(\delta_{mn}=(E_{m}-E_{n})/\hbar\) is the transition frequency, \(Q\) is the quality factor of harmonic mechanical oscillations,
Figure 1: Scheme of the proposed protocol (top panels) and the circuit quantum electromechanical setup (lower panel). The microwave cavity capacitively couples to the graphene membrane (the green sheet) through the gate electrode (red rectangle). The double-well potential forms by applying electrostatic forces via two parallel rod electrodes (blue lines).
and \(\bar{N}(\Omega)=[\exp(\hbar\Omega/k_{\rm B}T)-1]^{-1}\) is the occupation number at temperature \(T\), where \(k_{\rm B}\) is the Boltzmann constant. We numerically solve the above equation with the Hamiltonian made time dependent through \(\zeta(t)\) and with the appropriate CD-drive auxiliary Hamiltonian [55]. Each step of the protocol is performed separately, and the outcome of the previous step is fed in as the initial state for the next one; see Sec. III for the details.
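A minimal route for integrating Eq. (4) numerically is sketched below with plain NumPy/SciPy; this is our own illustrative implementation (not the code of Ref. [55]), it assumes \(\hbar=1\) and the rates \(\gamma_{mn}\) quoted above, and for the time-dependent protocol the operators are simply rebuilt inside the right-hand-side function at each call:

```python
import numpy as np
from scipy.integrate import solve_ivp

def jump_operator(H, z, mass, omega, Q, nbar):
    """Jump operator A of Eq. (4), assembled in the instantaneous eigenbasis of H.
    `nbar(delta)` returns the thermal occupation at transition frequency delta."""
    E, V = np.linalg.eigh(H)
    z_eig = V.conj().T @ z @ V
    A = np.zeros_like(z_eig, dtype=complex)
    for m in range(len(E)):
        for n in range(m):                          # sum over m > n
            delta = E[m] - E[n]
            gamma = 2.0 * mass * omega * delta / Q * abs(z_eig[m, n])**2
            A[m, n] += gamma * nbar(delta)          # upward jumps,   |m><n|
            A[n, m] += gamma * (nbar(delta) + 1.0)  # downward jumps, |n><m|
    return V @ A @ V.conj().T

def master_rhs(t, rho_vec, H, z, A):
    """Right-hand side of Eq. (4) for the vectorized density matrix (hbar = 1)."""
    d = H.shape[0]
    rho = rho_vec.reshape(d, d)
    X = rho @ A.conj().T - A @ rho
    drho = -1j * (H @ rho - rho @ H) + 0.5 * (z @ X - X @ z)
    return drho.ravel()

# Propagation sketch:
# sol = solve_ivp(master_rhs, (0.0, t_final), rho0.ravel(), args=(H, z, A), t_eval=times)
```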
### Setup
Here, we consider and discuss an experimental setup as a possible implementation of the above-discussed scheme. A rectangular monolayer of graphene with dimensions \(w\times L\) is employed as the mechanical resonator, where the goal is to establish a superposition of deflections in the two directions perpendicular to its surface. Thanks to the large Young modulus, flexural modes of a free-standing graphene membrane experience a large Duffing nonlinearity, making them good candidates for implementing our protocol [Appendix A]. The polarizability of graphene allows us to apply the external softening force by applying an electrostatic potential [56]. Two line electrodes can provide the anti-parabola while maintaining the symmetry properties of the membrane; see Fig. 1 for an illustration and Appendix B for the details.
When the pinned membrane boundary conditions are applied, one has \(m=\frac{1}{2}\rho Lwh\) and \(\beta=Yhw/(8\pi^{4}L^{3})\) for the effective mass and Duffing nonlinearity of the resonator, respectively, while the mode frequency is mostly determined by the tensile force, see Appendix A. Here, \(\rho=2.26\times 10^{3}\) kg/m\({}^{3}\) and \(Y=1.02\) TPa are the graphene bulk mass density and Young modulus. By considering a monolayer graphene with \(\{L,w,h\}=\{5,1,3.35\times 10^{-4}\}\)\(\mu\)m one finds \(m=1.9\times 10^{-12}\) kg and \(\beta=3.3\times 10^{13}\) J/m\({}^{4}\). We assume a mechanical frequency of \(\omega/2\pi=2\) MHz for the fundamental flexural mode, the mode which has the highest coupling strength to the cavity.
In fact, to perform the initial cooling as well as for the CD driving a well-controlled quantum system is required to couple and interact with the membrane. Therefore, we consider a circuit electromechanical system where a superconducting microwave cavity capacitively couples to the graphene membrane, see e.g. Ref. [57].
## III Results
The eigenstates of the Hamiltonian (3) are numerically computed for different values of \(\zeta\). By examining different values of \(\zeta>-1\), one clearly sees that the potential changes from an almost harmonic trap at \(\zeta=-1\) to a highly nonlinear single-well trap for \(\zeta\lesssim 0\) and starts to form a DW shape when \(\zeta>0\) [see Appendix C]. A large positive \(\zeta\) gives a deep DW with several pairs of closely spaced energy levels with symmetric and antisymmetric wave functions. Even though such values of \(\zeta\) provide a larger spatial separation between the components of the superposition state, reaching their groundstate becomes increasingly difficult as \(\zeta\) increases. Indeed, the effect is twofold: First, the first excited state, which is the antisymmetric superposition of the up and down deflections, becomes easily accessible by thermal excitations. This can quickly result in a thermal mixture of the two lowest states, which is a classical state. Second, a tight set of energy levels leads to more complicated diabatic transitions and thus an exhaustive counterdiabatic driving scheme must be invoked, which in turn demands more experimental resources. Therefore, here we consider the modest value of \(\zeta_{\rm f}=+3\times 10^{-4}\). The optimal value of the intermediate \(\zeta\) at which the third stage of the protocol starts is numerically found to be \(\zeta_{\rm c}=-2.5\times 10^{-4}\).
Next we notice that \(\partial_{t}\hat{H}_{\rm DW}\propto\hat{z}^{2}\). Therefore, the diabatic transitions can only happen within the even and odd subspaces. Consequently, the counterdiabatic Hamiltonian can be divided into two terms consisting of transitions within the even and odd subspaces, \(\hat{H}_{\rm drv}=\hat{H}_{\rm drv}^{+}+\hat{H}_{\rm drv}^{-}\). Since in our protocol the system is initially in the groundstate \(|0\rangle\) with a fidelity close to unity and the goal is to keep the system in its instantaneous groundstate, it is enough to only include counterdiabatic transitions in the even subspace. Hence, we discard the odd-subspace drive terms and get
\[\hat{H}_{\rm drv}=-\tfrac{1}{2}i\dot{\zeta}m\omega^{2}\sum_{n}\sum_{m\neq n}\frac{\langle n|\hat{z}^{2}|m\rangle}{\delta_{mn}}|n\rangle\langle m|+\text{H.c.}, \tag{5}\]
where \(\dot{\zeta}\) is the time derivative of the control parameter. In order to provide an experimentally feasible protocol, in our numerical analysis we instead consider a counterdiabatic Hamiltonian containing only a few transitions. In the following sections these cases are investigated and, interestingly, we find that only a few counterdiabatic drives are enough. To analyze the performance of our protocol in different situations we compute the fidelity \(F=\text{Tr}\{\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\}\) between the outcome state \(\rho\) at the end of the protocol, resulting from the full dynamics of the master equation (4), and the target state \(\sigma\), which is the groundstate of \(\hat{H}_{\rm DW}\) for \(\zeta=\zeta_{\rm f}\). In Fig. 2(b) the Wigner function of the target state is presented.
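A possible numerical route to Eq. (5) and to the fidelity used as a figure of merit is sketched below, assuming the double-well Hamiltonian and its explicit time derivative are available as matrices; the function names are illustrative, not from an existing package.

```python
import numpy as np
from scipy.linalg import sqrtm

def cd_hamiltonian(H_dw, dH_dt, levels=(0, 2, 4), hbar=1.0):
    """Counterdiabatic drive of Eq. (5) restricted to a few (even) levels.
    dH_dt is the explicit time derivative of H_dw, here proportional to
    zeta_dot * z^2.  Built from the instantaneous eigenbasis of H_dw."""
    E, V = np.linalg.eigh(H_dw)
    H_cd = np.zeros(H_dw.shape, dtype=complex)
    for n in levels:
        for m in levels:
            if m == n:
                continue
            elem = V[:, n].conj() @ dH_dt @ V[:, m]        # <n| dH/dt |m>
            H_cd += 1j * hbar * elem / (E[m] - E[n]) * np.outer(V[:, n], V[:, m].conj())
    return H_cd                                            # Hermitian by construction

def uhlmann_fidelity(rho, sigma):
    """F = Tr sqrt( sqrt(rho) sigma sqrt(rho) ), as defined in the text."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))))
```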
### Full adiabatic preparation
We first consider the case where no CD transition drives are applied during the state preparation. The only difference with the protocol described above is that in all steps \(\hat{H}_{\rm drv}=0\). Moreover, we assume a perfect initialization of the system where the harmonic oscillator (\(\zeta=-1\)) is cooled down to its ground state with a fidelity of unity. The effect of non-ideal initialization will be studied later, see Sec. III.3. In the second step of the protocol the potential applied to the electrodes is tuned such that the control parameter changes from \(\zeta_{\rm i}=-1\) to \(\zeta_{\rm c}=-2.5\times 10^{-4}\). Our numerical calculations show that this stage can be performed in a reasonably short time interval with almost no state degradation, neither due to thermalization nor from diabatic transitions [58]. Employing a proper ramp function, however, is necessary. For \(\zeta<0\) the energy-level spacing is bounded from below as \(\delta_{nm}\gtrsim\omega|\zeta|^{1/2}\). Hence,
a ramp adjusted to the pace of the gap-closing rate can keep the system dynamics away from any diabatic transition down to arbitrary negative values of \(\zeta\), provided that \(\Delta t_{1}\sim 1/\omega\), see the solid blue line in Fig. 2(a). The third step of the protocol is the same as the previous one, but the ramp function for evolving the system into a double-well potential is set to a slower pace. Here, we choose two simple possible ramps, that is, a linear change of \(\zeta\) in time and a sine function [see the red solid and dashed lines in Fig. 2(a)]. The results for the outcome state show that when there is no thermal noise (\(T=0\)) a high fidelity final state can be achieved for long enough processing times through either of the ramp functions. This is clear from Fig. 2(c), where the fidelity of the outcome state with respect to the target state is plotted versus \(\Delta t_{2}\). A linear ramp gives a better fidelity in shorter times. Therefore, in the next step, where the effect of the thermal noise is studied, we only consider a linear ramp function. We now include the thermal noise in the computations. As expected, even for very low temperatures the noise is largely detrimental. Figure 2(d) shows that the fidelity rapidly decreases as the thermal noise is introduced to the system. Note that for obtaining these results we have set \(\Delta t_{1}=1/\omega\) for the second step of the protocol.
### Shortcut to adiabaticity
The counterdiabatic transition drives are now included in the numerics. Let us emphasize that the level spacing for \(\zeta_{\rm i}\leq\zeta\leq\zeta_{\rm c}\) is large enough to allow for a fast evolution of the system without state degradation through diabatic transitions or thermal noise at the temperatures studied in this work. Therefore, the CD drives are only employed for the last step of the protocol, where the system enters the DW realm. As above we set \(\Delta t_{1}=1/\omega\) and find a high fidelity for the state at \(t=t_{\rm c}\). In the third part of the protocol, the counterdiabatic drives producing the Hamiltonian (5) with \(n,m\in\{0,2,\cdots\}\) are introduced, while the control parameter follows the smooth ramp that connects \(\zeta_{\rm c}\) to \(\zeta_{\rm f}\) in the time duration \(\Delta t_{2}\). The ramp functions employed in the first and second parts are plotted in Fig. 2(a). Note that for the STA protocol one can choose a sine as the second ramp function to ensure validity of the boundary conditions stated below Eq. (1). The final outcome of the protocol is a cat state whose Wigner function is very close to the one presented in Fig. 2(b). The outcome has an almost perfect match with the groundstate of the DW potential, with the fidelity \(F\approx 99.7\%\). Achieving the ground state of a deep DW potential with a fidelity as high as \(99\%\) ensures that the system is in a very nonclassical state. This is clearly visible from the large spatial separation of the coherent lobes and the negative features of the Wigner function.
The effects on the fidelity of the environment temperature \(T\), which sets the thermal decoherence rate, and of the ramp duration are now investigated. The results are shown in Fig. 3(a), where the infidelity \(1-F\) of the final state is plotted against the duration of the second ramp \(\Delta t_{2}\) at different temperatures. The destructive effect of the ambient temperature is clear from the curves. Furthermore, a faster ramp accompanied by proper CD drives can tremendously reduce such destructive effects. The thermal noise is very prohibitive, as can be inferred by comparing the results in the absence of thermal noise (black circles) with the ones in its presence. For \(T=0\) K the error slightly reduces as the duration time increases. At finite temperatures the error gets about a hundred times larger when the duration time \(\Delta t_{2}\) increases from \(0.1\) to \(10\)\(\mu\)s. This, however, seems to partially saturate as the temperature increases. In these computations only CD drives up to the fourth level are taken into account. We shortly provide numerical evidence that this is indeed enough for attaining a high fidelity at the end of the protocol.
Figure 2: (a) The ramp functions employed in the second and third steps of the protocol. (b) Wigner function of groundstate of the double-well potential with \(\zeta=3\times 10^{-4}\), the target state. Fidelity of the outcome state with respect to the target state versus duration of the last step of the protocol: (c) For an isolated system when two different ramp functions are considered. (d) At different temperatures when a linear ramp function is employed.
Figure 3: (a) Infidelity of the final state versus duration of the last step of the protocol \(\Delta t_{2}\) for different ambient temperatures. Here, only transition up to fourth level are taken in the numerics, i.e. \(m,n\in\{0,2,4\}\) in Eq. (5). In (b) the effect of including different CD drives in the fidelity of the final state is presented for two specific cases marked by blue circles in (a). The numbers on the horizontal axis refer to the highest level considered, e.g. 6 stands for \(\{m,n\}=\{0,2,4,6\}\).
We now study the effect of the number of levels included in the counterdiabatic transitions on the fidelity of the outcome state. For this, in Eq. (5) first only the lowest transition, i.e. \(|0\rangle\leftrightarrow|2\rangle\), is considered. We find that even though compensating for this transition significantly improves the results, it still leaves room for further enhancement. The matrix element \(\langle 4|\hat{z}^{2}|0\rangle\) assumes a rather large value and thus its diabatic transitions are appreciable. Therefore, introducing a de-exciting mechanism for this transition through the CD drives is expected to enhance the final fidelity. This indeed is numerically confirmed to be the case, as the fidelity takes a leap in Fig. 3(b). The enhancement in the final fidelity resulting from the inclusion of higher transitions is only incremental and seems to saturate. Hence, one only requires a limited number of counterdiabatic drives, only three, to attain a high fidelity cat state.
### Imperfections
Alongside the thermal noise that has been considered in the above numerical analyses, there are several other effects that can still hinder the achievement of a high fidelity cat state. Here, we consider the two most prominent ones, namely a non-ideal initial state and the asymmetry of the double-well potential. Regarding the former, despite successful experimental results, a perfect cooling of the system to its groundstate is not attainable and the residual thermal occupations can be prohibitive for the final cat state. Therefore, instead of a pure groundstate in the first step of the protocol one must indeed consider a thermal state, though with very low thermal occupation numbers, as the input for the second step of the protocol. We numerically analyze this effect for two different initial occupation numbers \(\tilde{N}_{0}=\{5\times 10^{-3},0.2\}\) that correspond to finding the initial harmonic system in its groundstate with the probabilities \(\approx\{99\%,90\%\}\). The results suggest that although at low ambient temperatures the fidelity of the final state decreases by almost the same proportion, at higher temperatures the effect is overwhelmed by the thermal noise, see Fig. 4(a).
Next we study the other obstacle to attaining the cat state, the asymmetry in the DW potential. The small yet non-vanishing \(z^{3}\) contribution in the electrostatic potential introduces asymmetries in the total potential that the mechanical resonator 'feels', see Appendix B. This breaks the parity symmetry of the system and, as a result, alters the counterdiabatic transitions. To see to what extent such asymmetries can affect the outcome of our protocol, we add an extra cubic asymmetry term, parametrized by \(\xi\), to the Hamiltonian (3) and perform the numerical computations. Interestingly, our results show that our protocol is robust against such asymmetric terms in the potential. That is, by only considering CD driving among the even states, and more specifically for only \(n,m\in\{0,2,4\}\) in Eq. (5), the outcome state is a cat state with two identical coherent lobes which resembles the groundstate of a symmetric DW. The fidelity values confirm this visual assessment. The protocol outcome state has an overlap of \(F\approx 81.6\%\) with the symmetric DW groundstate, which is higher than that of the asymmetric case (\(\approx 79.3\%\)). In Figs. 4(b)-(d) the Wigner functions of the three following states are shown respectively: the protocol outcome when the potential is symmetric (\(\xi=0\)), the outcome state for an asymmetric DW with \(\xi=0.01\), and the groundstate of the asymmetric DW (\(\xi=0.01\)). The fidelities of these states with respect to the target state are given in each plot.
## IV Readout
The highly nonlinear nature of the DW potential gives rise to an anharmonic mechanical spectrum. In this section we take advantage of this property and propose a method to read out the mechanical state, for verifying the prepared cat state, via spectroscopy of a driven cavity field coupled to the mechanical mode. A cavity mode weakly coupled to the mechanical resonator adiabatically follows its dynamics and carries the information therein [59; 60; 61; 62]. Thus, the outgoing field of the cavity mode bears fingerprints of the different mechanical transitions weighted by the occupation of the mechanical levels. To analyze the effect we include the cavity mode and its interaction with the mechanical mode in the Hamiltonian. In the frame rotating at the drive frequency the total Hamiltonian reads
\[\hat{H}_{\text{tot}}=-\hbar\Delta\hat{a}^{\dagger}\hat{a}+\hat{H}_{\text{DW}}+ \hbar g\hat{z}(\hat{a}+\hat{a}^{\dagger}), \tag{6}\]
where \(\Delta\) is the laser-cavity detuning and \(g\) is the coupling rate. The dynamics of the cavity mode when driven on resonance (\(\Delta=0\)) is described by the Langevin equation
\[\dot{\hat{a}}=-\tfrac{1}{2}\kappa\hat{a}-ig\hat{z}+\sqrt{\kappa}\hat{a}_{\text {in}}, \tag{7}\]
where \(\kappa\) is the cavity decay rate and \(\hat{a}_{\text{in}}\) is the vacuum noise with the correlation function \(\langle\hat{a}_{\text{in}}(t)\hat{a}_{\text{in}}^{\dagger}(t^{\prime})\rangle= \delta(t-t^{\prime})\) and all
Figure 4: Effect of imperfections in the fidelity of the outcome state: (a) The fidelity as a function of ambient temperature for three different initial occupation numbers of the harmonic system \(\tilde{N}_{0}\). (b) and (c) Wigner functions of the outcome state of the protocol when the DW potential is symmetric (\(\xi=0\)) and asymmetric (\(\xi=0.01\)), respectively. In (d) the Wigner function for the groundstate of an asymmetric DW is shown. The numbers in the corner of the plots indicate their fidelity with respect to the target state. Here, \(\Delta t_{2}=0.1\)\(\mu\)s, and CD drives up to the fourth level are employed. In (b)–(d) the ambient temperature is set to \(T=15\) mK.
the other correlators vanishing. The rigorous solution to the above equation is
\[\hat{a}(t)=\int_{0}^{t}\!ds\ e^{-\frac{\kappa}{2}(t-s)}\big{[}-ig\hat{z}(s)+\sqrt{ \kappa}\hat{a}_{\text{in}}(s)\big{]}, \tag{8}\]
where we have dropped a transient term which is irrelevant in the steady-state limit. Since we are assuming a weakly coupled cavity, the kernel in the integrand decays much faster than any other oscillation, and effectively the upper limit of the integral can be extended to infinity. Hence, we arrive at
\[\hat{a}(t)\approx-ig\sum_{m,n}\frac{z_{mn}(t)|m\rangle\langle n|}{\kappa/2-i \delta_{mn}}+\frac{2}{\sqrt{\kappa}}\hat{a}_{\text{in}}(t), \tag{9}\]
where \(z_{mn}=\langle m|\hat{z}|n\rangle\) are the position matrix elements. Note that \(\delta_{mn}=-\delta_{nm}\) and, because of the Hermitian nature of \(\hat{z}\), one has \(z_{mn}=z_{nm}^{*}\). From the input-output theory the microwave field leaving the cavity (\(\hat{a}^{\text{out}}=\hat{a}^{\text{in}}-\sqrt{\kappa}\hat{a}\)) is indeed carrying information about the mechanical transitions. Therefore, the output field spectrum can be exploited for readout of the mechanical state. The steady-state spectrum of the outgoing cavity field can be computed by employing the quantum regression theorem [63]
\[S(\Omega)=\sum_{n,m}\frac{\kappa g^{2}z_{mn}^{2}}{\kappa^{2}/4+\delta_{mn}^{2 }}L_{mn}(\Omega)\rho_{nn}, \tag{10}\]
where \(\rho_{nn}\) gives the \(n\)th diagonal element of the mechanical density matrix and we have introduced the Lorentzian function \(L_{mn}(\Omega)=\frac{1}{\pi}\frac{\Gamma_{mn}}{(\Omega-\delta_{mn})^{2}+\Gamma_{mn}^{2}}\). Here \(\Gamma_{mn}\) is the decoherence rate of each mechanical transition, whose value for \(m<n\) is \(\gamma_{mn}\bar{N}(\delta_{mn})\), while for \(m>n\) it is \(\gamma_{mn}[\bar{N}(\delta_{mn})+1]\). Note that the frequency of the cavity is the reference (\(\Omega=0\)) in the above equation. Apart from the main peak at \(\Omega=0\) resulting from the drive, the state of the mechanical resonator in the DW potential and its interaction with the cavity leave their trace as sidebands at the mechanical transition frequencies \(\delta_{mn}\) in the output spectrum. Crucially, the fast decay rate of the cavity allows one to extract the mechanical information before its thermalization.
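Equation (10) is straightforward to evaluate once the mechanical eigenproblem has been solved. The sketch below assumes the eigenenergies `E`, the position matrix elements `z[m, n]`, the populations `rho_diag`, the bare decay rates `gamma[m, n]`, and a thermal-occupation function `nbar` are already available; the elastic peak at \(\Omega=0\) from the drive is not included.

```python
import numpy as np

def output_spectrum(Omega, E, z, rho_diag, gamma, nbar, g, kappa, hbar=1.0):
    """Sideband part of the cavity output spectrum, Eq. (10)."""
    S = np.zeros_like(Omega, dtype=float)
    dim = len(E)
    for m in range(dim):
        for n in range(dim):
            if m == n:
                continue                                    # skip the elastic term
            delta = (E[m] - E[n]) / hbar                    # transition frequency
            # Decoherence rate of the (m, n) transition (text below Eq. (10));
            # the thermal occupation is evaluated at |delta| here.
            Gam = gamma[m, n] * (nbar(abs(delta)) + (1.0 if m > n else 0.0))
            lorentzian = Gam / np.pi / ((Omega - delta)**2 + Gam**2)
            weight = kappa * g**2 * z[m, n]**2 / (kappa**2 / 4 + delta**2)
            S += weight * lorentzian * rho_diag[n]
    return S
```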
In Fig. 5 the cavity output spectrum is presented. The plot shows that by studying the cavity output spectrum one clearly distinguishes a pure groundstate of the DW potential from other states. In other words, the spectrum in Fig. 5(a) shows that for the groundstate one only has characteristic sideband peaks at \(\Omega=-\delta_{n,0}\) with \(n=3,5,7\) and, less prominently, at higher odd values of \(n\). Note that the broad peak at \(\Omega\approx 0\) corresponds to the lowest transition \(\delta_{1,0}\), which has the largest decoherence rate \(\Gamma_{1,0}\). The central peak as well as the sidebands grow broader as the mechanical system thermalizes over time. Moreover, extra sidebands emerge that reveal the mixed nature of the state.
## V Summary and conclusion
In summary, we have proposed and numerically analyzed a protocol for preparing a macroscopic spatial superposition state of a massive object. The scheme is based on shortcut-to-adiabaticity preparation of a system in the groundstate of a double well potential. The counterdiabatic driving mechanism has been employed for accelerating the approach to the desired DW form and avoiding thermalization of the system. Our results prove that very high fidelity cat states with large spatial separation can be attained by only compensating for a few diabatic transitions. We have also proposed a setup based on superconducting circuits with graphene for implementing the scheme. The CD driving can be experimentally accomplished by employing driven cavity modes, where each mode drives one transition. Given the limited number of cavity modes in a superconducting circuit, our scheme can prove experimentally feasible. The efficiency and robustness of the protocol have been benchmarked by taking into account various factors and imperfections. The results show that the STA preparation of a macroscopic cat state of a graphene nanoresonator is robust and noise resilient. The mechanical state can be read out through spectroscopy of a resonantly driven cavity mode which is weakly coupled to the resonator. As our investigations have shown, this method can efficiently fingerprint the state thanks to the highly anharmonic nature of the DW potential. Finally, it is worth mentioning that the protocol studied in this work can be combined with the dissipative approach presented in Ref. [18] by adding one final step. In fact, after the third step one could employ a cavity mode for sideband cooling of the \(|1\rangle{\leftrightarrow}|0\rangle\) transition. This transition is the most destructive thermal channel that quickly degrades the ground cat state into a mixed state of two deflections. The cavity cooling can slow down the decoherence and give a longer coherence time, which allows for better detection of the state.
###### Acknowledgements.
The authors thank L. Pakdel and M. Fani for fruitful discussions.
## Appendix A Mechanical properties of the graphene resonator
The elastic properties of a free-standing graphene membrane depend on its geometry as well as on the fabrication method. In this work we have considered a rectangular form
Figure 5: Cavity output spectrum at different times after preparation of the cat state. The curves are shifted vertically for an easier comparison. The right panel gives a closer look at the first sideband. Here, we set \(\kappa=\omega\) and \(g=0.1\kappa\).
for the membrane with dimensions \(L\times w\), which has been pinned along two parallel edges. In the case of graphene on superconducting materials, due to the large differences in the modulus of elasticity, one has a significant tensile force at the edges that hold the graphene membrane atop the support material. This large built-in tension imposes pinned boundary conditions on the mechanical resonator, i.e., the same as those for a string fixed at both ends. Therefore, the mode profiles are \(\varphi_{n}(x)=\sin(n\pi x/L)\) and the frequencies \(\omega_{n}=\sqrt{\mathcal{T}/\mu}(n\pi/L)\) with \(n=1,2,\cdots\). That is, the membrane deflection can be cast in the form \(z(x,t)=\sum_{n}u_{n}(t)\varphi_{n}(x)\) with the mode amplitudes \(u_{n}(t)\). Here, \(\mathcal{T}\) and \(\mu\) are the tensile force at the boundaries and the two-dimensional mass density of the membrane, respectively.
The bending modes cause a small extension in the length of the resonator and consequently give rise to a nonlinearity in the system. This extra tension is \(\Delta\mathcal{T}=Yh(\Delta L/L)\), where \(Y\) is the Young modulus and \(h\) is the thickness of the membrane. The total length stretch is given by \(\Delta L=\frac{1}{2}\int_{0}^{L}dy|\partial_{y}z(y,t)|^{2}\). This brings us to \(m_{n}=\frac{1}{2}\mu Lw\) for the mass and \(\beta_{n}=(Yhw/8L^{3})(n\pi)^{4}\) for the Duffing nonlinearity of the \(n\)th mode, with \(w\) the width of the membrane. These relations have been used in the text for computing the system parameters.
## Appendix B Electrostatic potential
To construct an anti-parabola that mimics the symmetry of the membrane's fundamental mode, we propose to employ two line electrodes of length \(2a\) at \(x=\pm b\), positioned symmetrically beneath the center of the membrane, see Fig. 1. Such a configuration can roughly be modeled by two infinitesimally thin rods. Hence, the resulting electrostatic potential at the point \((x,y=0,z)\) is given by
\[V_{\rm e}(x,z)= V_{0}\Big{(}\ln[\frac{\sqrt{a^{2}+(x-b)^{2}+(z-z_{0})^{2}}+a}{ \sqrt{a^{2}+(x-b)^{2}+(z-z_{0})^{2}}-a}]\] \[+\ln[\frac{\sqrt{a^{2}+(x+b)^{2}+(z-z_{0})^{2}}+a}{\sqrt{a^{2}+( x+b)^{2}+(z-z_{0})^{2}}-a}]\Big{)},\]
where \(V_{0}\) is the potential applied to the electrodes and \(z_{0}\) is the equilibrium distance of the membrane from the surface that includes the electrodes.
By assuming small vibrational amplitudes one Taylor expands \(V_{\rm e}\) around \(z=0\) at \(x=y=0\) to find the effective external potential 'felt' by the membrane: \(V_{\rm e}(z)=\sum_{j}\alpha_{j}z^{j}\). The linear contribution of this external potential leads to a shift in the equilibrium position of the membrane. This, in turn, can be compensated for by the gate potential if necessary. However, the higher expansion terms contribute to the total Hamiltonian of the system and determine its dynamics. In Fig. 6 the coefficients of expansion \(\alpha_{j}\) are plotted against \(b\) for \(a=10z_{0}\). At \(b=z_{0}/\sqrt{3}\) one has \(\alpha_{3}=0\) which, in principle, is desirable since the potential remains symmetric under parity transformation, when neglecting the higher order odd terms.
Nonetheless, the membrane capacitively couples to the superconducting circuit, and a considerable electromechanical coupling rate, which is necessary for the initial sideband cooling, demands a considerable capacitance. That is, a small spacing between the membrane and the gate electrode and yet a large area overlap between them (see the red rectangular electrode in Fig. 1). Therefore, \(b=z_{0}/\sqrt{3}\) does not fulfill our requirements for a considerable electromechanical coupling. Instead, we consider the case of \(b\gg z_{0}\) where \(\alpha_{2}\) remains the dominant term. The negligibility of the higher-order terms is justified by the quantum regime that we are interested in. In fact, in the Hamiltonian one has \(\hat{z}=z_{\rm zpm}(\hat{b}+\hat{b}^{\dagger})\), where \(z_{\rm zpm}=\sqrt{\hbar/2m\omega}\) is the zero point amplitude. For the parameters discussed in our setup this is \(z_{\rm zpm}\sim 1\) pm. Hence, the contribution from the higher-order expansion terms is further suppressed in the quantum regime.
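The expansion coefficients \(\alpha_{j}\) shown in Fig. 6 can be obtained, for instance, by a local polynomial fit of the potential above along \(z\) at \(x=y=0\); the snippet below is an illustrative way of doing this (the fit window and the second electrode position are assumptions, with \(a=10z_{0}\) as in Fig. 6).

```python
import numpy as np

def V_e(x, z, a, b, z0, V0=1.0):
    """Electrostatic potential of the two line electrodes at y = 0 (Appendix B)."""
    def one_rod(xr):
        r = np.sqrt(a**2 + xr**2 + (z - z0)**2)
        return np.log((r + a) / (r - a))
    return V0 * (one_rod(x - b) + one_rod(x + b))

z0, a = 1.0, 10.0                              # lengths in units of z0; a = 10 z0
z = np.linspace(-0.05 * z0, 0.05 * z0, 201)    # small-amplitude window around z = 0
for b in (z0 / np.sqrt(3), 20 * z0):           # the alpha_3-free point quoted above, and b >> z0
    alphas = np.polynomial.polynomial.polyfit(z, V_e(0.0, z, a, b, z0), deg=4)
    print(f"b = {b:6.3f} z0 ->", dict(enumerate(np.round(alphas, 6))))
```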
Figure 7: The top panel: \(\mathbf{\theta}\) as a function of \(\zeta\). In the lower panels the relative error for the eigenvalues of the truncated Hamiltonian are given for different values of \(c_{1}\) for \(\zeta=-2.5\times 10^{-4}\) (left) and different values of \(c_{2}\) for \(\zeta=+3\times 10^{-4}\) (right).
Figure 6: Coefficients of expansion of the electrostatic potential \(\alpha_{j}\) as a function of their separation: \(j=2\) (bold line), \(j=3\) (dashed line), and \(j=4\) (dotted line). Here, we have set \(a=10z_{0}\). The right panel gives a closer look at the large \(b/z_{0}\) values. Note that \(\alpha_{4}\approx 0\) at this regime.
## Appendix C Numerical method
The continuous-variable nature of the system gives an infinite-dimensional Hilbert space whose numerical analysis demands truncation. To avoid the burden of expensive computations due to considering a large Hilbert space, one needs to choose the computational basis carefully. It is easy to check that the eigenbasis of the 'original' harmonic oscillator with frequency \(\omega\) only gives reliable results for very high Hilbert space truncations. By investigation, we find that a harmonic oscillator basis with \(\omega_{0}=\sqrt{\theta\nu/m}=\omega\sqrt{\theta\left|\zeta\right|}\) (for \(\zeta\geq-1\) and \(\theta>0\)) gives much better results for a proper choice of \(\theta\). Hence, we write
\[\hat{z} =\sqrt{\frac{\hbar}{2m\omega_{0}}}(\hat{b}+\hat{b}^{\dagger}), \tag{10a}\] \[\hat{p} =-i\sqrt{\frac{m\hbar\omega_{0}}{2}}(\hat{b}-\hat{b}^{\dagger}), \tag{10b}\]
where \(\hat{b}\) (\(\hat{b}^{\dagger}\)) is the bosonic annihilation (creation) operator with the commutator \([\hat{b},\hat{b}^{\dagger}]=1\). By plugging into (3) we arrive at the Hamiltonian
\[\frac{\hat{H}_{\text{DW}}}{\hbar\omega_{0}}=-\frac{(\hat{b}-\hat{b}^{\dagger} )^{2}}{4}-\frac{\text{sgn}(\zeta)(\hat{b}+\hat{b}^{\dagger})^{2}}{4\theta}+ \frac{\gamma(\hat{b}+\hat{b}^{\dagger})^{4}}{(\theta|\zeta|)^{3/2}}, \tag{11}\]
where \(\gamma=\beta\hbar/(16m^{2}\omega^{3})\) has been introduced.
Note that since in Eq. (11) \(\zeta\) appears in the denominator of the last term one has to be meticulous when dealing with the small values of \(\zeta\). In our numerical analysis, we find by inspection that for the range of \(\zeta\in[-1,-2.5\times 10^{-4}]\) a constant value of \(\theta=c_{1}\) gives results with high precision for a Hilbert space truncated at \(\text{dim}=50\) when \(c_{1}\) is carefully determined. For the remaining range including the final value \(\zeta_{\text{f}}=+3\times 10^{-4}\), we instead tune \(\theta\) such that the denominator remains finite. In other words, we set \(\theta|\zeta|=c_{2}\), where again the optimal value of \(c_{2}\) is found numerically.
Now we discuss the method we used to find the optimal values of \(c_{1}\) and \(c_{2}\). The goal is to have the smallest truncated Hilbert space, yet with high precision. We do this by contrasting the eigenvalues obtained from bases with two truncated dimensions: one high (\(\text{dim}=1000\)) and the other low (\(\text{dim}=50\)), for different values of \(c_{1}\) and then \(c_{2}\). We indicate the former by \(E_{n}^{\text{H}}\), while the latter is indicated by \(E_{n}^{\text{L}}\). A higher truncation of the Hilbert space always gives more reliable results for the states with the lowest eigenvalues. Therefore, they give a good reference for gauging the accuracy of eigenstates in lower Hilbert space truncations. Hence, we define the relative error as \(\varepsilon_{n}=|E_{n}^{\text{H}}-E_{n}^{\text{L}}|/|E_{n}^{\text{H}}+E_{n}^{\text{L}}|\), with \(n=0,1,2,...\) indexing the energy levels.
For the first part that contains large values of \(\left|\zeta\right|\), i.e. \(\zeta\in[-1,-2.5\times 10^{-4}]\), by trying different values for \(c_{1}\) and comparing the errors, we find that \(c_{1}=2\) gives an energy spectrum with high accuracy for up to the 25th level. For the second part we find \(c_{2}=5\times 10^{-4}\) giving the least error for states with \(n\leq 25\), see Fig. 7.
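A minimal sketch of this procedure is given below, assuming the quartic-term convention of Eq. (11) as printed and the same \(\theta\) for both truncations; it builds the dimensionless Hamiltonian in the scaled oscillator basis and returns the relative errors \(\varepsilon_{n}\) used to select \(c_{1}\) and \(c_{2}\).

```python
import numpy as np

def H_dw_scaled(dim, zeta, theta, gamma):
    """Dimensionless double-well Hamiltonian of Eq. (11), in units of hbar*omega_0,
    written in the scaled harmonic-oscillator basis of dimension `dim`."""
    b = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator
    x = b + b.T                                    # (b + b^dag)
    p = b - b.T                                    # (b - b^dag)
    return (-0.25 * (p @ p)
            - np.sign(zeta) / (4.0 * theta) * (x @ x)
            + gamma / (theta * abs(zeta))**1.5 * np.linalg.matrix_power(x, 4))

def relative_errors(zeta, theta, gamma, n_levels=25, dim_lo=50, dim_hi=1000):
    """epsilon_n = |E_n^H - E_n^L| / |E_n^H + E_n^L| between the two truncations."""
    E_lo = np.linalg.eigvalsh(H_dw_scaled(dim_lo, zeta, theta, gamma))[:n_levels]
    E_hi = np.linalg.eigvalsh(H_dw_scaled(dim_hi, zeta, theta, gamma))[:n_levels]
    return np.abs(E_hi - E_lo) / np.abs(E_hi + E_lo)

# Example: scan c1 at zeta = -2.5e-4 (theta = c1), or c2 at zeta = +3e-4
# (theta = c2 / |zeta|), with gamma the dimensionless Duffing strength of Eq. (11).
```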
|
2309.16222 | Experimental evidence of random shock-wave intermittency | We report the experimental observation of intermittency in a regime dominated
by random shock waves on the surface of a fluid. We achieved such a
nondispersive surface-wave field using a magnetic fluid subjected to a high
external magnetic field. We found that the small-scale intermittency of the
wave-amplitude fluctuations is due to shock waves, leading to much more intense
intermittency than previously reported in three-dimensional hydrodynamics
turbulence or in wave turbulence. The statistical properties of intermittency
are found to be in good agreement with the predictions of a Burgerslike
intermittency model. Such experimental evidence of random shock-wave
intermittency could lead to applications in various fields. | Guillaume Ricard, Eric Falcon | 2023-09-28T07:54:40Z | http://arxiv.org/abs/2309.16222v1 | # Experimental evidence of random shock-wave intermittency
###### Abstract
We report the experimental observation of intermittency in a regime dominated by random shock waves on the surface of a fluid. We achieved such a nondispersive surface-wave field using a magnetic fluid subjected to a high external magnetic field. We found that the small-scale intermittency of the wave-amplitude fluctuations is due to shock waves, leading to much more intense intermittency than previously reported in three-dimensional hydrodynamics turbulence or in wave turbulence. The statistical properties of intermittency are found to be in good agreement with the predictions of a Burgerslike intermittency model. Such experimental evidence of random shock-wave intermittency could lead to applications in various fields.
_Introduction.--_ Intermittency is characterized by localized bursts of intense activity that occur even in relatively quiescent flows [1; 2]. It has been extensively investigated in the past decades, especially in three-dimensional (3D) hydrodynamics turbulence [2], and has been ascribed to coherent structures such as vortex filaments [1]. Although it successfully describes the energy cascade, the Kolmogorov dimensional analysis [3] fails to explain the small-scale intermittency observed experimentally [4; 5] and numerically [6; 7]. While several models have attempted to describe it [8; 9; 10], the lack of closure of the Navier-Stokes equations leaves the discussion wide open [5; 6; 11]. Intermittency is a ubiquitous phenomenon that occurs in a wide range of experimental fields, e.g., wave turbulence [12; 13], integrable turbulence [14], solar winds [15], Earth's magnetic field [16] or atmospheric winds [17], turbulent flames in combustion [18], quantum turbulence [19], rotating turbulence [20] or granular systems [21].
Intermittency also occurs for Burgers turbulence, a simplified one-dimensional (1D) model of Navier-Stokes turbulence [22]. Although less complex, Burgers turbulence is more predictable. It predicts the emergence of highly coherent structures, such as random shocks, which govern its statistical properties: the energy spectrum, the probability distribution functions (PDFs) of velocity increments and gradients, as well as intermittency [23]. In the inertial range, Burgers intermittency is predicted by a bifractal model and its origin is due to shock waves [24; 25; 26]. Numerical simulations of intermittency in the 1D stochastically forced Burgers equation have then been performed [27; 28; 29], but experimental evidence of intermittency in Burgers turbulence remains elusive so far, as a regime of random shock waves is hardly reachable experimentally.
Recently, we have experimentally shown that an ensemble of stochastic shock waves can emerge from random gravity-capillary waves on the surface of a fluid made nondispersive using a magnetic fluid [30]. Their fronts are not fully vertical, in contrast to theoretical Burgers shock waves, and they drive the dynamics [30]. Here, we explore the possible intermittent nature of such a random shock-wave-dominated field. We show that shock waves lead to intense small-scale intermittency that is quantified by the PDFs of the increments of the wave-amplitude fluctuations and by high-order statistics. In particular, the experimental structure-function exponents are found in quantitative agreement with a Burgerslike intermittency model, modified to take into account the finite steepness of the experimental shock-wave fronts. When the shock waves are removed by numerical post-processing, the nonlinear wave field is smoother and exhibits much weaker intermittency of a different nature. The latter is close to wave turbulence intermittency, reported experimentally [12; 13] and numerically [31], involving other coherent structures (e.g., sharp-crested waves), whose origin is still a highly debated topic that may be related to the fractal dimension of these coherent structures [32; 33]. Our experimental results thus appear of primary interest regarding the wide range of fields in which Burgers turbulence [23] and wave turbulence [34] occur.
_Theoretical background.--_ Intermittency corresponds to a continuous deformation over the scales of the PDFs of increments of a given field (e.g., fluid-velocity or wave-amplitude fluctuations) [2]. Phenomenological models have been developed since the 60s to quantify such small-scale intermittency [2; 8]. To do so, first-order increments of a temporal signal \(\eta(t)\) are defined as \(\delta\eta(t,\tau)=\eta(t+\tau)-\eta(t)\), where \(\tau\) is the time lag. The scaling properties of the corresponding structure functions of order \(p\), \(\mathcal{S}_{p}(\tau)\equiv\langle|\delta\eta|^{p}\rangle_{t}\sim\tau^{\zeta_{p}}\) (with \(p\) a positive integer), are the key quantities to characterize intermittency. A nonlinear dependence of the exponents \(\zeta_{p}\) on \(p\) is indeed a signature of intermittency. In 3D hydrodynamics, experiments showed a nonlinear scaling of \(\zeta_{p}\) with \(p\)[1; 4] that is not described by the Kolmogorov nonintermittent prediction \(\zeta_{p}=\frac{n-1}{2}p\)[3]. Phenomenological models have been proposed to tackle this discrepancy and predict the nonlinear shape of \(\zeta_{p}\)[2; 8; 9; 10]. Some agree with experiments [4], but do not account for the observed oscillating \(\zeta_{p}\)[5; 6]. For Burgers turbulence, the presence of shock waves leads to strong intermittency. In the limit of vanishing viscosity of the Burgers equation, the structure
functions are then predicted by a bifractal model, since regular random waves and shock waves coexist, as [24; 25; 26]
\[\mathcal{S}_{p}^{\mathrm{ni}}(\tau) \sim\tau^{(n-1)p/2}\ \ \mathrm{for}\ p<2/(n-1)\, \tag{1}\] \[\mathcal{S}_{p}^{\mathrm{B}}(\tau) \sim\tau^{1}\ \ \mathrm{otherwise}. \tag{2}\]
The scaling in \(\zeta_{p}=(n-1)p/2\) in Eq. (1) comes from the smooth random component of the solutions [2; 27]. It is also predictable by dimensional analysis and is related to the exponent \(n\) of the energy spectrum. The second scaling, \(\zeta_{p}=1\) for all \(p>2/(n-1)\), occurs when the shocks dominate, and the unit value is related to the vertical feature of the shock fronts leading to \(\delta\)-Dirac singularities in their increments (see Appendix A). Note that the effects of the spectral bandwidth of the random forcing of the Burgers equation and of dissipation on the scaling of Eq. (2) have been numerically investigated [27; 36]. Nevertheless, to our knowledge, no experimental evidence for such intermittency in Burgers turbulence has been established so far.
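In practice, the structure functions and their exponents defined above are computed directly from the sampled signal. A minimal numpy sketch is given below; the signal array `eta` and its sampling rate `fs` are assumed to be provided by the measurement.

```python
import numpy as np

def structure_functions(eta, fs, taus, orders=range(1, 7)):
    """S_p(tau) = <|eta(t + tau) - eta(t)|^p>_t for a uniformly sampled signal."""
    lags = np.unique(np.round(np.asarray(taus) * fs).astype(int))
    lags = lags[lags > 0]
    S = {p: [] for p in orders}
    for lag in lags:
        incr = eta[lag:] - eta[:-lag]            # first-order increment
        for p in orders:
            S[p].append(np.mean(np.abs(incr) ** p))
    return lags / fs, {p: np.asarray(v) for p, v in S.items()}

def local_slope(tau, S_p):
    """zeta_p(tau) = d log S_p / d log tau, the logarithmic local slope."""
    return np.gradient(np.log(S_p), np.log(tau))
```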
_Experimental setup.--_ The experimental setup is represented in Fig. 1 and has been described in detail in Ref. [30]. A canal of length \(L=15\) cm is filled with a liquid up to a depth of \(h=2\) cm. An electromagnetic shaker with a paddle is located at one end to inject energy in a narrow random frequency bandwidth \(f_{0}\pm\Delta F\), with \(f_{0}=8.5\) Hz and \(\Delta F=2.5\) Hz. The typical rms wave maker amplitude is a few mm. We use a magnetic fluid (Ferrotec PBG400 ferrofluid) subjected to an external horizontal magnetic field \(B\) collinear to the wave propagation. We have previously shown that increasing the strength of the magnetic field enables us to tune the surface-wave dispersion relation \(\omega(k)\), and to achieve, at large \(B\), a nondispersive acousticlike regime in \(\omega(k)\sim k\), where random shock waves dominate [30]. This regime occurs when the dispersive gravity and capillary terms in the theoretical dispersion relation of surface waves are much smaller than the nondispersive magnetic term. For our range of experimental parameters and \(B=650\) G, we showed that the magnetic term is about 20 times larger than the gravity and capillary terms, but weak dispersive effects are still present due to some residual gravity-capillary waves [30]. Such a nondispersive wave-field regime has also been experimentally evidenced by computing the spatiotemporal spectrum of the wave elevation, leading to a spectrum in agreement with \(\omega(k)\sim k\) for \(B=650\) G, whereas the gravity-capillary dispersion relation is observed for \(B=0\) G [30].
Here, we keep the horizontal magnetic field strength constant at \(B=650\) G to observe and characterize the intermittency of the nondispersive regime involving mainly random shock waves. We measure the surface elevation, \(\eta(t)\), at a single point using a homemade capacitive wire gauge (0.2 mm in diameter, 10 \(\upmu\)m in vertical resolution at 2 kHz), located in the middle of the canal, during \(\mathcal{T}=15\) min. We checked that the location of the probe in the canal does not change the results reported here, in particular for shock waves, which conserve their shape (i.e., their discontinuity) along the canal (see Appendix B). To quantify the wave nonlinearity, we measure the mean wave steepness as \(\epsilon\equiv\eta_{\text{rms}}k_{m}\), with \(\eta_{\text{rms}}=\sqrt{\langle\eta(t)^{2}\rangle_{t}}\) the standard deviation of the surface elevation, and \(k_{m}\) the wave number at the wave spectrum maximum (typically at the forcing scale) [13]. \(\epsilon\) is kept constant at a low value of 0.07 to remain in a weakly nonlinear regime.
_Wave energy spectra.--_ A typical example of the fluctuations of the surface elevation, \(\eta(t)\), over time is plotted in Fig. 2(a). Very steep wave fronts emerge as random shock waves (see arrows) corresponding to peaks in the signal difference. A typical shock wave is enlarged in Fig. 2(b), well fitted by a solution of the Burgers equation [22] in \(a\tanh(t/t^{*})\), with \(a\) its amplitude and \(t^{*}\) quantifying its steepness. These shock-wave parameters are inferred from a 15-min signal in which 863 shock waves are detected. We find \(a=1.3\pm 0.4\) mm and \(t^{*}=5.7\pm 2.1\) ms and Gaussian distributions for \(a\) and \(t^{*}\). This very steep profile does not reach a fully vertical front (i.e., \(t^{*}\to 0\)), which occurs only for vanishing viscosity in the Burgers equation. We have shown previously that shock waves are characterized by a discontinuity leading to a \(\delta\)-Dirac singularity in the second-order difference of their amplitude [30]. Using the Fourier transform \(\widehat{\eta}(\omega)\) of \(\eta(t)\), the frequency-power spectrum, \(E_{\eta}(\omega)\equiv|\widehat{\eta}(\omega)|^{2}/\mathcal{T}\), of the wave-amplitude fluctuations is computed and shown in Fig. 2(c). A well-defined power-law scaling in \(E(\omega)\sim\omega^{-4}\) is observed. Such a spectrum of shock waves has been shown to agree with the theoretical Kuznetsov spectrum of second-order singularities [30; 37]. It differs from the classical \(\omega^{-2}\) Burgers spectrum (see Appendix A), as the experimental shock-wave fronts are not fully vertical. When the singularities are removed from the signal by smoothing numerically
Figure 1: Experimental setup. A pair of Helmholtz coils generates a horizontal homogeneous magnetic field B on the ferrofluid surface. Random waves are driven by a wave maker linked to a shaker at one end of the canal. The wave elevation \(\eta(t)\) is measured at a single point using a capacitive wire gauge. \(L=15\) cm, \(L_{y}=2\) cm, and \(h=2\) cm.
the signal around the discontinuities (using a moving-average filter) [30], the corresponding spectrum scales in \(\omega^{-4.8}\) [see red-dashed line for data, and black dash-dotted line for the best fit, in Fig. 2(c)] as it is mainly governed by residual gravity-capillary waves and dissipation. Let us now focus on the statistics of the increments of the surface elevation.
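The Burgers-profile fit mentioned above, \(\eta\simeq a\tanh(t/t^{*})\), can be performed on each detected front with a standard least-squares routine. The sketch below assumes a short time window `t_win`, `eta_win` around one detected discontinuity (hypothetical arrays) and illustrative initial guesses.

```python
import numpy as np
from scipy.optimize import curve_fit

def burgers_front(t, a, t_star, t0, offset):
    """Local Burgers-like shock profile: a * tanh((t - t0)/t_star) + offset."""
    return a * np.tanh((t - t0) / t_star) + offset

# t_win, eta_win: window of the measured elevation around one detected front.
# popt, _ = curve_fit(burgers_front, t_win, eta_win,
#                     p0=(1e-3, 5e-3, np.mean(t_win), 0.0))   # SI units: m, s
# a_fit, t_star_fit = popt[:2]   # compare with a = 1.3 +/- 0.4 mm, t* = 5.7 +/- 2.1 ms
```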
_Probability density functions.--_ For such a steep \(\omega^{-4}\) spectrum, high-order difference statistics are required to test intermittency [35]. In the following we use the fourth-order increment \(\delta^{(4)}\eta=\eta(t+2\tau)-4\eta(t+\tau)+6\eta(t)-4\eta(t-\tau)+\eta(t-2\tau)\). As we have checked, this is more than enough to achieve convergence of the structure functions, which then no longer depend on the order of the increment used [35]. \(\delta^{(4)}\eta\) will be denoted \(\delta\eta\) afterward, for the sake of clarity. The PDFs of \(\delta\eta(\tau)\) are displayed in Fig. 2(d) for different time lags \(\tau\) corresponding to more than one decade in frequency \(f=1/(2\tau)\). The PDFs are found to be almost Gaussian at large \(\tau\), as expected, and display a continuous deformation with decreasing \(\tau\), leading to heavy tails at small \(\tau\), as a signature of intermittency [2]. Its origin is ascribed to the coherent structures, i.e., shock waves, which store energy and travel along the canal. Indeed, at small scales (\(\tau<20\) ms, i.e., \(f>25\) Hz) the PDF tails are heavier in the presence of shock waves (solid lines) than without shock waves (dashed lines). Intermittency thus appears much more pronounced in the presence of shock waves than without, as will be quantified below by the structure-function analysis. Note that, although the shock waves are removed, heavy-tail PDFs still remain since other (less intense) coherent structures are present (see below).
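For completeness, a short sketch of how the fourth-order increment and its rescaled PDFs can be formed from the sampled signal is given below (the lag is expressed in samples; `eta` is the measured elevation).

```python
import numpy as np

def fourth_order_increment(eta, lag):
    """delta^(4) eta(t, tau) for an integer lag (tau = lag / fs)."""
    n = len(eta)
    t = np.arange(2 * lag, n - 2 * lag)
    return (eta[t + 2*lag] - 4*eta[t + lag] + 6*eta[t]
            - 4*eta[t - lag] + eta[t - 2*lag])

def rescaled_pdf(d_eta, bins=200):
    """PDF of the increment normalized by its standard deviation, as in Fig. 2(d)."""
    x = d_eta / np.std(d_eta)
    pdf, edges = np.histogram(x, bins=bins, density=True)
    return 0.5 * (edges[1:] + edges[:-1]), pdf
```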
_Structure functions.--_ We now compute the structure functions \(\mathcal{S}_{p}(\tau)\) in order to quantify the above-reported intermittency. \(\mathcal{S}_{p}(\tau)\) are shown in Fig. 3 with (a) and without (b) shock waves for \(p\in[1,6]\). At first glance, power laws in \(\tau^{\zeta_{p}}\) are observed in the inertial range. To extract these exponents more accurately, we compute the logarithmic derivatives \(\zeta_{p}(\tau)\equiv d\log(S_{p})/d\log(\tau)\), i.e., the logarithmic local slopes, which are shown in Fig. 3 with (c) and without (d) shock waves.
With shock waves, a clear increase of \(\zeta_{p}\) with \(\tau\) is observed in the inertial range for \(p>3\), whereas \(\zeta_{p}\) is found to be roughly constant without shocks, as for classical intermittency [2]. The values of the exponents averaged over \(\tau\) in the inertial range, \(\langle\zeta_{p}\rangle\), are reported in Fig. 4. In the presence of shock waves (full symbols), we find that \(\langle\zeta_{p}\rangle\) increases strongly up to \(p\approx 2\) and much more slowly for larger \(p\). Note that this observed intermittency is much stronger than for 3D hydrodynamic intermittency [4; 5; 6; 7] or wave turbulence intermittency [12; 13], as stronger nonlinearities, namely shock waves, are involved. Such an evolution of \(\langle\zeta_{p}\rangle\) with \(p\) could be qualitatively described by the bifractal model of Burgers turbulence of Eqs. (1)-(2). Indeed, for low \(p\), Eq. (1) predicts \(\zeta_{p}=\frac{n-1}{2}p\) with \(n\) the frequency power-law exponent of the energy spectrum. This nonintermittent scaling, _a la_ Kolmogorov, comes from the smooth component of \(\eta(t)\). For large \(p\), Eq. (2) predicts that \(\zeta_{p}\) is independent of \(p\), such intermittent scaling being ascribed to the shock waves [24; 25; 26]. However, a clear departure is observed between the
Figure 2: (a) Typical signal \(\eta(t)\) (black) and its difference (red) highlighting shock waves (arrows). (b) Zoom on a shock wave (\(\bullet\)) fitted by a Burgers solution in \(a\tanh(t/t^{*})\) (\(a=1.7\) mm, \(t^{*}=4.7\) ms). (c) Frequency-power spectra \(E_{\eta}(f)\) with (solid line) and without (dashed line) shock waves. Black dash-dotted lines: Theoretical spectrum in \(f^{-4}\) for second-order singularities [30; 37] and best fit in \(f^{-4.8}\) for the spectrum without shock wave. (d): PDFs of the fourth-order increments \(\delta\eta(\tau)\) for time lags \(\tau\in[2.5,50]\) ms (see arrows), i.e., \(f\in[10,200]\) Hz, with (solid lines) and without (dashed lines) shock waves. Black dash-dotted line: Gaussian with unit standard deviation. Correlation time is \(\tau_{c}\approx 33\) ms (see Appendix C). PDFs have been shifted vertically for clarity.
Figure 3: Structure functions \(\mathcal{S}_{p}(\tau)\) as a function of \(\tau\) with (a) and without (b) shock waves, for increasing \(p\in[1,6]\) (see arrow). Dash-dotted lines: Power-law fits in \(\tau^{\zeta_{p}}\) within the inertial range (between vertical dashed lines). Exponent \(\zeta_{p}\) versus \(\tau\) with (c) and without (d) shock waves, for increasing \(p\in[1,6]\) (see arrow).
experiments and the Burgers model of Eq. (2) (see full symbols and gray line in Fig. 4). This discrepancy is due to the fact that the experimental shock-wave fronts are not fully vertical, which can be taken into account by modifying the Burgers model (see below). When the shock waves are removed, the experimental evolution of \(\langle\zeta_{p}\rangle\) is strongly different (see empty symbols in Fig. 4). Burgers intermittency has vanished, but intermittency is still weakly present. Indeed, \(\zeta_{p}\) can be fitted by the simplest nonlinear law, \(\zeta_{p}=c_{1}p-\frac{c_{2}}{2}p^{2}\), with \(c_{1}=2\) and \(c_{2}=0.23\) (see dashed-dotted line). \(c_{1}\) is related to the Kolmogorov nonintermittent prediction \(c_{1}\simeq(n-1)/2\) with \(n=4.8\) (see dashed line). The nonzero value of \(c_{2}\), quantifying the deviation from linearity, shows that much weaker intermittency remains, as other coherent structures (i.e., steep gravity-capillary waves) are still present, as routinely reported in wave turbulence [12]. This is also confirmed by comparing the magnitudes of the structure functions with and without shock waves, which differ by a few orders of magnitude at small \(\tau\) once \(p>3\) [compare Figs. 3(a) and 3(b)]. The reason is that shock waves, through their discontinuities, generate peaks in the increment amplitude, \(|\delta\eta|^{p}\), increasing with \(p\), which thus dominate the \(\mathcal{S}_{p}\) values. Another way to quantify intermittency, i.e., the PDF shape deformations over the scales, is to compute the coefficients of flatness, \(\mathcal{S}_{4}/\mathcal{S}_{2}^{2}\), and hyperflatness, \(\mathcal{S}_{6}/\mathcal{S}_{2}^{3}\). We find that they both confirm that strong intermittency is well ascribed to shock waves (see Appendix D).
_Modified Burgers model.--_ The experimental behavior of \(\zeta_{p}\) in Fig. 4 deviates from the Burgers intermittency model [24; 25; 26]. This model assumes shock waves with vertical fronts, and the structure functions are then predicted, in the inertial range, by [25]
\[\mathcal{S}_{p}^{\rm B}(\tau)=\Gamma\tau^{1}\left\langle|\Delta\eta|^{p} \right\rangle_{t},\ \ \text{for}\ \ p>2/(n-1), \tag{3}\]
where \(\Gamma\) is the shock rate (mean number of shocks per second) and \(|\Delta\eta|^{p}=|\delta\eta(\tau,t_{s})|^{p}\) is the \(p\)th moment of the increment amplitude of a shock wave occurring at \(t=t_{s}\). Equation (3) is only valid in the inertial range, which is verified experimentally since the shock-wave singularity duration \(\tau_{s}\sim 1\ \text{ms}\ll\tau\ll\) the duration between shocks \(1/\Gamma\sim 1\ \text{s}\). The shock-wave number and amplitude must also be large enough, as verified experimentally [30]. We first solve numerically the 1D viscous Burgers equation [22]. Burgers shock waves indeed involve vertical fronts (see Appendix A); their increment amplitude is independent of \(\tau\) and their increment width scales as \(\tau^{1}\) [see Fig. 5(a)], as predicted by Eq. (3). Note that the three nondimensional values of \(\tau_{i}\) used in Fig. 5(a) are chosen to correspond to frequencies within the inertial range of the theoretical \(\omega^{-2}\) Burgers power spectrum (see Appendix A).
Experimentally, the shock-wave fronts have a finite steepness. As a consequence, the maximum increment amplitude \(|\Delta\eta|^{p}\) will depend on \(\tau\), contrary to Eq. (3). Indeed, Fig. 5(b) shows the experimental evolution of the increment amplitude for \(p=4\), \(|\delta\eta(\tau)|^{4}\), of a single shock wave, over time and for different \(\tau\). We find that the shock increment width scales as \(\tau^{1}\), as for the Burgers model, but a clear increase of its maximum amplitude occurs with \(\tau\), contrary to the Burgers case [Fig. 5(a)]. Assuming that shock waves are well described by a Burgers solution in \(\eta(t)=a\tanh(t/t^{*})\), with \(t^{*}\) its typical steepness [see Fig. 2(b)], one easily obtains that
Figure 5: Evolution of a single shock-wave increment amplitude for \(p=4\), \(|\delta\eta(t,\tau)|^{4}\) and different \(\tau\): (a) simulations of the Burgers equation and (b) experimental data. (c) Experimental increment maximum amplitude, \(\langle|\Delta\eta|^{p}\rangle_{t}\), versus \(\tau\) for increasing \(p\in[1,6]\) (arrow). Dash-dotted lines: predictions of Eq. (5) for a typical shock-wave geometry of \(a=3.1\ \text{mm}\) and \(t^{*}=7.7\ \text{ms}\). (d) Compensated structure functions \(\mathcal{S}_{p}/\mathcal{S}_{p}^{\rm th}\) for \(p\in[1,6]\). Vertical dashed lines indicate the inertial range.
Figure 4: Exponents \(\langle\zeta_{p}\rangle\) of the structure functions versus \(p\) with (\(\bullet\)) and without (\(\circ\)) shock waves. Gray line: Burgers model of Eqs. (1)-(2). Solid line: modified Burgers model (see text). Dash-dotted line: nonlinear fit in \(\zeta_{p}=c_{1}p-\frac{c_{2}}{2}p^{2}\) with \(c_{1}=2\) and \(c_{2}=0.23\). Dashed line: Kolmogorov nonintermtent prediction of Eq. (1). The errorbars correspond to the \(\zeta_{p}\) standard deviation in the inertial range.
\(\Delta\eta(\tau)=2a\tanh[\tau/(2t^{*})]\). For a fully vertical front, \(t^{*}\) tends towards zero leading to \(\Delta\eta\) independent of \(\tau\). For finite steepness shock waves, Eq. (3) is thus modified as
\[\mathcal{S}_{p}^{\rm mB}(\tau) =\Gamma\tau^{1}\langle|\Delta\eta(\tau)|^{p}\rangle_{t}\,\quad\text{ with} \tag{4}\] \[\langle|\Delta\eta(\tau)|^{p}\rangle_{t} =\left|2a\tanh\left(\frac{\tau-\tau_{0}}{2t^{*}}\right)+\langle| \Delta\eta_{0}|^{p}\rangle_{t}^{1/p}\right|^{p}. \tag{5}\]
As this prediction is only valid when \(\tau>\tau_{s}\), the shortest possible time lag \(\tau_{0}\) (2 ms) and the corresponding shortest increment amplitude \(\langle|\Delta\eta_{0}|^{p}\rangle_{t}=\langle|\Delta\eta(\tau_{0})|^{p}\rangle_{t}\) are needed. The theoretical values \(\zeta_{p}^{\rm th}(\tau)\) thus read \(\zeta_{p}^{\rm th}(\tau)\equiv d\log(S_{p}^{\rm th})/d\log(\tau)\), where \(S_{p}^{\rm th}=S_{p}^{\rm ni}+S_{p}^{\rm mB}\), with \(S_{p}^{\rm ni}=[\mathcal{S}_{2}(0)\tau^{\frac{n-1}{2}}]^{p}\) the nonintermittent part and \(S_{p}^{\rm mB}\) the shock-dominated part of the structure functions from Eq. (4). \(S_{p}^{\rm th}\) is then reported in Fig. 4 (solid line) and is in good agreement with the experimental data (bullets) with no fitting parameter once the typical shock geometry (\(t^{*}\), \(a\)) and the spectral exponent \(n\) are known. The time-averaged amplitude of the shock increments, \(\langle|\Delta\eta(\tau)|^{p}\rangle_{t}\) of Eq. (5), is also successfully compared with the experiments in Fig. 5(c) for different \(p\), leading to mean values of \(a=3.1\) mm and \(t^{*}=7.7\) ms, close to the ones found by directly fitting the shock-wave profile as in Fig. 2(b). Finally, the experimental and theoretical structure functions of order \(p\) are compared by plotting \(\mathcal{S}_{p}/\mathcal{S}_{p}^{\rm th}\) in Fig. 5(d). The curves collapse well towards a constant value of the order of unity within the inertial range, thus validating the modified Burgers model experimentally. The influence of the distribution widths of \(a\) and \(t^{*}\) on the compensated structure function, \(\mathcal{S}_{4}/\mathcal{S}_{4}^{\rm th}\), is shown in Appendix E. \(\mathcal{S}_{4}/\mathcal{S}_{4}^{\rm th}\) shows some fluctuations, but less than one order of magnitude.
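Equations (4)-(5) and the nonintermittent part used for Fig. 4 are simple closed-form expressions; a direct numerical transcription is sketched below (the parameter names and the example values in the final comment are illustrative).

```python
import numpy as np

def S_p_shock(tau, p, Gamma, a, t_star, tau0, dEta0_p):
    """Shock-dominated part S_p^mB of Eq. (4), with the finite-steepness
    increment amplitude of Eq. (5); dEta0_p is <|Delta eta(tau0)|^p>_t."""
    amp = np.abs(2*a*np.tanh((tau - tau0)/(2*t_star)) + dEta0_p**(1.0/p))**p
    return Gamma * tau * amp

def S_p_theory(tau, p, n, S2_0, **shock):
    """S_p^th = S_p^ni + S_p^mB, with S_p^ni = [S_2(0) tau^{(n-1)/2}]^p."""
    S_ni = (S2_0 * tau**((n - 1)/2.0))**p
    return S_ni + S_p_shock(tau, p, **shock)

# Example: S_p_theory(tau, p=4, n=4.0, S2_0=..., Gamma=863/900., a=3.1e-3,
#                     t_star=7.7e-3, tau0=2e-3, dEta0_p=...)   # SI units
```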
_Conclusion.--_ We have reported the experimental observation of intermittency in a regime dominated by random shock waves on the surface of a fluid. Their energy spectrum is well described by the theoretical Kuznetsov spectrum involving a random set of singularities [37]. We have shown that these shock waves lead to small-scale intermittency, quantified by the PDFs of the increment of wave-amplitude fluctuations and by high-order statistics (structure functions). The reported intermittency is found to be much more intense than in 3D hydrodynamics turbulence or wave turbulence. We have developed a Burgerslike intermittency model, modified to take into account the experimental finite steepness of the shock waves, which is found to be in good agreement with the data.
Our results could be applied to other turbulent systems. Indeed, better understanding the role of coherent structures in forming a turbulent spectrum and causing intermittent behavior is crucial, particularly in wave turbulence and 3D turbulence. As intermittency is associated with the singularities of the turbulent flow [38], vortex filaments, for instance, could play the role of the Burgers shocks [2] although they have much more complicated statistics (multifractal instead of bifractal scaling). Since Burgers equation has a number of further applications from condensed matter to cosmology [39], and is formally equivalent to the Kardar-Parisi-Zhang equation describing interface growth dynamics in a random medium [40], to which extent our results can be applied to this range of fields is an open question. Finally, dissipative effects could be tested in the future (as the fluid viscosity in the Burgers equation impacts the shock-wave front \(t^{*}\)) using ferrofluids of different viscosities and constant magnetic properties.
###### Acknowledgements.
This work was supported by the Simons Foundation MPS No. 651463-Wave Turbulence (USA) and the French National Research Agency (ANR Sogood Project No. ANR-21-CE30-0061-04).
## Appendix A Numerical simulations of the 1D viscous Burgers equation
The 1D viscous Burgers equation reads [22]
\[\frac{\partial\eta}{\partial t}+A\eta\frac{\partial\eta}{\partial x}=\nu\frac{ \partial^{2}\eta}{\partial x^{2}}, \tag{6}\]
with \(A\) an arbitrary constant that ensures dimensional homogeneity and \(\nu\) the kinematic viscosity. To numerically solve Eq. (6), we use an implicit scheme with the Crank-Nicolson formulation and a Thomas algorithm with the initial condition \(\eta(x,t=0)=\sin(x)\)[30]. The numerical grid is resolved with 1024 points. Five successive shocks (black) and their first-order difference \(\delta\eta(x)=\eta(x+dx)-\eta(x)\) (red) are displayed in Fig. 6(a) at large \(t\). Shock waves with a vertical front are visible, and their difference is a \(\delta\)-Dirac distribution. The corresponding power spectrum \(E_{\eta}(k)\), displayed in Fig. 6(b), scales as \(k^{-2}\) as expected. In the inertial range, the amplitude of the corresponding increments \(\delta\eta(x,r)=\eta(x+r)-\eta(x)\) is independent of the separation \(r\), conversely to its width [see Fig. 5(a) - the time lag \(\tau\) playing the role of \(r\) since temporal signals are involved experimentally]. The shocks observed experimentally differ from the numerical ones of Fig. 6(a). The experimental fronts are not fully vertical, but have a finite steepness because of residual dispersive effects [see Fig. 2(b)]. This leads to a \(\delta\)-Dirac distribution in the second-order difference \(\delta^{(2)}\eta(t)=\eta(t+2dt)-2\eta(t+dt)+\eta(t)\) and not in the first-order difference. They are also found to appear randomly in the experimental signal [30].
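For reference, a very compact explicit finite-volume sketch of Eq. (6) is given below (Lax-Friedrichs flux for the advection term plus explicit diffusion on a periodic domain). This is not the Crank-Nicolson/Thomas solver used for Fig. 6, only a qualitative illustration of the steepening; the domain and time span are assumptions.

```python
import numpy as np

def solve_burgers(N=1024, A=25.0, nu=2.86e-6, t_end=0.2, cfl=0.4):
    """Explicit sketch of Eq. (6) with eta(x, 0) = sin(x) on [0, 2*pi)."""
    x = np.linspace(0.0, 2*np.pi, N, endpoint=False)
    dx = x[1] - x[0]
    eta, t = np.sin(x), 0.0
    while t < t_end:
        dt = min(cfl * dx / (A * np.max(np.abs(eta)) + 1e-12), t_end - t)
        f = 0.5 * A * eta**2                              # Burgers flux (conservative form)
        f_half = 0.5*(f + np.roll(f, -1)) - 0.5*(dx/dt)*(np.roll(eta, -1) - eta)
        adv = (f_half - np.roll(f_half, 1)) / dx          # flux divergence
        diff = nu * (np.roll(eta, -1) - 2*eta + np.roll(eta, 1)) / dx**2
        eta = eta - dt*adv + dt*diff
        t += dt
    return x, eta   # a steep front has formed near x = pi once t exceeds ~1/A
```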
## Appendix B Shock-wave propagation
The experimental propagation of the surface elevation in response to a single pulse forcing is shown in Fig. 7. This signal is obtained using a spatiotemporal measurement [30]. A shock wave involving a discontinuity is observed traveling along the canal, keeping a self-similar shape.
## Appendix C Correlation time estimation
The normalized autocorrelation function \(C(\tau)\) of a temporal signal \(\eta(t)\) is defined as
\[C(\tau)=\frac{\langle\eta(t+\tau)\eta(t)\rangle_{t}}{\langle\eta(t)^{2}\rangle_{ t}}. \tag{10}\]
The correlation time \(\tau_{c}\) can be inferred from \(C(\tau_{c})=0\). For small values of \(\tau\), \(C\) can be approximated by a parabolic function that provides a better estimation of the correlation time, \(\tau_{c}\), as
\[C(\tau)\approx 1-\frac{\tau^{2}}{\tau_{c}^{2}}. \tag{11}\]
If two points of the signal \(\eta(t)\) are separated by a time lag \(\tau\gg\tau_{c}\) they are fully uncorrelated and are thus independent. The experimental correlation function is displayed in Fig. 8. The parabolic fit of Eq. (11) provides an estimation of the correlation time \(\tau_{c}\approx 33\) ms. The range of \(\tau\) used in the main paper corresponds to \(\tau\ll\tau_{c}\).
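The following Python sketch illustrates this estimation on a synthetic signal; the sampling frequency, the smoothing used to generate the placeholder signal, and the fitting range are assumptions for illustration only.

```python
# Sketch: correlation time tau_c from the normalized autocorrelation of Eq. (10)
# and the parabolic approximation of Eq. (11). The signal eta is synthetic.
import numpy as np

fs = 2000.0                                        # assumed sampling frequency (Hz)
rng = np.random.default_rng(0)
eta = np.convolve(rng.standard_normal(int(60 * fs)), np.hanning(64), mode="same")
eta -= eta.mean()

acf = np.correlate(eta, eta, mode="full")[eta.size - 1:]   # non-negative lags
C = acf / acf[0]                                            # C(tau), Eq. (10)

n_fit = int(5e-3 * fs)                              # fit over the first 5 ms (tau << tau_c)
tau = np.arange(n_fit) / fs
slope = np.polyfit(tau**2, C[:n_fit] - 1.0, 1)[0]   # C - 1 ~ -(tau/tau_c)^2, Eq. (11)
tau_c = 1.0 / np.sqrt(-slope)
print(f"tau_c ~ {tau_c * 1e3:.0f} ms")
```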
## Appendix D Flatness and hyperflatness coefficients
The small-scale intermittency (i.e., the PDF shape deformations over the scales \(\tau\)) can be quantified by the dependence of the flatness coefficient, \(\mathcal{S}_{4}/\mathcal{S}_{2}^{2}\), on \(\tau\), as shown in Fig. 9(a). At large \(\tau\), the flatness is close to \(3\) (the value for a Gaussian) and increases up to \(10^{3}\) at small \(\tau\), corresponding to a much flatter PDF [see Fig. 2(d)]. The same is shown in Fig. 9(b) for the hyperflatness, \(\mathcal{S}_{6}/\mathcal{S}_{2}^{3}\), ranging from \(15\) (the value for a Gaussian) up to \(10^{6}\). These experimental dependencies of the flatness and hyperflatness coefficients are in agreement with the theoretical predictions, \(S_{p}^{\rm th}/(S_{2}^{\rm th})^{p/2}\), where \(S_{p}^{\rm th}=S_{p}^{\rm ni}+S_{p}^{\rm mB}\) with \(S_{p}^{\rm ni}\) the nonintermittent part (valid
Figure 8: Normalized autocorrelation function \(C(\tau)\) of Eq. (10) versus the time lag \(\tau\) (solid blue line) and its parabolic fit from Eq. (11) for small \(\tau\) (black dashed line). \(\tau_{c}\approx 33\) ms.
Figure 6: (a) Numerical solution of Eq. (6) at large \(t\), showing \(\eta(x)\) (black line) with five successive shock waves and their first-order difference \(\delta\eta(x)\) (red line). \(A=25\) and \(\nu=2.86\times 10^{-6}\). (b) Corresponding power spectrum \(E_{\eta}(k)\). Black dash-dotted line: best fit in \(k^{-2.1}\). Vertical dashed lines represent the values of \(k_{i}=1/(2r_{i})\) with \(r_{i}\) chosen to be in the inertial range of the spectrum. The values of \(r_{i}\) are equivalent (changing \(k\) to \(\omega\)) to the values of \(\tau_{i}\) used in Fig. 5(a).
Figure 7: Experiment: Spatial evolution of the surface elevation in response to a single pulse forcing for increasing times (spaced from \(25\) ms, from blue to purple, i.e., from left to right). The arrows indicate the discontinuity location over time.
for small \(p\)) and \(S_{p}^{\rm mB}\) from Eq. (4) of the Burgers-like intermittency model (valid for large \(p\)) and corresponding to the shock-dominated part of the structure functions. When removing the shock waves (\(\lozenge\) in Fig. 9), the flatness drops by a factor of 20 and the hyperflatness by a factor of 500, confirming that the strong intermittency is indeed ascribed to the shock waves.
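A short Python sketch of how these quantities are obtained from a time series is given below; the signal is a random placeholder standing in for the measured wave amplitude.

```python
# Sketch: structure functions S_p(tau) = <|eta(t + tau) - eta(t)|^p> and the
# flatness S_4/S_2^2 and hyperflatness S_6/S_2^3 (~3 and ~15 for a Gaussian signal).
import numpy as np

fs = 2000.0
rng = np.random.default_rng(1)
eta = np.cumsum(rng.standard_normal(int(60 * fs)))       # placeholder signal

lags = np.unique(np.logspace(0, 3, 30).astype(int))      # lags in samples
S = {p: [] for p in (2, 4, 6)}
for lag in lags:
    incr = np.abs(eta[lag:] - eta[:-lag])                # |delta eta(t, tau)|
    for p in S:
        S[p].append(np.mean(incr ** p))
S2, S4, S6 = (np.asarray(S[p]) for p in (2, 4, 6))

tau = lags / fs
flatness = S4 / S2**2
hyperflatness = S6 / S2**3
```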
## Appendix E Influence of the shock-wave parameters on the modified Burgers model
The compensated structure function \(\mathcal{S}_{4}/\mathcal{S}_{4}^{th}\) is plotted for three different values of the shock-wave parameters, \(a\) and \(t^{*}\), to evidence the limits of the modified Burgers model. When using the rms values of the distributions of \(a\) and \(t^{*}\), \(\mathcal{S}_{4}/\mathcal{S}_{4}^{th}\) shows some fluctuations, but less than one order of magnitude, thus validating the model.
|
2309.03149 | Real-time auralization for performers on virtual stages | This article presents an interactive system for stage acoustics
experimentation including considerations for hearing one's own and others'
instruments. The quality of real-time auralization systems for psychophysical
experiments on music performance depends on the system's calibration and
latency, among other factors (e.g. visuals, simulation methods, haptics, etc).
The presented system focuses on the acoustic considerations for laboratory
implementations. The calibration is implemented as a set of filters accounting
for the microphone-instrument distances and the directivity factors, as well as
the transducers' frequency responses. Moreover, sources of errors are
characterized using both state-of-the-art information and derivations from the
mathematical definition of the calibration filter. In order to compensate for
hardware latency without cropping parts of the simulated impulse responses, the
virtual direct sound of musicians hearing themselves is skipped from the
simulation and addressed by letting the actual direct sound reach the listener
through open headphones. The required latency compensation of the interactive
part (i.e. hearing others) meets the minimum distance requirement between
musicians, which is 2 m for the implemented system. Finally, a proof of concept
is provided that includes objective and subjective experiments, which give
support to the feasibility of the proposed setup. | Ernesto Accolti, Lukas Aspöck, Manuj Yadav, Michael Vorländer | 2023-09-06T16:44:50Z | http://arxiv.org/abs/2309.03149v1 | # Real-time auralization for performers on virtual stages
###### Abstract
This article presents an interactive system for stage acoustics experimentation including considerations for hearing one's own and others' instruments. The quality of real-time auralization systems for psychophysical experiments on music performance depends on the system's calibration and latency, among other factors (e.g. visuals, simulation methods, haptics, etc). The presented system focuses on the acoustic considerations for laboratory implementations. The calibration is implemented as a set of filters accounting for the microphone-instrument distances and the directivity factors, as well as the transducers' frequency responses. Moreover, sources of errors are characterized using both state-of-the-art information and derivations from the mathematical definition of the calibration filter. In order to compensate for hardware latency without cropping parts of the simulated impulse responses, the virtual direct sound of musicians hearing themselves is skipped from the simulation and addressed by letting the actual direct sound reach the listener through open headphones. The required latency compensation of the interactive part (i.e. hearing others) meets the minimum distance requirement between musicians, which is 2 m for the implemented system. Finally, a proof of concept is provided that includes objective and subjective experiments, which give support to the feasibility of the proposed setup.
## 1 Introduction
Auralization has a long tradition [1, 2, 3] and has developed greatly in the last decade [4, 5, 6, 7, 8, 9, 10, 11]. Although realism has reached a high state of development, including the incorporation of musical instrument directivities [4], many improvements are still possible [12].
Currently, auralization with high plausibility is emerging in many applications - some even involving real-time processing - primarily for cases where the sources are external to the listener and/or relatively far away. However, some details still remain open research topics. This is especially the case for real-time auralization in applications involving musicians hearing themselves and/or their own instruments, musicians hearing other musicians/instruments who are very close, or other applications involving short source-receiver distances [13, 14, 15, 16, 17, 18]. A similar scheme, intended for voice communication, was introduced in [19] as an overview of a services infrastructure including multiparty immersive audio mixing and management as well as immersive sound rendering.
This paper aims to present a real-time system for auralizing musical performance on a virtual stage by a single musician or an interactive ensemble of a group of musicians. The main contribution of this paper is outlining an elaborate framework for interactions between an arbitrary number of musicians on a virtual stage, limited only by the availability of computing resources and electroacoustic hardware. This includes rigorous considerations of system calibration and latency, which are currently lacking in comparable systems for real-time auralization in the literature (e.g., [15]). A proof of concept with two interconnected musicians on a virtual stage with several acoustic variations is also presented. The system is primarily intended for studying stage acoustics in virtual settings, wherein it is easier to vary the acoustics than in real rooms. However, other potential uses include enabling rehearsal/performance on virtual stages.
Fig. 1 shows the workflow used in this paper for realizing a virtual acoustics scenario. The first step is the experimental design for stage acoustics experiments or for another related purpose (e.g. rehearsal session, recording session, training session, etc). This design includes the definition of both real and virtual rooms including number, locations, and orientations of both real and virtual sources and receivers as well as microphones and headphones. Once the experimental design is set, the second block of activities is the laboratory setup which includes both a physical setup and a digital setup. The physical setup involves placing and connecting all the equipment. The digital setup includes both the modeling (or measurement) of the binaural room impulse responses (BRIRs) for each virtual source-receiver combination and the setup of a Digital Audio Workstation (DAW).
The third block includes the measurements of latencies, microphone sensitivities, and headphone transfer functions, shown in steps 3.a, 3.b, and 3.c, respectively. Step 3.c / 4.a, involving the measurement of headphone sensitivity, can be considered as part of both blocks 3 and 4 because it is both a calibration measurement and a part of the activities with participants. The fourth block is called "activities with participants" and its main part is the listening/playing test labeled 4.c (Run test). Before running the test, the calibration and latency compensation are applied in step 4.b. Although step 4.b does not strictly require the participants to be present, it is grouped in block 4 because the adaptation can be carried out quickly while participants remain in the laboratory after step 4.a, which may include time for briefing and completing some general questionnaires, as was done in the current case. As part of step 4.b, the adapted BRIRs are uploaded to the previously set up DAW session. Finally, step 4.c consists of the main test.
The rest of the paper is organized as follows. Sec. 2 introduces the experimental design (block 1) as well as the laboratory setup (block 2) and the BRIR adaption (block 4b). Sec. 3 describes the required adaption of the BRIRs in terms of block 3 (namely latency compensation and calibration filters), and also an analysis of the main sources of uncertainty. Finally, in Sec. 4, a proof of concept is carried out with six guitar duets, followed by the discussion and conclusions in Secs. 5 and 6, respectively.
Figure 1: Auralization scheme. DAW: Digital audio workstation, BRIR: Binaural room Impulse Response, TF meas.: Transfer function measurement.
## 2 Setup
The goal here is to simulate a virtual stage environment for a solo musician, or for two or more musicians playing simultaneously while each of them is located in an individual isolated booth. To that end, each musician hears both the instrument of the other musicians and his or her own instrument in a simulated room in real-time. For simplicity, each musician is defined with a listener and a player role. Furthermore, each musician is assumed to play a musical instrument that can be considered as one source (Fig. 2).
Fig. 2 shows an example of the experimental setup (see block 2 in Fig. 1) for a system with two players and two listeners, i.e. a system for two musicians in which each of them play one sound source. The upper part represents the virtual situation in which a guitarist and a cellist play music in a simulated concert hall. The lower part shows the electroacoustic setup (e.g. headphones, microphones, and processing blocks) where the core parts are the DAW mixes which perform the real-time convolutions and the signal summations.
Let \(n\in\{1,\cdots,N\}\) denote the \(n\)th listener and \(m\in\{1,\cdots,M\}\) denote the \(m\)th player. Hence, based on the digital input signal \(\mathbf{s}_{i,m}\) from each player \(m\) and the binaural room impulse response \(\mathbf{h}_{m,n}\) at the listener \(n\) due to each player \(m\), the output signal \(\mathbf{s}_{\mathrm{o},n}\) to be sent to the listener \(n\) when \(M\) players are simultaneously playing is computed as
\[\mathbf{s}_{\mathrm{o},n}=\mathbf{h}_{1,n}*\mathbf{s}_{\mathrm{i},1}+\cdots+ \mathbf{h}_{m,n}*\mathbf{s}_{\mathrm{i},m}+\cdots+\mathbf{h}_{M,n}*\mathbf{s} _{\mathrm{i},M} \tag{1}\]
where \(*\) is the convolution operation. This equation can be straightforwardly implemented in a DAW session. For example, the output signal \(\mathbf{s}_{\mathrm{o},n}\) for each listener role can be generated by routing \(M\) stereo subgroups to a mix channel \(n\). Each subgroup \((m,n)\) contains a convolution plug-in which applies the convolution of each of the two channels of \(\mathbf{h}_{m,n}\) with the monaural signal \(\mathbf{s}_{\mathrm{i},m}\) routed from the source input channel \(m\). Setting up a system for implementing this equation is step 2.b in Fig. 1.
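A minimal offline Python analogue of this routing is sketched below; the real-time system performs the same operations with partitioned convolution inside the DAW plug-ins, and all signals and BRIRs here are random placeholders.

```python
# Sketch of Eq. (1): the output s_o,n for listener n is the sum over players m of the
# microphone signal s_i,m convolved with the two channels of the BRIR h_{m,n}.
import numpy as np
from scipy.signal import fftconvolve

def render_listener(n, brirs, inputs):
    """brirs[(m, n)]: (L, 2) stereo BRIR h_{m,n}; inputs[m]: mono signal s_i,m."""
    out = None
    for m, s_in in inputs.items():
        h = brirs[(m, n)]
        contrib = np.stack([fftconvolve(s_in, h[:, ch]) for ch in (0, 1)], axis=-1)
        out = contrib if out is None else out + contrib
    return out

fs = 44100
rng = np.random.default_rng(0)
inputs = {1: rng.standard_normal(fs), 2: rng.standard_normal(fs)}      # placeholders
brirs = {(m, n): 1e-3 * rng.standard_normal((fs // 2, 2))
         for m in (1, 2) for n in (1, 2)}
s_o_1 = render_listener(1, brirs, inputs)   # binaural signal for listener n = 1
```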
The system should be calibrated and latency-compensated in order to avoid the effect of the system itself on the signal that arrives at the musicians' ears. In this paper, the calibration \(\mathbf{k}_{m,n}\) and the latency compensation due to both the audio system latency \(t_{\mathrm{l}}\) and the physical source-microphone distance \(d_{\text{M-S}}\) are applied to the simulated BRIRs \(\mathbf{h}^{\prime}_{m,n}\) as
\[\mathbf{h}_{m,n}=\mathbf{h}^{\prime}_{m,n}*\mathrm{sinc}(\mathbf{t}_{\mathrm{ s}}-t_{\mathrm{l}}-\frac{d_{\text{M-S},m}}{c})*\mathbf{k}_{m,n} \tag{2}\]
where \(\mathbf{t}_{\mathrm{s}}=[0,1/f_{\mathrm{s}},\cdots,T]\) is the time vector with sampling frequency \(f_{\mathrm{s}}\) and \(c\) is the speed of sound. The calibration filter \(\mathbf{k}_{m,n}\), the latency compensation \(t_{\mathrm{l}}\), and the distance compensation \(d_{\text{M-S}}/c\) are analyzed in detail in the next section. This equation is applied in step 4.b in Fig. 1. Although the result of eq. (2) is required in eq. (1), it is simpler to first set up the implementation of eq. (1) in a DAW and then simply adapt the BRIRs \(\mathbf{h}_{m,n}\) for each participating musician according to eq. (2). In most cases the experimental design involves fixed preamplifier gains and a fixed microphone-instrument distance; hence, the only element of the calibration \(\mathbf{k}_{m,n}\) that is modified for each participant is the headphone transfer function.
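The following Python sketch illustrates this adaptation for one BRIR channel: a band-limited (sinc) fractional advance by \(t_{\mathrm{l}}+d_{\text{M-S}}/c\), followed by convolution with the calibration filter. The BRIR, the calibration filter, and the latency value are placeholders.

```python
# Sketch of Eq. (2): advance the simulated BRIR by t_l + d_MS/c with a sinc
# (fractional-delay) kernel, then convolve with the calibration filter k_{m,n}.
import numpy as np
from scipy.signal import fftconvolve

def adapt_brir(h_prime, k, t_l, d_ms, fs, c=343.0):
    shift = -(t_l + d_ms / c) * fs                 # advance in samples (fractional)
    n = np.arange(h_prime.size)
    center = (h_prime.size - 1) // 2
    kernel = np.sinc(n - center - shift)           # band-limited shift kernel
    shifted = fftconvolve(h_prime, kernel, mode="same")
    return fftconvolve(shifted, k, mode="full")

fs = 44100
h_prime = np.zeros(4096)
h_prime[1000] = 1.0                                # toy simulated BRIR channel
k = np.array([1.0])                                # toy calibration filter
h = adapt_brir(h_prime, k, t_l=4e-3, d_ms=0.9, fs=fs)   # peak moves ~292 samples earlier
```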
Due to latency issues (Sec. 3.1), the direct sound is skipped in the impulse response \(\mathbf{h}_{m,n}\) with \(m=n\) (listener \(n\) receives the direct sound from his or her own instrument), and is included in the impulse response \(\mathbf{h}_{m,n}\) with \(m\neq n\) (because listener \(n\) is not in the same room as player \(m\)). For example, in
Figure 2: Example of an experimental design scheme. \(\mathbf{s}_{\mathrm{i},m}\): input signal from player role \(m\), \(\mathbf{s}_{\mathrm{o},n}\): output signal for listener role \(n\), \(h_{m,n}\): impulse response for listener \(n\) due to player \(m\), \(m\in\{1,2\}\), \(n\in\{1,2\}\)
Fig. 2, the direct sound of the guitar is skipped in \(\mathbf{h}_{1,1}\), while the direct sound of the cello is included in \(\mathbf{h}_{2,1}\).
In the example of Fig. 2, the guitarist is the listener \(n=1\) and hears the signal \(\mathbf{s}_{\mathrm{o},1}=\mathbf{h}_{1,1}*\mathbf{s}_{\mathrm{i},1}+\mathbf{h }_{2,1}*\mathbf{s}_{\mathrm{i},2}\) which is processed in real time in the mix 1 of the DAW according to Eq. (1). For that reason, the signals \(\mathbf{s}_{\mathrm{i},1}\) and \(\mathbf{s}_{\mathrm{i},2}\) are picked using microphones in front of each instrument for the guitar and the cello, respectively. Simultaneously, the mix 2 of the DAW renders audio for the cellist who represents the listener \(n~{}=~{}2\). For that purpose, the mix 2 of the DAW implements Eq. (1) by processing the same \(\mathbf{s}_{\mathrm{i},1}\) and \(\mathbf{s}_{\mathrm{i},2}\) signals, but this time convolved with \(\mathbf{h}_{1,2}\) and \(\mathbf{h}_{2,2}\), respectively (i.e. \(\mathbf{s}_{\mathrm{o},2}=\mathbf{h}_{1,2}*\mathbf{s}_{\mathrm{i},1}+\mathbf{ h}_{2,2}*\mathbf{s}_{\mathrm{i},2}\)).
The considered calibration accounts for both conversions from acoustic signals to digital signals and from digital signals to acoustic signals. Sec. 3.2 presents a method for estimating the calibration filters and the analysis of the effects of errors in the input data for this estimation.
## 3 Latency compensation and system calibration
The calibration of the virtual acoustic system can be calculated as the frequency-dependent gain for sending the signal at the microphone in front of player \(m\) to the headphones of listener \(n\). This represents the signal that would be received at the entrance of his or her ear canal in a real situation in free field without the headphones. The calibration filter is defined in the frequency domain of the signals as \(K_{m,n}(f)\).
Fig. 3 shows a scheme of the proposed situation for deducing the calibration filters in a hypothetical anechoic environment. The left part in Fig. 3 shows the hypothetical real situation which represents the playing situation with \(N=2\) listeners, but isolated in an anechoic environment. In this example, just \(M=1\) player plays a guitar, so the input signal is just \(s_{\mathrm{i},1}\) picked at a microphone placed in front of the guitar. The right part in Fig. 3 shows the auralization situation.
Let \(\hat{s}(f)\), \(\hat{p}(f)\), and \(\hat{h}(f)\) be the elements of frequency \(f\) of the Fourier transforms of digital signals \(\mathbf{s}\), sound pressure signals \(\mathbf{p}\), and impulse responses \(\mathbf{h}\), respectively. In the case of Fig. 3 the right side of Eq. (1) keeps just one term corresponding to the only musical instrument in the proposed situation. The element of frequency \(f\) of the Fourier transform of Eq. (1) is
\[\hat{s}_{\mathrm{o},n}(f)=\hat{h}_{\mathrm{ane},m,n}(f)\hat{s}_{\mathrm{i},m} (f), \tag{3}\]
with latency compensated and frequency calibrated impulse response
\[\hat{h}_{\mathrm{ane},m,n}(f)=\hat{h}^{\prime}_{\mathrm{ane},m,n}(f)K_{m,n}(f) e^{-j\omega(t_{\mathrm{i}}+\frac{d_{\mathrm{M-S},m}}{c})}\]
as an alternative of Eq. (2). The notation below will drop \(f\) for brevity.
### Latency
The latency \(t_{\mathrm{l}}\) of off-the-shelf sound interfaces is usually greater than the wave travel-time for listening to a musical instrument played by oneself (e.g. shorter than 0.2 ms for a violinist or about 1.5 ms for a
Figure 3: Scheme of calibration factors required for real-time auralization. Left part: player role. right part: listener roles. \(p_{\mathrm{ec},n}\) sound pressure at the entrance of the ear canal of listener \(n\), \(p_{\mathrm{E,f,n}}\): free-field sound pressure at the position of the head of listener \(n\), \(d_{\mathrm{M-S},m}\) distance from the acoustic center of source \(m\) to microphone \(m\), \(d_{\mathrm{E-S},m,n}\) distance from the acoustic center of source \(m\) to the head of player \(n\), \(p_{\mathrm{M},m}\): sound pressure at a microphone in front of player \(m\), \(s_{\mathrm{i},m}\): input signal from player role \(m\), \(s_{\mathrm{o},n}\): output signal for listener role \(n\), \(m\in\{1\}\), \(n\in\{1,2\}\)
guitarist; some exceptions include pipe-organs, bagpipes, and some percussion instruments).
This is demonstrated in Table 1, which shows the latency of three interfaces measured using the ITA Toolbox [20], and where each interface was set to its smallest buffer size option.
To overcome the hardware latency issue for simulating hearing one's own instrument, one approach includes latency compensation, i.e. time shifting the impulse response. This may be applied without cropping the impulse response for source-receiver distances no closer than an equivalent distance \(e_{\mathrm{d}}=c\times t_{\mathrm{l}}\). Another approach, which is also used in previous systems (e.g., [13, 14, 2]), includes skipping the simulation of the direct sound for the hearing one's own instrument. This approach involves allowing the actual direct sound to reach the musician through open headphones.
Hence, once the direct sound for hearing oneself is skipped, the compensation time limit is the propagation time due to distances between musicians for hearing others, or due to floor reflection path for hearing oneself. For the current implementation, the RME Fireface UC interface meets both these limits: for distances greater than 2 m between musicians and for receiver heights greater than 1.3 m (these estimations include a compensation for a microphone-source distance of \(d_{\mathrm{M-S}}=1\) m), and represents a suitable off-the-shelf interface (Table 1).
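As a small numerical illustration, the measured latencies of Table 1 can be converted into their equivalent acoustic distances; rounding to whole metres reproduces the \(e_{\mathrm{d}}\) column of the table.

```python
# Sketch: equivalent distance e_d = c * t_l for the measured interface latencies of Table 1.
c = 343.0                                   # speed of sound (m/s), assumed value
latencies = {"Fireface UC": 4e-3, "Scarlett 2i2": 13e-3, "Fast Track Ultra": 17e-3}
for name, t_l in latencies.items():
    e_d = c * t_l
    print(f"{name}: t_l = {t_l * 1e3:.0f} ms -> e_d = {e_d:.2f} m (~{round(e_d)} m in Table 1)")
```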
### Calibration
During auralization of the proposed situation (Fig. 3) with headphones, listeners receive the same sound pressure at the ear canal \(p_{\mathrm{ec},n}\) that they would receive in an anechoic environment due to her or his own musical instrument (as in the right-upper part of Fig. 3) or the instrument of another musician (as in the right-lower part of Fig. 3). To that end, the output signal \(s_{\mathrm{o},n}\) should contain the required calibration to yield a sound pressure \(p_{\mathrm{ec},n}\) at the entrance of the ear canal of each musician.
#### 3.2.1 Filter components
In this section, the filter components of Eq. (3) are discussed and estimates of the filter functions are derived. For this purpose, estimates of the sound pressure at the microphone in front of each player and at the ears of each listener are defined first.
The sound pressure at the microphone \(p_{\mathrm{M},m}\) can be obtained from the input signal \(s_{\mathrm{i},m}\) as
\[\hat{p}_{\mathrm{M},m}=\frac{\hat{s}_{\mathrm{i},m}}{S_{\mathrm{M},m}} \tag{4}\]
where \(S_{\mathrm{M},m}\) is the digital sensitivity of the recording system measured as the ratio of the digital signal caused by 1 Pa of sound pressure at the microphone to the digital full scale of the system (i.e. it includes microphone, amplifier, ADC). Fig. 4 shows the digital sensitivity of the recording system consisting of an Oktava MK-012 microphone with an RME FIREFACE UC interface, which is used below for a proof of concept (Sec. 4). It was measured by comparison to a calibrated class 1 measurement microphone in the center of a medium-sized room with reverberation time below 0.1 s. Reflections were cropped with a 15-ms-long rectangular time window centered on the direct sound, as proposed in [21].
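A Python sketch of this comparative sensitivity measurement is given below; the impulse responses are synthetic placeholders and the reference sensitivity value is assumed.

```python
# Sketch: digital sensitivity S_M(f) by comparison with a calibrated reference microphone,
# after a 15 ms rectangular window centred on the direct sound to discard reflections.
import numpy as np

def windowed_direct_sound(ir, fs, win_ms=15.0):
    n_half = int(win_ms * 1e-3 * fs / 2)
    peak = int(np.argmax(np.abs(ir)))
    out = np.zeros_like(ir)
    out[max(peak - n_half, 0):peak + n_half] = ir[max(peak - n_half, 0):peak + n_half]
    return out

def digital_sensitivity(ir_dut, ir_ref, ref_sensitivity, fs):
    """S_M(f) of the device under test, re digital full scale per pascal."""
    H_dut = np.fft.rfft(windowed_direct_sound(ir_dut, fs))
    H_ref = np.fft.rfft(windowed_direct_sound(ir_ref, fs))
    return H_dut / H_ref * ref_sensitivity

fs = 44100
rng = np.random.default_rng(0)
ir_dut = 1e-3 * rng.standard_normal(4096); ir_dut[500] += 1.0   # placeholder IRs
ir_ref = 1e-3 * rng.standard_normal(4096); ir_ref[500] += 0.8
S_M = digital_sensitivity(ir_dut, ir_ref, ref_sensitivity=0.05, fs=fs)
```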
Let \(W_{m}\) be the sound power for source \(m\), \(Q(\Omega)\) be the directivity factor for direction \(\Omega\), and \(\Omega_{\mathrm{M-S},m}\) be the microphone-source direction for source \(m\). Then, the free-field squared sound pressure \(\hat{p}_{\mathrm{M}}^{2}\) measured by a microphone located at a distance \(d_{\mathrm{M-S},m}\) from the acoustic center of source \(m\) is
\[\hat{p}_{\mathrm{M},m}^{2}=\frac{\rho cW_{m}Q(\Omega_{\mathrm{M-S},m})}{4\pi d_{\mathrm{M-S},m}^{2}}e^{j2\pi f\frac{d_{\mathrm{M-S},m}}{c}} \tag{5}\]
where \(\rho\) is the air density and \(c\) is the speed of sound. Note that the location of the acoustic center
\begin{table}
\begin{tabular}{l r r r} Interface & \(S_{\mathrm{B}}\) (samples) & \(t_{\mathrm{l}}\) (ms) & \(e_{\mathrm{d}}\) (m) \\ \hline Fireface UC & 48 & 4 & 1 \\ Scarlett 2i2 & 48 & 13 & 4 \\ Fast Track Ultra & 256 & 17 & 6 \\ \end{tabular}
\end{table}
Table 1: Measured latency of RME Fireface UC, FOCUSRITE Scarlett 2i2, and M-AUDIO Fast Track Ultra interfaces
of the source depends on frequency [22]. Hence, \(W_{m}\), \(Q(\Omega_{\text{M-S},m})\), and also \(d_{\text{M-S},m}\) are functions of frequency.
Let \(\Omega_{\text{E-S},m,n}\) be the direction and \(d_{\text{E-S},m,n}\) the distance from source \(m\) to listener \(n\). Then, the free-field squared sound pressure at the position of the head of listener \(n\) but in absence of the listener is
\[\hat{p}_{\text{E,ff},n}^{2}=\frac{\rho cW_{m}Q(\Omega_{\text{E-S},m,n})}{4\pi d _{\text{E-S},m,n}^{2}} \tag{6}\]
On the one hand, the nominal sound pressure at the blocked entrance of the ear-canal \(\hat{p}_{\text{ec},n}\) is
\[\hat{p}_{\text{ec},n}=\hat{p}_{\text{E,ff,n}}H_{\text{S},m,n} \tag{7}\]
where \(H_{\text{S},m,n}\) is the head related transfer function (HRTF) of listener \(n\) in the direction to the source \(m\). On the other, the actual sound pressure at the blocked entrance of the ear canal \(\hat{p}_{\text{ec},n}\) can be obtained from an output signal \(\hat{s}_{\text{o},n}\) as
\[\hat{p}_{\text{ec},n}=\hat{s}_{\text{o},n}H_{\text{E},n}e^{j2\pi ft_{\mathrm{l}}} \tag{8}\]
where \(H_{\text{E},n}\) is the headphone transfer function (HpTF) of the playback system for the listener \(n\) measured in terms of sound pressure with reference to the digital full scale (i.e. it includes the response of headphones, amplifier, and DAC). Fig. 5 shows the modulus in decibels of the HpTF of the playback system consisting of a LAMBDA STAX headphone with an RME FIREFACE UC interface. These measurements were carried out with a dummy-head with ten repetitions in order to estimate the deviations related to headphones placement. Besides, these results are congruent with results previously obtained by [23] in blocked ear canals of humans.
The calibration is based on a situation of direct sound in free field because it avoids propagating errors from estimations of the acoustic properties of room surfaces, which would be the case for a calibration based on reflections. Although the actual direct sound for hearing oneself passes through the open headphones during the auralizations, the calibration is valid for the simulated impulse response that skips the direct sound. Continuing with headphone properties, the sound insulation is important when skipping the direct sound in the simulations, because the actual direct sound is expected to reach the ears through the headphones (Subsec. 4.2).
Let \(\ell=e^{-j2\pi f(d_{\text{M-S},m}/c+t_{\mathrm{l}})}\) be the latency compensation. Then, substituting Eqs. (4), (5), (6), and (7) in (8) leads to
\[\hat{s}_{\text{o},n}=\frac{d_{\text{M-S},m}\sqrt{Q(\Omega_{\text{E-S},m,n})}}{ H_{\text{E},n}S_{\text{M},m}d_{\text{E-S},m,n}\sqrt{Q(\Omega_{\text{M-S},m})}}H_{ \text{S},m,n}\ell\hat{s}_{\text{i},m} \tag{9}\]
where directivity factor \(Q\) is referred to the mean squared pressure \(\overline{p^{2}}\) across directions \(\Omega\) over a spherical
Figure 4: Recording system sensitivity \(S_{\text{M},m}\) (dB) of Oktava MK-012 microphone with RME FIREFACE UC interface.
Figure 5: Headphone transfer function \(H_{\text{E},n}\) (dB) of LAMBDA STAX headphones with RME FIREFACE UC interface.
surface centered at the source, i.e.
\[Q(\Omega)=\frac{p^{2}(\Omega)}{\overline{p^{2}}} \tag{10}\]
However, the directional factor
\[\Gamma(\Omega)=\frac{p(\Omega)}{p(\Omega_{\rm ref})} \tag{11}\]
is usually preferred as input data for geometrical acoustics (GA) software. Hence, substituting eq. (10) and eq. (11) in eq. (9) yields
\[\hat{s}_{\rm o,n}=\frac{d_{\rm M\mbox{-}S,m}\Gamma(\Omega_{\rm E\mbox{-}S,m,n} )}{H_{\rm E,n}S_{\rm M,m}d_{\rm E\mbox{-}S,m,n}\Gamma(\Omega_{\rm M\mbox{-}S,m} )}H_{\rm S,m,n}\ell\hat{s}_{\rm i,m} \tag{12}\]
Furthermore, since the reference direction coincides with the position of the microphone in front of the source, the respective sound pressures are equivalent, i.e. \(p(\Omega_{\rm M\mbox{-}S,m})=p(\Omega_{\rm ref})\). Thus, the directional factor in the source-microphone direction is unity, i.e. \(\Gamma(\Omega_{\rm M\mbox{-}S,m})=1\).
The distance parameters \(d_{\rm M\mbox{-}S,m}\) and \(d_{\rm E\mbox{-}S,m,n}\) should be measured from the acoustical center of the source \(m\) to the acoustical center of the microphone in front of that source and to the head of listener \(n\), respectively. Hence, all the parameters in Eq. (12) are frequency-dependent, because the acoustical center of a complex source such as a musical instrument is different for each frequency. However, source-receiver distance is usually a single value parameter in the state-of-the-art GA simulators [24] which can be used to estimate the impulse responses \(h_{m,n}\). In order to have an estimate of this effect in the calibration, in this work it is assumed
\[d_{\rm M\mbox{-}S,m}/d_{\rm E\mbox{-}S,m,n}=\tilde{d}_{\rm M\mbox{-}S,m}/ \tilde{d}_{\rm E\mbox{-}S,m,n}E_{\rm K,m,n} \tag{13}\]
where \(\tilde{d}_{\rm M\mbox{-}S,m}\) and \(\tilde{d}_{\rm E\mbox{-}S,m,n}\) are single value estimations of \(d_{\rm M\mbox{-}S,m}\) and \(d_{\rm E\mbox{-}S,m,n}\), respectively. Besides, \(E_{\rm K,m,n}\) is introduced as the error due to these two estimations (subsec. 3.2.2 shows a detailed description of the effect of this source of error on the calibration).
Eq. (12) can be written as Eq. (3) (i.e. \(\hat{s}_{\rm o,n}=\hat{h}_{\rm ane,m,n}\hat{s}_{\rm i,n}\) and \(\hat{h}_{\rm ane,m,n}=\hat{h}^{\prime}_{\rm ane,m,n}K_{m,n}\ell\)) with
\[K_{m,n}=\frac{\tilde{d}_{\rm M\mbox{-}S,m}E_{\rm K,m,n}}{H_{\rm E,n}S_{\rm M, m}\Gamma(\Omega_{\rm M\mbox{-}S,m})} \tag{14}\]
and
\[\hat{h}^{\prime}_{\rm ane,m,n}=\frac{\Gamma(\Omega_{\rm E\mbox{-}S,m,n})H_{ \rm S,m,n}}{\tilde{d}_{\rm E\mbox{-}S,m,n}} \tag{15}\]
Note that Eq. (14) can be used for estimating the calibration \(K_{m,n}\) for hearing oneself (i.e. \(m=n\)) as well as for hearing others (i.e. \(m\neq n\)). Besides, the calibration for the same listener \(n\) for each source \(m\) would modify the values of all parameters in Eq. (14) except \(H_{\rm E,n}\) that holds the same for hearing oneself as well as for hearing others.
In auralizations, Eq. (3) also applies, but the free-field transfer function \(\hat{h}_{\rm ane,m,n}\), which was assumed for deducing the calibration data, is replaced by the room transfer function \(\hat{h}_{m,n}\). Hence, the head-related transfer function \(H_{\rm S}\), the source-head distance \(\tilde{d}_{\rm E\mbox{-}S}\), and the source directional factor \(\Gamma(\Omega_{\rm E\mbox{-}S})\) are incorporated in the binaural room impulse responses (BRIRs) computed by state-of-the-art GA simulators. In this article the BRIRs are calculated with RAVEN [25] and the input data include the source directivity data reported in [26].
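The following Python sketch evaluates Eq. (14) per frequency bin and converts it into a short impulse response; the headphone transfer function, the recording sensitivity, and the directional factor are placeholder arrays, and \(E_{\rm K}\) is set to one.

```python
# Sketch of Eq. (14): K_{m,n}(f) = d_MS * E_K / (H_E(f) * S_M(f) * Gamma(Omega_MS, f)),
# followed by a minimalistic time-domain realization via the inverse real FFT.
import numpy as np

def calibration_filter(H_E, S_M, gamma_ms, d_ms, e_k=1.0):
    return d_ms * e_k / (H_E * S_M * gamma_ms)

n_fft = 1024
freqs = np.fft.rfftfreq(n_fft, d=1 / 44100)
H_E = 0.1 * np.ones_like(freqs)        # placeholder headphone TF (full scale / Pa)
S_M = 0.05 * np.ones_like(freqs)       # placeholder recording sensitivity (full scale / Pa)
gamma_ms = np.ones_like(freqs)         # Gamma = 1 in the microphone (reference) direction
K = calibration_filter(H_E, S_M, gamma_ms, d_ms=0.9)
k = np.roll(np.fft.irfft(K, n=n_fft), n_fft // 2)   # causal FIR; the extra n_fft/2-sample
                                                     # delay would be compensated in practice
```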
#### 3.2.2 The effect of source-receiver distance
The calibrated output signal depends on two distances measured from the same source position as shown above in Eq. (12). One of these distances is measured to the head of a listener (i.e. \(d_{\rm E\mbox{-}S}\)) and the other one to the center of a microphone (i.e. \(d_{\rm M\mbox{-}S}\)).
Let \(\tilde{d}_{\rm M\mbox{-}S}\) and \(\tilde{d}_{\rm E\mbox{-}S}\) be estimations of \(d_{\rm M\mbox{-}S}\) and \(d_{\rm E\mbox{-}S}\), respectively. These estimations - which assume the acoustic center is placed at a fixed point for all frequencies - can be the geometric center, the mean of the acoustic center over all frequencies for which it was aligned for the directivity database in [27], or any other suitable approximation. Then, the errors \(e_{1}\) and \(e_{2}\) indicate the estimation error for \(d_{\rm M\mbox{-}S}\) and \(d_{\rm E\mbox{-}S}\), respectively, and the ratio of these two distances is
\[\frac{d_{\rm M\mbox{-}S}}{d_{\rm E\mbox{-}S}}=\frac{\tilde{d}_{\rm M\mbox{-}S} +e_{1}}{\tilde{d}_{\rm E\mbox{-}S}+e_{2}} \tag{16}\]
In order to study the effect of the distance estimation error on the calibration \(K_{m,n}\), eq. (13) was
proposed. Hence, inserting eq. (13) in eq. (16) gives the effect of the estimation error
\[E_{\rm K}=\frac{1+e_{1}/\tilde{d}_{\rm M\mbox{-}S}}{1+e_{2}/\tilde{d}_{\rm E\mbox {-}S}} \tag{17}\]
Fig. 6 shows results of equation (17) as contour lines in dB units as a function of the distance relative errors \(R_{\rm e,1}=e_{1}/\tilde{d}_{\rm M\mbox{-}S}\) and \(R_{\rm e,2}=e_{2}/\tilde{d}_{\rm E\mbox{-}S}\).
The errors \(e_{1}\) and \(e_{2}\) for each frequency depend on the position of the acoustic center of the source and do not depend on the position of the receiver1 for a given source-receiver direction. Therefore the relative errors \(R_{\rm e,1}\) and \(R_{\rm e,2}\) are smaller for larger source-receiver distances (\(\tilde{d}_{\rm M\mbox{-}S}\) and \(\tilde{d}_{\rm E\mbox{-}S}\)). In other words, the effect of these relative errors in the calibration estimated with single value distances can be smaller for receivers in the audience than for receivers on the stage, because the former are farther from the sources than the latter.
Footnote 1: Actually the acoustic center of the microphone can vary by about a few centimeters, but it is assumed negligible compared to the variation of the acoustic center of the source in this article
In the case of a violin, the acoustic center for low frequencies can be about 15 cm apart from the acoustic center for high frequencies, as reported in [28]. Considering a distance of about 1 m this could yield a relative error \(R_{\rm e,1}\approx 15\%\).
The relative error \(|R_{\rm e,1}|\) can reach about 15% for a cello player hearing himself or herself; thus the magnitude of the effect of the distance estimation error \(E_{\rm K}\) remains below about 2.6 dB for \(|R_{\rm e,2}|\leq|R_{\rm e,1}|\). For singers, woodwind instruments, brass instruments, and other instruments in which the source is very close to the ears, the \(|R_{\rm e,2}|\) error is likely greater. Hence, the proof of concept in this paper is carried out with acoustic guitars, which are usually played at a considerable distance from the ears.
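This bound can be checked numerically from Eq. (17), as in the short sketch below.

```python
# Sketch: effect of the distance estimation errors, Eq. (17), expressed in dB.
import numpy as np

def E_K_dB(r1, r2):
    return 20 * np.log10((1 + r1) / (1 + r2))

print(E_K_dB(0.15, -0.15))   # worst case with |R_e,2| <= |R_e,1| = 15 %: ~2.6 dB
print(E_K_dB(0.15, 0.15))    # 0 dB when both relative errors coincide
```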
For a given relative error \(R_{\rm e,1}\), the effect is null when \(R_{\rm e,2}=R_{\rm e,1}\); this can occur when both receivers are at the same point or by geometric coincidence, both of which are unlikely in practice. This is shown by the 0 dB contour in Fig. 6. Furthermore, for \(R_{\rm e,1}>R_{\rm e,2}\) the effect is positive in dB, whereas for \(R_{\rm e,1}<R_{\rm e,2}\) it is negative in dB.
#### 3.2.3 Source directivity approach
The directivity of musical instruments occurs due to their complex physical systems, which involve a large number of variables. However, the state of the art assumes fractional octave bands as a simplified representation of the radiation patterns because it reduces the computational cost of simulations. Nevertheless, this simplification may not be totally accurate for some partials in some musical instruments [29, 30, 31, 32]. Besides, some partials of the same frequency may have strongly dissimilar radiation patterns for certain instruments. For instance, Fig. 7 shows balloon plots of a clarinet for the same 440 Hz frequency due to the fundamental tone of an A4 note in the upper register and due to the first harmonic of an A3 note in the lower register.
The input data representing directivity can be extracted from several available databases [33, 34, 29, 35, 31, 26, 36]. In this article the directivity database by [26] is chosen because of its high resolution and its centering processing. This database provides up to 4-th order spherical harmonics coefficients for 41 musical instruments of different historical periods considering up to 10-th order partials for each note in a chromatic scale covering the typical range of each instrument. Then, the directivity for each band is estimated
Figure 6: Effect of distances estimation error in dB units (\(E_{\rm K}\)) on calibration
by averaging the directional index of all the partials that fall in that band for that instrument [32, Sec. II-C].
The directivity of two different partials at the same frequency can differ by as little as about \(0.1\) dB in certain directions and by as much as about \(40\) dB in certain other directions. However, it is shown below that the relative likelihood of such huge discrepancies is small compared to that of similarities.
Fig. 8 shows the probability density function (PDF) of the estimation errors of the directivity for each third octave frequency band for a guitar [32]. These estimation errors are defined in each direction as the level differences between the actual directional index for each partial and the average of the the directional index for the corresponding third octave band. Then, the errors are evaluated at \(2,592\) directions over a grid of azimuth and elevation with a \(5^{\circ}\) resolution. Finally, the PDFs are estimated for each frequency band with the kernel method2 on the errors for each of the partials that fall in that band.
Footnote 2: Gaussian \(0.1\) dB Kernel
The error for \(50\) % of the evaluated directions is at most about \(6\) dB, as shown in Fig. 8. It should be noted that the error for partials is likely less noticeable than for fundamental tones, because the latter usually carry more energy than higher-order partials. As shown, the errors are (surprisingly) large, but this is what is currently possible with the state-of-the-art databases.
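A Python sketch of the band-averaging and error-PDF estimation is given below; the per-partial directional indices are random placeholders standing in for the database of [26], and the grid size matches the 5-degree resolution mentioned above.

```python
# Sketch: deviations of per-partial directional indices (dB) from their third-octave
# band average, evaluated on a 5-degree grid, and their PDF with a Gaussian kernel
# of about 0.1 dB bandwidth (cf. Fig. 8). Data are placeholders, not Ref. [26].
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n_dirs, n_partials = 72 * 36, 7                    # 2592 directions, partials in one band
di_partials = 3.0 * rng.standard_normal((n_partials, n_dirs))

di_band = di_partials.mean(axis=0)                 # band-averaged directional index
errors = (di_partials - di_band).ravel()           # per-direction deviations (dB)

kde = gaussian_kde(errors, bw_method=0.1 / errors.std())
grid = np.linspace(-15, 15, 301)
pdf = kde(grid)
print(f"50 % of directions within +/- {np.percentile(np.abs(errors), 50):.1f} dB")
```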
Musicians' movements may also affect the radiation pattern as well as the source position and orientation. Thus, the movements of musicians can be incorporated, as in [37], by RAVEN's animation module. However, in this article the sound field is simulated with static sources only, as a proof of concept.
## 4 Proof of concept
In this section, the implementation and validation of a two-player, two-listener setup in an interactive virtual-acoustics system is described. To that end, the three subsections show an analysis comparing a real measurement with a simulation, an objective determination of the calibration functions, and a subjective evaluation of the system, respectively.
### Simulation of a measured setup
In this subsection a comparison of simulation versus measurement of a setup for one musician hearing oneself is documented. Therefore, a real room with a dummy head and a loudspeaker was both measured and simulated. The room is about \(6\) m wide, \(8\) m long, and \(3\) m high; the inner surfaces are acoustically hard with mean absorption coefficient of \(0.06\)
Figure 8: Estimated probability density functions of directional factor estimation errors for a guitar. The dashed area shows the percentile \(25\) and \(75\), the red solid line shows the mean and the white solid line shows the median.
Figure 7: Comparison of balloon plots for a partial tone generated with the low register of a clarinet and a fundamental tone with almost the same frequency but played in the high register. Legend in dB scale (data extracted from [26]).
(see Scene 9 in [7]). Fig. 9 shows the room geometry.
A binaural room impulse response (BRIR) was measured with a dummy head using a directional loudspeaker (Genelec 8020c) as a source placed near the dummy head. Fig. 10 shows a photo of the measurement setup inside the empty room, i.e. the only objects in the room were the dummy head and the loudspeaker; the sound interface and the computer were placed outside the room.
The probability density functions of directivity estimation errors for the Genelec 8020c are shown in Fig. 11. The source data consisting of the radiation patterns in a 1-degree resolution spherical grid are available in [38]. The analysis was performed as in [32], by averaging the radiation pattern in frequency bands and comparing the resulting value with the radiation pattern for each frequency within each band.
About 50 % of the errors are within the \(\pm\)6 dB range for each 1/3 octave band below 4 kHz. This error range increases with frequency likely due to the higher variation of directivity within wider 1/3 octave bands at higher frequencies.
Fig. 12 shows the envelope of the measured BRIR. It was measured and processed with the ITA-toolbox [39, 20]. The envelope was estimated with the Hilbert transform as in [40] because it provides a good descriptor for analyzing the texture of room impulse responses.
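The following Python sketch reproduces this envelope estimation on a placeholder BRIR; the band limits correspond to the mid-frequency range used in Figs. 12 and 13.

```python
# Sketch: band-limit the BRIR to 353-2828 Hz (the 500 Hz to 2 kHz octave bands) and take
# the envelope as the magnitude of the analytic signal from the Hilbert transform.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 44100
rng = np.random.default_rng(0)
n = fs // 2
brir = rng.standard_normal(n) * np.exp(-np.arange(n) / (0.3 * fs))   # placeholder BRIR

sos = butter(4, [353, 2828], btype="bandpass", fs=fs, output="sos")
brir_mid = sosfiltfilt(sos, brir)
envelope = np.abs(hilbert(brir_mid))
envelope_db = 20 * np.log10(envelope / envelope.max() + 1e-12)
```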
Figure 11: Estimated probability density functions of directivity’s estimation errors of a Genelec 8020c. The dashed area shows the percentile 25 and 75, the red solid line shows the mean and the white solid line shows the median.
Figure 12: Measured binaural room impulse response. Envelope estimated with the Hilbert transform in the mid-frequency range from 353 Hz to 2828 Hz (i.e. the 500 Hz to 2 kHz octave bands)
Figure 10: Measurement setup for binaural room impulse response using a Genelec 8020c loudspeaker and a dummy head in a 146 m\({}^{3}\) room
Figure 9: Room geometry
The same setup was also simulated with GA modeling in RAVEN [25]. The measured HRTF of the same dummy head and the measured directivity of the same loudspeaker (Genelec 8020c) were included as RAVEN input data. Then, the resulting BRIR was convolved with the loudspeaker impulse response, because the measured directivity of the loudspeaker is normalized to the radiation in the axial direction. Fig. 13 shows the envelope of the simulated BRIR.
Comparing the results in Figs. 12 and 13, the measurement shows a direct sound at \(\Delta t\approx 2\) ms that is not present in the simulation because the direct sound is excluded in the hearing-oneself case. Then, the first reflection is from the floor for both the measurement and the simulation. Furthermore, the energy of the subsequent early reflections in the simulation is broadly similar to that of the measurement. However, these subsequent reflections do not match as closely as the first reflection, which is due to the mentioned calibration uncertainties, simulation input data, and simulation modeling. The modeling of the sound diffusion is achieved by RAVEN as a gradual mix of the ray tracing model with the image source model. Hence, the sound energy is similarly distributed in a statistical sense across short time intervals.
The technique for the hearing-oneself case is to keep the direct sound as in the measured case (Fig. 12) and to replace the subsequent sound energy with the simulation excluding the direct sound (Fig. 13). This is carried out by placing each musician in a test room with open headphones that allow the direct sound of her or his own instrument to pass through the cups of the headphones, while the reflections, processed in real time, are played back over the headphones. The following two subsections explore this technique.
### Objective evaluation
In this subsection, the calibration is estimated and the effect of headphone attenuation on the real direct sound is shown for a general case. This case considers the dummy head and the loudspeaker in Fig. 10 placed in a free-field environment, emulating a virtual environment for a soloist musician (\(M=1\)) in the hearing-oneself configuration (\(N=1\)).
Fig. 14 shows the modulus in decibels of the calibration obtained for this configuration. Besides, it includes \(S_{\mathrm{M},m}\) from Fig. 4 and \(H_{\mathrm{E},n}\) from Fig. 5 as references.
The calibration is carried out with eq. (14), assuming no distance-estimation error (i.e. setting \(E_{\mathrm{K},m,n}=1\)). The microphones used for estimating \(H_{\mathrm{E},n}\) and \(S_{\mathrm{M},m}\) in eq. (14) (Sennheiser KE4 and Oktava MK-012, respectively) were calibrated by comparison to a calibrated class 1 measurement microphone in the center
Figure 14: Calibration for a simulated guitarist in a hearing oneself configuration
Figure 13: Simulated binaural room impulse response. Envelope estimated with the Hilbert transform in the mid-frequency range from 353 Hz to 2828 Hz (i.e. the 500 Hz to 2 kHz octave bands)
of a medium-sized room with a reverberation time below 0.2 s [41, see the room called Virtual laboratory in Sec. 2.3].
The actual direct sound arrives from outside the headphones and just the reflections are simulated through the loudspeakers of the headphones. Hence, the attenuation of the headphones is assumed as an error in representing the direct sound for the hearing oneself case (i.e. it does not apply when listener \(n\) hears musician \(m\neq n\)). Fig. 15 reports the sound attenuation measured for LAMBDA STAX and Sennheiser HD 650 with a dummy-head and a loudspeaker in twelve equidistantly distributed positions in a sphere of 2.5 m radius. LAMBDA STAX headphones were used for the following proof of concepts as they involve lesser attenuation of the direct sound.
### Subjective evaluation
In order to provide subjective support, an experiment with six actual guitar duets was conducted. Each duet interacted over several simulated scenarios. Per scenario, they also evaluated the overall quality, and the similarity to playing in a real room.
The twelve guitarists' mean age was 30.4 years (4.8 years standard deviation), and all reported normal hearing. Although only one of them is a professional musician, every guitarist reported some guitar-ensemble experience, either within the past ten years or during the current year. Musicians received monetary compensation.
Each duet performed in eight different scenarios, whose main characteristics are shown in Fig. 16. The eight scenarios have the same audience plan dimensions (25 m \(\times\) 20 m) and height (16 m). The width and height of stages are shown in Tab. 2.
The first two scenarios (R\({}_{1}\) and R\({}_{2}\)) were provided for the musicians to get familiar with the experiment. The first scenario R\({}_{1}\) had a tight stage shell (see blue long dashed line in Fig. 16) which provides acoustic support (with walls 1 m apart from musicians and a splayed ceiling) and the second one R\({}_{2}\) is just the stage and the audience surfaces as in an outdoors situation (i.e. without any walls or ceiling). This was followed by the six experimental scenarios (A, B, C, D, and two times M) in random order. The experimental scenarios are shoebox-shaped stages whose dimensions were set by the four possible combinations (A, B, C, and D) of a \(2^{k}\) statistical design varying two factors \(k\) at two levels plus a central point (M) with a repetition. The statistical design involves two
\begin{table}
\begin{tabular}{l r r r r r r r} Scenario & R\({}_{1}\) & R\({}_{2}\) & A & B & C & D & M \\ \hline Stage height (m) & 4 to 9 & – & 9 & 18 & 9 & 18 & 13.5 \\ Stage width (m) & 12 & 20 & 20 & 20 & 12 & 12 & 16 \\ \hline \hline Reverberation time (s) & 0.9 & – & 1.0 & 1.1 & 0.9 & 1.0 & 1.0 \\ \hline \hline Early stage support (\(ST_{\mathrm{E}}\); dB) & -12.4 & – & -13.6 & -16.6 & -11.8 & -14.2 & -14.4 \\ Late stage support (\(ST_{\mathrm{L}}\); dB) & -18.1 & – & -16.0 & -16.2 & -15.7 & -16.0 & -16.6 \\ \end{tabular}
\end{table}
Table 2: Configuration of the scenarios and relevant acoustic parameters
Figure 15: Attenuation of LAMBDA STAX and Sennheiser HD 650
levels of width (12 m and 20 m, see red dotted line and solid line, respectively, in Fig. 16), two levels of height (9 m and 18 m, see black dash-dotted line and solid line, respectively, in Fig. 16), and one level of depth (11 m).
Tab. 2 also shows acoustical descriptors of the virtual scenarios. Due to identical audience simulation (i.e., geometry and materials are the same), the reverberation time is similar for all the scenarios except for the free-field case R\({}_{2}\). Early stage support \(ST_{\text{E}}\) and late stage support \(ST_{\text{L}}\) are reported in Tab. 2[42]. They are the average values for octave bands from 250 Hz to 2 kHz and for three source positions, each with three receiver positions.
The scenarios' dimensions were chosen in order to have some diversity in stage acoustic quality, as reported in previous work on stage acoustics of real rooms [43]. Besides, the \(ST_{\text{E}}\) range of the scenarios is \([-16.6,-11.8]\) dB, with about a 5 dB difference between scenarios B and C. The \(ST_{\text{E}}\) values of scenarios A and D are similar, with about a 0.6 dB difference.
The distance between guitarists was set at a constant virtual distance of 10 m, i.e., \(d_{\text{S-E},1,2}=d_{\text{S-E},2,1}=10\) m between musicians. This was primarily to ensure that the differences between experimental conditions were easier to perceive. Smaller distances between musicians increase the ratio of direct to reflected sound, in turn making the reflected field harder to perceive. The latter was also confirmed during pre-experiment explorations where, for distances more typical in real ensembles (e.g., around 2 m), it was hard to notice differences in experimental variations. This does not imply that perceiving differences in stage acoustics is hard in general, but just that they were relatively harder to perceive with shorter inter-musician distances for some of the current set of experimental conditions.
For each duet, the two musicians were located in separate rooms connected only aurally by their instruments (i.e., with microphones and headphones implementing eq. (1)). The microphones were Oktava MK-012 and the headphones were STAX LAMBDA (Figs. 4, 5, and 15 show the microphone sensitivity, the HpTF, and the headphone cups' attenuation, respectively). Besides, the source-microphone distance was \(\tilde{d}_{\text{M-S},m}=0.9\) m for both guitarists \(m\in\{1,2\}\).
Musicians were informed that they would play on a virtual stage, positioned 3 m from the edge of the stage and 10 m apart from each other. Since guitar duets usually play with shorter inter-musician distances (roughly 2 m apart typically), they were instructed to imagine that the 10 m distance was due to them being part of a bigger ensemble. They were also informed on their virtual orientations, which were quite different for each musician (Fig. 16a). Looking from the stage to the audience, the head of the musician at the right (Mus. 1) is turned left to the fretboard and the head of the musician at the left (Mus. 2) is directed to the audience in front and between the fretboard and the position of the other musician3.
Footnote 3: This setup for the orientation of the receivers was decided based on recorded videos of several professional duets. All participants were right-handed players; hence no other configuration was required.
Figure 16: Scheme of the virtual scenario. Solid line: bigger room with high level of height and width. Red dotted line: low level width. Black dash-dotted line: low level height. Blue long dashed line: tight-shell
Participant duets were asked to play two short (about 60 s long) musical pieces for guitar ensemble (one of them allegro and the other one adagietto or andante) in the simulated scenarios. Although the participants communicated between the scenarios using their voices, they were asked to base their judgments just on guitar signals, acoustics, and ensemble playing, and not on visuals. To that end, they also tested the scenarios with their guitars playing single tones, chords, arpeggios, or improvising at the beginning and the end of each scenario.
Hence, the presented system substantially extends the possibilities of multi-musician auralizations compared to existing systems for one musician interacting in real-time with recorded signals [13, 14]. A key advantage of the presented system is the increased interactivity: musicians can modify their performances due to the real-time ensemble. This represents a more dynamic and closer-to-reality playing condition compared to interacting with recordings.
The critical latency issue for the hearing-oneself configuration (less than about 3 ms) is addressed in the presented system by avoiding the digital simulation of the direct sound. This technique, which has previously been used in non-interactive systems (e.g., [13, 14]), allows for a larger latency of about 6 ms considering a 2 m distance between musicians.
On the one hand, some latency may not interfere with the tempo of the musical performance, although it may change the perceived room acoustics. Hence, proper latency compensation is an important consideration in systems intended for stage acoustics studies. On the other hand, the drawback of not simulating the direct sound is that there will be some direct sound attenuation due to the headphones. To minimize this effect, headphones with relatively small attenuation (about 6 dB at frequencies above 2 kHz; Fig. 15) compared to an alternative were used in the subjective evaluation of this paper. Further, in the case of a guitarist in a hearing-oneself configuration, the earliest reflection comes from the guitar-floor-ear path, which was accurately simulated with a suitable off-the-shelf hardware interface.
Besides the latency issue, the implications of the calibration filter were analyzed. To that end, a comprehensive theoretical derivation of the components of a calibration filter for the system was presented. It was determined that the three main sources of error are: the directivity of sources, the distances from the source to the microphone and to the head, and the transfer function of the headphones. Although not considered in this paper, head movements of the guitarists may affect these three sources of error (i.e. the headphone transfer function may be slightly modified if the headphones move during the performance).
The uncertainty due to band averaging the source directivity is comprehensively analyzed elsewhere [32]. However, it is important to note that, as far as the authors know, no study has investigated the directivity differences among units of the same instrument. Furthermore, each musician played a different guitar in the experiments because musicians were encouraged to play with their own instruments for comfort. This is a clear weakness of this study and other similar studies. Hence, investigating this uncertainty further is an open research topic.
The effects of the single number estimation of source-microphone and source-head distances derived in this paper (Fig. 6) also apply to listeners in the audience of performance spaces, conference halls, or other rooms. This effect grows with relative errors in distances. Therefore, larger real source-microphone and virtual source-head distances are less sensitive to this effect.
The objective proof of concept showed encouraging results; hence, a subjective experiment was carried out. The subjective evaluation showed that playing in the simulated scenarios was judged by all participants as scoring above the middle of a scale measuring similarity to playing in a real room (Fig. 18). Furthermore, for the more experienced musicians, this rating was closer to the maximum score.
The other weaknesses of this setup are related to the lack of visual feedback and the lack of response to source and head movements (especially for head movements of the guitarist whose left hand is closer to the side wall). Improving both these weaknesses is challenging due to latency issues as discussed in the following two paragraphs.
On the one hand, preliminary experiments carried out prior to the proof of concept with experienced guitarist participants showed that static visual feedback did not improve the ratings of similarity to playing in a real room (Fig. 18). On the other hand, latency compensation of real-time video feedback is challenging. A latency mismatch between video and audio could introduce further uncontrolled variables, which was avoided here by excluding visual feedback altogether. Visual feedback can be improved by positioning musicians in the same room or in separated
rooms with glass windows, hence visually connecting them like in recording studios. However a recording-studio-like setup could also interfere with aural cues of distance between musicians. Nevertheless, improving audio-visual integration represents future work.
Although not studied in this article, placing two or more musicians in the same room may be useful to reduce the distance between them below 2 m. Of course, in this case the direct sound would be skipped in the room impulse responses for hearing others (i.e. \(m\neq n\)). This, however, could not be achieved in the current implementation with musicians playing in separate rooms. Moreover, placing musicians in the same room reduces the flexibility in varying the distances between them across experimental scenarios, and/or in setting virtual distances beyond the dimensions of the actual laboratory rooms; both of these are possible within the current system.
Tracking head movements represents the current state-of-the-art technique for auralization of prerecorded material. However, when the audio signals are generated only 6 ms in advance, tracking those movements becomes a challenging task.
Despite these limitations, the system was able to provide a realistic and engaging listening and playing experience. The results of this study suggest that real-time auralization of musical performance is a promising technology with the potential to provide experimental platform for stage acoustics investigations as well as musical performance training.
## 6 Conclusions
An interactive and real-time system for stage acoustics experimentation is proposed and validated. The system enables interactive listening, including hearing oneself as well as hearing others, while playing simultaneously on a virtual stage. The latency issues are circumvented by time compensation of the simulated impulse responses. The main sources of uncertainty are discussed based on state-of-the-art practices and databases. Future studies are recommended to address knowledge gaps regarding source directivities including their interaction with typical movements by musicians during playing. The results from a pilot study with guitar duets show promise in terms of feasibility of the system for duets. Moreover, the current system can be scaled to accommodate a larger number of musicians and musical instruments. This can be useful in enabling virtual stage acoustics studies with a large number of musicians concurrently, which currently represents a challenge.
## Acknowledgment
The work of E. A. in Germany was supported by the Alexander von Humboldt Foundation. M.Y. was supported by a DFG (German Research Foundation) grant - Project number 503914237.
|
2309.08593 | Attention-Only Transformers and Implementing MLPs with Attention Heads | The transformer architecture is widely used in machine learning models and
consists of two alternating sublayers: attention heads and MLPs. We prove that
an MLP neuron can be implemented by a masked attention head with internal
dimension 1 so long as the MLP's activation function comes from a restricted
class including SiLU and close approximations of ReLU and GeLU. This allows one
to convert an MLP-and-attention transformer into an attention-only transformer
at the cost of greatly increasing the number of attention heads. We also prove
that attention heads can perform the components of an MLP (linear
transformations and activation functions) separately. Finally, we prove that
attention heads can encode arbitrary masking patterns in their weight matrices
to within arbitrarily small error. | Robert Huben, Valerie Morris | 2023-09-15T17:47:45Z | http://arxiv.org/abs/2309.08593v1 | # Attention-Only Transformers and Implementing MLPs with Attention Heads
###### Abstract
The transformer architecture is widely used in machine learning models and consists of two alternating sublayers: attention heads and MLPs. We prove that an MLP neuron can be implemented by a masked attention head with internal dimension 1 so long as the MLP's activation function comes from a restricted class including SiLU and close approximations of ReLU and GeLU. This allows one to convert an MLP-and-attention transformer into an attention-only transformer at the cost of greatly increasing the number of attention heads. We also prove that attention heads can perform the components of an MLP (linear transformations and activation functions) separately. Finally, we prove that attention heads can encode arbitrary masking patterns in their weight matrices to within arbitrarily small error.
## 1 Introduction
The transformer architecture was introduced in the landmark 2017 paper _Attention is All You Need_(Vaswani et al., 2023) and traditionally consists of alternating attention and multilayer-perceptron (MLP) sublayers. Although initially used for machine translation, transformers have been used across a wide range of tasks, including language modeling (Radford et al., 2018; Devlin et al., 2019; Liu et al., 2018), computer vision (Khan et al., 2022; Cornia et al., 2020), and image generation (Parmar et al., 2018). The widespread deployment of transformers has led to increasing interest in _mechanistic interpretability_(Wang et al., 2022; Conmy et al., 2023), which seeks to convert the computations of transformers into human-understandable explanations. Some interpretability efforts, such as Elhage et al. (2021), focused on attention-only transformers, finding that MLP layers were harder to interpret.
This work seeks to supplement those mechanistic interpretability methods by showing that MLP layers in transformers are equivalent to a sum of masked attention heads and therefore can be subjected to interpretability techniques that work on attention-only transformers. In Theorem 3 we show that by including a "bias token" akin to the persistent memory vectors in Sukhbaatar et al. (2019) and using a slightly unusual attention-masking pattern, an MLP layer of size \(\ell\) can be written as the sum of \(\ell\) attention heads with internal dimension 1. We show in Theorem 6 that one can apply this process throughout the entire transformer, converting the typical MLP-and-attention transformer into an attention-only transformer. We then show in Theorems 7 and 8 that attention heads can implement row-wise linear transformations and matrix-level activation functions separately. Finally, we show in Theorem 9 that a slightly augmented network is capable of approximating any masking pattern to within arbitrary error.
## 2 Background
**Notation**.: _Throughout, we will use \(M_{n,k}\) to denote the set of real-valued \(n\)-by-\(k\) matrices._
_For matrices \(X\in M_{n_{1},k_{1}}\) and \(Y\in M_{n_{2},k_{2}}\) of any size, we will write \(X\oplus Y\) for the block matrix_
\[X\oplus Y=\left[\begin{array}{c|c}X&\mathbf{0}\\ \hline\mathbf{0}&Y\end{array}\right]\in M_{n_{1}+n_{2},k_{1}+k_{2}}\]
_where each \(\mathbf{0}\) is a correctly sized zero matrix. We will similarly write \(\mathbf{1}\) for a correctly sized matrix with a 1 in every entry._
_For matrices \(X\in M_{n,k_{1}}\) and \(Y\in M_{n,k_{2}}\), we will write_
\[[X|Y]\in M_{n,k_{1}+k_{2}}\]
_for the matrix made by appending one to the other._
_For a real-valued function \(f\) and matrix \(X\), we will write \(f(X)\) for the entry-wise application of that function to the matrix._
_We write_
\[\mathrm{ReLU}(x):=\max(x,0),\qquad\mathrm{SiLU}(x):=x\sigma(x),\qquad\mathrm{GeLU}(x):=x\Phi(x)\]
_where \(\sigma(x)=1/(1+\exp(-x))\), and \(\Phi(x)\) is the cumulative distribution function for the standard Gaussian distribution with mean 0 and variance 1. We will say that a generalized SiLU function is a function of the form_
\[f(x)=a_{1}\mathrm{SiLU}(a_{2}x)\]
_for some \(a_{1},a_{2}\in\mathbb{R}\)._
The class of generalized SiLU functions includes SiLU(\(x\)) and approximations of GeLU and ReLU. In particular, \(\mathrm{GeLU}(x)\approx\mathrm{SiLU}(1.702x)/1.702\) (Hendrycks and Gimpel, 2023) (reaching a maximum absolute error of 0.0203 at \(x=\pm 2.27\)) and \(\mathrm{ReLU}(x)\approx\mathrm{SiLU}(kx)/k\) for large \(k\) (reaching a maximum absolute error of \(\frac{0.2785}{k}\) at \(x=\pm\frac{1.278}{k}\)).
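These maximum-error figures are easy to reproduce numerically. The following is a minimal NumPy sketch added for illustration (not part of the original derivation); the evaluation grid and the choice \(k=100\) are arbitrary:

```
import numpy as np
from math import erf

def silu(x):
    return x / (1.0 + np.exp(-x))                  # SiLU(x) = x * sigmoid(x)

def gelu(x):
    # exact GeLU(x) = x * Phi(x), with Phi the standard normal CDF
    return x * 0.5 * (1.0 + np.vectorize(erf)(x / np.sqrt(2.0)))

x = np.linspace(-10.0, 10.0, 200001)
err_gelu = np.max(np.abs(silu(1.702 * x) / 1.702 - gelu(x)))
k = 100.0
err_relu = np.max(np.abs(silu(k * x) / k - np.maximum(x, 0.0)))
print(err_gelu)   # ~0.0203, attained near x = +/-2.27
print(err_relu)   # ~0.2785 / k
```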
**Definition 1**.: _An MLP with no biases and one hidden layer is a function \(f:M_{n,k}\to M_{n,k}\) of the form_
\[f(X)=\alpha(XV_{1})V_{2} \tag{1}\]
_where \(\alpha:\mathbb{R}\rightarrow\mathbb{R}\) is some real-valued function applied entry-wise to matrices, and \(V_{1},V_{2}\) are fixed matrices in \(M_{k,\ell}\) and \(M_{\ell,k}\), respectively, called parameter matrices. The number \(\ell\) is called the size of the hidden layer, and the function \(\alpha\) is called the activation function._
Many transformer architectures follow the convention that \(\ell=4k\)(Vaswani et al., 2023; Brown et al., 2020), but we do not require this. There are many popular choices for activation functions (Hendrycks and Gimpel, 2023), including ReLU, SiLU, and GeLU.
For describing attention heads, we largely follow the framework of Elhage et al. (2021).
**Definition 2**.: _A mask matrix \(\Lambda\) is a matrix with entries in \(\{0,1\}\) such that every row has at least one nonzero entry._
_Let \(X,\Lambda\in M_{n,k}\), and suppose \(\Lambda\) is a mask matrix. Then define the masked softmax function_
\[\mathrm{msoftmax}(X,\Lambda):=\mathrm{rownorm}\left(\exp(X)\odot\Lambda\right)\]
_where \(\mathrm{rownorm}\) denotes row-wise \(\ell^{1}\) normalization, and \(\odot\) denotes element-wise multiplication. That is, the masked softmax function acts like the usual row-wise softmax but applied to only the entries of \(X\) where the mask \(\Lambda\) is 1. At the entries where \(\Lambda\) is 0, the output of the masked softmax function takes the value 0._
_A masked attention head is a function \(h:M_{n,k}\to M_{n,k}\) of the form_
\[h(X)=\mathrm{msoftmax}(XW_{QK}X^{T},\Lambda)XW_{OV} \tag{2}\]
_for some matrices \(W_{OV},W_{QK}\in M_{k,k}\), and mask matrix \(\Lambda\in M_{n,n}\). We call \(W_{OV}\) and \(W_{QK}\) the parameter matrices for this attention head._
For practical reasons, attention heads are rarely described (or implemented) as in Equation 2. However, one can verify that this definition encompasses the classical transformer framework in Vaswani et al. (2023), with \(W_{QK}=(W_{i}^{Q})(W_{i}^{K})^{T}/\sqrt{d_{k}}\), and \(W_{OV}=W_{i}^{V}W_{i}^{O}\), where \(W_{i}^{O}\) denotes the appropriate subblock of the \(W^{O}\) matrix.
For many language tasks, the masking pattern is chosen to mask later tokens from earlier tokens (Vaswani et al., 2023; Radford et al., 2018), i.e., \(\Lambda\) is the lower-triangular matrix with \(\Lambda_{i,j}=\begin{cases}1&\text{ if }j\leq i\\ 0&\text{ otherwise}\end{cases}\). However, in our construction in Theorem 3 and Theorem 6, we will make use of a nonstandard masking pattern in which tokens only attend to themselves and a single special token.
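For concreteness, the masked softmax of Definition 2 and the masked attention head of Eq. 2 can be written directly in NumPy. This is an illustrative sketch only; it follows the definition literally and omits the usual max-subtraction trick for numerical stability:

```
import numpy as np

def msoftmax(S, mask):
    # Row-wise softmax restricted to entries where mask == 1 (Definition 2).
    E = np.exp(S) * mask
    return E / E.sum(axis=1, keepdims=True)

def attention_head(X, W_QK, W_OV, mask):
    # Masked attention head of Eq. (2): h(X) = msoftmax(X W_QK X^T, mask) X W_OV.
    A = msoftmax(X @ W_QK @ X.T, mask)
    return A @ X @ W_OV
```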
## 3 Implementing MLP Layers with Attention Heads
In this section we show that MLP layers whose activation functions are generalized SiLU functions are in fact a sum of attention heads.
The intuition for this claim is simple: both attention heads and MLPs are mostly linear, with a single nonlinearity (respectively, masked softmax and the generalized SiLU activation function). Additionally, softmax can easily play the role of the sigmoid part of SiLU since \(\text{softmax}([0,-x])=\text{rownorm}([1,e^{-x}])=[\sigma(x),\sigma(-x)]\). Multiplying this attention pattern onto the vector \([x,0]\), we get \(x\sigma(x)+0\sigma(-x)=\text{SiLU}(x)\). The following theorem is a formalization of this intuition.
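A quick numerical check of this intuition (illustrative only, with an arbitrary value of \(x\)):

```
import numpy as np

x = 1.3
attn = np.exp([0.0, -x]); attn /= attn.sum()          # softmax([0, -x]) = [sigma(x), sigma(-x)]
silu = x / (1.0 + np.exp(-x))                          # SiLU(x) = x * sigma(x)
print(np.isclose(attn @ np.array([x, 0.0]), silu))     # True
```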
**Theorem 3**.: _Let \(f(X)=\alpha(XV_{1})V_{2}\) be an MLP on \(M_{N,D}\) with no biases and one hidden layer of size \(\ell\), and suppose \(\alpha\) is a generalized SiLU function \(\alpha(x)=a_{1}\text{SiLU}(a_{2}x)\). Then there are \(\ell\) masked attention heads \(\{h_{i}\}_{i=1}^{\ell}\) on \(M_{N+1,D+1}\) such that_
\[f(X)\oplus[0]=\sum_{i=1}^{\ell}h_{i}(X\oplus[1])\]
_for all \(X\in M_{N,D}\)._
_In particular, for the \(i\)th attention head, one uses parameter and mask matrices_
\[W_{QK} = a_{2}\begin{bmatrix}\mathbf{0}&-V_{1}^{i}\\ \hline\mathbf{0}&0\end{bmatrix}\] \[W_{OV} = a_{1}a_{2}V_{1}^{i}V_{2}^{i}\oplus[0]\] \[\Lambda = \begin{bmatrix}I_{N}&\mathbf{1}\\ \hline\mathbf{0}&1\end{bmatrix}\]
_where the block decompositions are into size \(N\) and \(1\), \(V_{1}^{i}\) denotes the \(i\)th column of \(V_{1}\), \(V_{2}^{i}\) denotes the \(i\)th row of \(V_{2}\), and \(\mathbf{1}\) denotes the column vector of all 1s._
Proof.: We first prove the claim in the case of \(\ell=a_{1}=a_{2}=1\). In this case, since there is only one column in \(V_{1}\), then \(V_{1}=V_{1}^{i}\), and similarly \(V_{2}=V_{2}^{i}\). Consider the attention matrix \(\text{msoftmax}((X\oplus[1])W_{QK}(X\oplus[1])^{T},\Lambda)\). Multiplying matrices on the level of their blocks, we get that the first argument of the masked softmax is
\[(X\oplus[1])W_{QK}(X\oplus[1])^{T}=\begin{bmatrix}X&\mathbf{0}\\ \hline\mathbf{0}&1\end{bmatrix}\begin{bmatrix}\mathbf{0}&-V_{1}^{i}\\ \hline\mathbf{0}&0\end{bmatrix}\begin{bmatrix}X&\mathbf{0}\\ \hline\mathbf{0}&1\end{bmatrix}^{T}=\begin{bmatrix}\mathbf{0}&-XV_{1}\\ \hline\mathbf{0}&0\end{bmatrix}\]
Now consider the masked softmax term in the \(j\)th row for \(j\leq N\). This row has exactly two unmasked values, the diagonal entry and the rightmost entry, taking the values \(0\) and \(-(XV_{1})_{j}\), respectively. Applying \(\exp\) and rownorm results in \(\sigma((XV_{1})_{j})\) and \(\sigma(-(XV_{1})_{j})\), respectively. Thus, the masked softmax term becomes
\[\text{msoftmax}((X\oplus[1])W_{QK}(X\oplus[1])^{T},\Lambda)=\left[\begin{array}{c|c}\text{diag}(\sigma(XV_{1}))&\sigma(-XV_{1})\\ \hline\mathbf{0}&1\end{array}\right]\]
Substituting these values into the expression for \(h(X)\) gives
\[h(X\oplus[1]) = \text{msoftmax}((X\oplus[1])W_{QK}(X\oplus[1])^{T},\Lambda)(X\oplus[1])W_{OV}\] \[= \left[\begin{array}{c|c}\text{diag}(\sigma(XV_{1}))&\sigma(-XV_{1})\\ \hline\mathbf{0}&1\end{array}\right](X\oplus[1])W_{OV}\] \[= \left[\begin{array}{c|c}\text{diag}(\sigma(XV_{1}))&\sigma(-XV_{1})\\ \hline\mathbf{0}&1\end{array}\right]\left[\begin{array}{c|c}XV_{1}V_{2}&\mathbf{0}\\ \hline\mathbf{0}&0\end{array}\right]\] \[= \left[\begin{array}{c|c}\text{diag}(\sigma(XV_{1}))XV_{1}V_{2}&\mathbf{0}\\ \hline\mathbf{0}&0\end{array}\right]\] \[= \left[\begin{array}{c|c}\text{SiLU}(XV_{1})V_{2}&\mathbf{0}\\ \hline\mathbf{0}&0\end{array}\right]\] \[= \left[\begin{array}{c|c}f(X)&\mathbf{0}\\ \hline\mathbf{0}&0\end{array}\right]\]
as desired. This completes the \(\ell=a_{1}=a_{2}=1\) case.
For a general \(a_{1},a_{2}\), apply the previous case to an MLP with weight matrices \(a_{2}V_{1}\) and \(a_{1}V_{2}\).
Finally, for the fully general case with \(\ell>1\), for each \(1\leq i\leq\ell\), let \(f_{i}(X)=\alpha(XV_{1}^{i})V_{2}^{i}\), and note that \(f=\sum_{i=1}^{\ell}f_{i}\). Let \(h_{i}\) denote the attention head corresponding to \(f_{i}\) given by the \(\ell=1\) case. Then we have that
\[f(X)\oplus[0] = \sum_{i=1}^{\ell}f_{i}(X)\oplus[0]\] \[= \sum_{i=1}^{\ell}h_{i}(X\oplus[1])\]
as desired.
**Remark 4**.: _The additional term \(\oplus[1]\) in Theorem 3 is similar to the persistent vectors of Sukhbaatar et al. (2019). In that work, the authors propose a new architecture, which they call the all-attention architecture, in which attention can also be paid to certain static vectors, learned for each attention head, called the persistent vectors. Our approach could also be implemented in that architecture with a single persistent vector \((0,0,0,..,0,1)\) shared across all attention heads._
_Note also that the \(W_{QK}\) and \(W_{OV}\) matrices used in Theorem 3 can be factored into the matrices \(W_{Q}\), \(W_{K}\), \(W_{V}\), \(W_{O}\in M_{D+1,1}\) from Vaswani et al. (2023) satisfying \(W_{QK}=W_{Q}W_{K}^{T}/\sqrt{D+1}\) and \(W_{OV}=W_{V}W_{O}\). In particular, we can take \(W_{Q}=W_{V}=a_{2}[V_{1}^{i}|0]^{T}\), \(W_{K}=\sqrt{D+1}[\mathbf{0}|-1]^{T}\), and \(W_{O}=a_{1}[V_{2}^{i}|0]^{T}\). Since \(W_{K}\) is shared across all attention heads, we only need to store two sets of parameters, the vectors \(W_{Q}=W_{V}\) and \(W_{O}\)._
_This provides an alternative perspective on MLP neurons: a neuron in an MLP is an attention head with internal dimension 1 and a particularly restrictive masking pattern in which each token attends only to itself and a static "bias" token._
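The construction of Theorem 3 is straightforward to verify numerically. The following self-contained NumPy sketch (added for illustration, with arbitrary small dimensions, a fixed random seed, and \(a_{1}=a_{2}=1\)) builds the \(\ell\) attention heads from random \(V_{1},V_{2}\) and checks that their sum reproduces \(f(X)\oplus[0]\):

```
import numpy as np

rng = np.random.default_rng(0)
N, D, ell = 5, 4, 7                       # tokens, model dimension, hidden-layer size

def msoftmax(S, mask):
    E = np.exp(S) * mask
    return E / E.sum(axis=1, keepdims=True)

def head(Z, W_QK, W_OV, mask):
    return msoftmax(Z @ W_QK @ Z.T, mask) @ Z @ W_OV

silu = lambda x: x / (1.0 + np.exp(-x))

X  = rng.normal(size=(N, D))
V1 = rng.normal(size=(D, ell))
V2 = rng.normal(size=(ell, D))
mlp_out = silu(X @ V1) @ V2               # f(X) with alpha = SiLU (a1 = a2 = 1)

# Augmented input X (+) [1] and the mask: each token attends to itself and the bias token.
Z = np.zeros((N + 1, D + 1)); Z[:N, :D] = X; Z[N, D] = 1.0
mask = np.eye(N + 1); mask[:, N] = 1.0

total = np.zeros_like(Z)
for i in range(ell):
    W_QK = np.zeros((D + 1, D + 1)); W_QK[:D, D] = -V1[:, i]           # [[0, -V1^i], [0, 0]]
    W_OV = np.zeros((D + 1, D + 1)); W_OV[:D, :D] = np.outer(V1[:, i], V2[i, :])
    total += head(Z, W_QK, W_OV, mask)

print(np.allclose(total[:N, :D], mlp_out), np.allclose(total[N, :], 0.0))   # True True
```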
We now have the necessary tools to show that a decoder-only transformer as in Liu et al. (2018); Radford et al. (2018) can be implemented entirely with attention heads.
**Definition 5**.: _A transformer is a function \(t:M_{N,D}\to M_{N,D}\) of the form \(X_{0}\mapsto X_{1}\mapsto...\mapsto X_{m}=t(X_{0})\), where_
\[X_{j+1}=\begin{cases}\mathrm{LayerNorm}(X_{j}+\sum_{i}h_{j,i}(X_{j}))&\text{ or}\\ \mathrm{LayerNorm}(X_{j}+f_{j}(X_{j}))&\text{}\end{cases}\]
_for some attention heads \(h_{j,i}\) or MLPs with a single hidden layer \(f_{j}\). Note the use of Layer Normalization (Ba et al., 2016) and skip connections, where one performs some computation \(f\) on \(X_{j}\) and defines \(X_{j+1}=\mathrm{LayerNorm}(X_{j}+f(X_{j}))\), as opposed to \(X_{j+1}=f(X_{j})\)._
Classically, transformers alternate between attention sublayers and MLP sublayers, but we allow the existence of other architectures, including attention-only transformers and "MLP-only" transformers.
**Theorem 6**.: _If a transformer's MLP layers are activated by a generalized SiLU function, they can be substituted with attention heads._
Proof.: We will show that we can create a new transformer \(t^{\prime}\) on \(M_{N+1,D+1}\) whose residual stream \(X^{\prime}_{j}\) on every sublayer satisfies
\[X^{\prime}_{j}=X_{j}\oplus[1]\]
This is sufficient to prove the main claim since the output of this new transformer will be \(X^{\prime}_{m}=X_{m}\oplus[1]\) and therefore contain the output of the original transformer.
Without loss of generality, assume that the MLP layers have no bias terms (i.e., that we've already used the "bias trick" to fold bias terms into the weight matrix).
To prove that there is a transformer \(t^{\prime}\) that satisfies \(X^{\prime}_{j}=X_{j}\oplus[1]\) on every sublayer, we proceed by induction. For the base case of \(j=0\), we tweak the transformer's context window and embedding weights so that \(X^{\prime}_{0}=X_{0}\oplus[1]\).
We split the inductive case depending on whether the original transformer's sublayer used attention or an MLP. If the original layer was an MLP, then by Theorem 3 there are attention heads \(h^{\prime}_{j,i}\) such that \(f_{j}(X)\oplus[0]=\sum h^{\prime}_{j,i}(X\oplus[1])\), so in our transformer \(t^{\prime}\), using these attention heads yields
\[X^{\prime}_{j+1} = \mathrm{LayerNorm}(X^{\prime}_{j}+\sum h^{\prime}_{j,i}(X^{\prime}_{j}))\] \[= \mathrm{LayerNorm}((X_{j}\oplus[1])+\sum h^{\prime}_{j,i}(X_{j}\oplus[1]))\] \[= \mathrm{LayerNorm}((X_{j}\oplus[1])+(f_{j}(X_{j})\oplus[0]))\] \[= \mathrm{LayerNorm}(X_{j}+f_{j}(X_{j}))\oplus[1]\] \[= X_{j+1}\oplus[1]\]
as desired.
If instead the transformer used attention heads on the \(j\)th sublayer, we must tweak the original attention heads to account for the new size. To this end, we will show that for each of the original attention heads \(h=h_{j,i}\), we can create an attention head \(h^{\prime}\) such that
\[h^{\prime}(X\oplus[1])=h(X)\oplus[0]\]
Let \(W_{QK},W_{OV}\), and \(\Lambda\) denote the original parameter and masking matrices for \(h\). Then define
\[W^{\prime}_{QK} = W_{QK}\oplus[1]\] \[W^{\prime}_{OV} = W_{OV}\oplus[0]\] \[\Lambda^{\prime} = \Lambda\oplus[1]\]
Then,
\[h^{\prime}(X\oplus[1]) = \text{msoftmax}((X\oplus[1])W^{\prime}_{QK}(X\oplus[1])^{T},\Lambda^{\prime})(X\oplus[1])W^{\prime}_{OV}\] \[= \text{msoftmax}((X\oplus[1])(W_{QK}\oplus[1])(X\oplus[1])^{T},(\Lambda\oplus[1]))(X\oplus[1])(W_{OV}\oplus[0])\] \[= \text{msoftmax}(XW_{QK}X^{T}\oplus[1],\Lambda\oplus[1])(XW_{OV}\oplus[0])\] \[= (\text{msoftmax}(XW_{QK}X^{T},\Lambda)\oplus[1])(XW_{OV}\oplus[0])\] \[= \text{msoftmax}(XW_{QK}X^{T},\Lambda)XW_{OV}\oplus[0]\] \[= h(X)\oplus[0]\]
as desired. Now, creating such \(h^{\prime}_{j,i}\) for each of the original attention heads \(h_{j,i}\), we have
\[X^{\prime}_{j+1} = \text{LayerNorm}(X^{\prime}_{j}+\sum h^{\prime}_{j,i}(X^{\prime}_{j}))\] \[= \text{LayerNorm}((X_{j}\oplus[1])+\sum h^{\prime}_{j,i}(X_{j}\oplus[1]))\] \[= \text{LayerNorm}((X_{j}\oplus[1])+\sum(h_{j,i}(X_{j})\oplus[0]))\] \[= \text{LayerNorm}(X_{j}+\sum h_{j,i}(X_{j}))\oplus[1]\] \[= X_{j+1}\oplus[1]\]
as desired. This completes the inductive step and the proof.
It is instructive to compare this construction to the negative results of Dong et al. (2021), which find that without skip connections or MLPs, a self-attention network converges rapidly to a rank-1 matrix. Since we obviously do away with the MLP layer, our result depends on the use of skip connections. In particular, the "bias term" of \(\oplus[1]\) is zeroed out by the construction in Theorem 3, so applying the construction in Theorem 6 without a skip connection results in \(X^{\prime}_{0}=X_{0}\oplus[1]\), but \(X^{\prime}_{1}=X_{1}\oplus[0]\). Then, in the next sublayer, the construction in Theorem 3 would fail for lack of this bias term, since without it the pre-attention matrix \((X^{\prime})W_{QK}(X^{\prime})^{T}\) is 0.
## 4 Linear Transformations and Activation Functions with Attention Heads
Theorem 3 shows that attention heads can implement an MLP layer, but can they separately implement the components of an MLP, a linear transformation and an activation function? In this section we show that the answer is yes.
We first show that an attention head can perform an arbitrary linear operation row-wise on the matrix.
**Theorem 7**.: _Let \(h:M_{N,D}\to M_{N,D}\) be an attention head with masking matrix \(\Lambda=I_{N}\). Then \(h(X)=XW_{OV}\)._
Proof.: Because \(\Lambda=I_{N}\), after masking, the attention matrix \(\text{msoftmax}(XW_{QK}X^{T},\Lambda)\) will have nonzero entries only along the diagonal. Since the rows of the attention matrix are normalized to sum to 1, it follows that \(\text{msoftmax}(XW_{QK}X^{T},\Lambda)=I_{N}\). Then,
\[h(X)=\text{msoftmax}(XW_{QK}X^{T},\Lambda)XW_{OV}=I_{N}XW_{OV}=XW_{OV}\]
as desired.
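A short numerical check of this fact (illustrative NumPy sketch with arbitrary dimensions and seed):

```
import numpy as np

def msoftmax(S, mask):
    E = np.exp(S) * mask
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
N, D = 6, 3
X, W_QK, W_OV = rng.normal(size=(N, D)), rng.normal(size=(D, D)), rng.normal(size=(D, D))

A = msoftmax(X @ W_QK @ X.T, np.eye(N))    # identity mask: every row attends only to itself
print(np.allclose(A, np.eye(N)), np.allclose(A @ X @ W_OV, X @ W_OV))   # True True
```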
Now we will show that one can apply a generalized SiLU function entrywise.
**Theorem 8**.: _Let \(\alpha\) be a generalized SiLU function. Then there are \(D\) attention heads \(h_{1},...,h_{D}\) on \(M_{N+1,D+1}\) such that_
\[\alpha(X)\oplus[0]=\sum_{i=1}^{D}h_{i}(X\oplus[1])\]
Proof.: This follows immediately from applying Theorem 3 to the MLP \(f(X)=\alpha(XI_{D})I_{D}=\alpha(X)\), whose hidden layer is of size \(\ell=D\).
Note that a transformer usually makes use of skip connections, so that the residual stream experiences the transformation \(X\mapsto X+sublayer(X)\). Thus, to get the transformation \(X\mapsto\alpha(X)\), one can combine these two theorems, using \(D+1\) attention heads to produce \(sublayer(X)=\alpha(X)-X\), in which case \(X\mapsto X+sublayer(X)=\alpha(X)\).
## 5 Encoding Masking Patterns in Weight Matrices
Although some previous work has used multiple masking patterns1, some readers may be disappointed that the attention patterns prescribed in the previous sections are oddly "artificial". In this section, we will show a technique to ameliorate this concern by embedding the masking pattern into the \(W_{QK}\) matrix. To do so, we must further augment the residual stream, but our technique allows us to encode an arbitrary masking pattern in the \(W_{QK}\) parameters at the cost of arbitrarily small errors and poor training behavior.
Footnote 1: E.g., Brown et al. (2020) uses “alternating dense and locally banded sparse attention patterns”.
**Theorem 9**.: _Let \(h\) be a masked attention head on \(M_{N,D}\) with mask matrix \(\Lambda_{1}\). Then for any mask matrix \(\Lambda_{2}\) satisfying \(\Lambda_{1}\leq\Lambda_{2}\) entrywise, there is a family of masked attention heads \(h_{\Omega}\), parameterized by \(\Omega\in\mathbb{R}\), that use \(\Lambda_{2}\) as their mask matrix and such that \(h_{\Omega}([X|I_{N}])\rightarrow[h(X)|\mathbf{0}]\) uniformly on compacta as \(\Omega\rightarrow\infty\)._
Proof.: Define \(h_{\Omega}\) to be the attention head using the mask matrix \(\Lambda_{2}\) and parameter matrices
\[W_{QK,\Omega} = W_{QK}\oplus\Omega\Lambda_{1}\] \[W_{OV,\Omega} = W_{OV}\oplus\mathbf{0}\]
Fix some compact set \(K\subset M_{N,D}\) and \(\epsilon>0\).
First observe that
\[h_{\Omega}([X|I_{N}]) := \text{msoftmax}([X|I_{N}]W_{QK,\Omega}[X|I_{N}]^{T},\Lambda_{2})[X|I_{N}]W_{OV,\Omega}\] \[= \text{msoftmax}([X|I_{N}](W_{QK}\oplus\Omega\Lambda_{1})[X|I_{N}]^{T},\Lambda_{2})[X|I_{N}](W_{OV}\oplus\mathbf{0})\] \[= \text{msoftmax}(XW_{QK}X^{T}+\Omega\Lambda_{1},\Lambda_{2})[XW_{OV}|\mathbf{0}]\]
Our first task is to show that the attention pattern \(A_{1}:=\text{msoftmax}(XW_{QK}X^{T}+\Omega\Lambda_{1},\Lambda_{2})\) converges to the corresponding attention pattern \(A_{2}:=\text{msoftmax}(XW_{QK}X^{T},\Lambda_{1})\) entrywise as \(\Omega\rightarrow\infty\). To this end, fix \(\epsilon_{0}>0\), and pick \(b\in\mathbb{R}\) such that entries of \(XW_{QK}X^{T}\) are bounded in absolute value by \(b\) as \(X\) ranges over \(K\), and let \(\Omega>\ln(N/\epsilon_{0})+2b\). We have three cases depending on whether the corresponding entries in \(\Lambda_{1}\) and \(\Lambda_{2}\) are \(0\) or \(1\):
1. If \(\Lambda_{1,(i,j)}=\Lambda_{2,(i,j)}=0\), then \(A_{1,(i,j)}=A_{2,(i,j)}=0\) due to masking.
2. If \(\Lambda_{1,(i,j)}=0\) and \(\Lambda_{2,(i,j)}=1\), then \(A_{2,(i,j)}=0\) due to masking. Since \(\Lambda_{1}\) is a mask matrix, in row \(i\) there is a column \(J\) such that \(\Lambda_{1,(i,J)}=1\). Then the \((i,J)\)th entry of \(\exp(XW_{QK}X^{T}+\Omega\Lambda_{1})\) is at least \(\exp(\Omega-b)\), while the \((i,j)\)th entry is at most \(\exp(b)\). Thus, after row-normalizing, we have
\[A_{1,(i,j)} \leq \frac{\exp(b)}{\exp(\Omega-b)} = \frac{1}{\exp(\Omega-2b)}\] Since \(\Omega>\ln(N/\epsilon_{0})+2b\), we have \(\exp(\Omega-2b)>N/\epsilon_{0}\), so \(A_{1,(i,j)}\leq\frac{1}{N/\epsilon_{0}}=\epsilon_{0}/N<\epsilon_{0}\) as desired.
3. If \(\Lambda_{1,(i,j)}=\Lambda_{2,(i,j)}=1\), then consider the \(i\)th row. As shown in the previous two cases, in each entry of this row where \(\Lambda_{1,(i,j)}=0\), we have \(A_{1,(i,j)}<\epsilon_{0}/N\). Since there are at most \(N\) such terms in this row, and the row of \(A_{1}\) sums to \(1\) due to normalization, the remaining terms of \(A_{1}\), where \(\Lambda_{1,(i,j)}=1\), sum to some value \(S\in[1-\epsilon_{0},1]\). Since the log ratio between two such terms is the difference of their corresponding entries in \(XW_{QK}X^{T}+\Omega\Lambda_{1}\), and the \(\Omega\) terms of those entries cancel, the ratio between terms where \(\Lambda_{1,(i,j)}=1\) in \(A_{1}\) is the same as the corresponding ratio in \(A_{2}\). That is, the \(i\)th row of \(A_{1}\) concentrates its mass \(S\) in the same locations as \(A_{2}\) at the same ratios, so \(A_{1,(i,j)}=SA_{2,(i,j)}\) for all \(j\) with \(\Lambda_{1,(i,j)}=1\). Thus \(|A_{1,(i,j)}-A_{2,(i,j)}|=A_{2,(i,j)}(1-S)<\epsilon_{0}\).
Rephrasing our partial result, we have shown that \(A_{1}=A_{2}+E_{\Omega}\), where \(E_{\Omega}\) is an error matrix whose entries are bounded in absolute value by \(\epsilon_{0}\) whenever \(\Omega>\ln(N/\epsilon_{0})+2b\).
Returning to our expression for \(h_{\Omega}([X|I_{N}])\), we have
\[h_{\Omega}([X|I_{N}]) = A_{1}[XW_{OV}|\mathbf{0}]\] \[= (A_{2}+E_{\Omega})[XW_{OV}|\mathbf{0}]\] \[= A_{2}[XW_{OV}|\mathbf{0}]+E_{\Omega}[XW_{OV}|\mathbf{0}]\] \[= [h(X)|\mathbf{0}]+[E_{\Omega}XW_{OV}|\mathbf{0}]\]
Thus, the entry-wise difference between \(h_{\Omega}([X|I_{N}])\) and \([h(X)|\mathbf{0}]\) is \([E_{\Omega}XW_{OV}|\mathbf{0}]\), so it suffices to show that \(E_{\Omega}XW_{OV}\) is entry-wise less than \(\epsilon\) in absolute value. To this end, fixing some \(\epsilon>0\), let \(\epsilon_{0}=\epsilon/C\), where \(C=\max(\sqrt{N}\sup_{X\in K}||XW_{OV}||,1)\) and \(||\cdot||\) denotes the operator norm of a matrix. Then, for all \(\Omega>\ln(N/\epsilon_{0})+2b\), the entries of \(E_{\Omega}\) are less than \(\epsilon_{0}\) in absolute value. Therefore, for the \((i,j)\)th entry of \(E_{\Omega}XW_{OV}\), we have
\[|(E_{\Omega}XW_{OV})_{i,j}| = |row_{i}(E_{\Omega})\cdot column_{j}(XW_{OV})|\] \[\leq \epsilon_{0}\sqrt{N}\,||XW_{OV}||\] \[= (\epsilon/C)\sqrt{N}\,||XW_{OV}||\] \[\leq \epsilon\]
where the last inequality holds because either \(C\geq\sqrt{N}\,||XW_{OV}||\), or \(C=1\) and \(\sqrt{N}\,||XW_{OV}||\leq 1\),
as desired.
The above result shows that by augmenting the residual stream with an \(I_{N}\) matrix, one can write the masking pattern into the \(W_{QK}\) matrix. Combined with Theorem 6, this shows that one can convert a standard transformer into one using only attention heads and the standard masking pattern.
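A small numerical experiment makes the convergence concrete. The following NumPy sketch (illustrative only, with arbitrary dimensions and a scaled-down \(W_{QK}\)) uses a causal \(\Lambda_{1}\) and a trivial all-ones \(\Lambda_{2}\), and shows the error decaying as \(\Omega\) grows:

```
import numpy as np

def msoftmax(S, mask):
    E = np.exp(S) * mask
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
N, D = 5, 3
X = rng.normal(size=(N, D))
W_QK, W_OV = 0.1 * rng.normal(size=(D, D)), rng.normal(size=(D, D))

mask1 = np.tril(np.ones((N, N)))          # Lambda_1: causal mask
mask2 = np.ones((N, N))                   # Lambda_2: no masking at all
h_true = msoftmax(X @ W_QK @ X.T, mask1) @ X @ W_OV

Z = np.hstack([X, np.eye(N)])             # augmented input [X | I_N]
for Omega in (10.0, 30.0, 60.0):
    W_QK_Om = np.block([[W_QK, np.zeros((D, N))],
                        [np.zeros((N, D)), Omega * mask1]])
    W_OV_Om = np.block([[W_OV, np.zeros((D, N))],
                        [np.zeros((N, D)), np.zeros((N, N))]])
    h_Om = msoftmax(Z @ W_QK_Om @ Z.T, mask2) @ Z @ W_OV_Om
    print(Omega, np.max(np.abs(h_Om[:, :D] - h_true)))   # error shrinks as Omega grows
```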
**Remark 10**.: _Inspecting the relation between \(\epsilon\) and \(\Omega\) in the previous theorem allows us to provide a more concrete choice of \(\Omega\). We require \(\Omega>\ln(N/\epsilon_{0})+2b\), where \(N\) is the size of the context window, \(\epsilon_{0}=\epsilon/\max(\sqrt{N}\,||XW_{OV}||,1)\), and \(b\) is a bound on the entries of \(XW_{QK}X^{T}\)._
_Using properties of logs, we may simplify our requirement to_
\[\Omega>\ln(N/\epsilon)+2b+\max(\ln(N^{\frac{1}{2}}||XW_{OV}||),0)\]
_Since the entries of a matrix are bounded by the matrix's operator norm, and \(||XW_{QK}X^{T}||\leq||X||^{2}||W_{QK}||\), we can take \(b=||X||^{2}||W_{QK}||\). The resulting requirement on \(\Omega\) is then an increasing function of \(||X||\), so we may remove our dependence on it by replacing it with \(B=\sup_{X\in K}||X||\), in which case our bound becomes_
\[\Omega>\ln(N/\epsilon)+2B^{2}||W_{QK}||+\max(\ln(N^{\frac{1}{2}}B||W_{OV}||),0)\]
_Notably, \(\Omega\) grows only in the logarithm of \(\epsilon\)._
**Example 11**.: _Let's compute a value of \(\Omega\) that is suitable for a particular language model. Take \(\epsilon=2^{-146}\), the minimum positive value representable by a single-precision floating-point number (IEEE, 2008), and apply this to GPT-2, which has a maximum context window of \(N=1024\) tokens (Radford et al., 2019). According to Millidge and Winsor (2023), individual model weights are normally distributed, falling entirely within \([-1,1]\). Recall that \(W_{QK}\) is in fact stored internally as two matrices \(W_{Q}\) and \(W_{K}\), with \(W_{QK}=W_{Q}W_{K}^{T}/\sqrt{d_{k}}\). Such matrices are conventionally of size \(D\times D/n_{heads}\), and since \(D=1600\)(Radford et al., 2019), and \(n_{heads}=25\)(Heimersheim and Turner, 2023), we have \(W_{Q},W_{K}\in M_{1600,64}\) and \(d_{k}=64\). Combining this with the bound that each entry is in \([-1,1]\), we get that \(||W_{Q}||\leq\sqrt{64}=8\). Similarly, \(||W_{K}||\leq 8\), so \(||W_{QK}||\leq||W_{Q}||\,||W_{K}||/\sqrt{d_{k}}\leq 8\cdot 8/8=8\). By a similar argument, \(||W_{OV}||\leq 64\)._
_For the bound \(B\) on the norm of the residual stream, we turn to Heimersheim and Turner (2023) who finds that the measured norm of the residual stream increases across layers but does not seem to exceed \(B=10^{4}\). Combining these into our formula, we find that a sufficient value of \(\Omega\) is_
\[\Omega = \ln(N/\epsilon)+2B^{2}||W_{QK}||+\max(\ln(N^{\frac{1}{2}}B||W_{OV}||),0)\] \[= \ln(1024/2^{-146})+2(10^{4})^{2}\cdot 8+\max(\ln(1024^{\frac{1}{2}}\cdot 10^{4}\cdot 64),0)\] \[\approx 1.6\times 10^{9}\]
_with almost all of the contribution due to the \(2B^{2}||W_{QK}||\) term._
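For reference, the arithmetic of Example 11 can be reproduced with a few lines of Python (a sketch using the values above, not part of the original example):

```
import math

N, eps, B = 1024, 2.0 ** -146, 1e4
W_QK_norm, W_OV_norm = 8.0, 64.0
Omega = (math.log(N / eps)
         + 2 * B ** 2 * W_QK_norm
         + max(math.log(math.sqrt(N) * B * W_OV_norm), 0.0))
print(Omega)   # ~1.6e9; the 2*B^2*||W_QK|| term contributes all but ~125 of it
```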
## 6 Limitations
The technique described in Theorem 6 faces several practical limitations. First is the quantity of attention heads: we use one attention head per dimension of the hidden layer, which can easily increase the number of attention heads by several orders of magnitude, partially offset by the new attention heads having smaller internal dimension. For example, each layer of GPT-3 has 96 attention heads with internal dimension 128 (Brown et al., 2020), and the process we describe would require 49152 additional 1-dimensional attention heads in each layer.
Second, it may be the case that replacing a feedforward network with attention heads slows down model inference or training. In particular, this approach replaces matrix multiplication with many vector-by-vector multiplications. One also computes many terms that are "thrown away" in the masking step. Combined, these suggest that converting an MLP layer to attention heads would increase computational costs.
Finally, the "pseudo-masking" in Theorem 9 introduces a separate set of issues into any training process due to the large \(\Omega\) terms added to the \(W_{QK}\) matrix. Most notably, pseudo-masking would interact poorly with most forms of dropout regularization and with \(\ell^{2}\) regularization on the entries of \(W_{QK}\).
## 7 Discussion
We have proven that attention heads can implement an MLP layer and in particular that any transformer can be converted to an attention-only transformer. One implication of these results is that it is theoretically possible to train an attention-only transformer that matches the performance of an MLP-plus-attention transformer. It remains unknown whether such an architecture would be competitive with the more classical transformer architecture in terms of practical considerations like training or inference speed. Such a test would be a promising future area of research.
Our foremost hope in this work is to facilitate the advancement of mechanistic interpretability approaches such as Elhage et al. (2021), which found the most success in transformers without MLP layers, but found that a complete understanding of transformers "will require progress on MLP layers". Our technique could allow one to reuse the techniques that are successful on attention heads on the MLP layers.
In doing so, the primary impediment is scale since the approach described in this paper increases the number of attention heads in a transformer by several orders of magnitude. However, this is itself a useful new perspective on the difficulty of interpreting MLP layers: MLP layers in a model like GPT-3 are larger than attention layers by a 2:1 margin if one measures by number of parameters but by 500:1 if one measures by number of attention heads. It may be the case that the AI capabilities slogan "scale is all you need" applies equally to mechanistic interpretability.
### Acknowledgements
The authors would like to thank Ari Rahikkala for pointing us towards relevant literature and Delta Hessler for proofreading. The authors would like to thank Open Philanthropy for their support.
|
2308.16484 | Test-Time Adaptation for Point Cloud Upsampling Using Meta-Learning | Affordable 3D scanners often produce sparse and non-uniform point clouds that
negatively impact downstream applications in robotic systems. While existing
point cloud upsampling architectures have demonstrated promising results on
standard benchmarks, they tend to experience significant performance drops when
the test data have different distributions from the training data. To address
this issue, this paper proposes a test-time adaption approach to enhance model
generality of point cloud upsampling. The proposed approach leverages
meta-learning to explicitly learn network parameters for test-time adaption.
Our method does not require any prior information about the test data. During
meta-training, the model parameters are learned from a collection of
instance-level tasks, each of which consists of a sparse-dense pair of point
clouds from the training data. During meta-testing, the trained model is
fine-tuned with a few gradient updates to produce a unique set of network
parameters for each test instance. The updated model is then used for the final
prediction. Our framework is generic and can be applied in a plug-and-play
manner with existing backbone networks in point cloud upsampling. Extensive
experiments demonstrate that our approach improves the performance of
state-of-the-art models. | Ahmed Hatem, Yiming Qian, Yang Wang | 2023-08-31T06:44:59Z | http://arxiv.org/abs/2308.16484v2 | # Test-Time Adaptation for Point Cloud Upsampling Using Meta-Learning
###### Abstract
Affordable 3D scanners often produce sparse and non-uniform point clouds that negatively impact downstream applications in robotic systems. While existing point cloud upsampling architectures have demonstrated promising results on standard benchmarks, they tend to experience significant performance drops when the test data have different distributions from the training data. To address this issue, this paper proposes a test-time adaption approach to enhance model generality of point cloud upsampling. The proposed approach leverages meta-learning to explicitly learn network parameters for test-time adaption. Our method does not require any prior information about the test data. During meta-training, the model parameters are learned from a collection of instance-level tasks, each of which consists of a sparse-dense pair of point clouds from the training data. During meta-testing, the trained model is fine-tuned with a few gradient updates to produce a unique set of network parameters for each test instance. The updated model is then used for the final prediction. Our framework is generic and can be applied in a plug-and-play manner with existing backbone networks in point cloud upsampling. Extensive experiments demonstrate that our approach improves the performance of state-of-the-art models.
## I Introduction
Point clouds are one of the most popular representations of 3D information in robotic applications. Point clouds can be obtained from readily available 3D scanning devices, such as LiDARs and RGB-D cameras. They play a vital role in a variety of applications, such as autonomous driving [1], augmented reality [2], and robotics [3]. However, the point clouds obtained from affordable 3D scanners are usually sparse and non-uniform. Therefore, these sparse point clouds need to be effectively upsampled to produce denser point clouds in order to be used in many downstream applications.
Given an input sparse point cloud, our goal is to generate a high-resolution uniform point cloud that adequately represents the underlying surface. Many traditional optimization-based methods [4, 5, 6, 7, 8] have been proposed for point cloud upsampling. Recently, DNN-based upsampling methods have emerged and achieved impressive results, including MPU [9], PU-GAN [10], PU-GCN [11], Dis-PU [12], and PU-Dense [13]. These methods can learn complex structures of point clouds and outperform traditional approaches. However, most learning-based methods have followed a fully supervised learning paradigm for point cloud upsampling. They assume that the training and test data are sampled from the same data distribution. This is unrealistic in real-world scenarios because the data captured by different 3D sensors have large discrepancies. It is challenging for the training data to cover all the variations that can happen during testing. Consequently, trained models usually experience a drastic drop in performance when they are evaluated on unknown test distributions. This is known as the domain shift problem.
To address the distribution shift issue and the resulting performance drop, recent work has proposed domain adaptation approaches for point clouds to minimize the gap between training and test distributions, e.g. using adversarial learning [14, 15, 16, 17] or self-supervised learning [18, 19, 20, 21]. However, these approaches have some limitations. First, they assume having access to unlabeled samples of the target test distribution during training. This may not be feasible in real-world settings where the information from test distribution is not available in advance. In addition, they do not fully utilize the useful internal information available within the test instance, since the trained model parameters are fixed during inference time for all unseen test instances.
Recently, Zhou et al. [22] have adopted ZSSR [23] for point cloud upsampling. They have proposed a Zero-Shot Point Cloud Upsampling (ZSPU) [22] approach that can capture the internal information provided by a given point cloud at test time. This method trains the network from scratch at test time using augmented pairs of sparse-dense point clouds extracted from the test point cloud. Although ZSPU [22] can successfully exploit the internal features of the test instances, it incurs a long inference time due to self-training at test time. In addition, it fails to utilize the useful information learned from the external dataset.
In this work, we address the above limitations by introducing a test-time adaptation approach that leverages the internal and external information of point clouds. Concretely, we first conduct large-scale training using pairs of sparse-dense point clouds to utilize the external dataset. Then, we adapt the model parameters in an instance-specific manner during inference and obtain a different set of network parameters for each different instance. This allows our model to better capture the uniqueness of each test sample and thus generalize better to unseen data. We have found that a simple fine-tuning of the pre-trained network is not optimal, since it takes a huge number of gradient updates for adaptation. Therefore, we propose to use meta-learning for a fast and effective adaptation of the model at test time. Meta-learning has shown great success in learning new tasks quickly with few training samples. In particular, Model-Agnostic Meta-Learning (MAML) [24] has been widely employed for various domain adaptation tasks [25, 26, 27, 28]. MAML [24] is an optimization-based method that aims to learn the model
parameters in a way that facilitates fast adaptation at test time within a few gradient updates. We adopt MAML [24] for training the point cloud upsampling networks. During meta-training, each input point cloud is downsampled by a predefined scaling factor and the MAML [24] task is the reconstruction of the input point cloud. At test time, the model is updated by a few gradient updates based on a self-supervised learning procedure that exploits the internal information of the test point cloud. The updated model is then used for the final prediction. Our key contributions are summarized as follows:
* We propose a test-time adaptation approach for point cloud upsampling. To the best of our knowledge, this is the first work that exploits the complementary advantages of both internal and external learning for point cloud upsampling.
* We propose to employ meta-learning to provide the model with the ability of fast and effective adaptation at inference time to improve the model generalization.
* We introduce a novel model-agnostic framework that can be applied on any point cloud upsampling network to boost its performance.
## II Related Work
Our work is closely related to several lines of research. We briefly review prior work closest related to ours.
### _Point Cloud Upsampling_
Early work has proposed optimization-based methods [4, 5, 6, 7] for densifying point clouds. Alexa et al. [4] generate dense points by inserting points at the Voronoi diagram's vertices. Lipman et al. [5] introduce a locally optimal projection operator for resampling point clouds based on L1 norm. Later, an improved weighted version of the local optimal projection operator was proposed in [6]. However, these methods usually do not work well around sharp edges. Huang et al. [7] propose an edge-aware resampling algorithm that progressively transfers generated points towards the edge singularities to preserve edge sharpness.
Ever since the emergence of deep learning, recent work has shifted to upsampling point clouds using deep neural networks. PU-Net [29] is the first to apply deep learning in point cloud upsampling. It uses Point-Net++ [30] for feature extraction and expands point features using multi-branch convolutions in feature space. EC-Net [31] proposes an edge-aware point cloud upsampling by directly minimizing distances from points to edges. MPU [9] proposes a patch-based network that learns different levels of point cloud features by progressively upsampling the points in multiple steps. PU-GAN [10] adopts a generative adversarial network to generate high-quality upsampled points. PU-GCN [11] proposes a graph convolutional network for point cloud upsampling. PUGeo-Net [32] incorporates differential geometry to improve point cloud upsampling performance by learning the local geometry of point clouds. Dis-PU [12] introduces disentangled refinement units for point upsampling using two sub-networks, including a dense point generator and a point spatial refiner. PU-Dense [13] proposes a novel feature extraction unit for extracting 3D multiscale features and adopts U-Net architecture based on sparse convolutions for computationally efficient processing of point clouds.
### _Domain Adaptation_
Extensive work has been proposed for 2D image domain adaptation [33, 34, 35, 36]. Recently, there is also work exploring domain adaptation for point clouds. Most existing domain adaptation approaches on point clouds [14, 37, 38, 15] mainly rely on adversarial learning to transfer knowledge from labeled source domain to unlabeled target domain. Qin et al. introduce PointDAN [14] to jointly align local and global features of point cloud distributions across different domains for 3D classification. Wang et al. [37] propose a cross-range adaptation to enhance far-range 3D object detection performance. Saleh et al. [38] adopt CycleGAN [39] to adapt projected 2D bird's eye view synthetic images for real-world vehicle detection. Wu et al. [15] use geodesic correlation alignment for minimizing the domain gap between synthetic and real data.
Recently, several studies have proposed to design self-supervised tasks [40, 41, 19, 20] for learning domain invariant features of point clouds. [40] introduces a deformation reconstruction task to learn the underlying structures of 3D objects. [41] defines the self-supervised task as a reconstruction of random partially displaced point clouds. [19] proposes two self-supervised tasks, including a scale prediction task and a 3D/2D projection reconstruction task to transfer global and local features across domains.
The main limitation of most existing domain adaptation approaches is the assumption of the availability of unlabelled target domain data during training, which is infeasible when the target domain is unknown during training.
### _Meta Learning_
Meta-learning has been successfully applied in many computer vision and robotics applications. Existing meta-learning methods can be categorized into model-based [42, 43, 44, 45, 46], metric-based [47, 48, 49, 50] and optimization-based methods [24, 51, 52, 53]. Model-based methods learn to update their model parameters with a few steps either using another meta-learner network for parameters prediction [45, 46] or via its internal architecture [42]. Metric-based methods learn a metric function to measure the similarity between samples. Optimization-based methods learn an optimal model initialization that can rapidly adapt to new tasks.
MAML [24] is a widely used optimization-based algorithm, which has been successfully adapted to many 2D image domain tasks [54, 27, 55, 25, 56]. Zhang et al. [54] propose MetaGaN that integrates the MAML algorithm with GAN network to improve image classification performance with a few number of training samples. Chi et al. [27] introduce meta-auxiliary learning framework based on MAML for image deblurring to enable fast model adaptation. Liu
et al. [55] propose to use meta-auxiliary learning with test-time adaptation for the problem of future depth prediction in videos. [25, 56] use MAML for image super-resolution, which learns effective pre-trained model weights that can quickly adapt to unseen test images. Relatively little meta-learning work has been proposed for point clouds [45, 46, 57]. Hai et al. [46] introduce a parameter prediction network for point cloud segmentation to enable fast adaptation to new part segmentation tasks. Ye et al. [45] propose a meta-subnetwork for point cloud upsampling that is trained to dynamically adjust the upsampling network parameters to support flexible scale factors.
## III Proposed Method
Given a sparse and noisy point cloud \(X\in R^{N\times 3}\) with \(N\) points and an upsampling ratio \(r\), our goal is to generate a dense point cloud \(Y\in R^{rN\times 3}\) with \(rN\) points that adequately cover the underlying surface. More importantly, the generated points should be uniformly-located on the object surface. We aim to learn a model \(F_{\theta}(X)\to Y\) parameterized by \(\theta\) that maps \(X\) to \(Y\) for a given upsampling ratio \(r\).
### _Preliminaries_
To leverage the advantages of external learning, we first perform supervised training using pairs of sparse-dense point clouds \((X,Y)\). We adopt the architecture of three state-of-the-art point cloud upsampling networks as the backbones of our approach, including PU-GCN [11], Dis-PU [12], and PU-Dense [13]. Each network is optimized using a standard supervised loss to learn the initial model parameters \(\theta\):
\[\min_{\theta}L(F_{\theta}(X),Y) \tag{1}\]
where \(L\) is a supervised loss between the prediction \(F_{\theta}(X)\) and the ground truth \(Y\).
At this step, we may directly fine-tune the pre-trained parameters during inference to exploit the internal features of point clouds for test-time adaptation (TTA). For example, given an input point cloud \(X\) during inference, we can downsample \(X\) to obtain a sparser point cloud \(X_{\downarrow}\). We can then fine-tune the model parameter \(\theta\) by treating \((X_{\downarrow},X)\) as a sparse-dense pair of point clouds in Eq 1. In our experiments, we will demonstrate that such naive TTA can already improve the model performance. However, it requires a large number of gradient updates to effectively adapt for each test instance since the model is not explicitly learned to facilitate test-time adaptation.
In our work, we propose a meta-learning approach to explicitly learn the model parameters for test-time adaptation (Fig. 1). Our approach consists of a meta-training stage and a meta-testing stage. During meta-training, we learn the model parameters from a set of tasks, where each task is constructed from a sparse-dense pair of point clouds from the training data. Each inner update of meta-training involves adapting the model to a sampled task. The adapted model is then used for inference on that task. The performance of this task-adapted model is then used as the loss for the outer update of meta-training. The goal of the outer update is to optimize the model parameters such that, after adapting to a particular task, the adapted model performs well on that task. Through this bi-level optimization, the model parameters are explicitly trained so that they can be effectively adapted to a new test instance with only a few gradient updates. During meta-testing, we are given a new test instance. We first adapt the meta-learned model to this instance, then use the updated
Fig. 1: Overview of the proposed meta-learning procedure for point cloud upsampling. During each iteration of meta-training, we sample a batch of training pairs. For each sampled training pair \((X_{n},Y_{n})\), we first downsample \(X_{n}\) to obtain a sparser version \(X_{n\downarrow}\). We obtain the adapted parameters by applying our model to upsample \(X_{n\downarrow}\) and using \(X_{n}\) as the ground truth to define a self-supervised loss. We perform a small number of gradient updates using the computed self-supervised loss in the inner loop. Then we use the adapted model to perform the main task by upsampling \(X_{n}\). Finally, we update the model in the outer loop based on the calculated upsampling loss on the adapted parameters. Given a new instance \(X\) during meta-testing, we perform a few gradient updates using \((X_{\downarrow},X)\) to adapt the model for this instance and use the adapted model for final prediction.
model for prediction.
### _Meta-Training_
Inspired by the success of adopting MAML [24] for image super-resolution problem [25, 56], we learn the model using meta-learning such that the model parameters are trained to quickly adapt to unseen data at test time using a small number of gradient updates. First, the model is initialized by the pre-trained weights \(\theta\) resulting from the standard supervised training. The pre-trained feature representations help in stabilizing meta-training and thus ease the training phase of meta-learning [56]. We further optimize the model parameters using meta-learning. Specifically, we develop a meta-learning algorithm summarized in Algorithm 1 based on MAML [24]. The key to our approach is the construction of a task for the inner update of meta-training. We use a pair of point clouds consisting of the input point cloud \(X\) and its downsampled version \(X_{\downarrow}\) for a task in MAML. During the inner update, the model is used to upsample \(X_{\downarrow}\) and \(X\) is treated as the ground truth. The loss between \(F_{\theta}(X_{\downarrow})\) and \(X\) is then used to adapt the model parameters \(\theta\) by a few gradient updates. This allows the model to be quickly adapted to different data distributions at test time and boosts the overall generalization capability of the model.
Figure 1 illustrates the overall scheme of our proposed approach. The external training dataset consists of pairs of sparse-dense point clouds. We optimize the network weights using our proposed meta-learning approach to learn the optimal model parameters that can quickly adapt to new data distributions at test time. At test-time, we adapt the meta-learned parameters for each given test instance and use the adapted parameters to obtain the upsampled point cloud \(Y\).
More specifically, in each iteration of meta-training, we sample a batch \(B\) of sparse-dense training pairs \(\{X_{n},Y_{n}\}_{n=1}^{B}\). We downsample \(X_{n}\) to a sparser version \(X_{n\downarrow}\). In each inner update of meta-training, we perform model adaptation for a small number of gradient updates using (\(X_{n\downarrow}\),\(X_{n}\)) pairs as follows:
\[\theta_{n}\leftarrow\theta-\alpha\nabla_{\theta}L(F_{\theta}(X_{n\downarrow}),X _{n}) \tag{2}\]
where \(\alpha\) controls the learning rate of the adaptation and \(\theta_{n}\) represent the adapted model parameters for the input \(X_{n}\) using internal learning.
The adapted model \(\theta_{n}\) is then used to generate the dense point cloud \(Y\) and optimize the following meta-objective:
\[\min_{\theta}\sum_{n=1}^{B}L(F_{\theta_{n}}(X_{n}),Y_{n}) \tag{3}\]
Note that we use the adapted model \(\theta_{n}\) in the model \(F_{\theta_{n}(\cdot)}\), but the optimization in Eq. 3 is performed over the original model parameters \(\theta\).
In the outer update of meta-training, we optimize the meta-objective in Eq. 3 by performing gradient update as follows:
\[\theta\leftarrow\theta-\beta\nabla_{\theta}\sum_{n=1}^{B}L(F_{\theta_{n}}(X_{n }),Y_{n}) \tag{4}\]
where \(\beta\) is the meta-learning rate.
```
Require: \(X,Y\): training pairs
Require: \(B\): batch size
Require: \(\alpha,\beta\): learning rates
Ensure: \(\theta\): learned parameters
1: Initialize the network with pre-trained weights \(\theta\)
2: while not converged do
3:   Sample a training batch \(\{X_{n},Y_{n}\}_{n=1}^{B}\)
4:   Generate downsampled \(X_{n\downarrow}\)
5:   for \(n=1\) to \(B\) do
6:     Evaluate loss: \(\nabla_{\theta}L(F_{\theta}(X_{n\downarrow}),X_{n})\)
7:     Compute adapted parameters \(\theta_{n}\):
8:     \(\theta_{n}\leftarrow\theta-\alpha\nabla_{\theta}L(F_{\theta}(X_{n\downarrow}),X_{n})\)
9:   end for
10:  Evaluate the main upsampling task using the adapted parameters and update:
11:  \(\theta\leftarrow\theta-\beta\nabla_{\theta}\sum_{n=1}^{B}L(F_{\theta_{n}}(X_{n}),Y_{n})\)
12: end while
13: return \(\theta\)
```
**Algorithm 1** Meta-training
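The meta-training loop above can be sketched in PyTorch as follows. This is an illustrative re-implementation, not the authors' code: `model` stands for any upsampling backbone \(F_{\theta}\), and `upsample_loss` and `downsample` are placeholders for the training loss \(L\) and the downsampling step, respectively. The inner loop uses `torch.func.functional_call` (recent PyTorch) so that the outer update can backpropagate through the adapted parameters, as in MAML.

```
import torch
from torch.func import functional_call

def inner_adapt(model, params, x_down, x, alpha, steps=5):
    # Inner loop (Algorithm 1, lines 5-9): adapt parameters on the self-supervised (X_down, X) pair.
    for _ in range(steps):
        loss = upsample_loss(functional_call(model, params, (x_down,)), x)
        grads = torch.autograd.grad(loss, tuple(params.values()), create_graph=True)
        params = {k: p - alpha * g for (k, p), g in zip(params.items(), grads)}
    return params

def meta_train_step(model, batch, alpha, meta_optimizer):
    # Outer loop (Algorithm 1, lines 10-11): update theta using the per-instance adapted parameters.
    base = dict(model.named_parameters())
    meta_loss = 0.0
    for x, y in batch:                        # B sparse-dense training pairs
        x_down = downsample(x)                # self-supervised pair (X_down, X)
        adapted = inner_adapt(model, base, x_down, x, alpha)
        meta_loss = meta_loss + upsample_loss(functional_call(model, adapted, (x,)), y)
    meta_optimizer.zero_grad()
    meta_loss.backward()                      # gradients flow through the inner updates
    meta_optimizer.step()
    return float(meta_loss)
```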
### _Meta-Testing_
At test-time, we downsample the input point cloud \(X\) to a sparser version \(X_{\downarrow}\). Then, we fine-tune the model parameters by performing a small number of gradient updates using the point cloud pairs \((X_{\downarrow},X)\). This update is completely self-supervised and exploits the internal features of the input point cloud \(X\).
\[\theta^{\prime}\leftarrow\theta-\alpha\nabla_{\theta}L(F_{\theta}(X_{\downarrow }),X) \tag{5}\]
Finally, the adapted model \(\theta^{\prime}\) is used to perform the main upsampling task and generates the densified point cloud \(F_{\theta^{\prime}}(X)\). The inference procedure is summarized in Algorithm 2.
```
Require: \(X\): sparse point cloud
Require: \(\alpha\): learning rate
Ensure: \(Y\): dense point cloud
1: Initialize the network with meta-trained weights \(\theta\)
2: Generate downsampled \(X_{\downarrow}\)
3: Evaluate loss: \(\nabla_{\theta}L(F_{\theta}(X_{\downarrow}),X)\)
4: Compute adapted parameters:
5: \(\theta^{\prime}\leftarrow\theta-\alpha\nabla_{\theta}L(F_{\theta}(X_{\downarrow}),X)\)
6: Generate upsampled point cloud \(Y=F_{\theta^{\prime}}(X)\)
7: return \(Y\)
```
**Algorithm 2** Meta-testing
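Correspondingly, the test-time adaptation of Algorithm 2 can be sketched as below (again an illustrative sketch, with `downsample` and `upsample_loss` as placeholders; a single update step matches Eq. 5, and more steps can be taken if desired):

```
import torch
from torch.func import functional_call

def meta_test(model, x_sparse, alpha, steps=1):
    # Adapt the meta-trained parameters to one test instance, then predict (Algorithm 2).
    params = {k: v.detach().clone().requires_grad_(True) for k, v in model.named_parameters()}
    x_down = downsample(x_sparse)                               # self-supervised pair (X_down, X)
    for _ in range(steps):
        loss = upsample_loss(functional_call(model, params, (x_down,)), x_sparse)
        grads = torch.autograd.grad(loss, tuple(params.values()))
        params = {k: (p - alpha * g).detach().requires_grad_(True)
                  for (k, p), g in zip(params.items(), grads)}
    with torch.no_grad():
        return functional_call(model, params, (x_sparse,))      # final dense prediction
```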
## IV Experiments
We first describe some implementation details (Sec. IV-A). We then introduce the datasets and experimental setup (Sec. IV-B). We present our experiment results and comparison in Sec. IV-C. We also perform extensive ablation studies in Sec. IV-D.
### _Implementation Details_
We use the official implementations of PU-GCN [11], Dis-PU [12] and PU-Dense [13] as the backbones of our
approach. We first conduct supervised training on the backbones to obtain the initial pre-trained weights. During meta-training, we perform 5 gradient updates in the inner loop as described in Algorithm 1. We set the batch size to 8 and the learning rates \(\alpha\) and \(\beta\) to \(10^{-5}\) and \(10^{-6}\), respectively. We optimize the networks using the Adam optimizer with a learning rate of \(10^{-4}\) and an exponentially decayed factor of 0.99. All experiments are conducted on a single NVIDIA TitanX GPU.
### _Datasets and Setup_
Following [13], we train our method on the ShapeNet dataset [58] and evaluate the performance of the networks on the ShapeNet [58] dataset (1024 test samples), the 8iVFB dataset [59] (1200 test samples), and the Semantic3D.net dataset [60] (1500 test samples). We adopt two widely used upsampling evaluation metrics, namely Chamfer distance (CD) and Peak signal-to-noise ratio (PSNR), to measure the quality of the upsampled point cloud compared to the ground truth dense point cloud. CD sums the squared distances between nearest-neighbor correspondences of the upsampled point cloud and the ground-truth point cloud. CD is defined as:
\[CD_{(Y,G)}=\sum_{a\in Y}\min_{b\in G}|a-b|^{2}+\sum_{b\in G}\min_{a\in Y}|a-b|^{2} \tag{6}\]
where \(Y\) is the upsampled point cloud and \(G\) is the ground truth point cloud. PSNR measures the ratio between a normalization factor and point-to-point MSE as defined in [13], which is calculated from the upsampled point cloud \(Y\) to the ground truth \(G\) as well as in the opposite direction. PSNR is defined as follows:
\[PSNR=min(PSNR_{(Y,G)},PSNR_{(G,Y)}) \tag{7}\]
Fig. 2: Qualitative results of point cloud upsampling on the ShapeNet dataset [58]. We compare the 4x upsampled results with the three baselines: PU-GCN [11], Dis-PU [12], and PU-Dense [13].
\[PSNR_{(A,B)}=10\log_{10}(\frac{p_{s}^{2}}{d_{MSE}(A,B)}) \tag{8}\]
where \(p_{s}\) is the normalization factor and \(d_{MSE}\) is the mean squared distance between points in one point cloud and their nearest-neighbor correspondences in the other point cloud.
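Both metrics follow directly from Eqs. (6)-(8). The sketch below is a minimal PyTorch illustration for single clouds of shape \((N,3)\); the official evaluation code of [13] may differ in normalization details (e.g., how the peak factor \(p_{s}\) is chosen per cloud).

```python
import torch

def chamfer_distance(Y, G):
    """Chamfer distance of Eq. 6: summed squared nearest-neighbor distances, both directions."""
    d = torch.cdist(Y, G)                              # (|Y|, |G|) pairwise distances
    return d.min(dim=1).values.pow(2).sum() + d.min(dim=0).values.pow(2).sum()

def psnr(Y, G, p_s):
    """Symmetric point-to-point PSNR of Eqs. 7-8; p_s is the normalization (peak) factor."""
    d = torch.cdist(Y, G)
    mse_yg = d.min(dim=1).values.pow(2).mean()         # d_MSE(Y, G)
    mse_gy = d.min(dim=0).values.pow(2).mean()         # d_MSE(G, Y)
    return torch.minimum(10 * torch.log10(p_s ** 2 / mse_yg),
                         10 * torch.log10(p_s ** 2 / mse_gy))
```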
### _Results and Comparisons_
We compare the proposed method with five state-of-the-art upsampling methods: MPU [9], PU-GAN [10], PU-GCN [11], Dis-PU [12], and PU-Dense [13]. We also compare our method against a meta-learning-based upsampling approach, namely Meta-PU [45]. We apply our framework with several different backbone networks, including PU-GCN [11], Dis-PU [12], and PU-Dense [13]. The comparison is shown in Table I. All methods are trained on the ShapeNet dataset [58] using the same experimental setup for a fair comparison. As shown in Table I, our method achieves a significant performance improvement on all three backbones [11, 12, 13] across all evaluation metrics, by a clear margin. More importantly, our method with PU-Dense [13] as the backbone outperforms all other state-of-the-art methods. Notably, we observe that the performance improvement on the 8iVFB dataset [59] and Semantic3D.net dataset [60] is more significant than on the ShapeNet dataset [58]. This demonstrates the effectiveness of our method in boosting the generalization capability of models to unseen test data by enabling the networks to utilize the internal features of point clouds at test time. Besides quantitative results, we present qualitative upsampling comparisons in Figure 2 and Figure 3. In most cases, the upsampled point clouds of our method are more uniform, less noisy, and better preserve edge sharpness.
**Robustness to Noise**: To validate the robustness to noise, we add Gaussian noise of varying noise levels to the input point clouds. We use the model trained on ShapeNet for evaluation and report our results in Table II. As the noise level increases, we can observe that the performance of all approaches drops. But our approach still outperforms all other methods under each noise level by a significant margin. This demonstrates the robustness of our approach to noise.
**Varying Upsampling Ratios**: We further investigate the robustness of our framework across different upsampling ratios. We have conducted experiments with upsampling scale ratios r = 4, 8, and 16. Table I reports the results with upsampling scale ratio of 8x on ShapeNet [58] and 8iVFB [59] datasets. Table III shows the quantitative comparisons on the ShapeNet [58] dataset under upsampling scales of 4x and 16x. We can observe that our method effectively improves the performance of all backbones across different upsampling ratios.
**Framework Components**: To study the relative contributions of various components in the proposed framework, we conduct additional ablation experiments. We first investigate the effect of test-time adaptation on improving performance. In this experiment, we do not apply our meta-learning approach. Instead, we use the pre-trained parameters resulting from supervised training. At test time, the input point cloud is downsampled and the model is fine-tuned using the input and the downsampled point clouds. As shown in Table IV, the performance of all the backbones has already been improved. This demonstrates the effectiveness of naive test-time adaptation in utilizing the internal features of point clouds, even without meta-learning. When applying our meta-training approach in Algorithm 1, the performance has been further improved. This demonstrates that both TTA and meta-training contribute to the final performance improvement.
**Number of Gradient Updates**: In this study, we investigate the impact of the number of gradient updates \(N\) in the inner loop of Algorithm 1. We use N = 1, 3, 5, 7, and 9 during meta-training. Table V shows the evaluation results of our method trained with different numbers of gradient updates. Overall, we observe that a larger number of gradient updates enables the model to better capture the internal features of test point clouds and thus improves performance. However, the PSNR with N=7 is slightly worse than with N=5, and the performance experiences a minor decrease with N=9. Note that we use the same number of gradient updates during training and testing.
## V Conclusion
In this paper, we have introduced a novel test-time adaptation framework for point cloud upsampling that utilizes both internal and external features of point clouds. In previous work, the model is typically trained on an external supervised dataset and fixed during evaluation on unseen test data. This approach fails to exploit the useful internal information of the test point clouds. In contrast, our framework is designed to efficiently adapt the model parameters for each test instance at inference time to improve the upsampling performance. To this end, we have proposed a meta-learning algorithm that allows fast adaptation of model parameters at test time using only the input sparse point cloud. More importantly, our method is a generic framework that can be applied to any deep learning-based upsampling network without modifying the architecture. Extensive experiments show the effectiveness of our proposed approach in boosting the upsampling performance and outperforming state-of-the-art methods.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & CD \(\downarrow\) & PSNR \(\uparrow\) & Time \\ \hline Dis-PU [12] & 55.62 & 70.23 & **36.31** \\ Ours + Dis-PU (N=1) & 53.41 & 70.86 & 50.46 \\ Ours + Dis-PU (N=3) & 50.71 & 71.89 & 95.38 \\ Ours + Dis-PU (N=5) & 48.25 & **71.96** & 148.65 \\ Ours + Dis-PU (N=7) & **47.83** & 71.95 & 197.28 \\ Ours + Dis-PU (N=9) & 48.79 & 71.92 & 254.72 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Ablation studies on the number of gradient updates (\(N=1,3,5,7,9\)). We report the 8x results of the ShapeNet dataset [58] based on CD (\(10^{-2}\)), PSNR (dB), and inference time (ms).
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & 0\% & 0.5\% & 1\% & 2\% \\ \hline PU-GCN [11] & 65.81 & 76.46 & 84.73 & 159.62 \\
**Ours + PU-GCN** & **50.49** & **56.32** & **63.61** & **124.79** \\ \hline Dis-PU [12] & 55.62 & 65.24 & 71.19 & 136.87 \\
**Ours + Dis-PU** & **48.25** & **52.58** & **59.35** & **117.52** \\ \hline PU-Dense [13] & 30.52 & 35.71 & 37.55 & 74.67 \\
**Ours + PU-Dense** & **26.44** & **29.10** & **33.96** & **62.41** \\ \hline \hline \end{tabular}
\end{table} TABLE II: Robustness to upsampling noisy point clouds results with different noise levels on the ShapeNet dataset [58]. We use an upsampling ratio of r=8 and compare different methods using the CD (\(10^{-2}\)) evaluation metric.
\begin{table}
\begin{tabular}{l|c c} \hline \hline & CD \(\downarrow\) & PSNR \(\uparrow\) \\ \hline PU-GCN [11] & 65.81 & 69.59 \\ PU-GCN + TTA (w/o meta) & 61.74 & 69.83 \\
**Ours + PU-GCN** & **50.49** & **71.62** \\ \hline Dis-PU [12] & 55.62 & 70.23 \\ Dis-PU + TTA (w/o meta) & 54.52 & 70.40 \\
**Ours + Dis-PU** & **48.25** & **71.96** \\ \hline PU-Dense [13] & 30.52 & 73.11 \\ PU-Dense + TTA (w/o meta) & 28.33 & 73.18 \\
**Ours + PU-Dense** & **26.44** & **73.38** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Ablation studies on different components of our framework, including meta-learning and test-time adaptation (TTA).
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c|}{4\(\times\)} & \multicolumn{2}{c}{16\(\times\)} \\ & CD \(\downarrow\) & PSNR \(\uparrow\) & CD \(\downarrow\) & PSNR \(\uparrow\) \\ \hline PU-GCN [11] & 48.15 & 70.90 & 121.65 & 66.27 \\
**Ours + PU-GCN** & **31.74** & **72.87** & **102.84** & **67.11** \\ \hline Dis-PU [12] & 36.23 & 72.19 & 92.88 & 67.46 \\
**Ours + Dis-PU** & **25.61** & **73.28** & **84.31** & **69.01** \\ \hline PU-Dense [13] & 18.82 & 75.24 & 69.48 & 70.32 \\
**Ours + PU-Dense** & **15.33** & **75.86** & **62.19** & **70.73** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Quantitative comparisons with baselines on the ShapeNet dataset [58] with varying upsampling scale ratios. |
2309.05909 | Exploring Short-Range Correlations in Symmetric Nuclei: Insights into Contacts and Entanglement Entropy | The Short-Range Correlations between nucleons in nuclei are regarded as a complex system. We investigate the relationship between the orbital entanglement entropy of SRCs $S_{ij}$ in nuclear structures and the Tan contact $c_{ij}$, and find that the orbital entanglement entropies and Tan contacts corresponding to proton-proton SRC pairs and neutron-proton SRC pairs in nuclei demonstrate a scaling relation. More specifically, the proportionality of entanglement entropy between proton-proton pairs and neutron-proton pairs is directly related to the ratio of nuclear contacts within the atomic nucleus, demonstrating an approximate ratio of 2.0. Our research suggests that this scaling relationship should hold true for all symmetric nuclei; furthermore, we offer a possible explanation for this phenomenon. | Wei Kou, Jingxuan Chen, Xurong Chen | 2023-09-12T01:45:53Z | http://arxiv.org/abs/2309.05909v2 | # Discovering a universal law of short-range correlation in symmetric nuclei
###### Abstract
Short-Range Correlations (SRC) between nucleons in nuclei are regarded as a complex dynamical equilibrium system. We investigate the relationship between the orbital entanglement entropy of SRCs \(S_{ij}\) in nuclear structures and the Tan contact \(c_{ij}\), and find that the orbital entanglement entropies and Tan contacts corresponding to proton-proton SRC pairs and neutron-proton SRC pairs in nuclei satisfy a clear scaling relation. More specifically, the proportionality of entanglement entropy between proton-proton pairs and neutron-proton pairs is directly related to the ratio of nuclear contacts within the atomic nucleus, demonstrating an approximate ratio of 2.0. Our research suggests that this scaling relationship holds true for all symmetric nuclei; furthermore, we offer a possible explanation for this phenomenon.
## I Introduction
Atomic nuclei are complex, strongly interacting systems that are difficult to solve exactly. As an approximation, the nuclear force can be separated into a long-range attractive part and a short-range repulsive part. Strong attractive and repulsive forces between two, three, or even more nucleons can reach a dynamical equilibrium and form the unique ground state of the nucleus, a phenomenon known as Short-Range Correlation (SRC) [1; 2; 3]. The details of the SRC effect provide important input for understanding topics such as the symmetry energy of nuclear matter [2; 4; 5; 6], mergers of neutron stars [7], and lepton-nucleus scattering processes [8; 9]. For review articles on SRC physics, please refer to [10; 11; 12; 13; 14]. Experimentally, SRCs are investigated with electron (nucleon)-nucleus scattering processes at high energy and large momentum transfer [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. A large number of measurements have shown that SRC pairs within the nucleus are dominated by the deuteron-like form, i.e., neutron-proton (\(np\)) pairs. Quantitatively, \(np\) pairs are about 20 times more numerous than the other channels (\(pp\) or \(nn\)) [25; 26; 21]. Meanwhile, further progress on SRCs has been made with ab-initio calculations [29; 30; 31; 32; 33; 34; 35; 36; 37].
The mean-field approximation enables the examination of the global properties of atomic nuclei. However, because of the large momentum transfer associated with SRCs, treating the nucleus as dilute nuclear matter is not feasible. Extensive research on atomic nuclei has shown that simple Fermi momentum distributions and cold-atomic-gas methods cannot accurately describe the behavior of SRCs. Consequently, a more sophisticated approach is required to elucidate the interactions between nucleons in SRC states. Recently, investigations in nuclear physics have incorporated concepts of entropy from thermodynamics and information science, leading to a burgeoning research area; for a comprehensive list of recent references, please refer to [38]. Notably, the use of information entropy to investigate nuclear structure has yielded several innovative ideas. Quantum entanglement entropy, a specific form of information entropy, is widely utilized across various disciplines [39; 40; 41; 42; 43; 44; 45; 46]. The interplay between entanglement entropy and SRCs has been explored in multiple studies; specifically, Refs. [47; 48; 49; 50; 51] offer insights into this relationship.
As mentioned in Ref. [43], entanglement is a fundamental feature of quantum systems. In principle, it applies to any quantum pure state that can be divided into different subsystems. Since SRCs are states of nuclear many-body systems, they should generate entanglement at any energy. Calculating the entanglement entropy of SRCs, i.e., quantifying the degree of entanglement, is therefore required to study the quantum entanglement of SRC states. The entanglement entropy of nuclear structure is the first quantity to consider. In recent studies, the entanglement entropy of nuclei such as \({}^{4}\)He and \({}^{6}\)He has been discussed in [52]. In addition, single-orbital and two-orbital mutual information for nuclei such as \({}^{28}\)Si, \({}^{56}\)Ni and \({}^{64}\)Ge has been addressed using density matrix renormalization group (DMRG) schemes in recent work [53]. Investigating the entanglement entropy of SRCs is essentially a scale separation of the eigenstates of the nucleus, i.e., SRC nuclear states for the high-momentum orbital and mean-field (Fermi momentum distribution) nucleon states for the low-momentum orbitals. One approach to this scale separation is the Generalized Contact Formalism (GCF) [54; 55; 56; 57], in which nuclear contacts are defined in analogy with Tan's contact theory in atomic physics [58; 59; 60]. The GCF method has achieved some success in describing fundamental properties of SRCs, such as two-body nucleon densities [57], high-momentum tails [55; 57; 61], and electron-nucleus scattering experiments [62]. The connection between nuclear contacts and the information-science view of SRCs has also been discussed recently [50; 51; 52].
In this work, we find a scaling relation between nuclear contacts and entanglement entropy in SRC nuclei. From our viewpoint, the extracted nuclear contacts and the corresponding entanglement entropies satisfy a scaling relation. We thus obtain a relationship between nuclear contacts and the single-orbital entanglement entropy constructed from them, a relationship that should apply to nuclei of any mass number \(A\) and may predict SRC channel ratios (\(pp/np\)) for nuclei that have not yet been measured. We start with a brief review of GCF theory and single-orbital entanglement entropy in Sec. II. In Sec. III, we present our calculations and main results on nuclear contacts and the scaling law of SRCs, together with some discussion. Finally, we conclude our work and give an outlook. We emphasize that, for convenience, only the symmetric nuclei case is considered throughout; the relevant corrections for the asymmetric case can be found in Ref. [63].
## II Formalism
### General contact formalism
The GCF method decomposes the nuclear many-body wave function into a two-body part describing a spatially close, correlated nucleon pair and a remainder. If the correlated nucleon pair is described by a universal two-body state, the remainder is a state-dependent function of the remaining \(A-2\) nucleons. Therefore, the factorized asymptotic wave function takes the form [55]
\[\Psi\xrightarrow{r_{ij}\to 0}\sum_{\alpha}\varphi_{\alpha}(\mathbf{r}_{ij})A^{ \alpha}_{ij}(\mathbf{R}_{ij},\{\mathbf{r}\}_{k\neq ij}), \tag{1}\]
where the index \(ij\) corresponds to \(np\), \(pp\), and \(nn\) pairs. The \(\varphi_{\alpha}(\mathbf{r}_{ij})\) are universal two-body functions defining the SRC state, the \(A^{\alpha}_{ij}\) denote the so-called regular parts of the many-body wave function, and the index \(\alpha\) represents the quantum numbers of the two-body states. The function \(\varphi_{\alpha}(\mathbf{r}_{ij})\) depends on the relative distance \(\mathbf{r}_{ij}\) of the SRC nucleon pair rather than on the center-of-mass coordinate \(\mathbf{R}_{ij}\) appearing in \(A^{\alpha}_{ij}\); it is obtained by solving the two-body, zero-energy Schrödinger equation with the full nuclear potential.
Under the approximation discussed above, the nuclear contact of GCF is simply defined as
\[C=N(A,Z)\langle A|A\rangle, \tag{2}\]
where \(N(A,Z)\) is the number of nucleon pairs, determined by the \(Z\) protons and \(A-Z\) neutrons. Since we are interested in the symmetric nuclei case, one can take \(Z=A/2\).
The single-nucleon momentum distribution is represented as
\[n(\mathbf{k})=\langle\Psi|a^{\dagger}_{\mathbf{k}}a_{\mathbf{k}}|\Psi\rangle. \tag{3}\]
If we return to the SRC orbitals, i.e., \(k_{F}\ll|\mathbf{k}|\), the momentum distribution can be approximated by GCF theory [56]
\[n(\mathbf{k})=C|\phi(\mathbf{k})|^{2}, \tag{4}\]
where \(\phi(\mathbf{k})\) is the Fourier transform of function \(\varphi_{\alpha}(\mathbf{r}_{ij})\). According to normalisation condition \(\int_{k_{F}}^{\infty}|\phi(\mathbf{k})|^{2}\mathrm{d}\mathbf{k}=1\), the fraction of the one-body momentum density with the momentum above \(k_{F}\) is given by [56]
\[\frac{\int_{k_{F}}^{\infty}n(\mathbf{k})d\mathbf{k}}{\int_{0}^{\infty}n(\mathbf{k})d\mathbf{k }}=\frac{C_{nn}^{s=0}+C_{pp}^{s=0}+C_{np}^{s=0}+C_{np}^{s=1}}{A/2}. \tag{5}\]
Note that we consider the contribution of the main channels of the SRCs, e.g., the \(np\) deuteron channel (\(l=0,2\) and \(s=1\) coupled to \(j=1\)), and the singlet \(pp\), \(np\), and \(nn\)\(s\)-wave channel (\(l=s=j=0\)). \(C_{NN}^{s}/\frac{A}{2}\) gives the fraction of the one-body momentum density above the Fermi momentum due to each type of SRC pair [56]. In fact, the above GCF has been successful in explaining the one-body as well as two-body density distributions of nucleons [56; 63].
### Single-orbital entanglement entropy
The origin of entanglement entropy is distinct from the conventional notion of entropy, which is attributed to a lack of knowledge about the microstate of a system arising from thermal fluctuations. Rather, entanglement entropy stems from the entanglement among distinct subunits of the system [64; 65]. To describe the scale separation of SRC nuclei [66], a simple model is to introduce an orbital entanglement entropy. In this simplified model, the SRC is identified with the high-momentum subspace and treated as a single orbital. A nucleon can thus occupy either one of the Fermi sea (FS) orbitals or an SRC orbital. In this way, the Hilbert space of the nucleus can be divided into the tensor product of the FS and SRC orbital spaces
\[\mathcal{H}=\mathcal{H}_{\mathrm{FS}}\otimes\mathcal{H}_{\mathrm{SRC}}. \tag{6}\]
We follow the construction of the single-orbital entanglement entropy in Ref. [52], which essentially yields the reduced density matrix of a subsystem. The nuclear eigenstates can be written as a linear combination of Slater determinants \(|\phi\rangle\) of the nucleon wave functions,
\[|\Psi\rangle=\sum_{\eta}\mathcal{A}_{\eta}|\phi_{\eta}\rangle, \tag{7}\]
where the Slater determinant is given in terms of applying creation operators on the real particle vacuum \(|0\rangle\):
\[|\phi_{\eta}\rangle=\prod_{i\in\eta}^{A}a^{\dagger}_{i}|0\rangle, \tag{8}\]
where \(A\) is the nucleus mass number.
According to this way, the single-orbital reduced density matrix is [52]
\[\rho_{n_{i},n_{i}^{\prime}}^{(i)}=\sum_{BC}\bra{\Psi|BC}\ket{n_{i}^{\prime}} \bra{n_{i}}\bra{BC|\Psi}, \tag{9}\]
where \(BC=n_{1}n_{2},\cdots n_{i},n_{i+1},\cdots n_{A}\). Each state \(i\) has the possibility of being occupied or empty. The basis \(\{\ket{n_{i}}\}\) denotes \(\{\ket{0},\ket{1}=a_{i}^{\dagger}\ket{0}\}\). With this basis the density matrix is written as [50; 52]
\[\rho^{(i)}=\begin{pmatrix}1-\gamma_{ii}&0\\ 0&\gamma_{ii}\end{pmatrix}, \tag{10}\]
where the occupation of the orbital is given by \(\gamma_{ii}=\bra{\Psi|a_{i}^{\dagger}a_{i}|\Psi}\). Thus, one can construct the von Neumann entropy from the density matrix (10)
\[S_{i}^{(1)}=-\text{Tr}[\rho^{(i)}\ln\rho^{(i)}]=-\sum_{k=1}^{2}\omega_{k}^{(i )}\ln\omega_{k}^{(i)}, \tag{11}\]
where \(\omega_{k}^{(i)}\) is the eigenvalue of \(\rho^{(i)}\). Here we emphasize that Eq. (11) is an expression for the single orbital entanglement entropy, and the corresponding density matrix is of \(2\times 2\) form.
### Single-orbital entanglement entropy with nuclear contact
At present, our discussion can be succinctly summarized as follows: Firstly, the atomic nucleus system can be divided into two distinct scales, namely, SRC orbitals with momentum exceeding the Fermi momentum, and weakly interacting FS orbitals with momentum below the Fermi momentum. These two types of orbitals are quantum entangled. Secondly, nuclear contacts can be constructed using the GCF method. Thirdly, the density matrix of entanglement entropy for the SRC single orbital is correlated to the occupancy of the orbital. In the following, we provide a brief description of how the SRC single-orbital entanglement entropy can be represented in terms of nuclear contacts.
Since SRC is characterized by a high momentum tail compared with FS, one can consider the nucleons with the momenta \(k>k_{F}\) as occupying high momentum SRC orbitals. According to the GCF, Eq. (5) represents the ratio of high-momentum orbital nucleons to total nucleons. If one defines the operator \(\hat{P}=a_{k}^{\dagger}a_{k}|_{k>k_{F}}\), the probability that a nucleon in a given nucleus occupies an SRC orbital can be easily obtained as
\[\gamma_{\text{SRC}}=\bra{\Psi|\hat{P}|\Psi}=\frac{C}{A/2}\equiv c, \tag{12}\]
where \(c\) is the normalised (reduced) nuclear contact and can be extracted by nucleons two-body wave function and momentum distribution [56]. According to the above definition of SRC occupancy probability, the single orbital entanglement entropy entropy for a single SRC is directly obtained through Eq. (10)
\[S^{\text{SRC}}(c)=-\bigg{[}c\ln\bigg{(}\frac{c}{1-c}\bigg{)}+\ln\left(1-c \right)\bigg{]}. \tag{13}\]
To obtain the total SRC orbital entanglement entropy one has to multiply the single SRC entanglement by nucleon pair number \(N(A,Z)=A/2\)
\[S^{\text{SRC}}_{tot}(A,c)=-\frac{A}{2}\bigg{[}c\ln\bigg{(}\frac{c}{1-c} \bigg{)}+\ln\left(1-c\right)\bigg{]}. \tag{14}\]
This expression reveals the linear dependence of the entanglement entropy on the nuclear mass number \(A\); in other words, the total SRC entanglement entropy is proportional to the volume of the nucleus. This is intriguing, since entanglement entropy is frequently associated with an area law, as in the Bekenstein-Hawking entropy [67; 68; 69].
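The map from a reduced contact to an SRC orbital entropy in Eqs. (13)-(14) is simple enough to state as a short numerical sketch (Python, for illustration only); note that Eq. (13) is just the binary von Neumann entropy of a two-level orbital with occupation probability \(c\).

```python
import numpy as np

def src_single_orbital_entropy(c):
    """Von Neumann entropy of one SRC orbital (Eq. 13), given the reduced contact c,
    i.e. the probability that a nucleon occupies the high-momentum orbital."""
    return -(c * np.log(c / (1.0 - c)) + np.log(1.0 - c))

def src_total_entropy(A, c):
    """Total SRC orbital entanglement entropy of a symmetric nucleus (Eq. 14):
    A/2 nucleon pairs, each contributing one SRC orbital."""
    return 0.5 * A * src_single_orbital_entropy(c)
```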
## III Results and discussions
We reviewed the formalism for computing the entanglement entropy of SRC orbitals in Section II. An example of calculating the entanglement entropy is given in Ref. [50]; however, the absolute magnitude of the entanglement entropy is not our focus in this work. There is experimental interest in the ratio of SRC nucleon-pair types in nuclei, i.e., the ratio of the number of proton-proton pairs in the SRC state to the number of neutron-proton pairs in a given nucleus. From the perspective of nuclear contacts, it seems possible to use the ratio of the reduced contacts of the corresponding channels qualitatively as a basis for determining this ratio [56]. In this section we start from the relation between the reduced nuclear contacts and the entanglement entropy and discuss what relations the ratios between the different SRC channels in the nucleus should satisfy.
In fact, extracting nuclear contacts is viable even though the nuclear many-body wave function cannot be solved exactly; some practical methods are given in Ref. [56]. There, the two-body functions were calculated numerically at zero energy using the AV18 potential [63], and the obtained wave functions are insensitive to the exact value of the energy at small distances and large momenta. The authors of Ref. [56] used three methods to extract nuclear contacts: the first two employ two-body density distributions [70] in coordinate space and momentum space, respectively, while the third uses experimental data [20; 21; 24; 25].
The extracted nuclear contacts are shown in Table 1 of Ref. [56]. We take all extraction results for the symmetric nuclei case and compute the corresponding SRC orbital entanglement entropies. In this work, we are only concerned with the entropies of the different SRC channels and the corresponding ratio of nuclear contacts. Using Eq. (13), one obtains the expression describing the ratio of SRC entanglement entropies to reduced nuclear contacts for the different channels:
\[R(c_{pp},c_{np})=\frac{S_{pp}^{SRC}/S_{np}^{SRC}}{c_{pp}/c_{np}}, \tag{15}\]
where for the \(pp\) channel we consider the spin-0 contribution, and for the \(np\) channel we consider the total contribution of spin 0 and spin 1. The nuclear contacts of the symmetric nuclei were extracted in Ref. [56], and the ratios defined by Eq. (15) are shown in Table 1.
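Reusing `src_single_orbital_entropy` from the sketch above, Eq. (15) can be evaluated directly from a pair of extracted reduced contacts; with the \({}^{4}\)He \(k\)-space values quoted in Table 1, this reproduces the tabulated ratio of about 2.03.

```python
def entropy_contact_ratio(c_pp, c_np):
    """Double ratio R of Eq. 15: (S_pp / S_np) divided by (c_pp / c_np)."""
    return (src_single_orbital_entropy(c_pp) / src_single_orbital_entropy(c_np)) / (c_pp / c_np)

# Example with the 4He k-space contacts of Table 1 (c_np = 0.1299, c_pp = 0.0065):
# entropy_contact_ratio(0.0065, 0.1299) gives approximately 2.03, consistent with Table 1.
```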
We show the specific ratio relationships in Figures 1 and 2, based on the contacts extracted in \(k\)-space and \(r\)-space. The vertical axis in Figure 2 represents the ratio of Eq. (15). The uncertainties are propagated from the uncertainties of the nuclear contacts extracted in Ref. [56]. From Table.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \(A\) & \multicolumn{4}{c|}{**k-space**} & \multicolumn{4}{c|}{**r-space**} \\ \cline{2-7} & \(c_{np}\) & \(c_{pp}\) & \(R=\frac{S_{pp}/S_{np}}{c_{pp}/c_{np}}\) & \(c_{np}\) & \(c_{pp}\) & \(R=\frac{S_{pp}/S_{np}}{c_{pp}/c_{np}}\) \\ \hline \({}^{4}\)**He** & 0.1299\(\pm\)0.0010 & 0.0065\(\pm\)0.0003 & 2.029\(\pm\)0.167 & & & & \\ \hline \({}^{4}\)**He (exp)** & 0.157\(\pm\)0.007 & 0.008\(\pm\)0.002 & 2.104\(\pm\)0.098 & & & & \\ \hline \({}^{6}\)**Li** & 0.1103\(\pm\)0.0011 & 0.0049\(\pm\)0.0003 & 2.007\(\pm\)0.021 & 0.1056\(\pm\)0.0004 & 0.00415\(\pm\)0.00004 & 2.030\(\pm\)0.004 \\ \hline \({}^{8}\)**Be** & 0.1406\(\pm\)0.0022 & 0.0079\(\pm\)0.0007 & 2.021\(\pm\)0.033 & 0.126\(\pm\)0.001 & 0.00603\(\pm\)0.00003 & 2.032\(\pm\)0.006 \\ \hline \({}^{10}\)**B** & 0.1259\(\pm\)0.0022 & 0.0079\(\pm\)0.0006 & 1.941\(\pm\)0.028 & 0.1127\(\pm\)0.0020 & 0.0057\(\pm\)0.0002 & 1.973\(\pm\)0.016 \\ \hline \({}^{12}\)**C** & 0.182\(\pm\)0.008 & 0.013\(\pm\)0.002 & 2.047\(\pm\)0.071 & & & \\ \hline \({}^{12}\)**C (exp)** & 0.195\(\pm\)0.021 & 0.015\(\pm\)0.005 & 2.052\(\pm\)0.163 & & & \\ \hline \({}^{16}\)**O** & & & & 0.1208\(\pm\)0.0030 & 0.0068\(\pm\)0.0003 & 1.963\(\pm\)0.022 \\ \hline \({}^{40}\)**Ca** & & & & 0.1233\(\pm\)0.0030 & 0.0073\(\pm\)0.0004 & 1.953\(\pm\)0.025 \\ \hline \end{tabular}
\end{table}
Table 1: The nuclear contacts and the corresponding ratios defined in Eq. (15) for a variety of nuclei. The contacts come from Ref. [56], which are divided by \(A/2\) and give the percent of nucleons above Fermi energy \(k_{F}\) in the different SRC channels. Only the symmetric nuclei case is taken into account, and we find that the ratios computed from the reduced contacts given by either \(k\)-space or \(r\)-space converge almost to 2.
I and Figures 1 and 2, one finds that the value of Eq. (15) barely depends on the nucleon number \(A\) of the nucleus and converges to a constant value of approximately 2. In contrast, neither the ratio of entanglement entropies nor the ratio of nuclear contacts alone shows such behavior: taken separately, the channel ratios of nuclear contacts and of SRC entanglement entropies display no unambiguous convergence. Next we try to analyze and discuss this phenomenon.
Generally speaking, there are at least two possibilities for \(R(c_{pp},c_{np})=\mathrm{constant}\) in Eq. (15) to hold: either the numerator and denominator are each constant, or neither is constant but their ratio nevertheless remains fixed. Both possibilities are related to the nuclear contacts. How to obtain the nuclear contacts of a symmetric nucleus is therefore the focus of our discussion, which conceptually requires the two-body density distribution of nucleon pairs. To simplify the discussion, we consider the description of nuclear contacts in Ref. [63]. The authors used the two-body nucleon charge density to construct nuclear contacts instead of
Figure 1: The ratio between the nuclear contacts of the proton-proton and neutron-proton channels and the ratio of the corresponding SRC single-orbital entanglement entropies. Nuclear contacts are extracted from the form of the two-body wave function in \(k\)-space (left) and \(r\)-space (right) mentioned in Ref. [56].
Figure 2: Left: Calculated ratios from reduced nuclear contacts extracted from the \(k\)-space nuclear two-body density distribution versus the nuclei mass numbers, where the red data points indicate the results obtained from the experiments [20; 21; 24; 25; 56]. Right: Calculated ratios from reduced nuclear contacts extracted from the \(r\)-space nuclear two-body density distribution versus the nuclei mass numbers. The blue dashed line represents a value of 2 on the vertical axis.
the nuclear two-body density and came to the following conclusions [63]:
\[\begin{split} C^{s=0}_{pp}&=C^{s=0}_{np}\approx\frac{9} {40\pi}\frac{1}{R_{0}^{3}}\frac{1}{|\varphi^{s=0}_{pp}(r_{0})|^{2}}\frac{Z^{2}} {A},\\ C^{s=1}_{pn}&=L\frac{NZ}{A}C^{s=1}_{pn}(d).\end{split} \tag{16}\]
All parameters and details of their physical meanings should be referred to the original literature. Since all the parameters of the above equation are constants, one can simplify them as
\[\begin{split} C^{s=0}_{pp}&=C^{s=0}_{np}=k_{1} \frac{Z^{2}}{A},\\ C^{s=1}_{pn}&=k_{2}A,\end{split} \tag{17}\]
where \(k_{1}\simeq 0.023\) and \(k_{2}\simeq 0.085\) are constants derived from the parameters in Eq. (16). If we consider symmetric nuclei, \(Z=A/2\), and use \(C_{pp}/C_{np}=c_{pp}/c_{np}\) with \(c=C/\frac{A}{2}\), we conclude that
\[\frac{c_{pp}}{c_{np}}=\frac{\frac{1}{4}k_{1}}{\frac{1}{4}k_{1}+k_{2}}\simeq 0.06. \tag{18}\]
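For concreteness, substituting the quoted constants \(k_{1}\simeq 0.023\) and \(k_{2}\simeq 0.085\) gives

\[\frac{c_{pp}}{c_{np}}=\frac{0.023/4}{0.023/4+0.085}=\frac{0.00575}{0.09075}\approx 0.063,\]

consistent with the value of roughly 0.06 quoted in Eq. (18).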
This leads to a constant value of the ratio \(R\) in Eq. (15) if the reduced nuclear contacts do not depend on the nucleon number \(A\). This is not consistent with the results in Ref. [56]. Although there is not enough evidence to decide whether the reduced contacts are \(A\)-dependent, for the time being we adopt the extraction results of Ref. [56]. As an exercise, we keep only the proportionality of Eq. (17) and do not force the numerator and denominator to be constants.
As another possibility, although we have not found mathematically the hidden relation that makes \(R\) a constant, the results below show that such a relation corresponds reasonably well to the currently extracted nuclear contacts. Figure 3 presents a comparison of the above two possibilities. The black and blue data points come from the \(k\)-space and \(r\)-space distribution fits of Ref. [56], while the red data come from Refs. [20; 21; 24; 25; 56]. Note that only the symmetric nuclei case is considered. The green dashed line comes from Eq. (17). From Eq. (18), if the reduced nuclear contacts \(c_{pp}\) and \(c_{np}\) were fully determined by the parameters \(k_{1}\) and \(k_{2}\), there would be only one point in the \(c_{pp}\)-\(c_{np}\) plane; based on the currently extracted nuclear contacts, we have no reason to think this is the case. Thus we retain only the proportionality conclusion and do not insist on whether the nuclear contacts themselves are constant.
Let us summarize the main points of this section. First, based on the nuclear contacts reported in Ref. [56], we compute the SRC orbital entanglement entropy for the different channels and obtain the ratio relation of Eq. (15). The main results are shown in Table 1 and Figures 1-2. Second, we attempt to
Figure 3: The reduced nuclear contacts of the two channels \(pp\) and \(np\). All data points were extracted in Ref. [56]. The green dashed-dotted line shows the result in Eq. (18), where we do not force the numerator and denominator to be constants. The violet line represents \(R=2\) in Eq. (15); we do not have a formula giving the exact relation between \(c_{pp}\) and \(c_{np}\).
understand what causes this fixed ratio. In principle, obtaining \(R\simeq 2.0\) from the joint action of Eqs. (16)-(18) also requires that the reduced nuclear contacts do not depend on the nucleon number \(A\). Furthermore, although the exact reason why \(R\) converges to 2 for all nuclei considered is not yet known, the introduction of entanglement entropy provides an additional constraint on the values of the nuclear contacts (see the violet line in Figure 3).
## IV Conclusion and outlook
In this work, we introduced nuclear contacts from the GCF to characterize the effects of nuclear SRCs. We also computed implicit constraints on the values of the nuclear contacts using the single-orbital SRC entanglement entropy. On this basis, we find that the ratio of SRC entanglement entropies to the corresponding nuclear contacts for the different channels does not depend on the nucleon number of the nucleus (Figure 2). We mainly discuss the symmetric nuclei case in this paper; the asymmetric case has little impact on the main results. We therefore consider the findings universal for any nuclei.
It should be noted that we only consider the single-orbital SRC entanglement entropy computational model, and for the two-orbital case, one can refer to Ref. [52] for some details. Our result is an extension of Ref. [50] and suggests that the introduction of entanglement entropy can constrain the value of nuclear contacts taken in the GCF approach.
Our results give a continuous relation between the nuclear contacts of different channels and are likely to apply to all nuclei. Since nuclear contacts in the GCF characterize the fraction of SRC nucleon pairs relative to the nuclear mass number, our results may provide predictions for future measurements of SRC ratios. In principle, once the SRC information of one channel (\(np\)) is determined, our conclusions can be used to obtain the SRC information of the other channel (\(pp\)).
###### Acknowledgements.
This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences (Grant NO. XDB34030301).
|
2305.00568 | Discrete quadratic model QUBO solution landscapes | Many computational problems involve optimization over discrete variables with quadratic interactions. Known as discrete quadratic models (DQMs), these problems in general are NP-hard. Accordingly, there is increasing interest in encoding DQMs as quadratic unconstrained binary optimization (QUBO) models to allow their solution by quantum and quantum-inspired hardware with architectures and solution methods designed specifically for such problem types. However, converting DQMs to QUBO models often introduces invalid solutions to the solution space of the QUBO models. These solutions must be penalized by introducing appropriate constraints to the QUBO objective function that are weighted by a tunable penalty parameter to ensure that the global optimum is valid. However, selecting the strength of this parameter is non-trivial, given its influence on solution landscape structure. Here, we investigate the effects of choice of encoding and penalty strength on the structure of QUBO DQM solution landscapes and their optimization, focusing specifically on one-hot and domain-wall encodings. | Tristan Zaborniak, Ulrike Stege | 2023-04-30T20:19:46Z | http://arxiv.org/abs/2305.00568v3 | # Discrete quadratic model QUBO solution landscapes
###### Abstract
Many computational problems involve optimization over discrete variables with quadratic interactions. Known as discrete quadratic models (DQMs), these problems in general are NP-hard. Accordingly, there is increasing interest in encoding DQMs as quadratic unconstrained binary optimization (QUBO) models to allow their solution by quantum and quantum-inspired hardware with architectures and solution methods designed specifically for such problem types. However, converting DQMs to QUBO models often introduces invalid solutions to the solution space of the QUBO models. These solutions must be penalized by introducing appropriate constraints to the QUBO objective function that are weighted by a tunable penalty parameter to ensure that the global optimum is valid. However, selecting the strength of this parameter is non-trivial, given its influence on solution landscape structure. Here, we investigate the effects of choice of encoding and penalty strength on the structure of QUBO DQM solution landscapes and their optimization, focusing specifically on one-hot and domain-wall encodings.
Discrete quadratic model, QUBO, penalty weights, constraint handling, quantum computing, domain-wall, one-hot
## I Introduction
Optimization of discrete quadratic models (DQMs) covers a large number of combinatorial optimization problems known to be NP-hard [1, 2], including the quadratic assignment problem and travelling salesman problem [3, 4]. Recently, quantum computers and quantum-inspired computers (e.g., digital annealers) have been put to these problems, reformulated as quadratic unconstrained binary optimization (QUBO) models, as their specific architectures and solution methods may provide resource use benefits versus conventional methods [5, 6].
Mapping a DQM to a QUBO model involves encoding the discrete variables of the model into binary variables, while preserving quadratic interactions. Three such encodings are currently known: _one-hot_, _domain-wall_, and _binary_[7]. Whereas one-hot and domain-wall encodings of discrete variables are natively quadratic, this is not guaranteed with binary encoding, where auxiliary variables are generally needed to quadratize higher-order terms if the encoding is to be lossless [8]. For this reason, one-hot and domain-wall encodings are preferred.
An artifact of these two encodings concerns their introduction of _invalid_ solutions to the QUBO model solution space, which have no meaning with respect to the original DQM [9]. As such, it must be ensured that these invalid solutions do not occupy the optimal positions within the QUBO model solution landscape. This is accomplished by the introduction of constraints to the QUBO model objective function, each weighted by a tunable penalty parameter of sufficient strength [10, 11]. Merely satisfying this sufficiency condition might result in the problem being more or less difficult to solve, though, as varying the strength of this penalty parameter drastically changes the structure of the solution landscape [12].
In this work we systematically investigate the structure of QUBO DQM solution landscapes under one-hot and domain-wall encodings as functions of penalty parameter strength. Specifically, we first present the one-hot and domain-wall encodings of an arbitrary, unconstrained DQM. Then, we derive and discuss features of their solution landscapes with respect to penalty parameter strength, focusing on the conditions under which all solutions of a given class (valid or invalid) either occupy local minima, or do not occupy local minima. These conditions are important to establishing penalty strengths that reduce the number of local minima while guaranteeing that valid solutions of interest continue to occupy local minima, which may comprise a set of solutions instead of only that which is minimum energy [13].
Our findings are as follows. For one-hot QUBO DQMs, no invalid solution occupies a local minimum for sufficiently large penalty strengths. Similarly, all valid solutions occupy local minima for sufficiently large penalty strengths. Furthermore, no valid solution occupies a local minimum for sufficiently small penalty strengths. By contrast, we find for domain-wall QUBO DQMs that we cannot in general guarantee that all invalid solutions will not occupy local minima, regardless of our selection of penalty strength. Moreover, it is never the case with domain-wall QUBO DQMs that all valid solutions occupy local minima. Finally, we discuss the significance of understanding solution landscape structures to selection of an appropriate encoding for a given problem. We concentrate this discussion on the encodings investigated here, but emphasize a wider applicability.
## II DQMs as QUBOs
DQMs are polynomials over discrete variables (numeric or categorical), limited to terms less than degree two. An arbitrary DQM is expressed as follows:
\[H_{DQM}=\sum_{i}A_{(i)}(d_{i})+\sum_{i\geq j}B_{(i,j)}(d_{i},d_{j}) \tag{1}\]
where \(d_{i}\) are the discrete variables, and \(A_{(i)}\) and \(B_{(i,j)}\) are real-valued functions over these variables.
To express a DQM in binary variables, we follow the convention in Ref. [7], denoting each variable \(d_{i}\) by a set of _sub-variables_\(x_{i,\alpha}\), where \(i\) refers to the variable index (register) and \(\alpha\) to the index of the variable value (state). These \(x_{i,\alpha}\) are such that:
\[x_{i,\alpha}=\begin{cases}1,&\text{variable $i$ matches value indexed by $\alpha$}\\ 0,&\text{otherwise}\end{cases} \tag{2}\]
The DQM objective function may then be expressed as:
\[H_{DQM}=\sum_{i\geq j}\sum_{\alpha,\beta}C_{(i,j,\alpha,\beta)}x_{i,\alpha}x_{ j,\beta} \tag{3}\]
where the coefficients \(C_{(i,j,\alpha,\beta)}\) express both the linear and quadratic interaction energies between the DQM sub-variables (given that \(x_{i,\alpha}x_{i,\alpha}=x_{i,\alpha}\)). Throughout this work, for simplicity we let the number of variables be \(k\) and the number of values per variable be \(l\).
### _One-hot encoding_
Per DQM sub-variable \(x_{i,\alpha}\), a one-hot QUBO encoding assigns a corresponding binary variable \(b_{i,\alpha}\), identical to \(x_{i,\alpha}\) except in notation, to emphasize its binary character. The following penalty function, \(H_{OH}^{P}\), then ensures that all invalid solutions, namely those with at least one register \(i^{\prime}\) such that \(\sum_{\alpha}b_{i^{\prime},\alpha}\neq 1\), are assigned a positive penalty:
\[H_{OH}^{P}=\sum_{i}\Big{(}\sum_{\alpha}b_{i,\alpha}-1\Big{)}^{2} \tag{4}\]
Note that the penalty contributed by register \(i\) equals \((N_{i}-1)^{2}\), where \(N_{i}\) is the number of bits with value \(1\) in that register. With \(k\) registers of length \(l\) each, the magnitude of this penalty therefore ranges from \(0\) to \(k(l-1)^{2}\).
This penalty is then added to the objective function while being multiplied by a penalty parameter \(\gamma_{OH}\) such that the overall one-hot QUBO DQM, \(H_{OH}\), is:
\[H_{OH}= \sum_{i\geq j}\sum_{\alpha,\beta}C_{(i,j,\alpha,\beta)}b_{i, \alpha}b_{j,\beta}\] \[+\gamma_{OH}\sum_{i}\Big{(}\sum_{\alpha}b_{i,\alpha}-1\Big{)}^{2} \tag{5}\]
It is worth noting explicitly that this encoding scheme requires \(kl\) binary variables and their associated linear terms, and \(kl(kl-1)/2\) interaction terms to represent a DQM of \(k\) variables with \(l\) possible values each. Moreover, we point out that of the \(2^{kl}\) possible binary solution vectors, only \(l^{k}\) are valid in the absence of further constraints, the remaining \(2^{kl}-l^{k}\) being invalid solutions.
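As a concrete illustration of Eq. 5, the sketch below assembles the upper-triangular QUBO matrix for a one-hot encoded DQM. It is a minimal sketch under stated assumptions: the DQM coefficients are supplied as a dictionary `C[(i, j, a, b)]` with \(i\geq j\), bit \((i,\alpha)\) is mapped to index \(il+\alpha\), and the energy of a bit vector `x` is then `x @ Q @ x` up to the dropped constant \(\gamma k\).

```python
import itertools
import numpy as np

def one_hot_qubo(C, k, l, gamma):
    """Build the one-hot QUBO matrix Q of Eq. 5 for a DQM with coefficients C[(i, j, a, b)]."""
    n = k * l
    Q = np.zeros((n, n))
    idx = lambda i, a: i * l + a
    # cost term: C_(i,j,a,b) * b_{i,a} * b_{j,b}
    for (i, j, a, b), coeff in C.items():
        p, q = sorted((idx(i, a), idx(j, b)))
        Q[p, q] += coeff
    # penalty term: gamma * sum_i (sum_a b_{i,a} - 1)^2, expanded with b^2 = b
    for i in range(k):
        for a in range(l):
            Q[idx(i, a), idx(i, a)] += -gamma          # +gamma from b^2 and -2*gamma from -2b
        for a, b in itertools.combinations(range(l), 2):
            Q[idx(i, a), idx(i, b)] += 2 * gamma       # cross terms within a register
    # the constant +gamma*k from the expansion is dropped (it does not affect the argmin)
    return Q
```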
### _Domain-wall encoding_
Domain-wall encoding is a relatively recent innovation as compared to one-hot encoding for DQMs, and is based on the physics of domain-walls in one-dimensional Ising spin chains [14]. While typically formulated in terms of spin variables \(\sigma_{i,\alpha}\in\{-1,+1\}\) before being translated into binary variables, here we directly use binary variables. Domain-wall encoding is a unary encoding, with the value of a register represented by the position of a domain-wall within the register. Binary variables \(b_{i,\alpha}\) are defined for \(\alpha\in{0,...,l-2}\), and the boundary conditions \(b_{i,-1}=1\) and \(b_{i,l-1}=0\) (which do _not_ correspond to physical bits in computations) are enforced per register.
Specifically, we replace each \(x_{i,\alpha}\) in Equation 3 with \(b_{i,\alpha-1}-b_{i,\alpha}\). This transformation is such that valid solutions contain only one such term which is non-zero (one domain-wall) per register, while invalid solutions contain more than one domain-wall in at least one register. The penalty function, \(H_{DW}^{P}\), ensuring that all invalid solutions are assigned a positive penalty is given by Ref. [15] as:
\[H_{DW}^{P}=\sum_{i}\sum_{\alpha}(b_{i,\alpha}-b_{i,\alpha}b_{i,\alpha-1}) \tag{6}\]
Note that the penalty contributed by each register equals the number of domain walls present in that register minus one. With \(k\) registers each of length \(l-1\) (as opposed to length \(l\) in the one-hot encoding scheme), the magnitude of this penalty ranges from \(0\) to \(k\lfloor(l-1)/2\rfloor\).
As with the one-hot penalty function, this domain-wall penalty function is then added to the objective function and multiplied by a penalty parameter \(\gamma_{DW}\), giving the overall domain-wall QUBO DQM, \(H_{DW}\), as:
\[H_{DW}= \sum_{i\geq j}\sum_{\alpha,\beta}C_{(i,j,\alpha,\beta)}(b_{i, \alpha-1}-b_{i,\alpha})(b_{j,\beta-1}-b_{j,\beta})\] \[+\gamma_{DW}\sum_{i}\sum_{\alpha}(b_{i,\alpha}-b_{i,\alpha}b_{i, \alpha-1}) \tag{7}\]
With this encoding, we require \(k(l-1)\) binary variables and their associated linear terms, and \(k(l-1)(k(l-1)-1)/2=kl(kl-1)/2-k(k+1)/2\) interaction terms, amounting to a savings of \(k\) binary variables and \(k(k+1)/2\) interaction terms versus a one-hot encoding of the same problem [15]. Of the \(2^{k(l-1)}\) possible solution vectors, \(l^{k}\) are valid solutions, and the remaining \(2^{k(l-1)}-l^{k}\) are invalid solutions in the absence of further constraints.
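To make the domain-wall bookkeeping explicit, the following sketch decodes a domain-wall bit string back to discrete values and evaluates the penalty of Eq. 6. It is an illustrative sketch assuming register \(i\) occupies the contiguous slice of \(l-1\) bits starting at position \(i(l-1)\), with the virtual boundary bits \(b_{i,-1}=1\) and \(b_{i,l-1}=0\) handled implicitly.

```python
def domain_wall_decode(bits, k, l):
    """Decode a domain-wall bit string into k discrete values (None flags an invalid register)."""
    values = []
    for i in range(k):
        reg = [1] + list(bits[i * (l - 1):(i + 1) * (l - 1)]) + [0]   # add boundary bits
        walls = [a for a in range(l) if reg[a] == 1 and reg[a + 1] == 0]
        values.append(walls[0] if len(walls) == 1 else None)
    return values

def domain_wall_penalty(bits, k, l):
    """Penalty of Eq. 6: zero iff every register encodes exactly one domain wall."""
    total = 0
    for i in range(k):
        reg = bits[i * (l - 1):(i + 1) * (l - 1)]
        for a in range(l - 1):
            left = 1 if a == 0 else reg[a - 1]          # boundary condition b_{i,-1} = 1
            total += reg[a] * (1 - left)                # b_{i,a} - b_{i,a} b_{i,a-1}
    return total
```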
## III The effect of penalty strength on solution landscape features
As described above, we can represent both one-hot and domain-wall encodings of DQMs as a sum of a _cost_
function, \(c(x)\) (Equation 3), and _penalty_ function, \(p(x)\) (Equations 4 or 6), such that their QUBO model functions, \(f(x)\), can then be written as:
\[f(x)=c(x)+\gamma p(x) \tag{8}\]
where \(x\in\{0,1\}^{n}\) is a binary solution vector of a length appropriate to the encoding under consideration (i.e., in our case \(n\in\{kl,k(l-1)\}\)), and \(\gamma\in\{\gamma_{OH},\gamma_{DW}\}\). Denoting the optimum valid solution \(x^{*}\), we must select a \(\gamma\) to satisfy that \(f(x^{*})<f(x^{\prime})\) for all \(x^{\prime}\in S^{\prime}\), where \(S^{\prime}\) is the set of invalid solutions, so that \(x^{*}\) occupies the global minimum of our objective function. It follows that:
\[\gamma>\gamma^{*}=\max_{x^{\prime}\in S^{\prime}}\left(\frac{c(x^{*})-c(x^{ \prime})}{p(x^{\prime})}\right) \tag{9}\]
In general, for DQMs encoded as QUBOs, this quantity is not easily computable, as it requires that we find \(x^{*}\), and we do not expect to find \(x^{*}\) without evaluating a number of candidates exponential in the size of the solution vectors [11]. Therefore, several heuristics for selecting a \(\gamma\) that satisfies the inequality have been proposed, including setting it to the upper bound of the objective function, to the maximum QUBO coefficient, or to the maximum change possible to the objective function in flipping a single bit [10, 11, 12]. However, it remains to be understood how the strength of the penalty parameter specifically influences the solution landscape and its optimization, provided satisfaction of Equation 9.
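For instances small enough to enumerate, the bound of Eq. 9 can be evaluated exactly rather than estimated heuristically. The sketch below is a brute-force illustration that takes the cost and penalty terms as separate callables (with validity identified by a zero penalty) and returns \(\gamma^{*}\); it is not practical beyond toy sizes.

```python
import itertools
import numpy as np

def exact_gamma_star(cost, penalty, n):
    """Brute-force evaluation of Eq. 9: enumerate all length-n bit strings, split them into
    valid (penalty == 0) and invalid (penalty > 0) solutions, and return the smallest penalty
    weight above which the best valid solution is the global minimum (2^n evaluations)."""
    solutions = [np.array(b) for b in itertools.product((0, 1), repeat=n)]
    valid = [x for x in solutions if penalty(x) == 0]
    invalid = [x for x in solutions if penalty(x) > 0]
    c_star = min(cost(x) for x in valid)                       # c(x*) of the optimal valid solution
    return max((c_star - cost(x)) / penalty(x) for x in invalid)
```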
Qualitatively, given that the energies of valid solutions do not depend on \(\gamma\) whereas those of invalid solutions do, as \(\gamma\) increases from \(\gamma^{*}\), peak height and valley depth in the solution landscape correspondingly increase. Steep QUBO solution landscapes are known to be difficult to traverse for classical algorithms such as simulated annealing [16], but quantum algorithms (quantum annealing, QAOA) have been suggested to better navigate such spaces given their ability to tunnel through high energy barriers [17, 18]. However, since quantum hardware implementations currently remain noisy, in combination with limited precision on qubit control, steep solution landscapes increase the failure rate of finding optimal solutions, by forcing valid solutions to occupy a narrower energy band [19, 20, 21]. Therefore, we conjecture shallower QUBO solution landscapes at a first estimation to lend themselves to better solution by both classical and quantum approaches.
As a result, we would then set \(\gamma=\gamma^{*}+\epsilon\), where \(0<\epsilon\ll|\gamma^{*}|\), so as to have our landscape as shallow as possible while still satisfying the condition that \(x^{*}\) is the minimum energy solution out of all \(x\in S\cup S^{\prime}\), \(S\) being the set of valid solutions and \(S^{\prime}\) the set of invalid solutions. (We will maintain this notation throughout.) However, this is not sufficient if we are interested in other low-energy valid solutions [22, 23], which may not occupy local minima under such a penalty. Moreover, it is possible that we may be selecting for a large number of invalid solutions occupying low-energy local minima under such a penalty, which could negatively influence the performance of our search. To be clear, we say that \(x_{a}\) is a local minimum if: \(f(x_{b})>f(x_{a})\) for all \(x_{b}\) such that \(|x_{b}-x_{a}|=1\).
While for \(\gamma\) greater than some \(\gamma^{\dagger}\) all \(x^{\prime}\in S^{\prime}\) are with higher energy than all \(x\in S\), for \(\gamma<\gamma^{\dagger}\) at least some \(x^{\prime}\) exist with lower energies than some \(x\). This simultaneously increases the possibility that a valid solution does not occupy a local minimum, and alters the possibility of invalid solutions occupying local minima due to solution landscape structure changes. These changes also shuffle the rank order of invalid solution energies as a result of the differences in slope of different invalid solutions with respect to \(\gamma\), as may be seen in Figure 1.
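For small instances such as the \(k=2\), \(l=3\) example of Figure 1, these structural changes can be inspected directly by brute force. The sketch below enumerates all bit strings and returns the strict 1-local minima of an arbitrary objective, e.g. `f = lambda x: x @ Q @ x` with `Q` built as in the one-hot sketch above and swept over \(\gamma\); it is an illustrative tool only, exponential in the number of bits.

```python
import itertools
import numpy as np

def local_minima(f, n):
    """Return all length-n bit strings that are strict 1-local minima of f, i.e. strictly
    lower in energy than every Hamming-distance-1 neighbour."""
    minima = []
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        e = f(x)
        is_min = True
        for j in range(n):
            y = x.copy()
            y[j] ^= 1                          # flip one bit to reach a neighbour
            if f(y) <= e:
                is_min = False
                break
        if is_min:
            minima.append(x)
    return minima
```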
We now examine these solution landscape structure changes more rigorously. Sections III-A and III-B begin by highlighting our findings for one-hot and domain-wall QUBO DQM encoding schemes, respectively, before presenting these findings in detail.
### _One-hot encoding_
First, we show the existence of a problem-dependent \(\gamma^{\prime}_{OH}\) such that for \(\gamma_{OH}>\gamma^{\prime}_{OH}\) no invalid one-hot QUBO DQM solutions occupy local minima. Then, we show that \(\gamma^{\prime}_{OH}\) does not necessarily equal \(\gamma^{*}_{OH}\) by counterexample. Specifically, we demonstrate with this counterexample that \(\gamma^{\prime}_{OH}>\gamma^{*}_{OH}\), indicating that whenever \(\gamma_{OH}\) is large enough to isolate \(x^{*}\) as the global minimum of \(f_{OH}(x)\), there possibly exist invalid local minima. We then proceed to show the existence of a \(\gamma^{\prime\prime}_{OH}\) such that for all \(\gamma_{OH}<\gamma^{\prime\prime}_{OH}\), no valid solution occupies a local minimum. By similar argument, we also show that there exists a \(\gamma^{\prime\prime\prime}_{OH}\) such that for all \(\gamma_{OH}>\gamma^{\prime\prime\prime}_{OH}\), all valid solutions occupy local minima. Finally, we show by counterexample that \(\gamma^{\prime\prime}_{OH}\) does not necessarily equal \(\gamma^{*}_{OH}\). Specifically, we consider a case where \(\gamma^{\prime\prime}_{OH}<\gamma^{*}_{OH}\), which indicates that when \(\gamma_{OH}\) is large enough to isolate \(x^{*}\) as the global minimum of \(f_{OH}(x)\), there exist valid local minima other than that of \(x^{*}\). We close this section with some comments on the implications of our findings for solution landscape navigability, more fully addressing these in Section IV.
Now, let us first establish that the change in the one-hot penalty between adjacent solutions \(x_{a}\) and \(x_{b}\) of Hamming distance one (\(|x_{b}-x_{a}|=1\)) is never zero. This is important to our subsequent proofs, wherein this quantity appears as a denominator.
**Lemma 1**.: _Let \(f_{OH}(x)=c(x)+\gamma_{OH}p(x)\) be a one-hot QUBO DQM function and \(x_{a}\) and \(x_{b}\) be neighboring solutions such that \(|x_{b}-x_{a}|=1\). Then \(p(x_{b})-p(x_{a})\neq 0\)._
Proof.: We first choose some register \(i^{\prime}\) in which to flip a bit either \(0\to 1\) or \(1\to 0\). Then writing \(\sum_{a}b_{i,\alpha}=N_{i}\), we can express the penalty of \(x_{a}\) in Equation 4 as:
\[p(x_{a})=\sum_{i\neq i^{\prime}}(N_{i}-1)^{2}+(N_{i^{\prime}}-1)^{2} \tag{10}\]
Similarly, we can express the penalty of \(x_{b}\) as:
\[p(x_{b})=\sum_{i\neq i^{\prime}}(N_{i}-1)^{2}+((N_{i^{\prime}}\pm 1)-1)^{2} \tag{11}\]
such that their difference, \(p(x_{a})-p(x_{b})\) is:
\[p(x_{a})-p(x_{b}) =(N_{i^{\prime}}-1)^{2}-((N_{i^{\prime}}\pm 1)-1)^{2} \tag{12}\] \[=\begin{cases}1-2N_{i^{\prime}},&0\to 1\text{ in register }i^{\prime}\\ 2N_{i^{\prime}}-3,&1\to 0\text{ in register }i^{\prime}\end{cases}\]
Given that \(N_{i^{\prime}}\) is an integer, we can conclude that \(p(x_{a})-p(x_{b})\neq 0\) for all \(|x_{b}-x_{a}|=1\).
We can now show that there exists a \(\gamma^{\prime}_{OH}\) such that for \(\gamma_{OH}>\gamma^{\prime}_{OH}\), no invalid solution occupies a local minimum. In the proof that follows, we first show that for any particular invalid solution, there exists at least one \(0\to 1\) or \(1\to 0\) bit-flip that reduces the overall penalty of the solution. We then demonstrate that for all such bit-flips, we can guarantee that at least one per invalid solution involves a reduction in total energy (i.e., \(f_{OH}(x_{b})<f_{OH}(x_{a})\), where we move from \(x_{a}\to x_{b}\)) by selecting an appropriately large value of \(\gamma_{OH}\). Finally, we guarantee that this condition is simultaneously met for all invalid solutions by selecting the maximum of the set of \(\gamma_{OH}\) values found in the previous step.
**Theorem 1**.: _Let \(f_{OH}(x)=c(x)+\gamma_{OH}p(x)\) be a one-hot QUBO DQM function. Denote the set of valid solutions \(S\) and the set of invalid solutions \(S^{\prime}\). Then there exists a \(\gamma^{\prime}_{OH}\) such that for \(\gamma_{OH}>\gamma^{\prime}_{OH}\) there is guaranteed to exist an \(x_{b}\in S\cup S^{\prime}\) such that \(f_{OH}(x_{b})<f_{OH}(x_{a})\) for all \(x_{a}\in S^{\prime}\) where \(|x_{b}-x_{a}|=1\)._
Proof.: For all \(x_{a}\in S^{\prime}\), we claim that there must exist at least one 1-local neighbor \(x_{b}\in S\cup S^{\prime}\) such that \(f_{OH}(x_{b})<f_{OH}(x_{a})\):
\[c(x_{b})+\gamma_{OH}p(x_{b})<c(x_{a})+\gamma_{OH}p(x_{a}) \tag{13}\]
Isolating for \(\gamma_{OH}\), this requires at least one of the following conditions be true for some \(x_{b}\) neighboring \(x_{a}\):
\[\gamma_{OH} >\frac{c(x_{b})-c(x_{a})}{p(x_{a})-p(x_{b})}, p(x_{a})-p(x_{b})>0 \tag{14}\] \[\gamma_{OH} <\frac{c(x_{b})-c(x_{a})}{p(x_{a})-p(x_{b})}, p(x_{a})-p(x_{b})<0 \tag{15}\]
Note that by Lemma 1, \(p(x_{a})-p(x_{b})\neq 0\), which spares us from having to consider the case where \(p(x_{a})-p(x_{b})=0\).
We now show that for all \(x_{a}\in S^{\prime}\) and \(\gamma_{OH}>\gamma^{\prime}_{OH}\), at least one neighboring \(x_{b}\) always satisfies the first condition (Equation 14). From this it follows that there exists an upper, finite bound to \(\gamma_{OH}\), above which we are guaranteed to satisfy Equation 13 for all \(x_{a}\in S^{\prime}\).
When flipping a bit \(0\to 1\) in the \(i^{\prime}\) register of \(x_{a}\), the difference in penalty between \(x_{a}\) and \(x_{b}\) is \(1-2N_{i^{\prime}}\) (Equation 12). This expression we call \(-\Delta p^{+}\):
\[-\Delta p^{+}=\begin{cases}1-2N_{i^{\prime}}<0,&N_{i^{\prime}}\neq 0\\ 1-2N_{i^{\prime}}>0,&N_{i^{\prime}}=0\end{cases} \tag{16}\]
Figure 1: Valid and invalid solution energies as a function of \(\gamma\) for a \(k=2\), \(l=3\) DQM, expressed as (a) a one-hot QUBO DQM, and (b) a domain-wall QUBO DQM. Notice that the solution energies of valid solutions are constant, while invalid solutions linearly increase in \(\gamma\). Different invalid solutions exhibit different slopes and intercepts such that their rank order changes with \(\gamma\), and it may be seen that steeper slopes are present in the one-hot QUBO DQM versus the domain-wall QUBO DQM. \(\gamma^{*}\) is the penalty parameter which above which the optimal valid solution, \(x^{*}\), occupies the global minimum. \(\gamma^{\prime\prime}\) is the penalty parameter below which no valid solution occupies a local minimum.
Similarly, when flipping a bit \(1\to 0\) in the \(i^{\prime}\) register of \(x_{a}\), we have the difference in penalty \(2N_{i^{\prime}}-3\), which we call \(-\Delta p^{-}\). This difference is such that:
\[-\Delta p^{-}=\begin{cases}2N_{i^{\prime}}-3<0,&N_{i^{\prime}}=1\\ 2N_{i^{\prime}}-3>0,&N_{i^{\prime}}\neq 1\end{cases} \tag{17}\]
We select an invalid register (a register \(i^{\prime}\) whose vector \(x_{a}(i^{\prime})\) is such that \(|x_{a}(i^{\prime})|\neq 1\)) in particular to flip a bit within, either \(0\to 1\) or \(1\to 0\). From Equations 16 and 17, we can see that one of \(-\Delta p^{+}\) or \(-\Delta p^{-}\) is necessarily greater than zero if we select an appropriate bit-flip. That is, if \(|x_{a}(i^{\prime})|>1\) and we flip a bit from \(1\to 0\), \(-\Delta p^{-}>0\), and if \(|x_{a}(i^{\prime})|<1\) and we flip a bit from \(0\to 1\), \(-\Delta p^{+}>0\).
At this stage, we have shown that for any \(x_{a}\in S^{\prime}\) in particular, there exists a neighbor \(x_{b}\) and a \(\gamma^{\prime}_{OH}\) such that \(f(x_{b})<f(x_{a})\) for \(\gamma_{OH}>\gamma^{\prime}_{OH}\). We now identify the specific bound on \(\gamma_{OH}\) that ensures that for all \(x_{a}\in S^{\prime}\) there exists a neighbor \(x_{b}\), where we require that \(-\Delta p^{\pm}>0\), such that \(f_{OH}(x_{b})<f_{OH}(x_{a})\).
For a specific invalid solution \(x_{a}\in S^{\prime}\), the smallest \(\gamma_{OH}\) possible that guarantees to admit at least one \(x_{b}\in S\cup S^{\prime}\) such that \(f_{OH}(x_{b})<f_{OH}(x_{a})\) is as follows:
\[\gamma_{OH}>\min_{x_{b}\in S\cup S^{\prime}}\left(\frac{c(x_{b})-c(x_{a})}{p( x_{a})-p(x_{b})}\right) \tag{18}\]
where \(|x_{b}-x_{a}|=1\) and \(p(x_{a})-p(x_{b})>0\). Now, so that this is satisfied by one value of \(\gamma_{OH}\) for all \(x_{a}\in S^{\prime}\), \(\gamma_{OH}\) must satisfy:
\[\gamma_{OH}>\gamma^{\prime}_{OH}=\max_{x_{a}\in S^{\prime}}\left\{\min_{x_{b} \in S\cup S^{\prime}}\left(\frac{c(x_{b})-c(x_{a})}{p(x_{a})-p(x_{b})}\right)\right\} \tag{19}\]
where the maximum is over the set of minimal \(\gamma_{OH}\) values required to satisfy the existence of \(x_{a}\to x_{b}\) transitions satisfying \(f_{OH}(x_{b})<f_{OH}(x_{a})\) for all particular \(x_{a}\in S^{\prime}\), again where \(|x_{b}-x_{a}|=1\) and \(p(x_{a})-p(x_{b})>0\). Note that each value of this set is finite and well-defined; \(|c(x_{b})-c(x_{a})|<\infty\) and \(|p(x_{a})-p(x_{b})|\neq 0\). It follows that \(\gamma^{\prime}_{OH}\) is finite and well-defined.
We should note again that \(\gamma^{\prime}_{OH}\) represents an _upper_ bound to that threshold value of \(\gamma_{OH}\) above which no invalid solution occupies a local minimum. That is, for \(\gamma_{OH}<\gamma^{\prime}_{OH}\) it may be possible for all invalid solutions to not occupy local minima, supposing that Equation 15 holds true where Equation 14 does not. However, we cannot guarantee that this will always be true, as the example proposed in Equation 20 demonstrates.
We now consider this example in showing that \(\gamma^{*}_{OH}\) does not equal \(\gamma^{\prime}_{OH}\) in general. Ideally, we might hope for this to be true, as it would indicate that as soon as the minimum-energy valid solution exists as the global minimum to the objective function in tuning \(\gamma_{OH}\), no invalid solution occupies a local minimum. It would then not be possible to sample invalid solutions given a quantum or classical algorithm that ends with a greedy descent step (such as the final stage of simulated annealing).
**Theorem 2**.: _Let \(\gamma^{*}_{OH}\) be the one-hot QUBO DQM penalty parameter such that for all \(\gamma_{OH}>\gamma^{*}_{OH}\), \(\min_{x\in S\cup S^{\prime}}f_{OH}(x)=f_{OH}(x^{*})\). Let \(\gamma^{\prime}_{OH}\) be the one-hot QUBO DQM penalty parameter such that for all \(\gamma_{OH}>\gamma^{\prime}_{OH}\), no invalid solution \(x_{a}\in S^{\prime}\) exists as a local minimum. It is not true in general that \(\gamma^{*}_{OH}=\gamma^{\prime}_{OH}\)._
Proof.: Consider the following one-hot QUBO DQM cost function, where \(k=2\) and \(l=2\):
\[c(x)=\begin{pmatrix}3&0&2&4\\ 0&3&1&2\\ 0&0&4&0\\ 0&0&0&7\end{pmatrix} \tag{20}\]
Straightforward calculation of Equations 9 and 19 gives that \(\gamma^{*}_{OH}=5\) and \(\gamma^{\prime}_{OH}=6\).
For this example (Equation 20), we find additionally that there exists an invalid solution that occupies a local minimum when \(\gamma^{*}_{OH}<\gamma_{OH}<\gamma^{\prime}_{OH}\), namely (1 0 0 0). This reinforces our claim that only when \(\gamma_{OH}>\gamma^{\prime}_{OH}\) can we guarantee that no invalid solutions occupy local minima. We note too that this counterexample to the conjecture that \(\gamma^{*}_{OH}=\gamma^{\prime}_{OH}\) was generated by randomly sampling \(k=2\), \(l=2\) one-hot QUBO DQM instances, restricting their coefficients to the integers between 1 and 10. That a counterexample was found easily under these arbitrary restrictions suggests that instances where \(\gamma^{*}_{OH}\neq\gamma^{\prime}_{OH}\) are common in general.
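This counterexample is small enough to verify by brute force. The following Python sketch assumes the usual QUBO reading \(c(x)=x^{T}Cx\) of the upper-triangular matrix in Equation 20, with bits (0,1) forming the first register and bits (2,3) the second; it reproduces \(\gamma^{*}_{OH}=5\) and \(\gamma^{\prime}_{OH}=6\), and confirms that (1 0 0 0) is a local minimum at an intermediate value of \(\gamma_{OH}\):

```
import itertools

# Upper-triangular cost matrix of Equation 20; bits (0,1) and (2,3) form the two registers.
C = [[3, 0, 2, 4],
     [0, 3, 1, 2],
     [0, 0, 4, 0],
     [0, 0, 0, 7]]
registers = [(0, 1), (2, 3)]

def cost(x):
    return sum(C[a][b] * x[a] * x[b] for a in range(4) for b in range(a, 4))

def penalty(x):
    return sum((sum(x[i] for i in reg) - 1) ** 2 for reg in registers)

def neighbours(x):
    return [tuple(b ^ (i == j) for j, b in enumerate(x)) for i in range(4)]

solutions = list(itertools.product([0, 1], repeat=4))
valid = [x for x in solutions if penalty(x) == 0]
invalid = [x for x in solutions if penalty(x) > 0]
x_star = min(valid, key=cost)

# Smallest gamma above which x* is the global minimum of f_OH
gamma_star = max((cost(x_star) - cost(x)) / penalty(x) for x in invalid)

# gamma'_OH of Equation 19: max over invalid x_a of the minimum ratio over
# neighbours x_b with p(x_a) - p(x_b) > 0
gamma_prime = max(
    min((cost(xb) - cost(xa)) / (penalty(xa) - penalty(xb))
        for xb in neighbours(xa) if penalty(xa) > penalty(xb))
    for xa in invalid)

print(x_star, gamma_star, gamma_prime)   # -> (0, 1, 1, 0) 5.0 6.0

gamma = 5.5                              # gamma*_OH < gamma < gamma'_OH
f = lambda x: cost(x) + gamma * penalty(x)
assert all(f(xb) > f((1, 0, 0, 0)) for xb in neighbours((1, 0, 0, 0)))
```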
We now turn our attention to the valid solutions, focusing on establishing bounds on \(\gamma_{OH}\) above which all valid solutions are local minima, and below which no valid solution occupies a local minimum. We approach our proofs in a similar manner to that for Theorem 1, in this case considering our starting solutions \(x_{a}\) to be valid.
**Theorem 3**.: _Let \(f_{OH}(x)=c(x)+\gamma_{OH}p(x)\) be a one-hot QUBO DQM function. Denote the set of valid solutions \(S\) and the set of invalid solutions \(S^{\prime}\). Then there exists a \(\gamma^{\prime\prime}_{OH}\) such that for \(\gamma_{OH}<\gamma^{\prime\prime}_{OH}\) no \(x_{a}\in S\) occupy local minima. Further, there exists a \(\gamma^{\prime\prime\prime}_{OH}\) such that for \(\gamma_{OH}>\gamma^{\prime\prime\prime}_{OH}\) all \(x_{a}\in S\) occupy local minima._
Proof.: First, we recall that all bit-flips \(0\to 1\) and \(1\to 0\) from a given \(x_{a}\in S\) move to invalid solutions \(x_{b}\in S^{\prime}\) where \(|x_{b}-x_{a}|=1\). Such moves come with an increase in penalty (i.e., \(p(x_{a})-p(x_{b})=-1\)), and as such, for \(f_{OH}(x_{b})<f_{OH}(x_{a})\), from Equation 15 we require:
\[\gamma_{OH}<c(x_{a})-c(x_{b}) \tag{21}\]
for some neighboring \(x_{b}\). To ensure that Equation 21 is satisfied for at least one \(x_{b}\) for a given \(x_{a}\in S\), we take the maximum of this expression over all neighboring \(x_{b}\):
\[\gamma_{OH}<\max_{x_{b}\in S^{\prime}}\left(c(x_{a})-c(x_{b})\right) \tag{22}\]
We may then say that all valid solutions are not local minima under one penalty parameter \(\gamma_{OH}\) if:
\[\gamma_{OH}<\gamma_{OH}^{\prime\prime}=\min_{x_{a}\in S}\left\{\max_{x_{b}\in S ^{\prime}}\left(c(x_{a})-c(x_{b})\right)\right\} \tag{23}\]
where \(|x_{b}-x_{a}|=1\). Similarly, all valid solutions are local minima when:
\[\gamma_{OH}>\gamma_{OH}^{\prime\prime\prime}=\max_{x_{a}\in S}\left\{\max_{x_{ b}\in S^{\prime}}\left(c(x_{a})-c(x_{b})\right)\right\} \tag{24}\]
where, necessarily, \(\gamma_{OH}^{\prime\prime\prime}\geq\gamma_{OH}^{\prime\prime}\) and \(|x_{b}-x_{a}|=1\).
Whereas Theorem 1 for invalid solutions established an upper bound to \(\gamma_{OH}\) above which it is guaranteed that no invalid solutions occupy local minima, Theorem 3 for valid solutions provides exact thresholds on \(\gamma_{OH}\) above which all valid solutions are local minima and below which all valid solutions are not local minima. This exactness is a result of \(-\Delta p^{\pm}\) always equalling \(-1\) when starting from a valid solution and flipping a bit.
We now show by counterexample that \(\gamma_{OH}^{\prime\prime}\) does not equal \(\gamma_{OH}^{*}\) in general, and that it is possible that \(\gamma_{OH}^{\prime\prime}<\gamma_{OH}^{*}\). This indicates that as soon as the minimum-energy valid solution exists as the global minimum to the objective function through tuning of \(\gamma_{OH}\), it is possible that other valid solutions already occupy local minima.
**Theorem 4**.: _Let \(\gamma_{OH}^{*}\) be the one-hot QUBO DQM penalty parameter such that for all \(\gamma_{OH}>\gamma_{OH}^{*}\), \(\min_{x\in S\cup S^{\prime}}f_{OH}(x)=f_{OH}(x^{*})\). Let \(\gamma_{OH}^{\prime\prime}\) be the one-hot QUBO DQM penalty parameter such that for all \(\gamma_{OH}<\gamma_{OH}^{\prime\prime}\) no valid solution \(x_{a}\in S\) exists as a local minimum. It is not true in general that \(\gamma_{OH}^{*}=\gamma_{OH}^{\prime\prime}\). More specifically, there exist cases where \(\gamma_{OH}^{\prime\prime}<\gamma_{OH}^{*}\)._
Proof.: Consider the following one-hot QUBO DQM cost matrix, where \(k=2\) and \(l=2\):
\[c(x)=\begin{pmatrix}7&0&5&4\\ 0&7&5&9\\ 0&0&2&0\\ 0&0&0&6\end{pmatrix} \tag{25}\]
Calculation of Equations 9 and 23 gives that \(\gamma_{OH}^{*}=12\) and \(\gamma_{OH}^{\prime\prime}=11\). The value of \(\gamma_{OH}^{\prime\prime}\) stems from a valid solution \(x\neq x^{*}\), indicating that this solution is a local minimum as soon as \(x^{*}\) becomes the lowest-energy minimum within the solution energy landscape.
We note that the above counterexample (Equation 25) to the conjecture that \(\gamma_{OH}^{*}=\gamma_{OH}^{\prime\prime}\) was generated by randomly sampling \(k=2\), \(l=2\) one-hot QUBO DQM instances, restricting their coefficients to the integers between \(1\) and \(10\). That such a counterexample was found easily under these arbitrary restrictions suggests that instances where \(\gamma_{OH}^{*}\neq\gamma_{OH}^{\prime\prime}\) are common in general.
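As before, brute force over the sixteen bit strings confirms the quoted thresholds; the sketch below makes the same assumed \(c(x)=x^{T}Cx\) reading of the matrix in Equation 25 and evaluates Equation 23 directly:

```
import itertools

C = [[7, 0, 5, 4],
     [0, 7, 5, 9],
     [0, 0, 2, 0],
     [0, 0, 0, 6]]
registers = [(0, 1), (2, 3)]

cost = lambda x: sum(C[a][b] * x[a] * x[b] for a in range(4) for b in range(a, 4))
penalty = lambda x: sum((sum(x[i] for i in reg) - 1) ** 2 for reg in registers)
neighbours = lambda x: [tuple(b ^ (i == j) for j, b in enumerate(x)) for i in range(4)]

solutions = list(itertools.product([0, 1], repeat=4))
valid = [x for x in solutions if penalty(x) == 0]
invalid = [x for x in solutions if penalty(x) > 0]
best_valid = min(cost(x) for x in valid)

# Smallest gamma above which the best valid cost is the global minimum
gamma_star = max((best_valid - cost(x)) / penalty(x) for x in invalid)

# gamma''_OH of Equation 23: min over valid x_a of the largest cost drop to an invalid neighbour
gamma_pp = min(max(cost(xa) - cost(xb) for xb in neighbours(xa) if penalty(xb) > 0)
               for xa in valid)

print(gamma_star, gamma_pp)   # -> 12.0 11
```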
Commenting briefly now on the applicability of the results achieved in this section, we remark that ideally-navigable QUBO DQM solution landscapes maximize the number of invalid solutions which do not occupy local minima, while simultaneously maximizing the number of non-interesting valid solutions which do not occupy local minima (maintaining \(\gamma_{OH}>\gamma_{OH}^{*}\)). We state these conditions as ideal given the well-understood point that a reduction in the number of minima in the solution landscape reduces the chances of search algorithms getting stuck in local minima. In certain cases (easily verified for \(k=2\), \(l=2\)), it is indeed true that we can select \(\gamma_{OH}\) such that the only local minimum of the full solution landscape is occupied by \(x^{*}\), which means that from any solution we can greedily descend to this optimum, which is our desired best solution. However, a procedure for identifying in advance which one-hot QUBO DQM cases lend themselves to this solution landscape structure remains unknown.
Generally, in reducing \(\gamma_{OH}\) from \(\gamma_{OH}^{\prime\prime\prime}\), we see a decreasing number of valid solutions occupying local minima (with preferential preservation of low energy valid minima), while increasing \(\gamma_{OH}\) from \(\gamma_{OH}^{*}\) decreases the number of invalid solutions occupying local minima. For this reason, given that \(\gamma_{OH}\) in the vicinity of \(\gamma_{OH}^{*}\) minimizes the number of valid local minima and that low \(\gamma_{OH}\) minimizes the jaggedness of the solution landscape, we suggest that such \(\gamma_{OH}\) very likely encode solution landscapes whose search times are minimized. However, to guarantee this requires further investigation.
### _Domain-wall encoding_
We start out with several qualitative observations of domain-wall QUBO DQM solution landscapes which are in contrast to their one-hot counterparts. First, whereas the 1-local neighbors of valid one-hot solutions are all invalid solutions, there exist between \(k\) and \(2k\) 1-local neighbors of a valid domain-wall solution that are also valid. In this sense there is a much different connectivity between solutions within the domain-wall solution landscape than the one-hot solution landscape. Secondly, given that the maximum penalty a domain-wall solution may incur is \(k\lfloor(l-1)/2\rfloor\) versus \(k(l-1)^{2}\) for the worst-case one-hot solution, the domain-wall energy landscape exists in a vertically-compressed form with respect to changing \(\gamma\) versus the corresponding one-hot landscape (see Figure 1). These features suggest that the structure of domain-wall QUBO DQM landscapes markedly differ from one-hot QUBO DQM solution landscapes, as we now demonstrate.
First, we show that no \(\gamma_{DW}\) is sufficient to guarantee that all invalid solutions do not occupy local minima, where \(\gamma_{DW}\) is the penalty parameter specific to domain-wall QUBO DQMs. We then proceed to show the existence
of a \(\gamma^{\prime\prime}_{DW}\) such that for all \(\gamma_{DW}<\gamma^{\prime\prime}_{DW}\), no valid solution occupies a local minimum. Contrary to the one-hot case, we also show that there are no \(\gamma_{DW}\) such that all valid solutions occupy local minima. Finally, we show by counterexample that \(\gamma^{\prime\prime}_{DW}\) does not necessarily equal \(\gamma^{*}_{DW}\). Specifically, we consider a case where \(\gamma^{\prime\prime}_{DW}<\gamma^{*}_{DW}\), which indicates that when \(\gamma_{DW}\) is large enough to isolate \(x^{*}\) as the global minimum of \(f_{DW}(x)\), there exist valid local minima other than that corresponding to \(x^{*}\). We close this section with some comments on the implications of these findings for landscape navigability, more fully addressing these in Section IV.
Now, let us first establish that the change in domain-wall penalty between adjacent solutions \(x_{a}\) and \(x_{b}\) of Hamming distance one (\(|x_{b}-x_{a}|=1\)) can be zero. This is important to our subsequent proofs.
**Lemma 2**.: _Let \(f_{DW}(x)=c(x)+\gamma_{DW}p(x)\) be a domain-wall QUBO DQM function and \(x_{a}\) and \(x_{b}\) be neighboring solutions such that \(|x_{b}-x_{a}|=1\). Then it is possible that \(p(x_{a})-p(x_{b})=0\)._
Proof.: We first choose some register \(i^{\prime}\) in which to flip a bit either \(0\to 1\) or \(1\to 0\). We then express the penalty of \(x_{a}\) as:
\[p(x_{a}) =\sum_{i\neq i^{\prime}}\sum_{\alpha}(b_{i,\alpha}-b_{i,\alpha}b_ {i,\alpha-1})\] \[\qquad+\sum_{\alpha}(b_{i^{\prime},\alpha}-b_{i^{\prime},\alpha}b _{i^{\prime},\alpha-1})\] \[=\sum_{i\neq i^{\prime}}(N_{i}-1)+(N_{i^{\prime}}-1) \tag{26}\]
where \(N_{i},N_{i^{\prime}}\) are the numbers of domain walls present in registers \(i,i^{\prime}\) (valid registers being those with exactly one domain wall present). Similarly, we express the penalty of \(x_{b}\) as:
\[p(x_{b})=\sum_{i\neq i^{\prime}}(N_{i}-1)+(N^{\prime}_{i^{\prime}}-1) \tag{27}\]
where \(N^{\prime}_{i^{\prime}}\in\{N_{i^{\prime}}-1,N_{i^{\prime}},N_{i^{\prime}}+1\}\). To see this, consider the register \(i^{\prime}\) of length \(l-1=7\) with the register:
\[x_{a}(i^{\prime})=\texttt{1 0 1 0 0 0 1} \tag{28}\]
Indexing from zero, we see that to flip a bit \(0\to 1\) at position \(1\) removes a domain wall (\(N^{\prime}_{i^{\prime}}=N_{i^{\prime}}-1\)), to flip a bit \(0\to 1\) at position \(3\) extends a domain wall (\(N^{\prime}_{i^{\prime}}=N_{i^{\prime}}\)), and to flip a bit \(0\to 1\) at position \(4\) adds a domain wall (\(N^{\prime}_{i^{\prime}}=N_{i^{\prime}}+1\)). Therefore, we have:
\[p(x_{a})-p(x_{b})=N_{i^{\prime}}-N^{\prime}_{i^{\prime}}\in\{-1,0,+1\} \tag{29}\]
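The three cases can be illustrated with a few lines of Python. The sketch below counts domain walls as \(1\to 0\) boundaries of a register, assuming a fixed 1 to the left of the register and a fixed 0 to its right (consistent with valid registers carrying exactly one wall), and reproduces the remove/extend/add behaviour of the three flips described above:

```
def domain_walls(register):
    # 1 -> 0 boundaries, with an assumed fixed 1 on the left and 0 on the right
    padded = [1] + list(register) + [0]
    return sum(1 for a, b in zip(padded, padded[1:]) if a == 1 and b == 0)

x_a = [1, 0, 1, 0, 0, 0, 1]               # the register of Equation 28
print(domain_walls(x_a))                   # -> 3

for position, effect in [(1, "removes a wall"), (3, "extends a wall"), (4, "adds a wall")]:
    x_b = list(x_a)
    x_b[position] = 1                      # flip 0 -> 1 at the given position
    print(position, effect, domain_walls(x_b) - domain_walls(x_a))
# -> changes of -1, 0 and +1 walls respectively
```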
We now show that, unlike for one-hot QUBO DQMs, there does not exist in general an upper bound to \(\gamma_{DW}\) above which all invalid solutions do not occupy local minima for domain-wall QUBO DQMs. In the proof that follows, we first show that there exists a class of invalid solutions whose elements are such that for all neighbors of a given element within this class, no change in penalty is incurred. We then show that for such solutions to exist as local minima, there must be at least one neighbor whose cost is lower than the starting solution. Finally, we show that this condition cannot always be satisfied.
**Theorem 5**.: _Let \(f_{DW}(x)=c(x)+\gamma_{DW}p(x)\) be a domain-wall QUBO DQM function. Denote the set of valid solutions \(S\) and the set of invalid solutions \(S^{\prime}\). Then there is not guaranteed to exist a \(\gamma^{\prime}_{DW}\) such that for \(\gamma_{DW}>\gamma^{\prime}_{DW}\) there is at least one \(x_{b}\in S\cup S^{\prime}\) such that \(f_{DW}(x_{b})<f_{DW}(x_{a})\) for all \(x_{a}\in S^{\prime}\) where \(|x_{b}-x_{a}|=1\)._
Proof.: To show that this is true, we attempt to show the contrary, namely, that there exists a \(\gamma^{\prime}_{DW}\) such that for \(\gamma_{DW}>\gamma^{\prime}_{DW}\) there is at least one \(x_{b}\in S\cup S^{\prime}\) such that \(f_{DW}(x_{b})<f_{DW}(x_{a})\) for all \(x_{a}\in S^{\prime}\) where \(|x_{b}-x_{a}|=1\).
This requires that for all \(x_{a}\in S^{\prime}\), there is at least one \(x_{b}\) such that \(|x_{b}-x_{a}|=1\) satisfying:
\[c(x_{b})+\gamma_{DW}p(x_{b})<c(x_{a})+\gamma_{DW}p(x_{a}) \tag{30}\]
Unlike in the one-hot case, where \(p(x_{a})-p(x_{b})\neq 0\) allowed us to solve for \(\gamma_{OH}\) as in Equations 14 and 15, for a DQM problem expressed as a domain-wall encoded QUBO it is possible to have \(p(x_{a})-p(x_{b})=0\), as per Lemma 2. Given this, for a particular \(x_{a}\in S^{\prime}\) to have at least one neighbor \(x_{b}:|x_{b}-x_{a}|=1\) such that \(f_{DW}(x_{b})<f_{DW}(x_{a})\), there must exist an \(x_{b}\) or \((x_{b},\gamma_{DW})\) pair satisfying one of the following cases:
\[c(x_{b})<c(x_{a}),\ N^{\prime}_{i^{\prime}}=N_{i^{\prime}}\] \[\gamma_{DW}>c(x_{b})-c(x_{a}),\ N^{\prime}_{i^{\prime}}=N_{i^{\prime}}+1 \tag{31}\] \[\gamma_{DW}<c(x_{a})-c(x_{b}), N^{\prime}_{i^{\prime}}=N_{i^{\prime}}-1\]
We now note that there exist \(5\) distinct classes of invalid register, some for which certain of the above cases do not ever apply when a solution consists of only these kinds of register or their combination with valid registers. Consider first the following registers of various lengths, represented by the vectors \(x_{a}(i^{\prime})\):
\[x_{a}(i^{\prime})=\begin{cases}\texttt{1 0 1 0 0 0 1,}&\in A\\ \texttt{1 0 1 1 1 0 }&\in B\\ \texttt{1 1 0 0 0 1 1,}&\in C\\ \texttt{0 0 1 1 0,}&\in D\\ \texttt{0 1,}&\in E\end{cases} \tag{32}\]
Invalid registers in \(A\) are such that \(-\Delta p\in\{-1,0,+1\}\), in \(B\) are such that \(-\Delta p\in\{0,+1\}\), in \(C\) are such that \(-\Delta p\in\{-1,0\}\), in \(D\) are such \(-\Delta p=0\), and in \(E\) are such that \(-\Delta p=+1\). That a class where \(-\Delta p=-1\) only (amounting to a class of solutions where we can only _add_ a domain-wall) does not exist can be easily verified.
This means that only classes \(A\), \(B\), and \(E\) contain invalid registers that are capable of losing a domain-wall upon a single bit-flip. Therefore, a \(\gamma_{DW}\) may always be selected for solutions containing only these classes of register (or in combination with valid registers) such that there exists an \(x_{b}\) allowing the satisfaction of:
\[\gamma_{DW}>c(x_{b})-c(x_{a}) \tag{33}\]
where \(N^{\prime}_{i^{\prime}}=N_{i^{\prime}}+1\), which is the second case of Equation 31. Specifically, for a particular invalid solution \(x_{a}\) containing either of a class \(A\), class \(B\), or class \(E\) register, the smallest \(\gamma_{DW}\) that admits this inequality is:
\[\gamma_{DW}>\min_{x_{b}\in S\cup S^{\prime}}\big{(}c(x_{b})-c(x_{a})\big{)} \tag{34}\]
such that if one \(\gamma_{DW}\) is to suffice for all particular instances of such \(x_{a}\):
\[\gamma_{DW}>\gamma^{\prime}_{DW}=\max_{x_{a}\in S^{\prime}}\left\{\min_{x_{b} \in S\cup S^{\prime}}\big{(}c(x_{b})-c(x_{a})\big{)}\right\} \tag{35}\]
Shifting our attention to those invalid solutions that do not contain a class \(A\), class \(B\), or class \(E\) register, we consider solutions that either admit only \(-\Delta p\in\{-1,0\}\) (class \(C\)) or \(-\Delta p=0\) (class \(D\)). Focusing on class \(D\) invalid solutions, which form a part of the solution space for domain-wall DQMs whenever \(l-1\geq 4\), we see that for the elements of this class to not occupy local minima, there must exist for a particular \(x_{a}\) in this class at least one 1-local neighbor \(x_{b}\) such that \(c(x_{b})<c(x_{a})\). This we cannot guarantee in general; considering a domain-wall QUBO DQM of one register with length \(l-1=4\) suffices to show this rather trivially.
We therefore conclude that in general there does not exist a \(\gamma_{DW}\) that is guaranteed to ensure that all \(x_{a}\in S^{\prime}\) do not occupy local minima for a domain-wall encoded DQM. However, in certain cases it is possible that this condition might be satisfied.
We just established that for domain-wall QUBO DQMs it cannot be guaranteed that there is an upper bound on \(\gamma_{DW}\) (\(\gamma^{\prime}_{DW}\)) above which all invalid solutions are not local minima. Further, we identified a subset of invalid solutions such that to increase \(\gamma_{DW}\) decreases the number of solutions within this subset that occupy local minima (classes \(A\), \(B\) and \(E\) in particular, where \(-\Delta p^{\pm}\) can equal \(+1\)). For another subset of invalid solutions (classes \(A\) and \(C\)), decreasing \(\gamma_{DW}\) decreases the number of solutions within this subset that occupy local minima. Finally, we identify a subset of invalid solutions for which \(\gamma_{DW}\) has no effect on whether they occupy local minima or not (class \(D\)), meaning that if one of this class happens to occupy a local minimum, it will do so irrespective of \(\gamma_{DW}\). As such, we are unable to provide a general rule of thumb which suggests how the number of invalid solutions occupying local minima changes as \(\gamma_{DW}\) is changed.
We now turn our attention to valid solutions, seeking the bound to \(\gamma_{DW}\) below which all valid solutions do not occupy local minima. We find that in contrast to one-hot QUBO DQM solution landscapes, we cannot find a bound to \(\gamma_{DW}\) above which all valid solutions do occupy local minima. Our proof proceeds in a manner similar to our approach to Theorem 5.
**Theorem 6**.: _Let \(f_{DW}(x)=c(x)+\gamma_{DW}p(x)\) be a domain-wall QUBO DQM function. Denote the set of valid solutions \(S\) and the set of invalid solutions \(S^{\prime}\). Then there exists a \(\gamma^{\prime\prime}_{DW}\) such that for \(\gamma_{DW}<\gamma^{\prime\prime}_{DW}\) no \(x_{a}\in S\) occupies a local minimum. Further, there does not exist a \(\gamma^{\prime\prime\prime}_{DW}\) such that all \(x_{a}\in S\) occupy local minima for \(\gamma_{DW}>\gamma^{\prime\prime\prime}_{DW}\)._
Proof.: First, we point out that all single bit-flip moves away from a given \(x_{a}\in S\) move to either invalid solutions that contain one additional domain-wall as compared to \(x_{a}\), or to valid solutions having the same number of domain-walls as \(x_{a}\). (All valid solutions have \(k\) total domain-walls, uniformly distributed across \(k\) registers.)
Calling \(x_{b}\) the solution moved to upon a single bit-flip from \(x_{a}\), if \(x_{b}\in S^{\prime}\), it is clear from Equation 29 that \(p(x_{a})-p(x_{b})=-1\). Similarly, if \(x_{b}\in S\), we have that \(p(x_{a})-p(x_{b})=0\). Therefore, for a particular \(x_{a}\in S\) to have at least one neighbor \(x_{b}\) where \(|x_{b}-x_{a}|=1\) such that \(f_{DW}(x_{b})<f_{DW}(x_{a})\), there must exist an \(x_{b}\) or \((x_{b},\gamma_{DW})\) pair satisfying either:
\[\begin{split} c(x_{b})<c(x_{a}),\ x_{b}\in S\\ \gamma_{DW}<c(x_{a})-c(x_{b}),\ x_{b}\in S^{\prime}\end{split} \tag{36}\]
That the first case is satisfied is true except for when \(c(x_{a})<c(x_{b})\)\(\forall x_{b}\in S:|x_{b}-x_{a}|=1\). Therefore, to guarantee that all \(x_{a}\in S\) do not occupy local minima, we focus on making sure that the second case holds true for such \(x_{a}\), where \(c(x_{a})<c(x_{b})\)\(\forall x_{b}\in S:|x_{b}-x_{a}|=1\). Now, for a particular such \(x_{a}\), that there exists at least one \(x_{b}\in S^{\prime}\) satisfying this expression (case 2, Equation 36), we take the maximum over all neighboring \(x_{b}\in S^{\prime}\):
\[\gamma_{DW}<\max_{x_{b}\in S^{\prime}}\big{(}c(x_{a})-c(x_{b})\big{)} \tag{37}\]
We may then say that all valid solutions are not local minima under one penalty parameter \(\gamma_{DW}\) if:
\[\gamma_{DW}<\gamma^{\prime\prime}_{DW}=\min_{x_{a}\in S}\left\{\max_{x_{b}\in S ^{\prime}}\big{(}c(x_{a})-c(x_{b})\big{)}\right\} \tag{38}\]
where \(x_{a}:c(x_{a})<c(x_{b})\)\(\forall x_{b}\in S:|x_{b}-x_{a}|=1\).
Considering again the cases of Equation 36, we note that irrespective of our choice of \(\gamma_{DW}\), there must exist at least one \(x_{a}\in S\) with a set of 1-local neighbors in \(S\) such that for a particular \(x_{b}\) in this set \(c(x_{b})<c(x_{a})\) is satisfied. If not, then all \(x_{a}\in S\) would be such that
\(c(x_{a})<c(x_{b})\)\(\forall x_{b}\in S:|x_{b}-x_{a}|=1\), which implies a contradiction unless \(\forall x_{a},x_{b}\in S\), \(c(x_{a})=c(x_{b})\), which corresponds to the trivial DQM. Therefore, there does not exist a \(\gamma^{\prime\prime\prime}_{DW}\) above or below which all valid solutions occupy local minima.
We now show by counterexample that \(\gamma^{\prime\prime}_{DW}\neq\gamma^{*}_{DW}\) in general, and indeed that it is possible that \(\gamma^{\prime\prime}_{DW}<\gamma^{*}_{DW}\), which indicates that as soon as the condition is achieved that the minimum-energy valid solution exists as the global minimum to the objective function, it is possible that other valid solutions might already occupy local minima.
**Theorem 7**.: _Let \(\gamma^{*}_{DW}\) be the domain-wall QUBO DQM penalty parameter such that for all \(\gamma_{DW}>\gamma^{*}_{DW}\), \(\min_{x\in S\cup S^{\prime}}f_{DW}(x)=f_{DW}(x^{*})\). Let \(\gamma^{\prime\prime}_{DW}\) be the domain-wall QUBO DQM penalty parameter such that for all \(\gamma_{DW}<\gamma^{\prime\prime}_{DW}\) no valid solution \(x_{a}\in S\) exists as a local minimum. It is not true in general that \(\gamma^{*}_{DW}=\gamma^{\prime\prime}_{DW}\). More specifically, there exist cases where \(\gamma^{\prime\prime}_{DW}<\gamma^{*}_{DW}\)._
Proof.: Consider the following domain-wall QUBO DQM cost function, where \(k=2\) and \(l=3\):
\[c(x)=\begin{pmatrix}4&-2&3&1\\ 0&1&2&-2\\ 0&0&-1&4\\ 0&0&0&-4\end{pmatrix}-5 \tag{39}\]
Calculation of Equations 9 and 37 gives that \(\gamma^{*}_{DW}=3\) and \(\gamma^{\prime\prime}_{DW}=-3\), concluding the proof.
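The \(\gamma^{*}_{DW}=3\) part of this calculation can be checked by brute force. The sketch below assumes the cost is \(x^{T}Cx-5\) with bits (0,1) and (2,3) forming the two registers, and takes the domain-wall penalty of a register to be its number of walls minus one, with walls counted as in the earlier sketch; the \(\gamma^{\prime\prime}_{DW}\) value additionally requires the valid-neighbour structure of Equation 38 and is not repeated here:

```
import itertools

C = [[4, -2,  3,  1],
     [0,  1,  2, -2],
     [0,  0, -1,  4],
     [0,  0,  0, -4]]
registers = [(0, 1), (2, 3)]

def cost(x):
    return sum(C[a][b] * x[a] * x[b] for a in range(4) for b in range(a, 4)) - 5

def domain_walls(bits):
    padded = [1] + list(bits) + [0]
    return sum(1 for a, b in zip(padded, padded[1:]) if a == 1 and b == 0)

def penalty(x):
    return sum(domain_walls([x[i] for i in reg]) - 1 for reg in registers)

solutions = list(itertools.product([0, 1], repeat=4))
valid = [x for x in solutions if penalty(x) == 0]
invalid = [x for x in solutions if penalty(x) > 0]
best_valid = min(cost(x) for x in valid)

# Smallest gamma above which no invalid solution lies below the best valid cost
gamma_star = max((best_valid - cost(x)) / penalty(x) for x in invalid)
print(best_valid, gamma_star)   # -> -6 3.0
```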
We note that the above counterexample (Equation 39) to the conjecture that \(\gamma^{*}_{DW}=\gamma^{\prime\prime}_{DW}\) was generated by randomly sampling \(k=2\), \(l=3\) domain-wall QUBO DQM instances, restricting their coefficients to the integers between \(-9\) and \(9\). That such a counterexample was found easily under these arbitrary restrictions suggests that instances where \(\gamma^{*}_{DW}>\gamma^{\prime\prime}_{DW}\) are common in general.
We now comment briefly on the applicability of the results of this section. As noted in the one-hot QUBO DQM section above, ideally-navigable QUBO DQM solution landscapes maximize the number of invalid solutions not occupying local minima, and simultaneously maximize the number of non-interesting valid solutions not occupying local minima (maintaining \(\gamma_{DW}>\gamma^{*}_{DW}\)). Unlike in the one-hot QUBO DQM case, here we are unable to claim with certainty that the number of invalid solutions either increases or decreases with increasing \(\gamma_{DW}\), beginning from \(\gamma^{*}_{DW}\). However, we can say that the number of valid solutions occupying local minima decreases if \(\gamma_{DW}\gg\gamma^{\prime\prime}_{DW}\) and is subsequently reduced toward \(\gamma^{\prime\prime}_{DW}\). In certain cases (easily verified by selecting from random \(k=2\), \(l=2\) instances), indeed, we can select \(\gamma_{DW}\) such that the only local minimum of the full solution landscape is occupied by \(x^{*}\). This means that from any solution we can greedily descend to this optimum, which is our desired best solution. However, we are currently without a procedure for identifying _a priori_ DQM problems that may be encoded this way.
## IV Discussion
The task of solving DQMs as QUBO models, given the availability of increasingly powerful, special-purpose QUBO solvers (including quantum and digital annealers), motivated our investigation into the structure of their solution landscapes upon one-hot or domain-wall encoding. The two encodings produce markedly different landscape structures. In this section, we discuss these differences and point out their various benefits and shortcomings.
To begin, recall that a one-hot encoding of a \(k\)-variable DQM with \(l\) possible values per variable requires \(kl\) binary variables, and \(kl(kl-1)/2\) pairwise interactions between these variables. Considering a domain-wall encoding of the same problem, only \(k(l-1)\) binary variables and \(k(l-1)(k(l-1)-1)/2\) pairwise interactions are required. This saving in the number of variables and interactions has the corollary that there are \(2^{kl}(1-2^{-k})\) fewer invalid solutions in a domain-wall QUBO DQM solution landscape than in a one-hot QUBO DQM solution landscape. Based on this, a domain-wall QUBO DQM encoding is more desirable than a one-hot QUBO DQM encoding if we are concerned with spatial resources, such as the number of qubits available on a quantum annealer.
However, given the fact that domain-wall QUBO DQM matrix entries involve sums of one-hot QUBO DQM matrix entries, the largest entries in the domain-wall case can be larger than the largest entries in the one-hot case. The importance of this concerns the limited dynamic range of quantum annealer bias and coupling devices in particular, which may be thought of as the physical instantiations of QUBO matrix entries. The entries of a QUBO matrix must be made to fit within the dynamic range of the bias and coupling devices, and the larger the entries of the QUBO matrix, the more rescaling of these values must take place to ensure this fit, which may be compromising in the face of integrated control errors and noise [19]. This suggests that one-hot encodings may better lend themselves to noisy solvers versus domain-wall encodings.
Another potential drawback to domain-wall QUBO DQMs results from the connectivity of the solution space, which allows any valid solution to be transformed into any other valid solution without having to pass through an invalid solution. As a particular example of why this might be detrimental, we consider molecular docking, a problem of structural biology. Here, we aim to find low-energy configurations of a set of molecules in a discretized space, typically in 2 or 3 dimensions. The points within this space form the basis of our registers, and one molecule is assigned per register. Note that the multi-dimensional real space of the problem is encoded to a "one-dimensional" binary space when transformed into a QUBO, in that
adjacent valid solutions involve sequential bit-flips (e.g., 000, 100, 110, and 111). This dimension-reduction can place solutions that are far from one another in the real space of the problem exactly adjacent to one another upon domain-wall QUBO DQM encoding. If we are interested in multiple low-energy solutions, it is then possible that a low-energy valid solution within the real space of the problem will not occupy a local minimum in the solution landscape of the QUBO DQM encoding, by virtue of its being adjacent to another low-energy valid solution.
By contrast, one-hot QUBO DQMs allow full separation between valid solutions of interest if \(\gamma_{OH}\) is selected so that no invalid solution has an energy below those valid solutions of interest. This is a consequence of Theorems 3 and 6. It is difficult to know in advance if a problem may suffer from a situation similar to that just described; in any case, one has to be wary of this only when more than one valid solution is sought.
Another difference between one-hot and domain-wall QUBO DQMs concerns whether invalid solutions occupy local minima or not as a function of the penalty parameter. In the one-hot case, we proved in Theorem 1 that we can select a \(\gamma_{OH}\) that guarantees that all invalid solutions do not occupy local minima, whereas in the domain-wall case we proved in Theorem 5 that we cannot guarantee this through selection of any \(\gamma_{DW}\). This means that with a one-hot encoding, we can ensure that we sample only valid solutions provided we have selected \(\gamma_{OH}\) properly, whereas we cannot avoid the possibility of sampling invalid solutions with a domain-wall encoding in advance. Our results do not suggest how many such invalid solution local minima are unavoidable with a domain-wall encoding, but given that they belong only to class \(D\) (see Equation 32), which represents a restricted subset of all possible invalid solutions, we suspect that their number is relatively small. This remains to be understood rigorously.
Now, apart from our observations regarding solution landscape structures between one-hot and domain-wall QUBO DQMs, we also point out the following consequence of their encoding structures. Namely, domain-wall QUBO DQMs lack the ability to efficiently leverage a certain kind of symmetry that might be present in a DQM problem, whereas one-hot QUBO DQMs can be adapted to accommodate these symmetries in ways which prove more compact than a domain-wall encoding. Again with reference to molecular docking, we can imagine a situation in which we have \(k\) copies of some molecule \(M\), to be docked in a space containing \(l>k\) points. Usually, we assign one register per molecule in a one-hot encoding, but in this case we are permitted a representation using just one register of \(l\) points, requiring that \(k\) bits are one for valid solutions, corresponding to the unique placement of the \(k\) molecules. In this case, the one-hot penalty of Equation 4 becomes a \(k\)-hot penalty [7]:
\[H_{KH}^{P}=\Big{(}\sum_{\alpha}b_{i,\alpha}-k\Big{)}^{2} \tag{40}\]
over a single register. By contrast, a domain-wall encoding scheme cannot be adapted so readily in this way. Namely, whereas a register of length \(l\) can accommodate \(l\) bits set to 1, it can only accommodate \(\lceil l/2\rceil\) domain-walls, under the restriction we have been working with that these domain walls are represented by 10, and not 01. Given this, certain problems are more compactly represented by an adaption of a one-hot encoding scheme (\(k\)-hot) versus the domain-wall encoding scheme.
Finally, we close our discussion of solution landscape structures by remarking that our proofs of the various thresholds on \(\gamma_{OH}\) and \(\gamma_{DW}\) are proofs of existence and are non-constructive. That is, though these thresholds are well-defined in terms of max-min, min-max, or max-max procedures, they generally require evaluating the costs of all valid solutions. This defeats the very purpose of calculating these thresholds, which we might seek to know in order to produce a best-navigable solution landscape in advance, so as to avoid exhaustive evaluation of all valid solutions. We suggest developing computationally feasible means of estimating these thresholds, and determining their relation to optimal \(\gamma_{OH}\) and \(\gamma_{DW}\).
## V Conclusions
Solving a discrete quadratic model as a QUBO model requires translating the DQM to this form, commonly via one-hot or domain-wall encoding. Both encodings introduce invalid solutions to the solution space, and a parameter to penalize these solutions. Differences between encodings manifest in their respective solution spaces differing in the connectivity between valid solutions, the distribution of local minima, and their response to changing penalty parameter strength. We have conducted a preliminary investigation of these differences, noting the shifting structure of local minima relative to penalty parameter strength, and finding that best selection between a one-hot and domain-wall encoding is problem-dependent.
This work represents a first attempt at characterizing the solution landscape features of QUBO DQM encodings, and emphasizing the importance of these features to penalty parameter and encoding choice. We suggest the following as future work to build on our findings. First, a minor goal is to understand and explain the relative abundance of unavoidable invalid minima for domain-wall QUBO DQMs and how this affects solution space navigation. Second, we suggest characterizing the sensitivity of solver performance to changes in solution landscape structure to inform the goal of developing robust guidelines for problem-dependent selection of optimal \(\gamma_{OH}\) and \(\gamma_{DW}\). Finally, we propose to systematically classify DQM problems according to the QUBO encoding scheme which optimizes their resource use. |
2308.16490 | Latent Painter | Latent diffusers revolutionized the generative AI and inspired creative art.
When denoising the latent, the predicted original image at each step
collectively animates the formation. However, the animation is limited by the
denoising nature of the diffuser, and only renders a sharpening process. This
work presents Latent Painter, which uses the latent as the canvas, and the
diffuser predictions as the plan, to generate painting animation. Latent
Painter also transits one generated image to another, which can happen between
images from two different sets of checkpoints. | Shih-Chieh Su | 2023-08-31T06:52:43Z | http://arxiv.org/abs/2308.16490v2 | # Latent Painter
###### Abstract
Latent diffusers revolutionized the generative AI and inspired creative art. When denoising the latent, the predicted original image at each step collectively animates the formation. However, the animation is limited by the denoising nature of the diffuser, and only renders a sharpening process. This work presents Latent Painter, which uses the latent as the canvas, and the diffuser predictions as the plan, to generate painting animation. Latent Painter also transits one generated image to another, which can happen between images from two different sets of checkpoints.
## 1 Introduction
Can you imagine the generated picture having its own painting action? Or transit between multiple images generated by different pre-trained diffusers? Enabling such function extends the capability of the denoising diffusion, and the proposed method works on any existing diffuser.
Recently, denoising diffusers have gained a lot of traction in generative AI, for their high quality outcome without adversarial training [1], their efficiency [2; 3], content diversity with easy text conditioning [4], and reasonable footprint [5]. Although the convenience of text-to-image largely spurs creativity, little has been studied about the composition of the generated art. This work presents Latent Painter, which uses the existing diffuser to generate a painting animation along with the output image.
During the diffusion denoising, the latent is denoised step-by-step into the state representing an image matching the text input. The predicted original image, which is the progressive estimate \(\bar{x_0}\) of the reverse diffusion process in [1], becomes sharper and sharper as the latent is denoised. Collecting the predicted original images forms an animation of the sharpening process, where information is updated everywhere in the same frame. However, the update is uneven across frames, with higher total pixel value change toward earlier frames. Latent Painter prioritizes the update locally, just like brush strokes. Once the released information is close enough to match the current predicted original, the residue is accumulated for later updates. This mechanism spreads the update more evenly over frames.
## 2 Method
Let \(Z(x,y,c)\) denote the state of the latent, which has the shape of width by height by channels and can be initialized as zeros. During each inference step \(t\), the scheduler provides an updated denoised latent sample \(D_{t}\). The policy \(\mathcal{P}\) decides whether the information difference between \(Z\) and \(D_{t}\) qualifies for an update and, if so, which subset (\(\bar{C}\)) of all latent channels \(C\) needs to be updated.
\[\bar{C}=\mathcal{P}(Z,D_{t})\text{, where }\bar{C}\in C \tag{1}\]
During a channel update, a channel \(c\) is chosen out of \(\bar{C}\). Then, a qualifying threshold \(\theta\) picks the region \(R\) to be stroked on.
\[R=\{(x,y):|Z(x,y,c)-D_{t}(x,y,c)|>\theta\} \tag{2}\]
Let \(G_{c}(x,y)\) denote the information gain at the current channel \(c\).
\[G_{c}(x,y)=|Z_{c}(x,y)-D_{t,c}(x,y)| \tag{3}\]
The first stroke is placed at the location whose neighborhood \(\mathcal{N}(x,y)\) has the largest information gain.
\[p(\hat{x},\hat{y})=\operatorname*{argmax}_{(x,y)}\sum_{(x^{\prime},y^{\prime}) \in\mathcal{N}(x,y)}G_{c}(x^{\prime},y^{\prime}) \tag{4}\]
The whole neighborhood \(\mathcal{N}(x,y)\), presented as the stroke of the Latent Painter, is then updated with the current scheduler output.
\[Z(x,y,c)=D_{t}(x,y,c),\text{for all }(x,y)\in\mathcal{N}(\hat{x},\hat{y}) \tag{5}\]
The stroke-able region in Eq. 2 is then updated to exclude the newly stroked region. Following the same procedure, the strokes are placed one by one until \(R\) is empty, or until an early stopping criterion \(\mathcal{E}\) has been reached. The stroke action then continues in the other channels in \(\bar{C}\) likewise.
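A minimal NumPy sketch of this greedy stroke loop for a single channel is given below; the square window, the threshold \(\theta\) and the stroke budget are illustrative choices not fixed by the text, and the move-cost modulation introduced in the next subsection is omitted:

```
import numpy as np
from scipy.ndimage import uniform_filter

def stroke_channel(Z, D_t, c, theta=0.05, win=4, max_strokes=200):
    # Greedy stroke placement for one latent channel, following Eqs. 2-5.
    frames = []
    R = np.abs(Z[..., c] - D_t[..., c]) > theta            # stroke-able region (Eq. 2)
    for _ in range(max_strokes):                           # early stopping criterion E
        if not R.any():
            break
        G = np.abs(Z[..., c] - D_t[..., c]) * R            # information gain (Eq. 3), masked to R
        score = uniform_filter(G, size=2 * win + 1, mode="constant")  # box mean ~ neighbourhood sum
        cy, cx = np.unravel_index(np.argmax(score), score.shape)      # stroke centre (Eq. 4)
        ys = slice(max(cy - win, 0), cy + win + 1)
        xs = slice(max(cx - win, 0), cx + win + 1)
        Z[ys, xs, c] = D_t[ys, xs, c]                      # release the prediction locally (Eq. 5)
        R[ys, xs] = False                                  # exclude the freshly stroked area
        frames.append(Z.copy())                            # one animation frame per stroke
    return Z, frames
```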
Upon finishing all \(\bar{C}\) channels, the painter steps through another iteration \(t\) of the scheduler to get a new \(D_{t}\), then starts from Eq. 1 to get the channels to be painted this iteration. However, there could be very little difference between the new \(D_{t}\) and current \(Z\). This is particularly true toward the end of the denoising process.
To ensure that the animation frames carry meaningful updates, one idea is to release \(D_{t}\) more evenly across strokes, each being presented in one frame. Therefore, \(\mathcal{P}\) requires the total difference between \(Z\) and \(D_{t}\) to be larger than a portion of the largest total difference over all previous iterations. With \(D_{t}\) sufficiently different from \(Z\), \(\mathcal{P}\) provides the channels that need an update and are to be painted, as in Eq. 1.
Note the current scheduler output \(D_{t}\) may not fully pass onto the latent \(Z\) at time \(t\), when the early stopping condition \(\mathcal{E}\) exists. The un-updated residue is carried over to next scheduler output \(D_{t+1}\). The accumulated residue becomes the momentum for the next strokes.
### Cost
```
Initialize painter state \(Z\) as zeros of the latent shape \(w\times h\times|C|\)
for \(t\) in diffuser schedule do
    Compute \(D_{t}\), the diffuser latent space prediction of \(x_{0}\) at current time \(t\)
    Based on policy \(\mathcal{P}\), decide the channels to be updated \(\bar{C}=\mathcal{P}(Z,D_{t})\), where \(\bar{C}\in C\)
    for \(c\in\bar{C}\) do
        Compute the stroke region \(R=\{(x,y):|Z(x,y,c)-D_{t}(x,y,c)|>\theta\}\)
        Initialize move cost \(M(x,y)\) as \(w\times h\) of ones
        while \(R\) not empty, as step \(s\), do
            Compute gain \(G_{c}(x,y)=|Z_{c}(x,y)-D_{t,c}(x,y)|\) at channel \(c\)
            Compute motivation \(V(x,y)=G_{c}(x,y)\cdot M_{c}(x,y)\)
            Pick the stroke point \(p(\hat{x},\hat{y})=\operatorname*{argmax}_{(x,y)}\sum_{(x^{\prime},y^{\prime})\in\mathcal{N}(x,y)}V(x^{\prime},y^{\prime})\)
            Stroke to make \(Z(x,y,c)=D_{t}(x,y,c)\) for all \((x,y)\in\mathcal{N}(\hat{x},\hat{y})\)
            Update \(R\gets R\setminus\{(x,y):(x,y)\in\mathcal{N}(\hat{x},\hat{y})\}\)
            Compute move cost \(M(x,y)\) as an inverse Gaussian filter centered at \((\hat{x},\hat{y})\)
        end while
    end for
end for
```
**Algorithm 1** Latent Painter - Strokes
A human painter typically tries to optimize the effort in painting, such as stroking nearby areas first, and keeping the same brush and the same color as long as possible before changing or cleaning the brush. As for the machine, while the convolutional layers are trained to capture congruent patterns, placing the strokes within the same channel allows more congruent patterns to be stroked.
The moving cost of the brush, denoted by \(M_{c}(x,y)\), is modeled as the inverse of a Gaussian kernel centered at the current stroke location. Rather than the location with largest information gain, the stroke is placed where having the largest motivation \(V\), which is defined as the information gain modulated by the moving cost,
\[V(x,y)=G_{c}(x,y)\cdot M_{c}(x,y) \tag{6}\]
The motivation-based stroke method is presented in Alg. 1.
## 3 Result
Sample outputs from the Latent Painter Strokes (Alg. 1) are shown in Fig. 1. The samples were first generated with stable diffusion [5] using a text sentence. The resulting latent series of predicted original images is fed into the painter to produce the strokes. As the bottom rows of each block in Fig. 1 show, the denoising outputs quickly converge close to the final state in the first couple of iterations, each of which provides only one frame in the animation.
In contrast, the Latent Painter slows down the rapid update during the early denoising iterations. This prevents frames from being updated too quickly, and helps render the new information more evenly over frames. Here, the new information is released in the form of strokes. Each denoising iteration can be released as tens to hundreds of strokes (frames). The stroke maps in the middle rows indicate the regions with a large information gap between \(D_{t}\) and \(Z\), which are therefore stroked.
The brighter regions in the stroke map are from the accumulated strokes in those regions, aggregated across channels. Since the Latent Painter Strokes is driven by the differential content between denoising updates, the stroke map provides a way to observe the informative regions that require intensive updates to yield the final detail.
Figure 1: Stroke location of Latent Painter. Within each block, the bottom row shows the predicted original images \(\hat{x_0}\) after each of the first 12 denoising iterations, each with only one frame. The top row shows the Latent Painter stroke snapshots, each containing 20-40 strokes (frames), during the same period. The middle row shows the corresponding accumulated stroke maps, stacked over channels. The animations are available at [https://latentpainter.github.io/](https://latentpainter.github.io/)
### Stroke Content
What has been stroked onto the canvas? Do spatially and temporally nearby strokes share a similar pattern, color, or style? Some stroke samples from the Latent Painter are shown in Fig. 2. The snapshots in each row accumulate only the strokes from the same channel update, which starts from Eq. 2. Within the same channel, the strokes tend to be congruent in color or style, or both. The channels of the latent space are different from those at the visible layer, where the RGB channels each account for only one color.
Ultimately, the cardinality in color and style is bounded by the number of trainable neurons, i.e. network parameters. Latent Painter paints in the latent space, which is designed to be a compact representation and has only four channels in the diffusion system of [5]. Thus, even within the same channel, the strokes can contain different styles and colors. However, the denoising process provides the guidance to differentiate further detail within the channel. Therefore, one can attain better congruence by either lifting the early stopping condition \(\mathcal{E}\) or ensuring \(Z(x,y,c)=D_{t}(x,y,c)\) for all \((x,y)\) at the end of the update of the current \(c\).
### Beyond Strokes
Besides the painting action in Alg. 1, there are other ways to leverage the differential response between updates. The glow effect collects updates of the differential latent into the mass center, where the update radiates concentrically.
Figure 2: Stroke content of Latent Painter. Within each block, the rows show the cumulative stroke output of channel 0, when it was chosen out of \(\bar{C}\) for the first three times as the target channel to stroke.
In addition to the content-driven fashion, Latent Painter can also paint regardless of the differential latent response. Some use cases include the flip effect that mimics page flipping and the fade effect that releases the update uniformly. The dissolve effect, however, can be either content-driven or random. Some sample trails of the mentioned effects are visualized in Fig. 3.
### Image Transition
Existing image transition animation is based on interpolating the seeded embeddings between the source and the destination, or interpolating the latents, or both. However, it takes either more memory or more time to denoise the interpolated embeddings. Latent Painter animates based on the predicted original images from only two denoising trails, the source and the destination. With the chosen painting effect, the update runs the source image denoising schedule backward, and then the destination schedule forward. Through the constraints on the update release, the transition time can be traded against the detail.
When the source and destination images share a certain part of background, such as in image editing, the interpolated latents between the source latent and the destination latent can be used as the prediction guidance. This avoids using the denoising schedules to guide. Fig 4 illustrates the painting progress to transit the semantically edited images from [6].
As an extra credit, Latent Painter can transit the generated images from two different sets of denoising checkpoints, given the same VAE is used to decode the latent.
## 4 Conclusion
Latent Painter is presented in this work. It turns existing latent diffuser output into painting actions by releasing the information update evenly during the denoising iterations. Several extensions beyond the stroke algorithm have been covered. Since the latent space of stable diffusion [5] has only four channels, it is possible to investigate the VAE for further painting behaviors.
Figure 3: Extensions from strokes. The glow effect is content-driven, while the dissolve here isn’t. See [https://latentpainter.github.io/](https://latentpainter.github.io/) for animations and more examples.
|
2309.06582 | Electron Energy Regression in the CMS High-Granularity Calorimeter
Prototype | We present a new publicly available dataset that contains simulated data of a
novel calorimeter to be installed at the CERN Large Hadron Collider. This
detector will have more than six-million channels with each channel capable of
position, ionisation and precision time measurement. Reconstructing these
events in an efficient way poses an immense challenge which is being addressed
with the latest machine learning techniques. As part of this development a
large prototype with 12,000 channels was built and a beam of high-energy
electrons incident on it. Using machine learning methods we have reconstructed
the energy of incident electrons from the energies of three-dimensional hits,
which is known to some precision. By releasing this data publicly we hope to
encourage experts in the application of machine learning to develop efficient
and accurate image reconstruction of these electrons. | Roger Rusack, Bhargav Joshi, Alpana Alpana, Seema Sharma, Thomas Vadnais | 2023-09-12T20:09:59Z | http://arxiv.org/abs/2309.06582v1 | # Electron Energy Regression in the CMS High-Granularity Calorimeter Prototype
###### Abstract
We present a new publicly available dataset that contains simulated data of a novel calorimeter to be installed at the CERN Large Hadron Collider. This detector will have more than six-million channels with each channel capable of position, ionisation and precision time measurement. Reconstructing these events in an efficient way poses an immense challenge which is being addressed with the latest machine learning techniques. As part of this development a large prototype with 12,000 channels was built and a beam of high-energy electrons was made incident on it. Using machine learning methods we have reconstructed the energy of incident electrons, which is known to some precision, from the energies of three-dimensional hits. By releasing this data publicly we hope to encourage experts in the application of machine learning to develop efficient and accurate image reconstruction of these electrons.
HGCAL FAIR Data Energy Regression Machine Learning DNN
## 1 Introduction
To measure the energy of particles produced in collisions at the Large Hadron Collider (LHC), the Compact Muon Solenoid (CMS) experimental detector currently has in each of its two endcaps an electromagnetic calorimeter (ECAL), equipped with a preshower (ES) detector, and a hadronic calorimeter (HCAL). Between the interaction point (IP), where the collisions occur, and the calorimeters there is a silicon tracking detector to measure the momentum of charged particles as they move through the solenoidal magnetic field. Towards the end of this decade the LHC will be upgraded to the High-Luminosity LHC (HL-LHC), where the collision rate of the colliding beams will be increased by a factor of three or more. To cope with the high radiation levels from the particles produced in the collisions, the calorimeters in the endcaps will be replaced with a new type of calorimeter, the high-granularity calorimeter (HGCAL), which tracks the progression of the energy loss of high-energy particles by sampling the shower at different depths inside it. The HGCAL will be constructed from radiation-hard silicon sensors, or plastic scintillator sensors where the radiation levels are lower, that are sandwiched between passive layers of absorber material made of steel or lead. The location within the CMS detector and an outline of the design are shown in Fig. 1.
In the HGCAL there will be approximately three million detector channels in each of the two endcaps. The information on the energy deposited by particles and the time of their arrival in each channel is measured and digitized. This information is transmitted to off-detector electronics for processing and storage. How this information is used to reconstruct the energy of an incident electron, its impact position on the calorimeter and its angle of incidence is a challenge that we discuss in this paper. In calorimetry the typical way to reconstruct electrons is with seeding and clustering methods. With the HGCAL design1, which has considerably more information available than in earlier examples of calorimeters, new algorithms based on modern machine learning (ML) methods can be developed to solve the reconstruction problem, which in a sense is like a three-dimensional image reconstruction problem. In this paper we discuss the problem of reconstructing high-energy electrons from the energy deposits in the sensors in the HGCAL.
For this we have generated a large volume of simulated data using the GEANT4[1] simulation package, which accurately simulates electromagnetic showers generated by electrons impacting the calorimeter. This data is available at Zenodo1 and can be used to test new ML methods to address this problem. To accompany the data we provide exemplar software and metadata to permit non-specialist access to the data and development of novel solutions. The exemplar software describes how to access the data and provides a simple reconstruction example that is based on a Deep Neural Network (DNN). In this paper we describe the problem to be solved in more detail and the results that we have obtained with the DNN model.
Footnote 1: [https://zenodo.org/](https://zenodo.org/)
## 2 The High Granularity Calorimeter
The entire assembly of each of the two HGCAL calorimeters weighs approximately 230 T and will be used to measure the energies of particles produced at the IP with angles of approximately 10 to 30 degrees from the beam axis 2. In the final detector the first 26 layers will form the electromagnetic (CE-E) [2] section, which will have hexagonal silicon sensors of about 8" width divided into hexagonal cells with areas of 1.1 and 0.5 cm\({}^{2}\). Behind the CE-E is the 21-layer hadronic section (CE-H). In this the first eight layers will consist of silicon sensors similar to the CE-E section, and the last 12 layers will have a mixture of silicon sensors and plastic scintillators.
Footnote 2: The coverage is between 1.5 and 3.0 in pseudorapidity defined as \(\eta=-ln|\tan\frac{\theta}{2}|\), where \(\theta\) is the polar angle relative to the beam axis.
### The Prototype Setup
To evaluate the performance of the detector and to qualify many aspects of the design, a large-scale prototype of the HGCAL was built and tested in the H2 beamline at CERN's Prevessin site (Figure 2). A beam of positrons is provided by the Super Proton Synchrotron (SPS) accelerator. Since the positron is the anti-particle of the electron, differing only in electric charge, its response in the prototype is the same as that of an electron in the absence of an external magnetic field. The prototype consisted of three sections, an electromagnetic section (CE-E), a hadronic section (CE-H) and a CALICE Analog Hadronic Calorimeter (AHCAL)[3, 4], arranged in series in that order. This is similar to the final configuration of the HGCAL. The CE-E [5] section consists of 28 sampling layers made using 14 double-sided mini-cassettes (Figure 3 right). Each cassette consists of an absorber, made of lead clad with stainless steel or of Cu/CuW, sandwiched between two silicon sensor layers. The hexagonal silicon sensors are subdivided into 128 hexagonal silicon detector channels. Each channel is equipped with electronics to measure the energy and the time of the particle interactions in the sensor. The entire CE-E section corresponds to a total of 26 radiation lengths or 1.4 nuclear interaction lengths.
Figure 1: Current design of the CMS detector (left) to the human scale. The highlighted regions in blue and yellow color represent the ECAL and the HCAL detectors. These regions will be replaced by the newly designed calorimeter (right). It consists of three successive layers which combine the functionalities of both, the ECAL and the HCAL.
In the prototype the CE-H [6] section was composed of 12 sampling layers, each with seven Si modules arranged in a daisy structure; each layer was sandwiched between 40 mm thick steel plates. Due to the limited availability of silicon sensor modules, the last three layers of CE-H were equipped with only one sensor module placed at the center of the layer. The CE-H is followed by a 4.4 nuclear interaction length deep prototype of the AHCAL that was built with 39 sampling layers of SiPM-on-scintillator-tile active layers interspersed between steel absorbers.
## 3 Electromagnetic Showers
When energetic particles pass through a medium, they typically lose energy through Coulomb interactions with the electrons in the medium. Energetic electrons (E \(\gtrsim\) 1 GeV), on the other hand, lose energy primarily via the emission of _bremsstrahlung_ radiation. When the electron passes through a dense medium, it gets accelerated or decelerated quickly by the strong electric fields of the nuclei, which causes it to emit radiation in the form of photons. Energetic photons, in turn, produce pairs of electrons and positrons as they interact with the nuclei of the atoms. This results in a cascade of secondary particles known as an electromagnetic shower (Figure 4), and the process continues until the energy of the decay products falls below a critical energy E\({}_{c}\).
These showers can be characterized by several parameters, which include the _radiation length_ and the _Moliere radius_. The _radiation length_ is defined as the distance over which an energetic electron loses all but a fraction 1/e of its energy. Thus, the "shower depth" can be written in terms of the _radiation length_ as follows
\[X=X_{0}\frac{\ln(E/E_{c})}{\ln(2)} \tag{1}\]
Figure 3: A front view of a prototype of the CE-E minicassette (left). It consists of two hexagonal modules mounted onto a Cu cooling plate, one on either side. The module is an assembly (right) of a baseplate made of copper or copper-tungsten, a 100 \(\mu\)m thick gold-plated Kapton® sheet, a hexagonal silicon sensor, and a printed circuit board called ’hexaboard’. Araldite® is used as an epoxy to glue the different components in the module.
Figure 2: The test beam setup of the prototype along the H2 beam line. The four delay wire chambers (DWCs) track the position of the incoming positron. For triggering on signal events, two plastic scintillators read out by fast photomultiplier tubes are used.
where E\({}_{c}\) is the critical energy3 of the electron in a given material.
Footnote 3: [https://pdg.lbl.gov/2022/AtomicNuclearProperties/critical_energy.html](https://pdg.lbl.gov/2022/AtomicNuclearProperties/critical_energy.html)
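To get a feeling for the numbers entering Equation 1, the following minimal sketch evaluates the shower depth for the beam energies covered by the dataset; the critical energy value used here (of order 7 MeV, typical of lead-based absorbers) is an illustrative assumption and not a number quoted in this paper.

```python
import math

def shower_depth(E_gev, E_c_gev):
    """Shower depth in units of the radiation length X0, following Equation 1."""
    return math.log(E_gev / E_c_gev) / math.log(2)

E_c = 0.007  # GeV; assumed critical energy, for illustration only
for E in (10, 100, 350):  # GeV, spanning the energies of the simulated positrons
    print(f"E = {E:3d} GeV -> depth of about {shower_depth(E, E_c):.1f} X0")
```

For the highest beam energies this gives a depth of roughly 15 radiation lengths, comfortably within the 26 radiation lengths of the CE-E section.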
As the electron dissipates energy, the lateral size of the shower increases in the directions orthogonal to its momentum. The _Moliere radius_ is used to characterize this lateral spread of the shower as it develops through the medium. By definition, a cylinder of one _Moliere radius_ contains about 90% of the total deposited energy.
Electromagnetic calorimeters are designed to capture highly energetic photons and electrons and measure their energies. They can also localise the position of the incoming particle in space and, in some cases, measure its direction. The part of the calorimeter that produces the showers is known as the absorber material, whereas the material that measures the energy is known as the active part. Ideally a calorimeter has a small \(X_{0}\) and _Moliere_ radius so as to contain the showers as effectively as possible. Electromagnetic calorimeters can be either of homogeneous type or of sampling type. A homogeneous calorimeter typically consists of one block of material that acts both as the absorber, where the incoming particle dissipates its energy, and as the active medium that measures it. In a sampling calorimeter there are alternating layers of absorbers and active materials, and the energy dissipated in each absorber layer is estimated from the energy deposited in the active layers before and after it. Finally, the sum of the energies over all the layers gives the total deposited energy, which can be used to measure the energy of the incoming particle.
The energy resolution of a calorimeter gives its precision in measuring the energy. For an electromagnetic calorimeter, the energy resolution can be written as follows.
\[\frac{\sigma}{E}=\frac{S}{\sqrt{E}}\oplus\frac{N}{E}\oplus C, \tag{2}\]
where the first term on the right-hand side is the _stochastic_ or _sampling_ term, the middle term is the _noise_ term and the last term is the _constant_ term. The _stochastic_ term arises from the fact that the number of primary and secondary particles produced in the interactions fluctuates. The _noise_ term, on the other hand, comes from the noise in the detector electronics. Furthermore, this term receives contributions from other simultaneous interactions or collisions happening in the same event known as "pileup". Finally, the constant term is the measure of quality of the detector construction. It accounts for the imperfections in the geometry, non-uniformity in the response and energy losses that cannot be measured by its electronics.
## 4 Dataset
The dataset consists of simulations of reconstructed hits, known as "rechits", produced by positrons passing through the HGCAL test beam prototype. For the simulations, a Monte Carlo method is used to produce positrons with energies ranging from 10 to 350 GeV. In the next step, the GEANT4 [1] package is used to simulate their interactions with the detector material. The conditions used in generating the positrons are fine-tuned to account for real detector effects such as energy losses in the beam. The simulated hits are then digitized using the CMS software. The digitized information is then processed through the CMS software to reconstruct the signals as hits within the detector. The rechits along with their
Figure 4: A schematic showing the development of an electromagnetic shower by an incoming electron in an absorber.
details pertaining to the signal reconstruction were stored in ROOT [7] format. These files were then skimmed using the uproot [8] package to obtain the final dataset. A set of preselections is applied to ensure that the event selection is identical to the one used in the analysis [5] published by the CMS collaboration. The hits are chosen to have a minimum energy of 0.5 MIP4, which is well above the HGCAL noise levels. Events with more than 50 hits in the CE-H layers are rejected. The track of the electron extrapolated using the hits from the DWC chambers is required to be within a 2x2 cm\({}^{2}\) window in the first layer. The final dataset is a set of 3.2 million events, each event containing the position coordinates of the rechits within the detector and their calibrated energies. The HDF5 format is used to organize the data in hierarchical arrays. The file contains the following arrays:
Footnote 4: Minimum Ionizing Particle (MIP) is the unit used to count the energy of digitized hits.
* **nhits**: An integer array representing the number of reconstructed hits (rechits) in each event.
* **rechit_x**: A nested array of length equal to the number of events, with sub-arrays of length nhits. Each floating value represents the x-coordinate of the position of a rechit in units of centimeters.
* **rechit_y**: A nested array with the same structure and size as rechit_x. Each floating value represents the y-coordinate of the position of a rechit in units of centimeters.
* **rechit_z**: A nested array with the same structure and size as rechit_x. Each floating value represents the z-coordinate of the position of a rechit in units of centimeters.
* **rechit_energy**: A nested array with the same structure and size as rechit_x. Each floating value represents the calibrated energy of a rechit in units of MIPs.
* **target**: The true energy of the incoming positron in units of GeV.
To ensure the FAIR-ness of the publication of the dataset, it has been published [9] on the Zenodo [10] platform, which was launched in May 2013 as part of the OpenAIRE project, in partnership with CERN. The dataset [9] consists of two files in _gzip_ format. These can be uncompressed to obtain two files in HDF5 format. The smaller sample of 648,000 events with the label "0001" has a file size of 2.8 GB and the full dataset with the label "large" has a file size of 14.0 GB. The code to unpack and use the dataset has been made available on Github5. The metadata describing the contents of the files are available in JSON format in the same repository.
Footnote 5: [https://github.com/FAIR-UMN/FAIR-UMN-HGCAL](https://github.com/FAIR-UMN/FAIR-UMN-HGCAL)
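As a minimal sketch of how the arrays listed above might be read in Python (the file name and the dataset keys used here are assumptions for illustration; the JSON metadata and the example notebook in the repository are the authoritative description of the layout):

```python
import h5py
import numpy as np

# Assumed file name and keys, matching the array names described in the text.
with h5py.File("hgcal_electrons_0001.h5", "r") as f:
    nhits = f["nhits"][:]            # number of rechits per event
    hit_e = f["rechit_energy"][:]    # nested: per-event arrays of hit energies (MIPs)
    hit_z = f["rechit_z"][:]         # nested: per-event arrays of z positions (cm)
    target = f["target"][:]          # true positron energy (GeV)

# A simple per-event feature for a regression baseline: the total deposited energy in MIPs.
sum_energy = np.array([np.sum(e) for e in hit_e])
print(sum_energy[:3], target[:3])
```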
## 5 Summary
The purpose of the release of the dataset is to make it open for everyone for building models that estimate the resolution with better precision, for developing visualization tools, and for benchmarking ML techniques such as Generative Adversarial Networks (GANs), which can be used for generating EM showers with reduced computational time. For the purpose of exploring the dataset, the source code of the simple DNN model that was developed in Python for energy regression has been added to the aforementioned Github repository. The repository has been built using the "cookiecutter" template used by the FAIR4HEP group for ensuring findability and reproducibility of the results. An example notebook in the repository also demonstrates a way to make event displays (Figure 5) of individual events in the dataset.
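The model in the repository is not reproduced here; the following is only a minimal sketch of a fully connected regression network of the kind described, written with PyTorch, where the number of input features and the layer sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Minimal fully connected regression model: per-event features -> predicted energy (GeV).
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(features, true_energy):
    """One optimisation step; features: (batch, 4), true_energy: (batch, 1)."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), true_energy)
    loss.backward()
    optimizer.step()
    return loss.item()
```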
After training on the simulated dataset using a fully connected DNN, the performance of the network can be evaluated by computing the energy resolution in different bins of energy. To achieve this, the difference between the measured and true energies from the simulations is plotted for energies ranging from 20 to 300 GeV in 14 bins of 25 GeV width. In each bin, the resulting distribution has the shape of a Gaussian distribution. This distribution is then fit using a \(\chi^{2}\) minimization technique to obtain the mean and the width. The mean represents the bias of the estimation in each bin, whereas the ratio of the width to the energy gives the estimate of the energy resolution. Without any contributions from pileup, the _noise_ term in (Equation 2) is assumed to be zero. The squares of the resolutions obtained from the 14 energy bins can then be fitted as the sum in quadrature of the _stochastic_ term and the constant term. The slope and the intercept of the linear fit (Figure 6) provide an estimate of the _stochastic_ term and the constant term, respectively.
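A condensed sketch of this resolution extraction is given below; it uses a least-squares Gaussian fit as a stand-in for the \(\chi^{2}\) minimization described above, and the binning and histogramming choices are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def bin_resolution(delta_e, bin_energy):
    """Fit a Gaussian to the (measured - true) distribution of one energy bin."""
    hist, edges = np.histogram(delta_e, bins=60)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (a, mu, sigma), _ = curve_fit(gauss, centers, hist,
                                  p0=(hist.max(), delta_e.mean(), delta_e.std()))
    return mu, abs(sigma) / bin_energy          # bias and relative resolution

def fit_stochastic_and_constant(bin_energies, resolutions):
    """(sigma/E)^2 = S^2/E + C^2: a straight line in 1/E (noise term set to zero)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(bin_energies),
                                  np.asarray(resolutions) ** 2, 1)
    return np.sqrt(slope), np.sqrt(max(intercept, 0.0))   # S and C
```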
## 6 Acknowledgements
This work has been supported by the Department of Energy, Office of Science, Office of Advanced Scientific Computing under award number DE-SC0021395. The authors would like to express their gratitude to the CMS Collaboration, and in particular to the CMS HGCAL community, for providing the configuration files used to generate simulated events. We would also like to thank our colleagues from the FAIR4HEP group for discussions and their invaluable inputs and suggestions for writing this paper. |
2309.07844 | Predicting the mechanical properties of spring networks | The elastic response of mechanical, chemical, and biological systems is often
modeled using a discrete arrangement of Hookean springs, either representing
finite material elements or even the molecular bonds of a system. However, to
date, there is no direct derivation of the relation between a general discrete
spring network and its corresponding elastic continuum. Furthermore,
understanding the network's mechanical response requires simulations that may
be expensive computationally. Here we report a method to derive the exact
elastic continuum model of any discrete network of springs, requiring network
geometry and topology only. We identify and calculate the so-called
"non-affine" displacements. Explicit comparison of our calculations to
simulations of different crystalline and disordered configurations, shows we
successfully capture the mechanics even of auxetic materials. Our method is
valid for residually stressed systems with non-trivial geometries, is easily
generalizable to other discrete models, and opens the possibility of a rational
design of elastic systems. | Doron Grossman, Arezki Boudaoud | 2023-09-14T16:39:47Z | http://arxiv.org/abs/2309.07844v4 | # Predicting the mechanical properties of spring networks
###### Abstract
The elastic response of mechanical, chemical, and biological systems is often modeled using a discrete arrangement of Hookean springs, representing either finite material elements or even the molecular bonds of a system. However, to date, there is no direct derivation of the relation between a discrete spring network and a general elastic continuum. Furthermore, understanding the network's mechanical response requires simulations that may be expensive computationally. Here we report a method to derive the exact elastic continuum model of any discrete network of springs, requiring network geometry and topology only. We identify and calculate the so-called "non-affine" displacements. Explicit comparison of our calculations to simulations of different crystalline and disordered configurations shows we successfully capture the mechanics even of auxetic materials. Our method is valid for residually stressed systems with non-trivial geometries, is easily generalizable to other discrete models, and opens the possibility of a rational design of elastic systems.
Since the 19th century [1; 2], the theory of elasticity has been phenomenological. That is, to date, it has never been derived from first principles as a continuum limit, and the elastic properties of a material whose microscopic characteristics are known could not, in general, be computed, barring some exceptions [3; 4; 5; 6; 7; 8; 9] that are limited to flat systems and are not easily generalized to non-flat problems. Despite this, it is widely accepted that in essence, linear elasticity may be described using spring-like interactions between constituents (e.g. a first order approximation of intermolecular forces around equilibrium). In fact, elasticity is often described using a spring network, either for computational or analytical purposes [10; 11], in a plethora of different systems and cases - from modeling the shape of self assembled membranes [12; 13; 14; 15], through biological systems [16; 17], to modeling crack propagation [18] and various bio-inspired and meta materials [19; 20; 21]. Typically, calculation of the network's elastic response can only be done via direct simulation of a loading scheme (i.e. simulating a mechanical load and the response to it).
In this paper we directly derive a generalized elastic continuum limit of any triangulated spring network, with arbitrary reference lengths and spring constants, in two and three dimensions, solving an age-long question. The resulting continuum limit depends solely on the network geometry and topology, as expressed by reference lengths, spring constants, and bonds. From this description, any macroscopic elastic quantity can be extracted, such as Poisson's ratio. We demonstrate the strength of this approach by calculating Poisson's ratios for different test cases, both ordered and disordered, recovering even auxetic behavior. We identify the so-called "non-affine" displacements, which are local deformations deviating from the local average deformation and are responsible for the wide range of responses seen in disordered elastic media. The results are valid for residually stressed elastic systems.
The continuum limit we derive is formulated within the theory of incompatible elasticity [22], which is a modern formulation of elasticity that successfully describes residually stressed elastic systems [23; 24; 25]. In this formulation, an elastic material is described by a metric \(\mathbf{g}\) with elements \(g_{\mu\nu}\), which describes actual distances between material elements, and a reference metric \(\mathbf{\bar{g}}\) with elements \(\bar{g}_{\mu\nu}\) describing ideal distances. The elastic energy then depends on the squared difference \(\mathbf{g}-\mathbf{\bar{g}}\), \(E_{el}\propto\left\|\mathbf{g}-\mathbf{\bar{g}}\right\|^{2}\), for some proper choice (yet to be defined) of the norm \(\|\cdot\|^{2}\), through the elastic (four-indexed) tensor \(\bar{A}\) (with elements \(A^{\mu\nu\alpha\beta}\)).
This description, via the use of metrics, is very similar in essence and form to the classical description of Hookean springs. It is independent of assumptions about the existence of a rest configuration, which enables the treatment of residual stresses. In the following, we will consider a discrete network of springs and show how such a formulation naturally arises. We will then coarse grain the network, identify the non-affine quantities, and show how they contribute to the elastic continuum energy.
## Results
### Framework
The theory of incompatible elasticity [22] is the framework to which the results of this paper are anchored. Within it, the elastic energy is given by:
\[E_{el}=\int A^{\mu\nu\alpha\beta}\left(g_{\mu\nu}-\bar{g}_{\mu\nu}\right)\left(g_ {\alpha\beta}-\bar{g}_{\alpha\beta}\right)\,\mathrm{d}V_{\bar{g}} \tag{1}\]
where \(g_{\mu\nu}\) is the actual metric, describing distances between neighboring material points, and \(\bar{g}_{\mu\nu}\) is the reference metric, describing ideal distances. \(\,\mathrm{d}V_{\bar{g}}=\sqrt{\bar{g}}d^{D}x\) is the volume element in \(D\) dimensions, with \(\bar{g}=\det\bar{\mathbf{g}}\). \(A^{\mu\nu\alpha\beta}\) is the elastic tensor. Einstein summation is assumed for repeated upper and lower Greek indices. Greek indices refer to coordinates within the volume of the \(D\) dimensional manifold.
In an isotropic material, \(A^{\mu\nu\alpha\beta}=\frac{Y}{16(1-\nu^{2})}\left[\frac{1}{2}(1-\nu)\left( \bar{g}^{\mu\alpha}\bar{g}^{\nu\beta}+\bar{g}^{\nu\alpha}\bar{g}^{\mu\beta} \right)+\nu\bar{g}^{\mu\nu}\bar{g}^{\alpha\beta}\right]\), where \(\bar{g}^{\mu\nu}\) is the inverse reference metric, \(Y\) is Young's modulus, setting the rigidity scale of the system, and \(\nu\) is Poisson's ratio, describing the amount a material contracts in one axis, when the other is stretched (negative values indicate expansion). In non isotropic materials Poisson's ratio is orientation dependent, and the expression of \(A^{\mu\nu\alpha\beta}\) will typically depend on additional terms.
The elastic stress is given by the variation \(\sigma^{\mu\nu}=\frac{\delta E_{el}}{\delta g_{\mu\nu}}\), and the material satisfies the usual force balance equation:
\[\bar{\nabla}_{\mu}\sigma^{\mu\nu}+\sigma^{\mu\alpha}\left(\Gamma^{\nu}_{\mu \alpha}-\bar{\Gamma}^{\nu}_{\mu\alpha}\right)=f^{\nu}_{ext} \tag{2}\]
where \(f^{\nu}_{ext}\) are the external forces acting on the system, \(\sigma^{\mu\nu}=\frac{\delta E_{el}}{\delta g_{\mu\nu}}\) is the elastic stress, \(\bar{\nabla}_{\mu}\) is the covariant derivative with respect to \(\bar{g}_{\mu\nu}\), and \(\Gamma^{\alpha}_{\beta\gamma},\bar{\Gamma}^{\alpha}_{\beta\gamma}\) are the Christoffel symbols associated with the metrics \(g_{\mu\nu}\) and \(\bar{g}_{\mu\nu}\), respectively.
### Analytical Derivation
We begin by considering a triangulated mesh of springs, each with reference length \(\ell_{e}\), spring constant \(k_{e}\) and an actual length \(l_{e}\), where the index \(e\) enumerates the springs. The elastic energy of the systems is exactly given by
\[E_{el}=\sum_{e}\frac{1}{2}k_{e}\left(l_{e}-\ell_{e}\right)^{2}. \tag{3}\]
A triangulated network is easily divided into sum of specific simplexes (cells). In three dimensions these are tetrahedrons, and in two dimensions these are simple triangles
\[E_{el}=\frac{1}{2}\sum_{s}\sum_{e\in s}\frac{1}{2}k_{e}\left(l_{e}-\ell_{e} \right)^{2}. \tag{4}\]
Here, the index \(s\) enumerates simplexes, and \(e\in s\) means summation over all the springs associated with the simplex \(s\). When left to relax, the network assumes some configuration (not necessarily unique) \(\{\vec{f}_{v}\}\) in \(\mathbb{R}^{n}\) for every vertex \(v\). Setting coordinates \(x^{\mu}_{v}\) for each vertex \(v\) (\(\mu\) is the coordinate component), and given actual lengths \(\{l_{e}\}\), we may uniquely define a "local metric" \(g^{(s)}_{\mu\nu}\) associated with a cell \(s\), so that
\[l_{e}^{2}=g^{(s)}_{\mu\nu}\Delta x^{\mu}_{e}\Delta x^{\nu}_{e}\quad\forall e\in s. \tag{5}\]
Where \(\Delta x^{\mu}_{(1,2)}=x^{\mu}_{2}-x^{\mu}_{1}\) is the "coordinate difference" of the edge \(e=(1,2)\) connecting vertexes \(1\) and \(2\). (5) is not an approximation, rather it is an exact definition of the local quantity \(g^{(s)}_{\mu\nu}\), over the whole simplex.
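As a concrete illustration (a minimal sketch in two dimensions, assuming numpy; the variable names are ours), the three edge-length equations of a single triangle form a linear system for the three independent components of \(g^{(s)}_{\mu\nu}\):

```python
import numpy as np

def local_metric_2d(coord_edges, lengths):
    """Solve Eq. (5) for the symmetric 2x2 local metric of one triangle.

    coord_edges : (3, 2) array of coordinate differences Delta x_e of the three edges
    lengths     : (3,)   array of the corresponding actual edge lengths l_e
    """
    M = np.array([[dx * dx, 2 * dx * dy, dy * dy] for dx, dy in coord_edges])
    g_xx, g_xy, g_yy = np.linalg.solve(M, np.asarray(lengths) ** 2)
    return np.array([[g_xx, g_xy], [g_xy, g_yy]])

# Sanity check: if the coordinates are the actual vertex positions, the metric is the identity.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.8]])
edges = np.array([verts[1] - verts[0], verts[2] - verts[1], verts[0] - verts[2]])
print(local_metric_2d(edges, np.linalg.norm(edges, axis=1)))
```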
Hence, a given simplex uniquely defines a local metric, \(g^{(s)}_{\mu\nu}\), associated to it. A physical system is constrained such that any two local metrics \(\mathbf{g}^{(i)}\) and \(\mathbf{g}^{(j)}\) with a shared edge \(\Delta x_{e}\) agree on its length: \(l_{e}[g^{(i)}]=l_{e}[g^{(j)}]\), where \(l_{e}[g]\) is
the edge's length, as measured using the metric \(\mathbf{g}\). We can now rewrite the energy -
\[E_{el}=\frac{1}{4}\sum_{s}\sum_{e\in s}k_{e}\left(\sqrt{g^{(s)}_{\mu\nu}\Delta x_{e}^{\mu}\Delta x_{e}^{\nu}}-\ell_{e}\right)^{2}. \tag{6}\]
In order to advance, we introduce three assumptions. First, we assume that the reference lengths \(\{\ell_{e}\}\) are compatible, so that a single simplex can assume the shape described by the lengths \(\ell_{e}\). This means that the reference lengths locally define a reference metric \(\bar{g}_{\mu\nu}^{(s)}\). Under this assumption we can write \(\ell_{e}=\sqrt{\bar{g}_{\mu\nu}^{(s)}\Delta x_{e}^{\mu}\Delta x_{e}^{\nu}}\).
Second, we assume that in an equilibrium configuration (again, not necessarily unique), deviations of actual lengths from the reference lengths are small.
\[l_{e}-\ell_{e}=\frac{l_{e}^{2}-\ell_{e}^{2}}{l_{e}+\ell_{e}}\simeq\frac{1}{2 \ell_{e}}\left(l_{e}^{2}-\ell_{e}^{2}\right)+\cdots \tag{7}\]
\(\cdots\) marks higher order terms of \(l_{e}^{2}-\ell_{e}^{2}\). Thus
\[E_{el}=\sum_{s}\sum_{e\in s}\frac{k_{e}}{16\ell_{e}^{2}}\left(g_{\mu\nu}^{(s) }\Delta x_{e}^{\mu}\Delta x_{e}^{\nu}-\bar{g}_{\mu\nu}^{(s)}\Delta x_{e}^{\mu }\Delta x_{e}^{\nu}\right)^{2}+\cdots \tag{8}\]
We expand the energy:
\[E_{el}= \sum_{s}\left(g_{\mu\nu}^{(s)}-\bar{g}_{\mu\nu}^{(s)}\right)\left(g_ {\alpha\beta}^{(s)}-\bar{g}_{\alpha\beta}^{(s)}\right)\sum_{e\in s}\frac{k_{e }\Delta x_{e}^{\mu}\Delta x_{e}^{\nu}\Delta x_{e}^{\alpha}\Delta x_{e}^{\beta }}{16\ell_{e}^{2}}. \tag{9}\]
Marking the local elastic tensor
\[A_{(s)}^{\mu\nu\alpha\beta}=\sum_{e\in s}\frac{k_{e}\Delta x_{e}^{\mu}\Delta x _{e}^{\nu}\Delta x_{e}^{\alpha}\Delta x_{e}^{\beta}}{16\ell_{e}^{2}}, \tag{10}\]
we write -
\[E_{el}= \sum_{s}A_{(s)}^{\mu\nu\alpha\beta}\left(g_{\mu\nu}^{(s)}-\bar{g}_{ \mu\nu}^{(s)}\right)\left(g_{\alpha\beta}^{(s)}-\bar{g}_{\alpha\beta}^{(s)}\right) \tag{11}\]
This equation is very similar to (1), and may be considered as a discrete version of that equation.
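A direct transcription of Eqs. (10)-(11) into code reads as follows (a minimal sketch, assuming the edge data of one simplex is stored as numpy arrays):

```python
import numpy as np

def local_elastic_tensor(coord_edges, ref_lengths, spring_constants):
    """Local elastic tensor of one simplex, Eq. (10)."""
    dim = coord_edges.shape[1]
    A = np.zeros((dim,) * 4)
    for dx, ell, k in zip(coord_edges, ref_lengths, spring_constants):
        A += k * np.einsum("m,n,a,b->mnab", dx, dx, dx, dx) / (16.0 * ell ** 2)
    return A

def simplex_energy(A, dg):
    """Quadratic energy of one simplex, Eq. (11), with dg = g^(s) - gbar^(s)."""
    return np.einsum("mnab,mn,ab->", A, dg, dg)
```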
The last assumption introduced is that \(\bar{g}_{s}\) varies slowly on some large enough region. Without this assumption the continuum limit cannot hold (though an effective continuum may be derived, in principle).
Defining an average metric, \(g_{\mu\nu}\), on some neighborhood, we may expand \(g_{\mu\nu}^{(s)}=g_{\mu\nu}+\delta g_{\mu\nu}^{(s)}\). Formally, \(\delta g_{\mu\nu}^{(s)}\) describe the "non-affine" deformations. The energy then reads -
\[E_{el}=\sum_{\Omega}\left(\sum_{s\in\Omega}A_{(s)}^{\mu\nu\alpha\beta}\Delta g _{\mu\nu}\Delta g_{\alpha\beta}+\sum_{s\in\Omega}A_{(s)}^{\mu\nu\alpha\beta} \Delta g_{\mu\nu}\delta g_{\alpha\beta}^{(s)}+\sum_{s\in\Omega}A_{(s)}^{\mu \nu\alpha\beta}\delta g_{\mu\nu}^{(s)}\delta g_{\alpha\beta}^{(s)}\right), \tag{12}\]
where \(\Delta g_{\mu\nu}=g_{\mu\nu}-\bar{g}_{\mu\nu}\), and \(\sum_{\Omega}\) is the sum over all the neighborhoods in which \(\mathbf{g}\) and \(\mathbf{\bar{g}}\) may be regarded as constant. Since, under our assumptions, \(\delta g_{\mu\nu}^{(s)}=0\,\forall s\) if \(g_{\mu\nu}=\bar{g}_{\mu\nu}\), then for small deviations from \(\bar{g}_{\mu\nu}\), \(\delta g_{\mu\nu}^{(s)}=W_{(s)\,\mu\nu}^{\alpha\beta}\Delta g_{\alpha\beta}\). While mathematically different, we identify the proportionality tensors \(W_{(s)\,\mu\nu}^{\alpha\beta}\) with the "non affine" deformations of each simplex, which are yet unknown.
The elastic energy \(E_{el}^{\Omega}\) within a single neighborhood (\(E_{el}=\sum_{\Omega}E_{el}^{\Omega}\)) then reads
\[E_{el}^{\Omega}= \sum_{s}\left(A_{(s)}^{\mu\nu\alpha\beta}+A_{(s)}^{\mu\nu\rho \sigma}W_{(s)\,\rho\sigma}^{\alpha\beta}+A_{(s)}^{\alpha\beta\rho\sigma}W_{(s) \,\rho\sigma}^{\mu\nu}+A_{(s)}^{\gamma\lambda\rho\sigma}W_{(s)\,\rho\sigma}^{ \alpha\beta}W_{(s)\,\tau\lambda}^{\mu\nu}\right)\Delta g_{\mu\nu}\Delta g_{ \alpha\beta} \tag{13}\] \[+\chi^{\alpha\beta}\sum_{s}W_{(s)\alpha\beta}^{\mu\nu}\Delta g_{ \mu\nu}\]
The second line is a Lagrange term forcing the requirement that \(\sum_{s}\delta g_{(s)\mu\nu}=0\). We note, that as \(g_{\mu\nu}\) is an average
metric, \(\sum_{s}\delta g^{(s)}_{\mu\nu}=0\). This translates to \(\sum_{s}W^{\alpha\beta}_{(s)\,\mu\nu}=0\). Marking \(n\) the number of simplexes in the neighborhood \(\Omega\), and the averages:
\[A^{\mu\nu\alpha\beta} = \frac{1}{n}\sum_{s}A^{\mu\nu\alpha\beta}_{(s)} \tag{14}\] \[\tilde{A}^{\mu\nu\alpha\beta} = \frac{1}{n}\sum_{s}\left(A^{\mu\nu\alpha\beta}_{(s)}+A^{\mu\nu \rho\sigma}_{(s)\,\rho\sigma}W^{\alpha\beta}_{(s)\,\rho\sigma}+A^{\alpha\beta \rho\sigma}_{(s)}W^{\mu\nu}_{(s)\,\rho\sigma}+A^{\tau\lambda\rho\sigma}_{(s)}W ^{\alpha\beta}_{(s)\,\rho\sigma}W^{\mu\nu}_{(s)\,\tau\lambda}\right)\] (15) \[\delta A^{\mu\nu\alpha\beta}_{(s)} = A^{\mu\nu\alpha\beta}_{(s)}-A^{\mu\nu\alpha\beta}, \tag{16}\]
we may find \(W^{\alpha\beta}_{(s)\,\mu\nu}\) for a finite strain by solving the coupled set of equations
\[\tilde{\nabla}_{\mu}\sigma^{\mu\nu}+\sigma^{\mu\alpha}\left( \Gamma^{\nu}_{\mu\alpha}-\bar{\Gamma}^{\nu}_{\mu\alpha}\right)=f^{\nu}_{ext} \tag{17}\] \[\left(A^{\mu\nu\alpha\beta}_{(s)}+A^{\lambda\tau\alpha\beta}_{(s) }W^{\mu\nu}_{(s)\,\lambda\tau}\right)\Delta g_{\mu\nu}+\frac{1}{2}\chi^{\alpha \beta}=0\]
The first line is actually the elastic equation (2). At this point it is enough to note that \(\sigma^{\mu\nu}=\frac{\delta E_{el}}{\delta g_{\mu\nu}}\), where \(E_{el}\) is given by eq. (13), that \(\Gamma^{\alpha}_{\beta\gamma}\) and \(\bar{\Gamma}^{\alpha}_{\beta\gamma}\) are the Christoffel symbols associated with the metrics \(\mathbf{g}\) and \(\mathbf{\bar{g}}\), and that \(\bar{\nabla}_{\mu}\) is the covariant derivative associated with the metric \(\mathbf{\bar{g}}\). Marking \(\langle A_{(s)}W_{(s)}\rangle^{\mu\nu\alpha\beta}=\frac{1}{n}\sum_{s}A^{\lambda\tau\alpha\beta}_{(s)}W^{\mu\nu}_{(s)\,\lambda\tau}\), and using \(\langle A_{(s)}W_{(s)}\rangle^{\mu\nu\alpha\beta}=\langle\delta A_{(s)}W_{(s)}\rangle^{\mu\nu\alpha\beta}\), we may rewrite the second line of (17), after a little algebra
\[\delta A^{\mu\nu\alpha\beta}_{(s)}+A^{\lambda\tau\alpha\beta}_{(s)}W^{\mu\nu}_ {(s)\,\lambda\tau}-\langle\delta A_{(s)}W_{(s)}\rangle^{\mu\nu\alpha\beta}=0. \tag{18}\]
This is a linear equation for the non-affine deformation terms, \(W\). It is solved by mapping the tensor components and indices onto a multi-index notation and using the symmetries of the tensors (in two dimensions \(W^{\alpha\beta}_{(s)\,\mu\nu}\) has only 9 independent entries, while in three dimensions it has 36)
\[\delta A_{S}+\sum_{S^{\prime}}\left(A_{SS^{\prime}}-B_{SS^{\prime}}\right)W_{S ^{\prime}}=0 \tag{19}\]
Where \(\delta A_{S}\), \(A_{SS^{\prime}}\) and \(B_{SS^{\prime}}\) are reorganizations of the elements of \(\{A^{\mu\nu\alpha\beta}_{s}\}\) and \(\{\delta A^{\mu\nu\alpha\beta}_{s}\}\) into matrices compatible with the new multi index (see appendix A). The solution -
\[W_{S}=-\sum_{S^{\prime}}\left[A-B\right]^{-1}_{SS^{\prime}}\delta A_{S^{\prime}}. \tag{20}\]
We may now write the elastic energy -
\[E_{el}= \sum_{\Omega}n\tilde{A}^{\mu\nu\alpha\beta}\left(g_{\mu\nu}-\bar{g}_ {\mu\nu}\right)\left(g_{\alpha\beta}-\bar{g}_{\alpha\beta}\right). \tag{21}\]
This equation is essentially already coarse grained -
\[E_{el}= \sum_{\Omega}n\tilde{A}^{\mu\nu\alpha\beta}\left(g_{\mu\nu}-\bar{ g}_{\mu\nu}\right)\left(g_{\alpha\beta}-\bar{g}_{\alpha\beta}\right)\frac{V^{ \Omega}_{\mathbf{\bar{g}}}}{V^{\Omega}_{\mathbf{\bar{g}}}}=\sum_{\Omega}\rho_ {\Omega}\tilde{A}^{\mu\nu\alpha\beta}\left(g_{\mu\nu}-\bar{g}_{\mu\nu}\right) \left(g_{\alpha\beta}-\bar{g}_{\alpha\beta}\right)\int_{\Omega}\,\mathrm{d}V _{\mathbf{\bar{g}}} \tag{22}\] \[= \int\tilde{A}^{\mu\nu\alpha\beta}\left(g_{\mu\nu}-\bar{g}_{\mu\nu} \right)\left(g_{\alpha\beta}-\bar{g}_{\alpha\beta}\right)\,\mathrm{d}V_{\bar{g}}\]
where \(V^{\Omega}_{\mathbf{\bar{g}}}\) is the volume of the neighborhood \(\Omega\), \(\rho_{\Omega}=n/V^{\Omega}_{\mathbf{\bar{g}}}\) is the local density (which we absorb into the definition of \(\tilde{A}^{\mu\nu\alpha\beta}\)), \(\int_{\Omega}\,\mathrm{d}V_{\mathbf{\bar{g}}}\) is an integral over the region \(\Omega\), and we use the fact that \(\sum_{\Omega}\int_{\Omega}\,\mathrm{d}V_{\mathbf{\bar{g}}}=\int\,\mathrm{d}V_{\mathbf{\bar{g}}}\) over the whole network.
Eqs. (22) and (20) form the central result of this work. Together with the definitions of \(A^{\mu\nu\alpha\beta},\tilde{A}^{\mu\nu\alpha\beta}_{s},W^{\mu\nu}_{(s)\,\lambda\tau}\), they fully describe the response of the network and offer a novel way of computing it directly from the network geometry, without the need to consider any specific load. Under this view the metric \(\mathbf{g}\) is the actual, coarse grained, metric of the system, \(\mathbf{\bar{g}}\) describes the reference geometry of the system, and \(\tilde{A}^{\mu\nu\alpha\beta}\) is the coarse grained elastic tensor, governing the mechanical response (as opposed to the local or "bare" term \(A^{\mu\nu\alpha\beta}_{(s)}\)). \(W^{\mu\nu}_{(s)\,\alpha\beta}\) describe the non-affine
displacements.
### Comparison to simulation
Results were tested numerically by comparing the expected Poisson's ratio obtained using the above scheme to that of simulated two-dimensional triangulated spring networks. In general, we find a very good agreement between theory and simulation, the details of which are described in the methods section. We considered three cases - ordered, foam-like, and honeycomb networks.
#### Ordered networks
In the ordered case we simulated a triangular lattice with unit cells of varying shape, and computed the angle-dependent Poisson's ratio. In this case all of the non-affine tensors \(W^{\alpha\beta}_{(s)\;\mu\nu}=0\) vanish identically, leading to a simple calculation using eq. (10) (detailed analysis in appendix C). In Figure 1, we see a comparison of the analytical solution and the numerical estimation. Insets show the lattice structure.
#### Foam-like
Following [20] we simulate a random, foam-like network exhibiting an auxetic behavior in a certain parameter range. The network is produced by displacing each vertex of a regular triangular lattice in a random direction by a fixed amount \(0<\eta<0.5\). The calculation was done several times to average the results. Our results are consistent with those in [20]: \(\nu\) decreases as a function of \(\eta\), reaching \(\nu=0\) at \(\eta\simeq 0.46\) and reaching \(\nu=-0.1\) when \(\eta\to 0.5\). In fig. 2 we compare the simulation (discrete triangles) and the semi-analytical computation described in this paper.
#### Hexagonal (honeycomb) network
We consider a honeycomb network, in which the basic hexagonal unit can vary continuously between a regular and a re-entrant hexagon, with a diameter (distance between two opposing vertexes) of \(2\ell\), \(0<\ell<1\) (see inset in figure). In such a case, Poisson's ratio is analytically given by [26]
\[\nu(\ell)=-\frac{\left(2-4\ell\right)\left(\ell+1/2\right)}{3+4\ell-4\ell^{2}} \tag{23}\]
In order to calculate the elastic response, we use a triangulated hexagon, with a vertex at the center, and set the spring constant of the radial springs connecting the center with each corner of the hexagon to a very small value (1/1000'th of the peripheral springs). Without this, the original formulation becomes singular when the spring constant vanishes completely. Results are shown in fig. 3, with a very high degree of agreement between the analytical result and our formulation, despite the use of a large difference between spring constants, strengthening our approach.
## Conclusions
In this work we give, to our knowledge, the first analytical derivation of the effective, coarse grained, elastic description of a general spring network. Comparison of computational results stemming from this derivation to known/numerical results shows a high degree of agreement. Additionally, we identified the "non-affine" deformations, and have shown how they affect the resulting continuum elastic model. In systems such as granular media, these quantities play a role in local stress release by means of plastic deformations [27].
While derived for a spring network, the results shown are relevant to many fields and systems in two ways. First, the interaction between elements is almost always approximated as that of a simple spring, especially at small deformations. This is true for mechanical systems such as meta materials [19; 20; 21] and coarse grained mechanical models [18], chemical systems such as self assemblies [12; 13; 14; 15], and many biological systems as well [16; 17]. As such, the resulting theory, as is, is relevant to engineers, physicists, chemists, and biologists, and opens the possibility of rational design of materials.
Second, the coarse graining process described here can be generalized to other, more complicated interactions, and is not limited to point masses connected by linear springs. Nonlinearities can be addressed by using higher order terms (shape-related nonlinearities are actually addressed by the usage of a metric description). Formulated correctly, it could apply to complex molecules, cell-cell interactions, and to polymer networks. Activity may be involved in it as well.
Finally, the introduction of the new, \(W\), quantities invites further investigation as to the nature of the solutions of eq. (20), both analytically and numerically. It is known, for example, that the non-affine deformations have a characteristic scale [27]; this formulation may allow further insight into their scale dependence. Another usage route would involve intelligent design - relating the required mechanical behavior to the non-affine deformations, and from that to the network structure.
Figure 1: Ordered networks. Simulation (yellow) vs analytical estimation (blue) of Poisson’s ratio as a function of the angle, for different shape parameters \((\phi,\psi)\). Beginning in the middle row, left image, and advancing clockwise: \((\phi=1,\psi=1)\), \((2,2)\), \((3,0.9)\), \((1.5,1)\), \((2,0.5)\), \((1,0.5)\). Insets: the spring network shape (up to rotations).
## Methods
We used our assumption that \(\mathbf{\bar{g}}\) is well defined on a large enough region to restrict our numerical and analytical solution to the case \(\bar{g}_{\mu\nu}=\delta_{\mu\nu}\), as we can always work in a locally flat frame. This condition is sufficient as we want to isolate the effects of the non-trivial structure of the network itself, not the whole (non uniform) mechanical response of a complex, possibly residually stressed structure. We compared the results of 3 test cases: ordered (non isotropic),
Figure 3: Hexagonal networks. Theoretical (solid line) and computational (points) results for a honeycomb made out of uniform hexagons with diameter \(2\ell\). To avoid singular expressions, each hexagon was divided into triangles (as indicated by dashed lines), such that the added edges had negligible, but finite, rigidity (\(k_{dashed}=10^{-3}k_{solid}\)). Insets (from left to right): re-entrant hexagon, general hexagon, regular hexagon.
Figure 2: Random foam-like networks. Comparison between theoretical calculation (solid line) and numerical simulation (triangles). Solid line represents the average of 7 different realizations of about 150 vertexes each (shaded region is typical deviation). Numerical simulations were done over 10 realizations of 676 vertexes each, error bars mark the deviations. Note that the last triangle is pointing down, indicating the average is beyond plot boundaries. Inset- an example of a \(\eta=0.45\) realization.
foam-like (following the procedure in [20]), and a honeycomb, despite the latter being strictly non-triangulated. The latter can be calculated analytically rather than simulated. Exact simulation protocols and details can be found in the methods section.
In parallel to the simulation, for every network architecture we calculated \(\tilde{A}^{\mu\nu\alpha\beta}\), and using it we calculated the response to a hypothetical small strain \(\epsilon\) by setting \(g_{yy}=1+\epsilon\). Using the elastic equation (17), and working in a geometric mean-field approximation, we solved for the other terms \(g_{xx}\) and \(g_{xy}\) and calculated Poisson's ratio \(\nu=-\frac{g_{xx}-1}{\epsilon}\) (see appendix B for details).
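The mean-field step itself is a small quadratic minimization; the sketch below illustrates it for a given coarse-grained tensor in two dimensions (the construction of \(\tilde{A}^{\mu\nu\alpha\beta}\) with its non-affine corrections, Eq. (20), is not repeated here, and the symmetrization conventions are ours):

```python
import numpy as np

def poisson_ratio(A_eff, eps=1e-2):
    """Mean-field Poisson's ratio in 2D: impose dg_yy = eps, minimise the quadratic
    energy A^{mnab} dg_mn dg_ab over dg_xx and dg_xy, return nu = -dg_xx / eps."""
    q = lambda X, Y: np.einsum("mnab,mn,ab->", A_eff, X, Y)

    E_xx = np.array([[1.0, 0.0], [0.0, 0.0]])
    E_xy = np.array([[0.0, 1.0], [1.0, 0.0]])      # symmetric off-diagonal mode
    F = eps * np.array([[0.0, 0.0], [0.0, 1.0]])   # imposed stretch along y

    basis = [E_xx, E_xy]
    M = np.array([[q(Bi, Bj) + q(Bj, Bi) for Bj in basis] for Bi in basis])
    v = np.array([q(Bi, F) + q(F, Bi) for Bi in basis])
    dg_xx, _ = np.linalg.solve(M, -v)
    return -dg_xx / eps
```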
### Simulation
The simulation was created for the purpose of this research. In each run we simulated a strip with a length-to-width ratio of 4:1, with a total of about \(13\times 13\times 4=676\) vertexes, corresponding to about 1000 edges, depending on the exact details of each simulation.
When creating a lattice, vertexes were positioned using the base vectors -
\[v_{1}= \,(1,0) \tag{24}\] \[v_{2}= \,\left(\phi\frac{1}{2},\psi\frac{\sqrt{3}}{2}\right)\]
where \(0<\phi,\psi\) are the shear and elongation parameters, respectively, and are used to control the shape of the triangles. \(\psi=\phi=1\) corresponds to an equilateral triangle, and any \(\phi=1\) gives an isosceles triangle. The strip was created by keeping all vertexes whose coordinates satisfy \(0\leq x\leq 13\) and \(0\leq y\leq 52\) ("trimming"). Lattices with different orientations were created by rotating the base vectors before trimming, so that the strip orientation remains constant but the orientation of the triangles relative to it changes.
The set of vertexes was then used to create the list of edges via triangulation and extraction of the list of neighbors. The energy of each edge was directly calculated from the positions of its vertexes, using a simple spring energy. In the simulation, the coordinates of the top and bottom vertexes are held constant and all other vertexes are allowed to move in order to minimize the total energy.
The two vertexes initially closest to \(x=0,y=26\) and \(x=13,y=26\) were identified in order to measure the transverse strain between them, \(\delta=\frac{\Delta x_{final}-\Delta x_{initial}}{\Delta x_{initial}}\), where \(\Delta x_{final}\) is the final \(x\)-coordinate difference between the two vertexes and \(\Delta x_{initial}\) is the initial difference. After setting the top vertexes at \(y=52(1+\epsilon)\) (\(\epsilon=0.01\)) and letting the system relax elastically, Poisson's ratio was calculated via \(\nu=-\delta/\epsilon\), and averaged over several simulations if required (in the more stochastic simulations).
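A condensed sketch of this protocol is given below (assuming numpy/scipy; the use of scipy's Delaunay triangulation and of an L-BFGS minimiser, as well as the tolerances used to pick out the clamped rows, are implementation choices made here for illustration and need not match the original code):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import Delaunay

def build_strip(phi=1.0, psi=1.0):
    """Triangular-lattice strip with 0 <= x <= 13, 0 <= y <= 52."""
    v1, v2 = np.array([1.0, 0.0]), np.array([0.5 * phi, 0.5 * np.sqrt(3) * psi])
    # Index ranges sized for shape parameters of order one.
    ii, jj = np.meshgrid(np.arange(-60, 75), np.arange(0, 65))
    pts = ii.ravel()[:, None] * v1 + jj.ravel()[:, None] * v2
    pts = pts[(pts[:, 0] >= 0) & (pts[:, 0] <= 13) & (pts[:, 1] >= 0) & (pts[:, 1] <= 52)]
    tri = Delaunay(pts)
    edges = {tuple(sorted((s[a], s[b]))) for s in tri.simplices for a, b in ((0, 1), (1, 2), (2, 0))}
    return pts, np.array(sorted(edges))

def stretch_and_relax(pts, edges, eps=0.01, k=1.0):
    """Clamp the bottom row, move the top row to y = 52(1+eps), relax the rest, return nu."""
    rest = np.linalg.norm(pts[edges[:, 0]] - pts[edges[:, 1]], axis=1)
    top = pts[:, 1] >= pts[:, 1].max() - 1e-9
    bottom = pts[:, 1] <= pts[:, 1].min() + 1e-9
    free = ~(top | bottom)
    clamped = pts.copy()
    clamped[top, 1] = 52.0 * (1.0 + eps)

    def energy(u):
        q = clamped.copy()
        q[free] = u.reshape(-1, 2)
        d = np.linalg.norm(q[edges[:, 0]] - q[edges[:, 1]], axis=1)
        return 0.5 * k * np.sum((d - rest) ** 2)

    # For speed one would supply the analytic gradient; a sketch relies on finite differences.
    res = minimize(energy, pts[free].ravel(), method="L-BFGS-B")
    relaxed = clamped.copy()
    relaxed[free] = res.x.reshape(-1, 2)

    left = np.argmin(np.linalg.norm(pts - [0.0, 26.0], axis=1))
    right = np.argmin(np.linalg.norm(pts - [13.0, 26.0], axis=1))
    delta = (relaxed[right, 0] - relaxed[left, 0]) / (pts[right, 0] - pts[left, 0]) - 1.0
    return -delta / eps
```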
Simulating the foam-like structure is stochastic in essence. We used the same initialization process, with the following differences. After generating a triangular lattice strip with \(\phi=\psi=1\), and triangulating it, we changed the position of each vertex by an amount \(0<\eta<0.5\) in a random direction, and used the resulting distances as the reference lengths of the edges. We then followed the regular procedure by stretching the strip and letting the system relax (with the reference lengths calculated just a moment before).
### Calculation through eq.(20)
A square patch was generated, independently of the simulation. Generation of the network itself was done in a way similar to that described for the simulation. However, once the network was generated, instead of stretching it we calculate \(\tilde{A}^{\mu\nu\alpha\beta}\) using the \(\{W^{\mu\nu}_{(s)\,\lambda\tau}\}\), which are calculated using eq. (20). Poisson's ratio is calculated in the mean field approximation as described in appendix B.
## Acknowledgements
D.G would like to thank Amos Grossman for his help, patience and useful discussions, and Alessio Zaccone for pointing out important references. |
2309.16791 | Group rings and hyperbolic geometry | For a group acting on a hyperbolic space, we set up an algorithm in the group
algebra showing that ideals generated by few elements are free, where few is a
function of the minimal displacement of the action, and derive algebraic,
geometric, and topological consequences. | Grigori Avramidi, Thomas Delzant | 2023-09-28T18:38:33Z | http://arxiv.org/abs/2309.16791v1 | # Group rings and hyperbolic geometry
###### Abstract
For a group acting on a hyperbolic space, we set up an algorithm in the group algebra showing that ideals generated by few elements are free, where few is a function of the minimal displacement of the action, and derive algebraic, geometric, and topological consequences.
## 1 Introduction
Let \(G\) be a group and \(\mathbb{K}\) a field. A natural problem is to study relations between the group \(G\) and its group algebra \(\mathbb{K}[G]\). For instance, in 1953 Fox suggested that
"_It seems reasonable to conjecture that a group ring \(\mathbb{Z}[G]\) can not have divisors of zero unless \(G\) has elements of finite order; this seems to be not an easy question._"([17], p.557).
This was also conjectured by Kaplansky in [22] and by Higman in his (unpublished) thesis (see [25]). It is equivalent to the statement that ideals generated by one element are free modules. In the early 60's, Cohn [9, 11, 12] investigated rings in which ideals generated by any number of elements are free as modules (calling them _free ideal rings_, or _firs_ for short), and showed that group algebras of free groups have this property. Soon after, Stallings proved his celebrated result on ends of groups [26] which implies no other group algebras do.
**Theorem 1** (Cohn [9]+(a consequence of) Stallings [26]).: _The group \(G\) is a free group if and only if all ideals in the group algebra \(\mathbb{K}[G]\) are free as submodules._
A related question is to describe, for a ring \(R\), the automorphism group \(\operatorname{GL}_{n}(R)\) of a free \(R\)-module. Denote by \(\operatorname{GE}_{n}(R)\) the subgroup generated by elementary and diagonal matrices. Cohn also showed that for group algebras of free groups this is the entire automorphism group.
**Theorem 2** (Cohn [10]).: _For every \(n\), \(\operatorname{GE}_{n}(\mathbb{K}[F])=\operatorname{GL}_{n}(\mathbb{K}[F])\)._
Bass [5] used these two theorems to show projective modules over the integral group ring of a free group, \(\mathbb{Z}[F]\), are free. This algebraic result has a striking topological consequence:
**Corollary 3** (3.3. in [27]).: _Any two-dimensional complex with free fundamental group is homotopy equivalent to a wedge of circles and \(2\)-spheres._
Our goal is to establish similar results for groups \(G\) acting on hyperbolic spaces. Earlier steps in this direction were taken in [14] and [3], where it was proved that ideals in \(\mathbb{K}[G]\) generated by one, respectively two elements are free if the minimum displacement of the action is large enough. Going further, we show in this paper that ideals generated by \(n\) elements are free if
\((\mathcal{H}_{n,\delta})\): \(G\) _acts on a \(\delta\)-hyperbolic space \(\mathcal{H}\) with displacement greater than \((2n+11)^{2}\delta\)_
and--under the same hypothesis--describe the automorphism groups of free, rank \(n\) modules.
**Theorem 4**.: _Assume the group \(G\) satisfies \(\mathcal{H}_{n,\delta}\). Then_
1. _Every_ \(n\)_-generated ideal in_ \(\mathbb{K}[G]\) _is a free_ \(\mathbb{K}[G]\)_-module, and_
2. \(\operatorname{GE}_{n}(\mathbb{K}[G])=\operatorname{GL}_{n}(\mathbb{K}[G])\)_._
**Example**.: _If \(G\) is the fundamental group of a compact riemannian manifold of curvature \(\leq-1\) acting on its universal cover, then the minimal displacement is twice the injectivity radius of the manifold and the hyperbolicity constant is \(\delta=\log 2\). So, if the injectivity radius is greater than \((2n+11)^{2}(\log 2)/2\) then \(G\) satisfies \(\mathcal{H}_{n,\delta}\). If \(G\) is any residually finite hyperbolic group, then it has a finite index subgroup satisfying \(\mathcal{H}_{n,\delta}\). The free product of any two groups satisfying \(\mathcal{H}_{n,\delta}\) also satisfies it._
### Applications
To augment the description of \(\operatorname{GL}_{n}\), one needs to understand diagonal matrices, which amounts to describing the units in \(\mathbb{K}[G]\). This was already done in [14], where it was shown (assuming minimal displacement is greater than \(4\delta\)) that all units are trivial, i.e. have the form \(\lambda g\) for \(\lambda\in\mathbb{K}^{*},g\in G\). In particular, for a finite field \(\mathbb{K}\) and finitely generated group \(G\) the unit group of \(\mathbb{K}[G]\) is finitely generated. Combining this with our theorem, we obtain an analogous result for \(\operatorname{GL}_{n}\).
**Theorem 5**.: _Assume \(G\) satisfies \(\mathcal{H}_{n,\delta}\). If the field \(\mathbb{K}\) is finite and the group \(G\) is finitely generated, then \(\operatorname{GL}_{n}(\mathbb{K}[G])\) is finitely generated._
In a different direction, a geometric consequence of our theorem is a lower bound for critical points of Morse functions of a given index on essential manifolds.
**Theorem 6**.: _Let \(X^{d}\) be a closed \(d\)-manifold, \(G\) a group that satisfies \(\mathcal{H}_{n,\delta}\) and \(BG\) its classifying space. If there is a continuous map \(f:X^{d}\to BG\) with \(f_{*}[X]\neq 0\) in \(H_{d}(BG;\mathbb{K})\) then for each \(0<k<d\), a Morse function on \(X^{d}\) has at least \(n+1\) critical points of index \(k\)._
The theorem applies--for instance--if the classifying space \(BG\) is a closed, riemannian manifold of curvature \(\leq-1\) and injectivity radius greater than \((2n+11)^{2}\) and \(X=BG\), or more generally if \(X\) has a map of non-zero degree to such a \(BG\).
Using Bass's local-to-global method, we also obtain the following version of Corollary 3 for some \(2\)-dimensional hyperbolic groups (e.g. very high genus surface groups).
**Theorem 7**.: _Assume that \(G\) satisfies \(\mathcal{H}_{n,\delta}\), and that there is an aspherical \(2\)-complex \(Y\) with fundamental group \(G\). Then every presentation \(2\)-complex for \(G\) with less than \(n+1\) relations is homotopy equivalent to_
\[Y\lor S^{2}\vee\cdots\lor S^{2}.\]
It is known ([8]) that one-relator groups have geometric dimension at most two. In our setting of groups acting with large displacement on hyperbolic spaces, we can show that "few relator" groups have cohomological dimension \(\leq 2\).
**Theorem 8**.: _An \(n\)-relator group satisfying \(\mathcal{H}_{n,\delta}\) has cohomological dimension \(\leq 2\)._
Finally, let us mention a consequence that can be thought of either as a generalization of Theorem 8 to higher dimensions or as a generalization of (the \(X=BG\) case of1) Theorem 6 to hyperbolic groups.
Footnote 1: See also Theorem 37.
**Theorem 9**.: _Assume that the group \(G\) satisfies \(\mathcal{H}_{n,\delta}\) and has cohomological dimension \(d\). Then, every aspherical complex with fundamental group \(G\) has more than \(n\) cells in each dimension \(0<k<d\)._
### Algorithms
The proof of the first part of Theorem 4 is based on an algorithm which can be seen as a geometric version of the euclidean algorithm. To describe it, let us first sketch the approach to the "only if" direction of Theorem 1 given in Cohn's book [12]. It consists of two distinct steps:
* Fix a basis for the free group \(F\), let \(F_{+}\) be the (monoid of) non-negative words in this basis and denote by \(\mathbb{K}[F_{+}]\subset\mathbb{K}[F]\) the subring of the group algebra generated by these words. Cohn first shows that ideals in \(\mathbb{K}[F_{+}]\) are free. (Corollary 2.5.2 and Theorem 2.4.6)
* He then gives a localization procedure for passing from \(\mathbb{K}[F_{+}]\) to \(\mathbb{K}[F]\) that preserves the free ideal property. (Corollary 7.11.8)
Cohn measures size via filtrations. Recall that a _filtration_ on a ring \(R\) is a map \(|\cdot|:R\to\mathbb{N}\cup\{-\infty\}\) such that \(|0|=-\infty\), \(|1|=0\), and, for all \(x,y\in R\), \(|x-y|\leqslant\max(|x|,|y|)\) and \(|xy|\leqslant|x|+|y|\).
Given a generating set for a group \(G\), the group algebra \(\mathbb{K}[G]\) has a natural filtration by word length \(l(g)\) of elements \(g\) in \(G\):
\[\left|\sum_{g\in G}\lambda^{g}\cdot g\right|:=\max_{\lambda^{g}\neq 0}l(g).\]
On the ring \(\mathbb{K}[F_{+}]\) of positive words in the free group, this filtration satisfies \(\left|xy\right|=\left|x\right|+\left|y\right|\).
The relevant notion of dependence says that some linear combination has smaller size than the maximum dictated by its terms: A family \(\xi_{1},\ldots,\xi_{n}\) in \(R\) is \(\left|\cdot\right|\)_-dependent_ if there is a non-zero \((\alpha_{1},\ldots,\alpha_{n})\in R^{n}\) such that
\[\left|\sum_{i}\alpha_{i}\xi_{i}\right|<\max_{i}\left|\alpha_{i}\xi_{i}\right|.\]
**Example**.: _This is the case if the family is linearly dependent in the usual sense._
The first step is accomplished via the following algorithmic theorem.
**Theorem 10** (Cohn [12]).: _Let \(\left|\cdot\right|\) be the word length filtration of \(\mathbb{K}[F_{+}]\). If \(\xi_{1},\ldots,\xi_{n}\) in \(\mathbb{K}[F_{+}]\) is a \(\left|\cdot\right|\)-dependent family then, up to reordering, there exist \(\beta_{2},\ldots,\beta_{n}\) in \(\mathbb{K}[F_{+}]\) such that_
\[\left|\xi_{1}+\sum_{i=2}^{n}\beta_{i}\xi_{i}\right|<\left|\xi_{1}\right|.\]
For a finitely generated ideal in \(\mathbb{K}[F_{+}]\), one can repeatedly apply this theorem to decrease the size of members of a finite generating set until one arrives at a generating set that does not satisfy any dependence relations and hence forms a basis, showing the ideal is free. Doing a bit more work, Cohn uses this theorem to show infinitely generated ideals in \(\mathbb{K}[F_{+}]\) are free as well (Thm. 2.4.6). Finally, Cohn's localization procedure shows the same is true for \(\mathbb{K}[F]\).
In [21], Hog-Angeloni gave a beautiful, geometric version of this algorithm that applies to the entire group algera \(\mathbb{K}[F]\). It can be used to bypass the localization step if one is interested exclusively in finitely generated ideals. Hog-Angeloni's proof goes by looking at the action of \(F\) on its Cayley graph, which is a tree \(\mathcal{T}\). It uses the same notion of dependence as Cohn's proof but the natural notion of size of group ring elements relevant to her argument is the _diameter_
\[\operatorname{diam}\left(\sum_{g\in G}\lambda^{g}\cdot g\right):=\max_{ \lambda^{g}\neq 0\neq\lambda^{h}}l(g^{-1}h).\]
**Remark**.: _Note that \(\left|\xi\right|=0\) means \(\xi\in\mathbb{K}^{*}\), while \(\operatorname{diam}(\xi)=0\) means \(\xi=\lambda g\) for a non-zero \(\lambda\in\mathbb{K}^{*}\) and group element \(g\in F\). The diameter is invariant by left translation, while \(\left|\cdot\right|\) is not._
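For concreteness, here is a small illustration of these two notions for the free group on two generators (a toy sketch: elements of \(\mathbb{K}[F]\) are represented as dictionaries from reduced words to coefficients, with capital letters standing for inverse generators; all names here are ours):

```python
def reduce_word(w):
    """Freely reduce a word over {a, b, A, B} (capital letters are inverses)."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def inverse(w):
    return "".join(c.swapcase() for c in reversed(w))

def filtration(xi):
    """|xi|: maximal word length over the support of xi (a dict word -> coefficient)."""
    return max((len(g) for g, c in xi.items() if c != 0), default=float("-inf"))

def diameter(xi):
    """diam(xi): maximal length of g^{-1} h over pairs g, h in the support of xi."""
    support = [g for g, c in xi.items() if c != 0]
    return max((len(reduce_word(inverse(g) + h)) for g in support for h in support),
               default=float("-inf"))

xi = {"a": 1, "ab": 1}                                      # xi = a + ab
print(filtration(xi), diameter(xi))                         # 2 1
print(filtration({"aa": 1, "aab": 1}), diameter({"aa": 1, "aab": 1}))   # 3 1 (left translate by a)
```

As the example shows, left translation changes the filtration but not the diameter, which is the point of the remark above.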
**Theorem 11** (Hog-Angeloni [21]).: _Let \(\left|\cdot\right|\) be the word length filtration of \(\mathbb{K}[F]\). If \(\xi_{1},\ldots,\xi_{n}\) in \(\mathbb{K}[F]\) is a \(\left|\cdot\right|\)-dependent family then, up to reordering, there exist \(\beta_{2},\ldots,\beta_{n}\) in \(\mathbb{K}[F]\) such that_
\[\operatorname{diam}\left(\xi_{1}+\sum_{i=2}^{n}\beta_{i}\xi_{i}\right)< \operatorname{diam}(\xi_{1}).\]
An attentive reader of [21] can check that Hog-Angeloni does not use that the group is free nor that the tree is a Cayley graph, but only the fact that the group acts freely on a tree. We build on her geometric approach, replacing \(F\) acting on the tree \(\mathcal{T}\) by a group \(G\) that acts on a hyperbolic space \(\mathcal{H}\) with large minimum displacement. In our approach we will use the natural filtration on \(\mathbb{K}[G]\) obtained from the action of \(G\) on the hyperbolic space \(\mathcal{H}\).
**Notation**.: _Let \(\mathcal{H}\) be a geodesic metric space and \(o\) a basepoint (or origin). Denote by \(|p-q|\) the distance between two points in \(\mathcal{H}\), and by \(|p|=|p-o|\) the distance to the origin. If \(\mathcal{X}\subset\mathcal{H}\) is a finite subset then \(|\mathcal{X}|=\max_{p\in\mathcal{X}}|p|\) is called its absolute value and \(\operatorname{diam}(\mathcal{X})=\max_{p,q\in\mathcal{X}}|p-q|\) its diameter._
_If a group \(G\) acts isometrically on \(\mathcal{H}\) and \(\xi=\sum_{g\in G}\lambda^{g}\cdot g\) is an element in the group algebra, denote by \(\mathcal{X}=\{g\cdot o\ |\ \lambda^{g}\neq 0\}\) the orbit of the basepoint under group elements appearing with non-zero coefficient in \(\xi\) (i.e. under group elements in the algebraic support of \(\xi\)) and call it the geometric support of \(\xi\). The diameter \(\operatorname{diam}(\xi)\) and absolute value \(|\xi|\) are \(\operatorname{diam}(\xi)=\operatorname{diam}(\mathcal{X})\) and \(|\xi|=|\mathcal{X}|\). By convention, \(\operatorname{diam}(0)=|0|=-\infty\)._
_The minimal displacement of the action of \(G\) on \(\mathcal{H}\) is \(\min_{g\in G-\{1\},p\in\mathcal{H}}|g\cdot p-p|\)._
Now, we can state our hyperbolic version of Hog-Angeloni's theorem.
**Theorem 12**.: _Set \(\delta_{n}=(n^{2}+10n)\delta\). Let the group \(G\) act on a \(\delta\)-hyperbolic space \(\mathcal{H}\) with minimum displacement \(>4\delta_{n}+(10+2n)\delta\). If \(\xi_{1},\ldots,\xi_{n}\in\mathbb{K}[G]\) and there is a non-zero \((\alpha_{1},\ldots,\alpha_{n})\in\mathbb{K}[G]^{n}\) such that_
\[\left|\sum_{i}\alpha_{i}\xi_{i}\right|<\max_{i}|\alpha_{i}\xi_{i}|-\delta_{n},\]
_then, up to re-ordering, there exist \(\beta_{2},\ldots,\beta_{n}\) in \(\mathbb{K}[G]\) such that_
\[\operatorname{diam}\left(\xi_{1}+\sum_{i=2}^{n}\beta_{i}\xi_{i}\right)< \operatorname{diam}(\xi_{1})-\delta.\]
**Remark**.: _Theorem 12 will be used to replace the euclidean algorithm in the classical study of ideals, submodules of free modules, and matrices over euclidean rings. This is the key tool needed to obtain the geometric and algebraic applications (Theorems 4 through 9) as we shall see in the last section._
### Acknowledgements
We would like to thank Misha Gromov for his interest in this work and for mentioning the possibility of extending our results to essential manifolds. G.A. would like to thank the Max Planck Institut für Mathematik for its hospitality and financial support.
## 2 Hyperbolic preliminaries
Fix a \(\delta\)-hyperbolic metric space \(\mathcal{H}\) with basepoint \(o\). See [18],[13], or [6] for background on hyperbolicity.
#### Radii and centers
Let \(\mathcal{X}\) be a bounded subset in the metric space \(\mathcal{H}\). Denote by
\[r(\mathcal{X}):=\inf\{r\mid\text{there is $c$ for which $B(c,r)\supset\mathcal{X}$}\}\]
the infimum of radii of closed balls containing \(\mathcal{X}\). We call it the _radius_ of \(\mathcal{X}\). If \(\mathcal{H}\) is a proper metric space, then this infimum is realized and there is a closed ball of radius \(r(\mathcal{X})\) containing \(\mathcal{X}\). In general, we only have for any positive \(\epsilon\) a closed ball \(B(c,r(\mathcal{X})+\epsilon)\) containing \(\mathcal{X}\). We call such a \(c\) an \(\epsilon\)-center of \(\mathcal{X}\).
**Remark**.: _If \(\mathcal{H}\) is a proper, complete \(\mathrm{CAT}(-1)\) space, then there is a unique \(0\)-center._
#### Gromov products
Recall ([18, 13, 6]) the definition of the Gromov product
\[\left\langle p,q\right\rangle_{r}:=\frac{1}{2}(|p-r|+|q-r|-|p-q|).\]
To simplify notation, recall that the distance from \(p\) to the origin \(o\) is denoted by \(|p|\) and set \(\left\langle p,q\right\rangle:=\left\langle p,q\right\rangle_{o}\). First, we give an estimate for the Gromov product of an \(\epsilon\)-center of \(\mathcal{X}\) with a point of \(\mathcal{X}\) that follows directly from the triangle inequality.
**Lemma 13**.: _Let \(p\) be a point in a set \(\mathcal{X}\) with \(\epsilon\)-center \(c\) and radius \(r\). Then_
\[|c|\geq\left\langle p,c\right\rangle\geq\frac{|\mathcal{X}|+|p|}{2}-r-\epsilon.\]
Proof.: The triangle inequality \(|c|\geq|p|-|c-p|\) implies
\[|c| \geq \frac{1}{2}\left(|c|+|p|-|c-p|\right)\] \[\geq |p|-|c-p|\] \[\geq |p|-r-\epsilon.\]
This proves the left inequality and, since it is true for all \(p\in\mathcal{X}\), shows \(|c|\geq|\mathcal{X}|-r-\epsilon\). Plugging this back into the formula for the Gromov product gives the right inequality:
\[\left\langle p,c\right\rangle = \frac{1}{2}(|c|+|p|-|p-c|)\] \[\geq \frac{1}{2}((|\mathcal{X}|-r-\epsilon)+|p|-(r+\epsilon)).\]
#### Thin triangles and projections in hyperbolic spaces
In a \(\delta\)-hyperbolic space, any geodesic triangle is \(\delta\)-thin. This means that if \([r,p,q]\) is a triangle then two (oriented) geodesics \([r,p]\) and \([r,q]\) parametrized by arc length \(p(t),q(t)\) remain \(\delta\)-close (i.e. \(\left|p(t)-q(t)\right|\leq\delta\)) until \(t=\left\langle p,q\right\rangle_{r}\). We will repeatedly use the following consequence of this: Given a segment \([a,b]\) and a point \(x\in\mathcal{H}\), the _projection of \(x\) to \([a,b]\)_ is the point \(x^{\prime}\) on \([a,b]\) such that \(\left|a-x^{\prime}\right|=\left\langle b,x\right\rangle_{a}\).2 Then, for points \(y_{1}\) in the initial subsegment \([a,x^{\prime}]\) of \([a,b]\) we have a \(\delta\)-converse to the triangle inequality
Footnote 2: This is also the point such that \(\left|b-x^{\prime}\right|=\left\langle a,x\right\rangle_{b}\), since \(\left\langle a,x\right\rangle_{b}+\left\langle b,x\right\rangle_{a}=\left|a-b\right|\).
\[\left|x-y_{1}\right|+\left|y_{1}-a\right|\leq\left|x-a\right|+\delta,\]
while for points \(y_{2}\) in the subsegment \([x^{\prime},b]\) we have
\[\left|x-y_{2}\right|+\left|y_{2}-b\right|\leq\left|x-b\right|+\delta.\]
#### Diameter vs radius
First, we note that in a \(\delta\)-hyperbolic space--just like in a tree--the diameter and radius are closely related. More precisely,
**Lemma 14**.: _Let \([a,b]\) be a diameter realizing segment of a finite set \(\mathcal{X}\) with radius \(r\) and let \(m\) be the midpoint of \([a,b]\). Then \(\mathcal{X}\subset B\left(m,\frac{\left|a-b\right|}{2}+\delta\right)\). In particular, \(m\) is a \(\delta\)-center of \(\mathcal{X}\) and_
\[\frac{\left|a-b\right|}{2}\leq r\leq\frac{\left|a-b\right|}{2}+\delta.\]
Proof.: The left inequality holds in any geodesic space. For the right one, let \(p\) be a point in \(\mathcal{X}\) and assume (without loss of generality) that its projection to \([a,b]\) is on \([m,b]\). Then
\[\frac{\left|a-b\right|}{2}+\left|m-p\right| = \left|a-m\right|+\left|m-p\right|\] \[\leq \left|a-p\right|+\delta\] \[\leq \left|a-b\right|+\delta,\]
whence the result.
#### Center vs midpoint
Next, we observe that an \(\epsilon\)-center is \((2\delta+\epsilon)\)-close to the midpoint of any diameter realizing segment.
**Lemma 15**.: _Let \(m\) be the midpoint of a diameter \([a,b]\) of a finite set \(\mathcal{X}\) with \(\epsilon\)-center \(c\). Then_
\[\left|c-m\right|\leq\epsilon+2\delta.\]
Proof.: Without loss of generality, the projection of \(c\) to \([a,b]\) is on \([m,b]\), so
\[|c-m|+\frac{|a-b|}{2} = |c-m|+|m-a|\] \[\leq |c-a|+\delta\] \[\leq r+\epsilon+\delta\]
implies \(|c-m|\leq r-\frac{|a-b|}{2}+\epsilon+\delta\leq\epsilon+2\delta\).
In particular, any two \(\epsilon\)-centers of a finite set are \((4\delta+2\epsilon)\)-close to each other and the midpoints of any two diameters are \(3\delta\)-close to each other.
#### An equivariant choice of centers

**Corollary 16**.: _Suppose \(G\) acts on a \(\delta\)-hyperbolic space \(\mathcal{H}\) with displacement greater than \(3\delta\). Then \(G\) acts freely on the collection of all finite subsets of \(\mathcal{H}\)._
Proof.: Let \(\mathcal{X}\) be a finite subset and \(m\) the midpoint of a diameter of \(\mathcal{X}\). If \(g\mathcal{X}=\mathcal{X}\) for some \(g\in G\) then \(m\) and \(gm\) are \(\delta\)-centers by Lemma 14 and hence \(3\delta\)-close by Lemma 15. Therefore, \(g=1\).
Therefore, as long as the displacement is greater than \(3\delta\) we can choose for each finite subset \(\mathcal{X}\subset\mathcal{H}\) an \(\epsilon\)-center \(c(\mathcal{X})\) such that \(c(g\mathcal{X})=gc(\mathcal{X})\) for each \(g\in G\).
#### Distance from origin to center

Now, we estimate the distance from the origin to an \(\epsilon\)-center of a set \(\mathcal{X}\) in terms of its radius and \(|\mathcal{X}|\).
**Lemma 17**.: _Let \(\mathcal{X}\) be a finite set with \(\epsilon\)-center \(c\). Then_
\[|\mathcal{X}|-r-\epsilon\leq|c|\leq|\mathcal{X}|-r+4\delta+\epsilon\]
Proof.: The left inequality already appeared in Lemma 13. For the right one, we again pick a diameter realizing segment \([a,b]\) and let \(m\) be its midpoint. Without loss of generality, the projection of \(o\) to \([a,b]\) is on \([m,b]\) and we have
\[|c| \leq |m|+|m-c|\] \[\leq (|a|-|a-m|+\delta)+2\delta+\epsilon\] \[\leq |\mathcal{X}|-\frac{|a-b|}{2}+3\delta+\epsilon.\]
Therefore
\[|c|+r\leq|c|+\left(\frac{|a-b|}{2}+\delta\right)\leq|\mathcal{X}|+4\delta+\epsilon\]
which proves the desired inequality.
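As a sanity check (our observation, not part of the original argument): in a proper tree one has \(\delta=0\) and centers can be taken with \(\epsilon=0\), so Lemma 17 collapses to the equality
\[|c|=|\mathcal{X}|-r,\]
which is exactly the identification \(|c_{v}|=d-r_{v}\) used in the tree case below.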
#### Diameter of intersection of two balls
Another property of \(\delta\)-hyperbolic spaces we will need is a bound on the diameter of the intersection of two balls.
**Lemma 18**.: _In a \(\delta\)-hyperbolic space \(\mathcal{H}\), the diameter of the intersection of two closed balls \(B(c_{1},r_{1})\cap B(c_{2},r_{2})\) is bounded above by \(r_{1}+r_{2}-|c_{1}-c_{2}|+2\delta\)._
Proof.: Let \([a,b]\) be a segment realizing the diameter of the intersection of balls, and let \(a^{\prime},b^{\prime}\) be the two projections of \(a,b\) on \([c_{1},c_{2}]\). We assume that \(a^{\prime}\) is on the left of \(b^{\prime}\) on this segment. Using \(a^{\prime}\in[c_{1},b^{\prime}]\), hyperbolicity and the fact that \(b\in B(c_{1},r_{1})\)
\[|b-b^{\prime}|+|b^{\prime}-a^{\prime}|+|a^{\prime}-c_{1}| = |b-b^{\prime}|+|b^{\prime}-c_{1}|\] \[\leqslant \delta+|b-c_{1}|\] \[\leqslant \delta+r_{1}.\]
Similarly, \(b^{\prime}\in[a^{\prime},c_{2}]\), hyperbolicity and the fact that \(a\in B(c_{2},r_{2})\) implies
\[|a-a^{\prime}|+|a^{\prime}-b^{\prime}|+|b^{\prime}-c_{2}| = |a-a^{\prime}|+|a^{\prime}-c_{2}|\] \[\leqslant \delta+|a-c_{2}|\] \[\leqslant \delta+r_{2}.\]
Adding these two inequalities together and using the triangle inequality on the left gives
\[|a-b|+|c_{1}-c_{2}|\leq 2\delta+r_{1}+r_{2}.\]
Since \(|a-b|\) is the diameter, this finishes the proof.
#### Gromov product inequality
Finally, we recall a key inequality that will be used to estimate distances:
**Lemma 19** ([18] § 6 page 155 or [13] 8.2 page 91).: _For a sequence of \(2^{k}+1\) points \(x_{0},\ldots,x_{2^{k}}\), we have_
\[\langle x_{0},x_{2^{k}}\rangle\geq\min_{i}\left\langle x_{i},x_{i+1}\right\rangle -k\delta.\]
## 3 Extremal graphs
Let \(V\) be a finite set and suppose we have a family3 of bounded sets \(Y=(\mathcal{Y}_{v})_{v\in V}\). Let \(\mu\) be a fixed parameter. A point \(p\in\cup_{v\in V}\mathcal{Y}_{v}\) satisfying \(|p|\geq|\cup_{v\in V}\mathcal{Y}_{v}|-\mu\) is called a \(\mu\)_-extremal point_ (of \(Y\)). A \(0\)-extremal point will also be called an _extremal point_.
Footnote 3: Repetitions in \(Y=(\mathcal{Y}_{v})_{v\in V}\) are allowed, i.e. we may have \(\mathcal{Y}_{v}=\mathcal{Y}_{w}\) even if \(v\neq w\).
### The graph \(\Gamma_{\mu}(Y)\)
Let us first define the \(\mu\)_-extremal graph_\(\Gamma_{\mu}(Y)\) of the family \(Y\). The vertices of \(\Gamma_{\mu}(Y)\) are the indices \(v\in V\) such that \(\mathcal{Y}_{v}\) contains a \(\mu\)-extremal point (of \(Y\)), and there is an edge between \(v\) and \(w\) for each \(\mu\)-extremal point in \(\mathcal{Y}_{v}\cap\mathcal{Y}_{w}\).
#### Distance between centers
For each vertex \(v\in V\) we let \(r_{v}\) be radius of the set \(\mathcal{Y}_{v}\) and choose an \(\epsilon\)-center \(c_{v}\). We estimate the distance between two centers \(c_{v}\) and \(c_{w}\) in terms of the distance between the vertices \(v\) and \(w\) in the graph \(\Gamma_{\mu}(Y)\).
**Lemma 20**.: _For a subset \(W\subset V\) denote \(r_{\max(W)}=\max_{v\in W}r_{v}\)._
1. _If_ \(v\) _and_ \(w\) _are connected by a path_ \(P\) _of length_ \(m\) _in_ \(\Gamma_{\mu}(Y)\)_, then_ \[|c_{v}-c_{w}|\leqslant(r_{\max(P)}-r_{v})+(r_{\max(P)}-r_{w})+2\mu+(8+2\lceil \log_{2}(2m)\rceil)\delta+4\epsilon.\]
2. _If_ \(v\) _and_ \(w\) _are adjacent vertices in_ \(\Gamma_{\mu}(Y)\)_, then we have_ \[|c_{v}-c_{w}|\leq|r_{v}-r_{w}|+2\mu+10\delta+4\epsilon.\]
Proof.: The path \(v=v_{0},e_{0},v_{1},e_{1},\ldots,e_{m-1},v_{m}=w\) in the graph gives an alternating sequence of centers and \(\mu\)-extremal points \(c_{v_{0}},p_{e_{0}},\ldots,p_{e_{m-1}},c_{v_{m}}\). Write \(d:=\left|\bigcup_{v\in V}\mathcal{Y}_{v}\right|\), so that every \(\mu\)-extremal point \(p\) satisfies \(|p|\geq d-\mu\). The Gromov product inequality and Lemma 13 give
\[\langle c_{v},c_{w}\rangle \geq \min_{v_{i}}(d-\mu-r_{v_{i}}-\epsilon)-\lceil\log_{2}(2m)\rceil\delta.\]
Using this and the inequality \(|c_{v}|\leq d-r_{v}+4\delta+\epsilon\) obtained in Lemma 17 we get
\[|c_{v}|-\langle c_{v},c_{w}\rangle\leq(\max_{v_{i}}r_{v_{i}})-r_{v}+\mu+(4+ \lceil\log_{2}(2m)\rceil)\delta+2\epsilon,\]
and a similar inequality for \(w\) in place of \(v\). Since
\[|c_{v}-c_{w}|=(|c_{v}|-\langle c_{v},c_{w}\rangle)+(|c_{w}|-\langle c_{v},c_{ w}\rangle),\]
we obtain the first inequality. For adjacent vertices \(v\) and \(w\), we have \(m=1\) and \(2\max\{r_{v},r_{w}\}-r_{v}-r_{w}=|r_{v}-r_{w}|\), which gives the second inequality.
#### Colors
Consider a partition \(V=V_{1}\sqcup\cdots\sqcup V_{n}\) into \(n\) subsets called _colors_ such that any two vertices of the same color have the same radius (if \(v,w\in V_{i}\) then \(r_{v}=r_{w}\)). Denote by \(r_{i}\) the radius of a set of color \(i\).
#### Bounding the diameter of \(\Gamma_{\mu}(Y)\)
Next, we will give an upper bound on the diameter of a component of an extremal graph in terms of the number \(n\) of colors. The result will be proved by induction on the number of colors \(n\). In fact, we prove something stronger, namely an upper bound on the length of an embedded path in the graph.
**Proposition 21**.: _Let \(V=V_{1}\sqcup\cdots\sqcup V_{n}\) be a finite set partitioned into \(n\) colors and let \(Y=(\mathcal{Y}_{v})_{v\in V}\) be a collection of bounded subsets of \(\mathcal{H}\) such that for each \(i\) and any two different \(v,w\in V_{i}\) we have_
* \(r_{v}=r_{w}\) _and_
* \(|c_{v}-c_{w}|>2\mu+(10+2n)\delta+4\epsilon\)_._
_Then every embedded path in \(\Gamma_{\mu}(Y)\) has length at most \(2^{n}-2\)._
Proof.: We argue by contradiction. If there is an embedded path in \(\Gamma_{\mu}(Y)\) of length more than \(2^{n}-2\), then there is an embedded path of length precisely \(2^{n}-1\). Let \(P_{n}\) be such a path. It has \(2^{n}\) vertices. Reorder the list of colors so that
\[r_{1}\geq\cdots\geq r_{n}.\]
Let \(P_{k}\) be the sequence of vertices obtained by throwing out from \(P_{n}\) all vertices of colors \(\{k+1,\ldots,n\}\). The sequence \(P_{k}\) is colored by the set \(\{1,\ldots,k\}\). We prove by reverse induction (starting with base case \(k=n\) and going down to \(k=1\)) that
* \(P_{k}\) has at least \(2^{k}\) vertices, and
* any two consecutive vertices in \(P_{k}\) have different colors.
Note that consecutive vertices of \(P_{n}\) have different colors by Lemma 20.2 and the second hypothesis. So \(P_{n}\) satisfies the two properties above, verifying the base case.
To prove the induction step, suppose we know the statement for \(P_{k+1}\). Since it has at least \(2^{k+1}\) vertices and no two consecutive vertices have the same color, at most \(\lceil|P_{k+1}|/2\rceil\) of them have color \(k+1\), so that \(P_{k}\) has at least \(2^{k}\) vertices. Suppose two consecutive vertices \(v,w\) in the sequence \(P_{k}\) have the same color. Then \(r_{v}=r_{w}\) and the two vertices are connected by a path (of length at most \(2^{n}\)) in \(P_{n}\) through colors \(\{k+1,\ldots,n\}\), in which all vertices have radii \(\leq r_{v}=r_{w}\) by our choice of ordering. Therefore, Lemma 20.1 implies:
\[|c_{v}-c_{w}|\leq 2\mu+(8+2(n+1))\delta+4\epsilon.\]
This contradicts the second hypothesis of our proposition. So, we have shown that \(v\) and \(w\) have different colors, completing the induction step.
We have shown \(P_{1}\) has at least two vertices and consecutive ones have different colors. This is absurd since all vertices of \(P_{1}\) have the same color. So, we arrive at a contradiction to our initial assumption, and conclude that all embedded paths in \(\Gamma_{\mu}(Y)\) have length \(\leq 2^{n}-2\).
As a consequence, we see that there is at most one vertex of the color of largest radius in each component, and a similar statement holds for other colors with suitably large radii. Recall that \(r_{1}\geq\cdots\geq r_{n}\), so that \(r_{\max(V)}=r_{1}\).
**Corollary 22** (Uniqueness).: _In the situation of Prop. 21, if for any pair of distinct vertices of the same color \(v,w\in V_{i}\) we have_
\[|c_{v}-c_{w}|>2(r_{1}-r_{i})+2\mu+(10+2n)\delta+4\epsilon\]
_then there is at most one vertex of color \(i\) in each component of \(\Gamma_{\mu}(Y)\)._
Proof.: Assume \(v\) and \(w\) are in the same component of \(\Gamma_{\mu}(Y)\). Then, by Proposition 21, they are connected by an embedded path \(P\) of length at most \(2^{n}-2\). Therefore, Lemma 20.1 implies that \(|c_{v}-c_{w}|\leq 2(r_{1}-r_{i})+2\mu+(10+2n)\delta+4\epsilon\), contradicting the hypothesis. So, \(v\) and \(w\) must be in different components.
### The graph \(\Gamma_{\mu}(\Xi)\)
We will apply Proposition 21 and Corollary 22 in the following situation. Let \(\Xi=(\xi_{v})_{v\in V}\) in \(\mathbb{K}[G]\) be a finite family of group ring elements, such that no two are \(\mathbb{K}\)-scalar multiples of each other. Recall that the (geometric) support of \(\xi_{v}=\sum_{g\in G}\xi_{v}^{g}\cdot g\) is the set \(\mathcal{X}_{v}=\{g\cdot o\mid\xi_{v}^{g}\neq 0\}\). Let \(X=(\mathcal{X}_{v})_{v\in V}\) be the family of these supports, and set \(\Gamma_{\mu}(\Xi):=\Gamma_{\mu}(X)\). In this situation, there is a canonical partition of the index set \(V\) into colors \(V=V_{1}\sqcup\cdots\sqcup V_{n}\) defined as follows. We declare that \(v\) and \(w\) have the same color if and only if there is \(\lambda\in\mathbb{K}^{*}\) and \(g\in G\) such that \(\xi_{v}=\lambda g\xi_{w}\). Note that if \(v\) and \(w\) have the same color then \(g\mathcal{X}_{v}=\mathcal{X}_{w}\) and hence \(r_{v}=r_{w}\). Let \(r_{i}\) be the radius of elements with color \(i\).
**Corollary 23** (Uniform diameter bound+Uniqueness).: _Suppose that \(G\) acts on a \(\delta\)-hyperbolic space \(\mathcal{H}\) with displacement greater than \(2\mu+(10+2n)\delta\). Let \(\Xi=(\xi_{v})_{v\in V}\) in \(\mathbb{K}[G]\) be a family of group ring elements consisting of \(n\) colors, and such that no two are \(\mathbb{K}\)-scalar multiples of each other. Then,_
1. _Each component of_ \(\Gamma_{\mu}(\Xi)\) _has diameter at most_ \(2^{n}-2\)_._
2. _If the minimal displacement is greater than_ \(2(r_{1}-r_{i})+2\mu+(10+2n)\delta\)_, then there is at most one vertex of color_ \(i\) _in each component of_ \(\Gamma_{\mu}(\Xi)\)_._
Proof.: For two distinct elements \(v,w\in V_{i}\), there are \(\lambda\in\mathbb{K}^{*}\) and \(g\in G\) such that \(\lambda g\xi_{v}=\xi_{w}\); since \(\xi_{v}\) is not a \(\mathbb{K}\)-scalar multiple of \(\xi_{w}\), the element \(g\) is non-trivial. So, we have \(r_{v}=r_{w}\). Since the displacement is greater than \(3\delta\) we can, by Corollary 16, pick \(\epsilon\)-centers equivariantly, so that \(gc_{v}=c_{w}\). Then \(|c_{v}-c_{w}|=|c_{v}-gc_{v}|>2\mu+(10+2n)\delta+4\epsilon\) for small enough \(\epsilon\). So, Proposition 21 and Corollary 22 apply.
#### Components of \(\Gamma_{\mu}(\Xi)\) and weak relations
A finite collection of group ring elements \((\xi_{v})_{v\in V}\) defines a \(\mu\)_-relation_ if
\[\left|\sum_{v\in V}\xi_{v}\right|<\max_{v\in V}|\xi_{v}|-\mu.\]
The components of \(\Gamma_{\mu}(\Xi)\) help us keep track of such \(\mu\)-relations.
**Lemma 24**.: _If the family \(\Xi=(\xi_{v})_{v\in V}\) defines a \(\mu\)-relation, then every connected component \(C\) of the graph \(\Gamma_{\mu}(\Xi)\) that contains a vertex of \(\Gamma_{0}(\Xi)\) defines its own \(\mu\)-relation \(|\sum_{v\in C}\xi_{v}|<\max_{v\in C}|\xi_{v}|-\mu\)._
Proof.: Set \(d=\max_{v\in V}|\xi_{v}|\). Denote the complement of \(C\) in \(V\) by \(D=V-C\). Since the \((\xi_{v})_{v\in V}\) define a \(\mu\)-relation, we have
\[\left|\sum_{v\in C}\xi_{v}+\sum_{v\in D}\xi_{v}\right|<d-\mu.\]
Since \(C\) is a component of \(\Gamma_{\mu}(\Xi)\), the supports of \(\sum_{v\in C}\xi_{v}\) and \(\sum_{v\in D}\xi_{v}\) have no \(\mu\)-extremal points (that is, points \(p\) with \(|p|\geq d-\mu\)) in common. Therefore, we must have
\[\left|\sum_{v\in C}\xi_{v}\right|<d-\mu.\]
Our choice of \(C\) implies that \(d=\max_{v\in C}|\xi_{v}|\), proving the lemma.
## 4 Proof of Theorem 12
### Restatement of the main theorem
We will now restate Theorem 12 in terms of colors and \(\mu\)-relations. We assume that \(\xi_{1},\ldots,\xi_{n}\) are elements in the group algebra \(\mathbb{K}[G]\) so that there exist \(\alpha_{1},\ldots,\alpha_{n}\) in \(\mathbb{K}[G]\), not all zero, satisfying \(|\sum\alpha_{i}\xi_{i}|<\max_{i}|\alpha_{i}\xi_{i}|-\delta_{n}\), where \(\delta_{n}=(n^{2}+10n)\delta\).
Recall that two elements \(\xi_{1}\) and \(\xi_{2}\) of \(\mathbb{K}[G]\) have the same color if there exists a trivial unit \(\lambda g\) in \(\mathbb{K}[G]\) such that \(\xi_{1}-\lambda g\xi_{2}=0\). Certainly if two of the \(\xi_{i}\) have the same color, the conclusion of the theorem follows.
So, from now on, we assume that the \(\xi_{i}\) have different colors, so that the set \(\{1,\ldots,n\}\) can be used as the set of colors. For each \(\alpha_{i}=\sum_{g\in G}\alpha_{i}^{g}\cdot g\), we let \(A_{i}=\{g\in G\mid\alpha_{i}^{g}\neq 0\}\) be its (algebraic) support in the group \(G\). The group ring elements \((\alpha_{i}^{g}g\xi_{i})_{g\in A_{i},1\leq i\leq n}\) are distinct, no two of them are scalar multiples of each other, and for \(i\neq j\) they do not have the same color. We rename them \((\xi_{v})_{v\in V}\) which is a collection of elements in \(\mathbb{K}[G]\) having \(n\) colors. With this notation, the hypothesis \(|\sum\alpha_{i}\xi_{i}|<\max_{i}|\alpha_{i}\xi_{i}|-\delta_{n}\) implies that:
\[\left|\sum_{v\in V}\xi_{v}\right|<\max_{v\in V}|\xi_{v}|-\delta_{n}.\]
Recall that in this situation we say that the family \((\xi_{v})_{v\in V}\) defines a \(\delta_{n}\)-relation. Under this hypothesis, we will show that there exists a vertex \(v_{*}\) in \(V\) and a subset \(S\subset V\) made of elements of different colors from the color of \(v_{*}\) such that \(\operatorname{diam}(\xi_{v_{*}}+\sum_{v\in S}\xi_{v})<\operatorname{diam}(\xi _{v_{*}})-\delta\).
**Remark**.: _This conclusion is slightly stronger than that of Theorem 12. It implies in addition that--after reordering and replacing \(\xi_{1}\) by an element of the same color--the \(\beta_{i}\) have the form \(\beta_{i}=\sum_{g\in B_{i}}\alpha_{i}^{g}\cdot g\), where \(B_{i}\) is a subset of \(A_{i}\)._
Summing up, Theorem 12 will follow from
**Theorem 25**.: _Set \(\delta_{n}=(n^{2}+10n)\delta\). Suppose \(G\) acts on a \(\delta\)-hyperbolic space \(\mathcal{H}\) with minimal displacement \(\rho>4\delta_{n}+(10+2n)\delta\). Let \(\Xi=(\xi_{v})_{v\in V}\) be a finite collection of elements in \(\mathbb{K}[G]\) consisting of \(n\) colors, and with no two elements being scalar multiples of each other. If \((\xi_{v})_{v\in V}\) satisfies a \(\delta_{n}\)-relation, then there is a subset \(\{v_{*}\}\sqcup S\) of \(V\) such that no element of \(S\) has the same color as \(v_{*}\) and_
\[\operatorname{diam}\left(\xi_{v_{*}}+\sum_{v\in S}\xi_{v}\right)<\operatorname{ diam}(\xi_{v_{*}})-\delta.\]
**Remark**.: _The set \(S\sqcup\{v_{*}\}\) will be constructed as the set of vertices of a component of \(\Gamma_{\delta_{k}}(\Xi)\) for some \(1\leq k\leq n\)._
**Remark**.: _The constant \(\delta_{n}\) is obtained inductively as follows. Set \(\delta_{0}=0\) and define \(\delta_{k}=\delta_{k-1}+(2k+9)\delta\). Then \(\delta_{n}/\delta=\sum_{k=1}^{n}(9+2k)=9n+(n+1)n=n^{2}+10n\). Note also that the minimal displacement condition is_
\[\frac{\rho}{\delta}>4n^{2}+42n+10=(2n+10.5)^{2}-10.5^{2}+10.\]
_So, \(\rho>(2n+11)^{2}\delta\) is enough to obtain the conclusion of the theorem._
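For concreteness (our arithmetic, simply spelling out the remark): with \(n=2\) the recursion gives
\[\delta_{1}=11\delta,\qquad\delta_{2}=\delta_{1}+(2\cdot 2+9)\delta=24\delta=(2^{2}+10\cdot 2)\delta,\]
so the displacement hypothesis of Theorem 25 reads \(\rho>4\delta_{2}+14\delta=110\delta\), while \((2\cdot 2+11)^{2}\delta=225\delta\) comfortably exceeds it.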
### The case of trees
We keep the notations of the previous paragraph unchanged, but assume that the space \(\mathcal{H}\) is a tree. This is the case (\(\delta=0\)) treated by Hog-Angeloni in [21]. We slightly rephrase her proof.
Recall that \(\Gamma_{0}=\Gamma_{0}(\Xi)\) is the graph whose vertices are the elements \(v\) in \(V\) whose support \(\mathcal{X}_{v}\) (the geometric support of \(\xi_{v}\)) contains an extremal point, i.e. \(|\xi_{v}|=d=\max_{v\in V}|\xi_{v}|\), and there is an edge between \(v\) and \(w\) for each extremal point in \(\mathcal{X}_{v}\cap\mathcal{X}_{w}\). The colors \(\{1,\dots,n\}\) are organized so that the radii \(r_{i}\) (the common radius of the supports of the elements of color \(i\)) are in decreasing order: \(r_{1}\geq\dots\geq r_{n}\).
Let \(\hat{\Gamma}_{0}\) be a component of \(\Gamma_{0}\) containing a vertex \(v_{*}\) with color \(1\), the color of largest radius. By Lemma 24, this graph defines a \(0\)-relation
\[\left|\sum_{v\in\hat{\Gamma}_{0}}\xi_{v}\right|<\max_{v\in\hat{\Gamma}_{0}}| \xi_{v}|.\]
By Corollary 23, \(v_{*}\) is the unique vertex in \(\hat{\Gamma}_{0}\) of color \(1\). So, removing \(v_{*}\) from \(\hat{\Gamma}_{0}\) and setting \(S=\hat{\Gamma}_{0}-\{v_{*}\}\), we see that to conclude the proof of the theorem we need to estimate the diameter of the group ring element \(\sum_{v\in\hat{\Gamma}_{0}}\xi_{v}=\xi_{v_{*}}+\sum_{v\in S}\xi_{v}\).
Note that for any two adjacent vertices \(v\) and \(w\) in \(\hat{\Gamma}_{0}\), both centers \(c_{v}\) and \(c_{w}\) lie on a geodesic \([o,p]\) from the origin to an extremal point. The oriented geodesics \([o,c_{v}]\) and \([o,c_{w}]\) coincide along a segment of length \(\min\{|c_{v}|,|c_{w}|\}\geq d-r_{1}=|c_{v_{*}}|\). By following a path from \(v_{*}\) to \(v\) in the graph \(\hat{\Gamma}_{0}\) and noting that \(r_{1}\geq r_{w}\) for every vertex \(w\) along that path, we conclude that \(c_{v_{*}}\) belongs to every geodesic \([o,c_{v}]\), and
\[|c_{v_{*}}-c_{v}|=r_{1}-r_{v}.\]
For every \(v\), the support of \(\xi_{v}\) is contained in the closed ball \(B(c_{v_{*}},r_{1})\), so the same is true for the support of \(\sum_{v\in\hat{\Gamma}_{0}}\xi_{v}\). Moreover, because of the \(0\)-relation, this support has no \(0\)-extremal points, i.e. it is contained in the ball \(B(o,d-\alpha)\) for some \(\alpha>0\). So, applying Lemma 18, we see that the diameter of the support of \(\sum_{v\in\hat{\Gamma}_{0}}\xi_{v}\) is at most
\[d-\alpha+r_{1}-|o-c_{v_{*}}|=2r_{1}-\alpha=\operatorname{diam}(\xi_{v_{*}})-\alpha.\]
This finishes the proof.
**Remark**.: _In this result, we do not use that \(\mathcal{H}\) is a combinatorial tree. It might be an \(\mathbb{R}\)-tree with a free action of a surface group, for instance. But, for proving that the algorithm ends in a finite number of steps, we need that the diameter decreases by a given value, say \(1\)._
### Proof of Theorem 25 for hyperbolic spaces
In order to prove Theorem 25, we keep the notations of 4.1 unchanged and proceed by induction on the number \(n\) of colors. Recall that \(\delta_{0}=0\) and \(\delta_{n}=\delta_{n-1}+(2n+9)\delta\).
Let us fix some notation. We are given a collection of elements \(\Xi=(\xi_{v})_{v\in V}\) defining a \(\delta_{n}\)-relation. Denote \(d=\max_{v\in V}|\xi_{v}|\). We abbreviate \(\Gamma_{\mu}=\Gamma_{\mu}(\Xi)\), so that the vertices of \(\Gamma_{\mu}\) are those \(v\in V\) for which \(|\xi_{v}|\geq d-\mu\).
### Base case
If there is one color, then \(\delta_{1}=11\delta\) and the minimum displacement is greater than \(4\delta_{1}+12\delta\). So, Corollary 23 applies and implies that the connected components of \(\Gamma_{\delta_{1}}\) are single vertices. Since a single element \(\xi_{v}\) cannot satisfy \(|\xi_{v}|<|\xi_{v}|-\delta_{1}\), Lemma 24 shows that there are no \(\delta_{1}\)-relations.
**Remark**.: _This case implies, in particular, that \(\mathbb{K}[G]\) has no zero divisors. (The zero-divisor relation \(0=\alpha\xi=\sum_{g\in G}\alpha^{g}g\xi\) is a \(\delta_{1}\)-relation defined by a finite collection of elements of a single color.)_
### Inductive step
Suppose now that we know the theorem for \(n-1\) colors and want to prove it for \(n\). By the inductive hypothesis, we may assume that \(\Xi\) satisfies the following "minimality" condition:
* No subset \(W\subset V\) consisting of fewer than \(n\) colors defines a \(\delta_{n-1}\)-relation.
Since we will consider different values of \(\mu\), it is useful to keep in mind that for \(\mu<\mu^{\prime}\) the graph \(\Gamma_{\mu}\) is a subgraph of \(\Gamma_{\mu^{\prime}}\). In particular, for a vertex \(v_{0}\in\Gamma_{0}\) (corresponding to an element \(\xi_{v_{0}}\) whose support contains an extremal point) we have inclusions
\[v_{0}\in\Gamma_{0}\subset\Gamma_{\delta_{n-1}}\subset\Gamma_{\delta_{n}}.\]
Denote by \(\hat{\Gamma}_{\delta_{n-1}}\) and \(\hat{\Gamma}_{\delta_{n}}\) the connected components of \(\Gamma_{\delta_{n-1}}\) and \(\Gamma_{\delta_{n}}\) containing the vertex \(v_{0}\). Since \(\Xi\) defines a \(\delta_{n}\)-relation (and \(\delta_{n}>\delta_{n-1}\)) Lemma 24 implies that \(\hat{\Gamma}_{\delta_{n-1}}\) defines a \(\delta_{n-1}\)-relation, so minimality implies \(\hat{\Gamma}_{\delta_{n-1}}\) contains all \(n\) colors.
#### Large and small colors
Order the colors in decreasing order, \(r_{1}\geq\cdots\geq r_{n}\), and let \(k\) be the largest integer for which \(r_{k}\geq r_{1}-\delta_{n}\). We call \(\{1,\ldots,k\}\) the large colors and the rest small colors. The minimal displacement \(>4\delta_{n}+(10+2n)\delta\) assumption implies, by Corollary 23.2, that each large color appears exactly once in the connected graph \(\hat{\Gamma}_{\delta_{n}}\). Thus,
we can identify the set of large colors \(\{1,\ldots,k\}\) with a set of \(k\) vertices \(\{v_{1},\ldots,v_{k}\}\) in \(\hat{\Gamma}_{\delta_{n}}\). Since each color appears in \(\hat{\Gamma}_{\delta_{n-1}}\), we conclude
\[\{v_{1},\ldots,v_{k}\}\subset\hat{\Gamma}_{\delta_{n-1}}\subset\hat{\Gamma}_{ \delta_{n}}.\]
#### Bounding the diameter of support
Recall that the family \(\Xi\) defines a \(\delta_{n}\)-relation, so the support of \(\sum_{v\in\hat{\Gamma}_{\delta_{n}}}\xi_{v}\) is contained in the ball \(B(o,d-\delta_{n})\). Let \(c^{*}\) be the point in \([o,c_{v_{1}}]\) such that \(|c^{*}|=d-\delta_{n-1}-r_{1}-(n+1)\delta\). In order to conclude the proof of Theorem 25, we shall prove that the ball \(B(c^{*},r_{1}+(n+3)\delta)\) also contains this support:
**Lemma 26**.: _For \(w\in\hat{\Gamma}_{\delta_{n}}\) and a point \(p\in\mathcal{X}_{w}\) that is not \(\delta_{n}\)-extremal we have_
\[|c^{*}-p|\leq r_{1}+(n+3)\delta.\]
Given this, Lemma 18 implies the diameter of the support of \(\sum_{v\in\hat{\Gamma}_{\delta_{n}}}\xi_{v}\) is
\[\leq (d-\delta_{n})+(r_{1}+(n+3)\delta)-(d-r_{1}-\delta_{n-1}-(n+1) \delta)+2\delta\] \[= (2r_{1}-2\delta)-\delta_{n}+\delta_{n-1}+(2n+8)\delta\] \[\leq \mathrm{diam}(\xi_{v_{1}})-\delta,\]
where the last inequality uses \(\delta_{n}-\delta_{n-1}=(2n+9)\delta\) together with \(2r_{1}\leq\operatorname{diam}(\xi_{v_{1}})+2\delta\) (Lemma 14). Since \(v_{1}\) is the unique vertex of color \(1\) in \(\hat{\Gamma}_{\delta_{n}}\), this establishes the theorem.
In order to prove Lemma 26 we first need to discuss the positions of centers of the sets \(\mathcal{X}_{v}\).
#### Centers
**Lemma 27**.: _If \(v\) and \(w\) are adjacent vertices in \(\hat{\Gamma}_{\delta_{n}}\), then_
\[\langle c_{v},c_{w}\rangle\geq d-\frac{\delta_{n}+\delta_{n-1}}{2}-r_{1}-\delta.\]
Proof.: If \(v\) and \(w\) are adjacent in \(\hat{\Gamma}_{\delta_{n}}\), then the sets \(\mathcal{X}_{v}\) and \(\mathcal{X}_{w}\) contain a common \(\delta_{n}\)-extremal point \(p\). Since \(p\) is \(\delta_{n}\)-extremal, Lemma 13 implies
\[\langle p,c_{v}\rangle\geq\frac{|\mathcal{X}_{v}|+|p|}{2}-r_{v}\geq\frac{| \mathcal{X}_{v}|+d-\delta_{n}}{2}-r_{v}.\]
If the color of \(v\) is small, then \(r_{v}\leq r_{1}-\delta_{n}\) and \(|\mathcal{X}_{v}|\geq d-\delta_{n}\) so that \(\langle p,c_{v}\rangle\geq d-r_{1}\). If the color of \(v\) is large, then \(v\in\hat{\Gamma}_{\delta_{n-1}}\) so \(|\mathcal{X}_{v}|\geq d-\delta_{n-1}\) and \(-r_{v}\geq-r_{1}\), implying that \(\langle p,c_{v}\rangle\geq\frac{d-\delta_{n-1}+d-\delta_{n}}{2}-r_{1}\). In either case, we get the inequality
\[\langle p,c_{v}\rangle\geq d-\frac{\delta_{n}+\delta_{n-1}}{2}-r_{1},\]
and also the same inequality with \(v\) replaced by \(w\). This implies the claimed inequality since, by definition of \(\delta\)-hyperbolicity, \(\langle c_{v},c_{w}\rangle\geq\min(\langle c_{v},p\rangle\,,\langle p,c_{w} \rangle)-\delta\).
Putting this together with the \(2^{n}\)-diameter bound for the connected graph \(\hat{\Gamma}_{\delta_{n}}\) (Corollary 23.1) and using the Gromov product inequality (Lemma 19) gives for any pair of vertices \(v,w\in\hat{\Gamma}_{\delta_{n}}\)
\[\langle c_{v},c_{w}\rangle\geq d-\frac{\delta_{n}+\delta_{n-1}}{2}-r_{1}-(n+1)\delta. \tag{1}\]
Now, for any \(w\in\hat{\Gamma}_{\delta_{n}}\), pick \(c^{\prime}_{w}\in[o,c_{w}]\) such that \(|c^{\prime}_{w}|=d-\frac{\delta_{n}+\delta_{n-1}}{2}-r_{1}-(n+1)\delta\). Note that by (1) all these points \(c^{\prime}_{w}\) are \(\delta\)-close to each other. The next lemma records this and summarizes how these points compare to the centers \(c_{w}\) and the basepoint \(c^{*}\) we chose earlier.
**Lemma 28**.: _For any vertices \(v,w\) in \(\hat{\Gamma}_{\delta_{n}}\) we have_
\[|c^{\prime}_{v}-c^{\prime}_{w}| \leq \delta,\] \[|c^{*}-c^{\prime}_{v_{1}}| = \frac{\delta_{n}-\delta_{n-1}}{2},\] \[|c_{w}-c^{\prime}_{w}| \geq \frac{\delta_{n}-\delta_{n-1}}{2}+(n+1)\delta.\]
Proof.: The first inequality follows from (1) and hyperbolicity. The equality follows directly from the definition of \(c^{*}\) as the point on \([o,c_{v_{1}}]\) such that \(|c^{*}|=d-r_{1}-\delta_{n-1}-(n+1)\delta\). For the last inequality, note that
\[|c_{w}|\geq d-\delta_{n-1}-r_{1}\]
(either \(w\) is small, in which case \(|c_{w}|\geq d-\delta_{n}-r_{w}\geq d-r_{1}\), or \(w\in\hat{\Gamma}_{\delta_{n-1}}\) and then \(|c_{w}|\geq d-\delta_{n-1}-r_{w}\geq d-\delta_{n-1}-r_{1}\)) and the inequality follows from this by subtracting \(|c^{\prime}_{w}|\) from both sides.
Now we can prove Lemma 26 and thus finish the proof of the theorem.
#### Proof of Lemma 26
Let \(p^{\prime}\) be the projection of \(p\) onto \([o,c_{w}]\). There are two cases.
* If \(p^{\prime}\in[o,c^{\prime}_{w}]\) then \[\frac{\delta_{n}-\delta_{n-1}}{2}+|c^{\prime}_{w}-p|\leq|c_{w}-c^{\prime}_{w} |+|c^{\prime}_{w}-p|\stackrel{{ h}}{{\leq}}|c_{w}-p|+\delta\leq r _{1}+\delta\]
* If \(p^{\prime}\in[c^{\prime}_{w},c_{w}]\) then, since \(p\) is not \(\delta_{n}\)-extremal, \[\bigg{(}d-r_{1}-\frac{\delta_{n}+\delta_{n-1}}{2}-(n+1)\delta\bigg{)}+|c^{ \prime}_{w}-p|=|c^{\prime}_{w}|+|c^{\prime}_{w}-p|\stackrel{{ h}}{{\leq}}|p|+\delta\leq d-\delta_{n}+\delta.\]
**Remark**.: _The inequalities following from hyperbolicity are denoted \(\stackrel{{ h}}{{\leq}}\) for emphasis._
Rearranging, we see that in either case we have obtained the inequality
\[|c^{\prime}_{w}-p|\leq r_{1}-\frac{\delta_{n}-\delta_{n-1}}{2}+(n+2)\delta.\]
Therefore,
\[|c^{*}-p| \leq |c^{*}-c^{\prime}_{v_{1}}|+|c^{\prime}_{v_{1}}-c^{\prime}_{w}|+|c^ {\prime}_{w}-p|,\] \[\leq \left(\frac{\delta_{n}-\delta_{n-1}}{2}\right)+\delta+\left(r_{1} -\frac{\delta_{n}-\delta_{n-1}}{2}+(n+2)\delta\right),\] \[= r_{1}+(n+3)\delta.\]
which finishes the proof.
## 5 Applications
We now apply the algorithm. The main hypothesis in this section is:
\((\mathcal{H}_{n,\delta})\): _G acts on a \(\delta\)-hyperbolic space \(\mathcal{H}\) with displacement greater than \((2n+11)^{2}\delta\)._
Let \(\mathrm{E}_{n}(\mathbb{K}[G])\) be the subgroup of elementary matrices in \(\mathrm{GL}_{n}(\mathbb{K}[G])\). We begin with a linear algebraic lemma.
**Lemma 29**.: _Suppose the group \(G\) satisfies \(\mathcal{H}_{n,\delta}\). Let \(\xi=(\xi_{1},\ldots,\xi_{n})\in\mathbb{K}[G]^{n}\)._
1. _If the coordinates of_ \(\xi\) _are linearly dependent over_ \(\mathbb{K}[G]\)_, then the_ \(\mathrm{E}_{n}(\mathbb{K}[G])\)_-orbit of_ \(\xi\) _contains a vector which has at least one coordinate equal to_ \(0\)_._
2. _If there is_ \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{K}[G]^{n}\) _satisfying_ \(\alpha\cdot\xi=\sum\alpha_{i}\xi_{i}=1\)_, then the_ \(\mathrm{E}_{n}(\mathbb{K}[G])\)_-orbit of_ \(\xi\) _contains_ \((\lambda g,0,\ldots,0)\) _for some_ \(\lambda\in\mathbb{K}^{*}\) _and_ \(g\in G\)_._
Proof.: 1. Pick a vector \(\xi^{\prime}\) in the \(\mathrm{E}_{n}(\mathbb{K}[G])\)-orbit of \(\xi\) that minimizes the sum of diameters of its coordinates. The coordinates of \(\xi^{\prime}\) are still linearly dependent. If none of them are zero, then Theorem 12 would let us reduce the sum of diameters, contradicting the minimality assumption.
2. We argue the same way. Pick a vector \(\xi^{\prime}=\xi U\) in the \(\mathrm{E}_{n}(\mathbb{K}[G])\)-orbit of \(\xi\) that minimizes the sum of diameters of _nonzero_ coordinates. Then
\[1=\alpha\cdot\xi=\alpha U^{-t}\cdot\xi U=\alpha^{\prime}\cdot\xi^{\prime}.\]
Pick \(i\) with \(\alpha^{\prime}_{i}\xi^{\prime}_{i}\neq 0\). If \(|\alpha^{\prime}_{i}\xi^{\prime}_{i}|>0\), then \(|\alpha^{\prime}_{i}\xi^{\prime}_{i}|\) is at least the minimal displacement, hence greater than \(\delta_{n}\), so \(\alpha^{\prime}\cdot\xi^{\prime}=1\) is a \(\delta_{n}\)-relation and we can apply Theorem 12 to reduce the sum of diameters of nonzero coordinates of \(\xi^{\prime}\), contradicting minimality. So, \(|\alpha^{\prime}_{i}\xi^{\prime}_{i}|=0\), i.e. \(\xi^{\prime}_{i}\) is a unit. By [14], \(\mathbb{K}[G]\) only has trivial units, so \(\xi^{\prime}_{i}=\lambda g\) for some \(\lambda\in\mathbb{K}^{*}\) and \(g\in G\). The conclusion follows by applying elementary transformations to \(\xi^{\prime}\).
### Freeness
This enables the study of finitely generated submodules of free modules.
**Theorem 30**.: _Assume the group \(G\) satisfies \(\mathcal{H}_{n,\delta}\)._
1. _Every_ \(n\)_-generated ideal in_ \(\mathbb{K}[G]\) _is a free_ \(\mathbb{K}[G]\)_-module._
2. _Every_ \(n\)_-generated submodule of a free_ \(\mathbb{K}[G]\)_-module is a free_ \(\mathbb{K}[G]\)_-module._
Proof.: 1. Suppose we have shown that ideals generated by fewer than \(n\) elements are free, and let \(\mathcal{I}\) be an ideal in \(\mathbb{K}[G]\) generated by \(n\) elements \(\xi_{1},\ldots,\xi_{n}\). Consider the map
\[\mathbb{K}[G]^{n} \to \mathbb{K}[G],\] \[(\alpha_{1},\ldots,\alpha_{n}) \mapsto \alpha_{1}\xi_{1}+\ldots+\alpha_{n}\xi_{n}.\]
If this map is injective, then it provides an isomorphism from the free module \(\mathbb{K}[G]^{n}\) to the ideal \(\mathcal{I}\). If it is not injective, then there is a non-trivial relation \(\alpha_{1}\xi_{1}+\ldots+\alpha_{n}\xi_{n}=0\). In other words the family \(\xi_{1},\ldots,\xi_{n}\) is linearly dependent. By Lemma 29.1 we can do permutations and elementary transformations to replace \(\xi_{1},\ldots,\xi_{n}\) by a generating set for \(\mathcal{I}\) consisting of \(n-1\) elements. Therefore \(\mathcal{I}\) is free by the induction hypothesis.
2. Suppose \(M\subset\mathbb{K}[G]^{m}\) is an \(n\)-generated submodule. Note that \(M\otimes_{\mathbb{K}[G]}\mathbb{K}\) is a finite dimensional \(\mathbb{K}\)-vector space of some dimension \(d\leq n\). We argue by induction on \(d\). If \(M\neq 0\) there is a projection to a factor \(p:\mathbb{K}[G]^{m}\to\mathbb{K}[G]\) such that \(p(M)\) is non-trivial. Since \(p(M)\) is an \(n\)-generated ideal, it is free as a \(\mathbb{K}[G]\) module by part 1. It follows that the module \(M\) maps onto a non-zero free \(\mathbb{K}[G]\)-module and, a fortiori, onto \(\mathbb{K}[G]\). Therefore, the module \(M\) splits as \(M\cong M^{\prime}\oplus\mathbb{K}[G]\). Note that \(\dim_{\mathbb{K}}(M\otimes_{\mathbb{K}[G]}\mathbb{K})-1=\dim_{\mathbb{K}}(M^{ \prime}\otimes_{\mathbb{K}[G]}\mathbb{K})\), and the result follows.
From this theorem and Stallings' result on groups with infinitely many ends, we can also deduce:
**Corollary 31**.: _Under the same hypotheses, every \(n\)-generated subgroup of the group \(G\) is free._
Proof.: Let \(\mathbb{K}\) be any field, for instance \(\mathbb{K}=\mathbb{F}_{2}\). Suppose \((g_{1},\ldots,g_{n})=H<G\) is an \(n\)-generated subgroup. Then its augmentation ideal \((g_{1}-1,\ldots,g_{n}-1)\) is a free ideal in \(\mathbb{K}[H]\) by Theorem 30.1. Clearly the group \(H\) is torsion-free, so by 3.14 of [15], the group \(H\) is a free group.
**Remark**.: _For the convenience of the reader, let us recall the argument of Dicks and Dunwoody ([15]). The main ingredient behind the passage from ideals to subgroups is Stallings' theorem on ends ([26]). If the augmentation ideal \(\mathcal{I}\) is free, we have a free resolution \(0\to\mathcal{I}\to\mathbb{F}_{2}[G]\to\mathbb{F}_{2}\to 0\), so that \(H^{1}(G,\mathbb{F}_{2}[G])\neq 0\): the group \(G\) has several ends. As the group \(G\) is torsion free, Stallings' theorem implies that either it is infinite cyclic or that it splits as a free product \(G=G_{1}*G_{2}\). By Grushko's theorem, the groups \(G_{1},G_{2}\) have smaller rank, and one can conclude by induction. This argument was used by Stallings to prove that a group of cohomological dimension one is a free group._
**Remark**.: _This result--large displacement (\(\geq\rho\)) implies that every \(n\)-generated subgroup is free--is not new. It has been stated by Gromov [18], and proofs have been given by Arzhantseva [2] and Kapovich-Weidmann [23]. We can express the known quantitative results as follows. Let \(N_{fr}(G)\) be the largest number \(n\) such that \(n\)-generated subgroups of \(G\) are free, and set_
\[N\left(\rho/\delta\right):=\min_{G}N_{fr}(G),\]
_where the minimum is over all groups \(G\) for which there is a \(\delta\)-hyperbolic space \(\mathcal{H}\) and a \(G\) action on \(\mathcal{H}\) with minimal displacement \(\geq\rho\). The best known estimate for \(N\)--due to Gromov [19] p.763--is \(N(\rho/\delta)\geq 10^{-6}\frac{\rho/\delta}{\log(\rho/\delta)}\), which is a much better bound than ours, which is \(N(\rho/\delta)\geq\frac{1}{2}\sqrt{\rho/\delta}-6\). Conjecturally ([19]), \(N(\rho/\delta)\geq(1+\varepsilon)^{\rho/\delta}\) for a universal positive constant \(\varepsilon\)._
### The group \(\operatorname{GL}_{n}(\mathbb{K}[G])\)
Recall that \(\operatorname{GE}_{n}(\mathbb{K}[G])\) is the subgroup of \(\operatorname{GL}_{n}(\mathbb{K}[G])\) generated by elementary and diagonal matrices.
**Theorem 32**.: _Assume \(G\) satisfies \(\mathcal{H}_{n,\delta}\). Then_
\[\operatorname{GL}_{n}(\mathbb{K}[G])=\operatorname{GE}_{n}(\mathbb{K}[G]).\]
Proof.: We copy the usual proof of the fact that \(\operatorname{GL}_{n}(\mathbb{Z})\) is generated by elementary matrices and diagonal matrices with entries \(\pm 1\). Let \(X=(\xi_{ij})\) be in \(\operatorname{GL}_{n}(\mathbb{K}[G])\), and choose \(A=(\alpha_{ij})\) in \(\operatorname{GL}_{n}(\mathbb{K}[G])\) such that \(AX=1\). As \(\sum_{i=1}^{n}\alpha_{1i}\xi_{i1}=1\), we can apply Lemma 29.2, and deduce that there is a matrix \(U\) in \(\operatorname{GE}_{n}(\mathbb{K}[G])\) such that the first row of \(XU\) is \((u,0,\dots,0)\) for some unit \(u\in\mathbb{K}[G]\). Left multiplying \(XU\) by a product of elementary matrices, say \(V\in\operatorname{E}_{n}(\mathbb{K}[G])\), we obtain a matrix \(VXU\) of the form
\[\left(\begin{array}{cc}u&0\\ 0&Y\end{array}\right),\]
where \(Y\) is a matrix in \(\operatorname{GL}_{n-1}(\mathbb{K}[G])\). So, the theorem follows by induction on \(n\).
**Theorem 5**.: _Assume \(\mathcal{H}_{n,\delta}\). If \(\mathbb{K}\) is finite and \(G\) is finitely generated, then \(\operatorname{GL}_{n}(\mathbb{K}[G])\) is finitely generated._
Proof.: Recall that, because of the large displacement assumption (greater than \(4\delta\) is enough), all units in \(\mathbb{K}[G]\) are trivial by [14]. Let \(e_{1},\dots,e_{n}\) be the standard basis for \(\mathbb{K}[G]^{n}\). The group \(\operatorname{GE}_{n}(\mathbb{K}[G])\) is generated by the finitely many elementary transformations of the form \((e_{i}\mapsto e_{i}+e_{j})\) and multiplication of basis elements by units of the form \(\lambda g\), where \(\lambda\) is in \(\mathbb{K}^{*}\) and \(g\) is in a generating set for \(G\). So, the theorem follows from the previous one.
**Remark**.: _The proof only uses that \(\mathbb{K}^{*}\) is finitely generated. However, no example of an infinite field with finitely generated \(\mathbb{K}^{*}\) is known._
Theorems 30.1 and 32 together establish Theorem 4 from the introduction.
### Submodules of free \(\mathbb{Z}[G]\)-modules
Until now we've been studying the group algebra \(\mathbb{K}[G]\) with coefficients in a field \(\mathbb{K}\). Our next goal is to extend freeness results to the integral group ring \(\mathbb{Z}[G]\). We follow the method Bass used in [5] to show projective \(\mathbb{Z}[F]\)-modules are free, but with two differences. First, the article [5] studies group rings with coefficients in a principal ideal domain. As we have no applications in this degree of generality, we restrict ourselves to the ring \(\mathbb{Z}\). Second, the main hypothesis of [5] is that \(M\) is a projective module. Instead, we use
* \((\star)\) The module \(M\) embeds as a submodule of a free module \(\oplus\mathbb{Z}[G]\) such that the quotient \((\oplus\mathbb{Z}[G])/M\) is torsion free as an abelian group.
This condition is probably known to specialists but hard to locate in the literature. It is a weakening of the more familiar "projective". Indeed, if \(M\) is a projective module, there exists a module \(N\) such that \(M\oplus N=\oplus\mathbb{Z}[G]\) is a free module; in particular the quotient \((\oplus\mathbb{Z}[G])/M\cong N\) is torsion free as an abelian group. It is useful for obtaining topological consequences thanks to the following important example.
**Example**.: _Let \(X\) be a finite cell complex with fundamental group \(G\), \(\widetilde{X}\) its universal cover and \(C_{*}:=C_{*}(\widetilde{X};\mathbb{Z})\) its cellular chain complex. Then the kernels of each boundary map \(\partial:C_{k}\to C_{k-1}\) and of the augmentation map \(C_{0}\to\mathbb{Z}\) all satisfy condition \((\star)\), as their quotients are submodules of the free modules \(C_{k-1}\) and of \(\mathbb{Z}\), respectively (see the proof of Theorem 36). In particular the augmentation ideal of the group \(G\), the relation module of a generating set for \(G\), and the second homotopy module of a presentation \(2\)-complex for \(G\) all satisfy \((\star)\)._
Here is the main theorem of this subsection.
**Theorem 33**.: _Suppose the group \(G\) satisfies \(\mathcal{H}_{n,\delta}\). Let \(M\) be an \(n\)-generated submodule of a free module \(\oplus\mathbb{Z}G\) such that \((\oplus\mathbb{Z}G)/M\) is torsion free as an abelian group. Then \(M\) is free._
**Remark**.: _Some assumption such as \((\star)\) is necessary to establish freeness: the ideal in \(\mathbb{Z}[t,t^{-1}]\) generated by \(u=2\) and \(v=t-1\) is not free, as it satisfies the relation \((t-1)u-2v=0\) and is not generated by a single element._
To elucidate the role of \((\star)\) in the proof, we recall some basics on abelian groups.
#### On abelian groups and torsion
For an abelian group \(A\), let \(A_{\mathbb{Q}}:=A\otimes_{\mathbb{Z}}\mathbb{Q}\) be its _rationalization_ and \(A_{p}:=A\otimes_{\mathbb{Z}}\mathbb{F}_{p}\) its _mod \(p\) reduction_. An inclusion of abelian groups induces homomorphisms of rationalizations and mod \(p\) reductions. The latter may no longer be an inclusion. In our proof, condition \((\star)\) will be relevant because it implies injectivity of the induced map on mod \(p\) reductions via the last part of the following lemma, which summarizes basic properties of rationalizations and mod \(p\) reductions that we will need.
**Lemma 34**.: _For any abelian group \(A\),_
1. _the mod_ \(p\) _reduction can be expressed as_ \(A_{p}=A\otimes_{\mathbb{Z}}\mathbb{F}_{p}\cong A/pA\)_._
2. _If_ \(A\) _is torsion-free, then the natural map_ \(A\to A_{\mathbb{Q}}\) _is an embedding._
_Let \(A\hookrightarrow B\) be an embedding of abelian groups. Then_
3. _the induced map of rationalizations \(A_{\mathbb{Q}}\to B_{\mathbb{Q}}\) is injective, and_
4. _if \(B/A\) has no \(p\)-torsion then the induced map \(A_{p}\to B_{p}\) is injective._
Proof.: The failure of tensor products to preserve injectivity is measured by Tor: applying \(A\otimes_{\mathbb{Z}}-\) to an exact sequence of abelian groups \(0\to B\to C\to D\to 0\) gives the exact sequence
\[\operatorname{Tor}(A,D)\to A\otimes_{\mathbb{Z}}B\to A\otimes_{\mathbb{Z}}C \to A\otimes_{\mathbb{Z}}D\to 0.\]
The basic properties of Tor we will use can be found in 3A.5 of [20]. For the first point, apply \(A\otimes_{\mathbb{Z}}-\) to the exact sequence \(0\to\mathbb{Z}\xrightarrow{p}\mathbb{Z}\to\mathbb{F}_{p}\to 0\). For the second, apply it to the exact sequence \(0\to\mathbb{Z}\to\mathbb{Q}\to\mathbb{Q}/\mathbb{Z}\to 0\) and note that the Tor term \(\operatorname{Tor}(A,\mathbb{Q}/\mathbb{Z})\) vanishes because \(A\) is torsion-free. For the third point, apply \(-\otimes_{\mathbb{Z}}\mathbb{Q}\) to \(0\to A\to B\to B/A\to 0\) and note that \(\operatorname{Tor}(B/A,\mathbb{Q})\) vanishes because \(\mathbb{Q}\) is torsion-free. For the fourth point, apply \(-\otimes_{\mathbb{Z}}\mathbb{F}_{p}\) to the same sequence and note that the Tor term \(\operatorname{Tor}(B/A,\mathbb{F}_{p})=\ker(B/A\xrightarrow{p}B/A)\) vanishes because \(B/A\) has no \(p\)-torsion.
#### A 'local-to-global' principle for rings
We can now prove Theorem 33. Note that \(\mathbb{Z}[G]\) is torsion-free as an abelian group, so any submodule of \(\oplus\mathbb{Z}[G]\) is, as well. Moreover, the rank of a finitely generated free \(\mathbb{K}[G]\)-module is the dimension of the vector space of coinvariants \(\mathbb{K}[G]^{m}\otimes_{\mathbb{K}[G]}\mathbb{K}=\mathbb{K}^{m}\). So, Theorem 33 is a consequence of Theorem 30.2, Theorem 32 and the \(R=\mathbb{Z}[G]\) case of the following purely ring theoretic proposition.
**Proposition 35**.: _Suppose \(R\) is a ring satisfying the following properties:_
1. \((\mathbb{Q})\) _every \(n\)-generated submodule of \(\oplus R_{\mathbb{Q}}\) is free of unique rank_4_,_ Footnote 4: A finitely generated \(R\)-module is free of unique rank if it is isomorphic to \(R^{m}\) for a unique \(m\).
_and for all primes \(p\)_
1. \((p_{0})\) _every \(n\)-generated submodule of \(\oplus R_{p}\) is free of unique rank, and_
2. \((p_{1})\) _for \(m\leqslant n-1\) we have \(\operatorname{GL}_{m}(R_{p})=\operatorname{GE}_{m}(R_{p})\)_._
_If \(M\) is an \(n\)-generated submodule of \(\oplus R\) such that both \(M\) and \((\oplus R)/M\) are torsion-free abelian groups, then \(M\) is a free \(R\)-module._
Proof.: Since \(M\) is torsion-free, the map \(M\to M_{\mathbb{Q}}\) is an embedding. So, we can think of \(M\) as a subgroup of \(M_{\mathbb{Q}}\). Let \(x_{1},\ldots,x_{n}\) generate the \(R\)-module \(M\). By Lemma 34.3, \(M_{\mathbb{Q}}\) is a submodule of the free \(R_{\mathbb{Q}}\)-module \(\oplus R_{\mathbb{Q}}\). So, by our \((\mathbb{Q})\)
hypothesis, it is a free \(R_{\mathbb{Q}}\)-module. If this module is of rank \(n\) (not \(\leqslant n-1\)), then the family \(x_{1},\ldots,x_{n}\) is an \(R_{\mathbb{Q}}\)-basis for \(M_{\mathbb{Q}}\). Therefore, it is an \(R\)-basis for \(M\), and hence \(M\) is free of rank \(n\), and we are done.
Otherwise, \(M_{\mathbb{Q}}\) is of rank \(m<n\). Let \(y_{1},\ldots,y_{m}\) be an \(R_{\mathbb{Q}}\)-basis for \(M_{\mathbb{Q}}\). Clearing denominators, we may assume that each \(y_{i}\) is in \(M\). Then, the \(y_{i}\) generate a free, rank \(m\) \(R\)-submodule \(Y:=\left\langle y_{1},\ldots,y_{m}\right\rangle\) of \(M\). Next, since the \(y_{i}\) form an \(R_{\mathbb{Q}}\)-basis for \(M_{\mathbb{Q}}\), each \(x_{i}\) can be expressed as an \(R_{\mathbb{Q}}\)-linear combination of the \(y_{i}\). We can clear denominators in these expressions and find a single positive integer \(k\) such that--for all \(j\)--\(kx_{j}\) is an \(R\)-linear combination of the \(y_{i}\). In summary we have obtained a free \(R\)-module \(Y\), a positive number \(k\), and inclusions
\[kM\subset Y\subset M.\]
Let \(k\geq 1\) be the smallest number such that there is a free \(R\)-module \(Y\) of rank \(m\) with \(kM\subset Y\subset M\). If \(k=1\), then we are done since then \(M=Y\) is a free \(R\)-module. So, towards a contradiction, suppose \(k>1\). We will find a free, rank \(m\) \(R\)-module \(Y^{\prime}\) and \(1\leq k^{\prime}<k\) such that \(k^{\prime}M\subset Y^{\prime}\subset M\).
To that end, pick a prime \(p\) dividing \(k\) and let \(f:Y_{p}\to M_{p}\) be the mod \(p\) reduction of the second inclusion above. By Lemma 34.4 and the hypothesis that \((\oplus R)/M\) is torsion-free as an abelian group, the module \(M_{p}\) embeds in the free module \(\oplus R_{p}\). Since \(M_{p}\) is generated by \(n\) elements, our hypothesis \((p_{0})\) shows that it is free. The module \(f(Y_{p})\) is a submodule of \(M_{p}\), and is generated by \(m<n\) elements, so it is again free by \((p_{0})\). Thus, we have a splitting
\[Y_{p}\cong\ker(f)\oplus\operatorname{im}(f).\]
Note that the left hand side is also a free \(R_{p}\)-module, since it is the mod \(p\) reduction of the free \(R\)-module \(Y\). The splitting shows that \(\ker(f)\) is an \(m\)-generated submodule of a free \(R_{p}\)-module, so it is also free. Therefore, we can pick an \(R_{p}\)-basis \(z_{1},\ldots,z_{s},z_{s+1},\ldots,z_{m}\) for the module \(Y_{p}\) so that the first \(s\) elements form a basis for the kernel of \(f\). By our \((p_{1})\) hypothesis, there exists a matrix \(U\) in \(\operatorname{E}_{m}(R_{p})\) transforming the mod \(p\) reduction of the family \(\left\{y_{i}\right\}\) into the family \(\left\{u_{i}z_{i}\right\}\), where the \(u_{i}\) are units in \(R_{p}\). As a matrix in \(\operatorname{E}_{m}(R_{p})\) is a product of elementary matrices, we can lift \(U\) to \(\operatorname{E}_{m}(R)\), transforming the family \(\left\{y_{i}\right\}\) into a family \(\left\{y^{\prime}_{i}\right\}\) such that the reduction mod \(p\) of \(y^{\prime}_{i}\) is \(u_{i}z_{i}\). Let \(P:=\left\langle y^{\prime}_{1},\ldots,y^{\prime}_{s}\right\rangle\) and \(Q:=\left\langle y^{\prime}_{s+1},\ldots,y^{\prime}_{m}\right\rangle\), so that
\[Y=P\oplus Q,\]
and on mod \(p\) reductions
* \(P_{p}\to M_{p}\) is the zero map, i.e. \(P\subset pM\), while
* \(Q_{p}\to M_{p}\) is injective, i.e. \(pQ=Q\cap pM\).
For \(x\in M\), the inclusion \(kM\subset Y=P\oplus Q\) lets us write \(kx=a+b\) where \(a\in P\) and \(b\in Q\). Since \(P\subset pM\), we have \(\frac{a}{p}\in\frac{1}{p}P\subset M\), and hence \(b=p\left(\frac{k}{p}x-\frac{a}{p}\right)\in pM\). So \(b\) lies in \(Q\cap pM=pQ\), and therefore \(\frac{b}{p}\in Q\). Since \(\frac{k}{p}x=\frac{a}{p}+\frac{b}{p}\) we have obtained the inclusions
\[\frac{k}{p}M\subset\frac{1}{p}P\oplus Q\subset M.\]
Since \(\frac{1}{p}P\oplus Q\) is a free \(R\)-module of rank \(m\) (with basis \(\{\frac{y^{\prime}_{1}}{p},\ldots,\frac{y^{\prime}_{s}}{p},y^{\prime}_{s+1},\ldots,y^{\prime}_{m}\}\)) we arrive at a contradiction to the minimality of \(k\).
**Remark**.: _One may ask whether there is a local-to-global argument taking as input \(\operatorname{GL}_{m}(R_{\mathbb{K}})=\operatorname{GE}_{m}(R_{\mathbb{K}})\) for all \(m\leq n\) and all fields \(\mathbb{K}\) that leads to \(\operatorname{GL}_{n}(R)=\operatorname{GE}_{n}(R)\). This is not the case. Indeed, the polynomial ring \(R=\mathbb{Z}[t]\) satisfies the hypothesis for all \(n\) and all \(\mathbb{K}\), but the matrix_
\[\left(\begin{array}{cc}4&1+2t\\ 1-2t&-t^{2}\end{array}\right)\in\operatorname{GL}_{2}(\mathbb{Z}[t])\]
_is not in \(\operatorname{GE}_{2}(\mathbb{Z}[t])\) ([10], p.30). In fact, \(\operatorname{GL}_{2}(\mathbb{Z}[t])/\operatorname{GE}_{2}(\mathbb{Z}[t])\) is quite large ([24]). However, the above example goes away if we invert \(t\), and [1] conjectures that \(\operatorname{GL}_{2}(\mathbb{Z}[t,t^{-1}])=\operatorname{GE}_{2}(\mathbb{Z}[ t,t^{-1}])\)._
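As a quick verification (ours, not from [10]), the displayed matrix does lie in \(\operatorname{GL}_{2}(\mathbb{Z}[t])\): its determinant is
\[4\cdot(-t^{2})-(1+2t)(1-2t)=-4t^{2}-(1-4t^{2})=-1,\]
a unit of \(\mathbb{Z}[t]\).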
### Chain complexes, cell decompositions, and Morse theory
Recall that the cohomological dimension of the group \(G\), denoted \(\operatorname{cd}(G)\), is the minimal length of a free \(\mathbb{Z}[G]\)-resolution of \(\mathbb{Z}\) (see [7]).
**Theorem 36**.: _Assume that the group \(G\) satisfies \(\mathcal{H}_{n,\delta}\)._
1. _For every free_ \(\mathbb{Z}[G]\)_-resolution_ \(C_{*}\to\mathbb{Z}\) _and each_ \(0<k<\operatorname{cd}(G)\) _we have_ \[\operatorname{rank}_{\mathbb{Z}[G]}(C_{k})>n.\]
2. _(Theorem_ 9 _in the introduction) Every aspherical cell complex with fundamental group_ \(G\) _has more than_ \(n\) _cells of each dimension_ \(0<k<\operatorname{cd}(G)\)_._
Proof.: First, we prove the algebraic part. Suppose \(\operatorname{rank}(C_{k})\leqslant n\). Then the module of boundaries, \(B_{k-1}:=\partial(C_{k})\subset C_{k-1}\) is an \(n\)-generated submodule of a free module. Since \(C_{*}\) is a resolution, \(\ker\left(C_{k-1}\overset{\partial}{\to}C_{k-2}\right)=\partial(C_{k})\), so \(C_{k-1}/\partial(C_{k})\) injects into \(C_{k-2}\), which is either a free module (if \(k\geq 2\)) or \(\mathbb{Z}\) (if \(k=1\)). In either case, \(C_{k-1}/\partial(C_{k})\) is torsion-free as an abelian group. Hence, \(B_{k-1}\) satisfies (\(\star\)) and we conclude from Theorem 33 that it is free. So, we obtain a new free resolution
\[0\to B_{k-1}\to C_{k-1}\to\ldots\to C_{0}\to\mathbb{Z}\to 0\]
of length \(k\). Thus \(\operatorname{cd}(G)\leq k\). This proves 1.
Now, suppose \(X\) is an aspherical cell complex with fundamental group \(G\). Let \(C_{*}(\widetilde{X};\mathbb{Z})\) be the cellular chain complex of the universal cover of \(X\). The augmented complex \(C_{*}(\widetilde{X};\mathbb{Z})\to\mathbb{Z}\) is a free \(\mathbb{Z}[G]\) resolution of \(\mathbb{Z}\) ([7], Prop I. 4.1). Applying part 1 to this resolution gives part 2.
#### Essential maps
If \(\mathbb{K}\) is a field, the same result is true with \(\mathbb{Z}\) replaced by \(\mathbb{K}\) and cohomological dimension by \(\mathbb{K}\)-cohomological dimension (= minimal length of a free \(\mathbb{K}[G]\)-resolution of \(\mathbb{K}\)), and easier to prove as we don't need Theorem 33, but only Theorem 30.2. In fact, in the setting of \(\mathbb{K}\)-cohomological dimension we have the following more general result suggested by a question of Gromov.
Let \(BG\) be a classifying space for the group \(G\). For a cell complex \(X\), we say that a map \(X\to BG\) is _\(d\)-essential (with local coefficients)_ if there is a \(\mathbb{K}[G]\)-module \(V\) such that the induced map \(H^{d}(BG;V)\to H^{d}(X;V)\) is non-zero. For the sake of brevity, we will omit "with local coefficients" from now on. We will call a \(d\)-manifold \(X\) _essential_ if the map \(X\to B\pi_{1}(X)\) is \(d\)-essential.
**Example**.: _Any map of non-zero degree from a closed manifold to a closed aspherical manifold of dimension \(d\) is \(d\)-essential._
**Theorem 37**.: _Assume that the group \(G\) satisfies \(\mathcal{H}_{n,\delta}\). If \(X\to BG\) is a \(d\)-essential map, then \(X\) has more than \(n\) cells in each dimension \(0<k<d\)._
Proof.: Suppose not. Let \(\hat{X}\) be the \(G\)-cover of \(X\) induced by the map \(f:X\to BG\). Note that the image of the map \(f_{*}\circ\partial:C_{k}(\hat{X};\mathbb{K})\to C_{k-1}(EG;\mathbb{K})\) is an \(n\)-generated submodule of a free module, hence it is free by Theorem 30.2. So, if \(i\) denotes the inclusion \(\partial C_{k}\hookrightarrow C_{k-1}\), then the map \(f_{*}\circ i:\partial C_{k}(\hat{X};\mathbb{K})\to C_{k-1}(EG;\mathbb{K})\) lifts to \(f^{\prime}:\partial C_{k}(\hat{X};\mathbb{K})\to C_{k}(EG;\mathbb{K})\) satisfying \(f_{*}\circ i=\partial\circ f^{\prime}\). In summary, we have a commutative diagram
\[\begin{array}{ccccccccc}\cdots\rightarrow&C_{k+1}(\hat{X};\mathbb{K})&\rightarrow&C_{k}(\hat{X};\mathbb{K})&\stackrel{\partial}{\rightarrow}&C_{k-1}(\hat{X};\mathbb{K})&\rightarrow\cdots\rightarrow&C_{0}(\hat{X};\mathbb{K})\\ &\downarrow&&\downarrow\partial&&||&&||\\ \cdots\rightarrow&0&\rightarrow&\partial C_{k}(\hat{X};\mathbb{K})&\stackrel{i}{\hookrightarrow}&C_{k-1}(\hat{X};\mathbb{K})&\rightarrow\cdots\rightarrow&C_{0}(\hat{X};\mathbb{K})\\ &\downarrow&&\downarrow f^{\prime}&&\downarrow f_{*}&&\downarrow\\ \cdots\rightarrow&C_{k+1}(EG;\mathbb{K})&\rightarrow&C_{k}(EG;\mathbb{K})&\stackrel{\partial}{\rightarrow}&C_{k-1}(EG;\mathbb{K})&\rightarrow\cdots\rightarrow&C_{0}(EG;\mathbb{K}).\end{array}\]
This shows that the chain map \(C_{*}(\hat{X};\mathbb{K})\to C_{*}(EG;\mathbb{K})\) factors through a chain complex \(T_{*}\) (the middle row in the above diagram) which has no terms in degrees above \(k\). Therefore, since \(d>k\), the composition
\[H^{d}(BG;V)\to H^{d}(T_{*};V)\to H^{d}(X;V)\]
is the zero map, contradicting the hypothesis that \(X\to BG\) is \(d\)-essential.
Here is a special case, where we suppose that \(X^{d}\) is a closed \(d\)-manifold.
**Theorem 6**.: _Let \(X^{d}\) be a \(d\)-manifold, \(G\) a group that satisfies \(\mathcal{H}_{n,\delta}\) and \(BG\) its classifying space. If there is a continuous map \(f:X^{d}\to BG\) with \(f_{*}[X]\neq 0\) in \(H_{d}(BG;\mathbb{K})\) then for each \(0<k<d\), a Morse function on \(X^{d}\) has at least \(n+1\) critical points of index \(k\)._
**Remark**.: _Since the fundamental group of a hyperbolic manifold is not free, the case of critical points of index \(1\) or \(d-1\) follows from the aforementioned theorem of Arzhantseva, Gromov, Kapovich-Weidmann ([2, 19, 23]). For hyperbolic manifolds of dimension three, a much better bound is given by [4]._
### Dimensions, few relator groups, and \(2\)-complexes
Concerning few relator groups, we get the following.
**Theorem 8**.: _An \(n\)-relator group satisfying \(\mathcal{H}_{n,\delta}\) has cohomological dimension \(\leq 2\)._
Proof.: An \(n\)-relator group is the fundamental group of an aspherical cell complex with \(n\) \(2\)-cells (and cells of higher dimension). If such a group satisfies \(\mathcal{H}_{n,\delta}\) then, by Theorem 36.2, it has cohomological dimension \(\leq 2\).
**Question 38**.: _Does every \(n\)-relator group satisfying \(\mathcal{H}_{n,\delta}\) have geometric dimension \(\leq 2\)?_
**Remark**.: _This question is an instance of a problem raised by Eilenberg and Ganea in [16], asking whether there is an example of a group whose geometric and cohomological dimensions differ:_
_"We do not know whether these exceptional cases are actually present. The inequality dim_ \(\Pi<\) _cat_ \(\Pi\) _(cases A and B) is equivalent with the assertion that_ \(\Pi\) _is one-dimensional but not free. The problem of the existence of such a group has equivalent formulations in terms of group extensions and also in terms of properties of the integral group ring_ \(\Lambda=\mathbb{Z}[\Pi]\)_. Similarly, the inequality cat_ \(\Pi<\) _geom. dim_ \(\Pi\) _(cases B and C) is related to properties of the ring_ \(\Lambda\)_. For instance, if it can be shown that a direct summand of a free_ \(\Lambda\)_-module is free, then the equality cat_ \(\Pi\)_=geom. dim_ \(\Pi\) _follows." (__[_16_]__, p. 517-518)._
_In [16] "dim" is the cohomological dimension and there is also an intermediate dimension "cat" that was later shown to be equivalent to cohomological dimension by Stallings [26]. Cases A and B refer to the hypothetical situation (\(1=\dim\Pi<\) cat \(\Pi=2\)) which is now known to not occur. Case C refers to the potential situation (\(2=\) dim_ \(\Pi=\) _cat_ \(\Pi<\) _geom. dim_ \(\Pi=3\)_). The conjecture that case C does not occur, either, is nowadays referred to as the Eilenberg-Ganea conjecture. A proof of the reduction of this conjecture to a question about group rings claimed in the last sentence of the quote may help with Question 38, but we do not know how to establish such a reduction, even with the additional assumption_ \(\mathrm{GL}_{n}(\Lambda)=\mathrm{GE}_{n}(\Lambda)\) _for all_ \(n\)_._
Concerning the topology of presentation complexes, we get the following.
**Theorem 39**.: _Assume that the group \(G\) satisfies \(\mathcal{H}_{n,\delta}\)._
1. _Every presentation_ \(2\)_-complex for_ \(G\) _with_ \(n\) _relations (or less) has a free_ \(\pi_{2}\)_._
2. _(Theorem_ 7 _from introduction) Assume further that the group_ \(G\) _has geometric dimension two, and let_ \(Y\) _be an aspherical_ \(2\)_-complex with fundamental group_ \(G\)_. Then every presentation_ \(2\)_-complex with less than_ \(n+1\) _relations is standard, i.e. has the same homotopy type as_ \[Y\lor S^{2}\vee\ldots\lor S^{2}.\]
Proof.: If \(X\) is an \(n\)-relator presentation \(2\)-complex for such a group and \(C_{*}=C_{*}(\widetilde{X};\mathbb{Z})\) is the chain complex of its universal cover, then \(\partial(C_{2})\) is an \(n\)-generated submodule of \(C_{1}\) satisfying (\(\star\)). Hence it is a free module (by Theorem 33), and we have a splitting \(C_{2}=\pi_{2}(X)\oplus\partial(C_{2})\). By projecting the generators of \(C_{2}\) to \(\pi_{2}(X)\), we see that \(\pi_{2}(X)\) is generated by \(n\) elements. Moreover, \(\pi_{2}(X)\) satisfies (\(\star\)) (in fact, it is projective) so \(\pi_{2}(X)\) is free by Theorem 33, i.e. \(\pi_{2}(X)\cong\mathbb{Z}[G]^{m}\) for some \(m\). This proves 1.
If \(Y\) is an aspherical \(2\)-complex with fundamental group \(G\), we can construct a homotopy equivalence \(f:Y\lor S_{1}^{2}\vee\ldots\lor S_{m}^{2}\to X\) as follows. First, construct a map realizing the \(\pi_{1}\)-isomorphism \(Y\to X\). This can be done since \(Y\) is a \(2\)-complex. Second, let the \((S_{i}^{2})_{1\leqslant i\leqslant m}\) represent a \(\mathbb{Z}[G]\)-basis of \(\pi_{2}(X)\). Then the resulting map \(f\) is a \(\pi_{1}\)-isomorphism and a homology isomorphism of universal covers, hence a homotopy equivalence. This proves 2.
|
2310.00090 | On the Counting of Involutory MDS Matrices | The optimal branch number of MDS matrices has established their importance in
designing diffusion layers for various block ciphers and hash functions. As a
result, numerous matrix structures, including Hadamard and circulant matrices,
have been proposed for constructing MDS matrices. Also, in the literature,
significant attention is typically given to identifying MDS candidates with
optimal implementations or proposing new constructions across different orders.
However, this paper takes a different approach by not emphasizing efficiency
issues or introducing new constructions. Instead, its primary objective is to
enumerate Hadamard MDS and involutory Hadamard MDS matrices of order $4$ within
the field $\mathbb{F}_{2^r}$. Specifically, it provides an explicit formula for
the count of both Hadamard MDS and involutory Hadamard MDS matrices of order
$4$ over $\mathbb{F}_{2^r}$. Additionally, it derives the count of Hadamard
Near-MDS (NMDS) and involutory Hadamard NMDS matrices, each with exactly one
zero in each row, of order $4$ over $\mathbb{F}_{2^r}$. Furthermore, the paper
discusses some circulant-like matrices for constructing NMDS matrices and
proves that when $n$ is even, any $2n \times 2n$ Type-II circulant-like matrix
can never be an NMDS matrix. While it is known that NMDS matrices may be
singular, this paper establishes that singular Hadamard matrices can never be
NMDS matrices. Moreover, it proves that there exist exactly two orthogonal
Type-I circulant-like matrices of order $4$ over $\mathbb{F}_{2^r}$. | Susanta Samanta | 2023-09-29T18:57:00Z | http://arxiv.org/abs/2310.00090v3 | # On the Counting of Involutory MDS Matrices
###### Abstract
The optimal branch number of MDS matrices has established their prominence in the design of diffusion layers for various block ciphers and hash functions. Consequently, several matrix structures have been proposed for designing MDS matrices, including Hadamard and circulant matrices. In this paper, we first provide the count of Hadamard MDS matrices of order 4 over the field \(\mathbb{F}_{2^{r}}\). Subsequently, we present the counts of order 2 MDS matrices and order 2 involutory MDS matrices over the field \(\mathbb{F}_{2^{r}}\). Finally, leveraging these counts of order 2 matrices, we derive an upper bound for the number of all involutory MDS matrices of order 4 over \(\mathbb{F}_{2^{r}}\).
Keywords:Involutory matrix Hadamard matrix Diffusion Layer MDS matrix.
## 1 Introduction
Claude Shannon, in his paper "Communication Theory of Secrecy Systems" [6], introduced the concepts of confusion and diffusion, which play a significant role in the design of symmetric key cryptographic primitives. The concept of confusion aims to create a statistical relationship between the ciphertext and message that is too intricate for an attacker to exploit. This is accomplished through the use of nonlinear functions such as S-boxes and Boolean functions. Diffusion, on the other hand, ensures that each bit of the message and secret key influences a significant number of bits in the ciphertext, and over several rounds, all output bits depend on every input bit.
Optimal diffusion layers can be achieved by employing _MDS matrices_ with the highest branch numbers. As a result, various matrix structures have been suggested for the design of MDS matrices, including _Hadamard_ and circulant matrices. A concise survey on the various theories on the construction of MDS matrices is provided in [3]. In the context of lightweight cryptographic primitives, the adoption of _involutory matrices_ allows for the implementation of both encryption and decryption operations using identical circuitry, thereby resulting in an equivalent implementation cost for both processes. So, it is of special interest to find efficient MDS matrices which are also involutory.
However, obtaining efficiently implementable involutory MDS matrices is a challenging task. Moreover, an exhaustive search for involutory MDS matrices
over the finite field of higher order is not suitable due to the vast search space. A concise overview of the different constructions of MDS matrices, considering whether they possess the involutory property, is available in [3]. In 2019, Guzel et al. [4] demonstrated that there are \((2^{r}-1)^{2}(2^{r}-2)(2^{r}-4)\) involutory MDS matrices of size \(3\times 3\) over the finite field \(\mathbb{F}_{2^{r}}\). However, \(4\) and \(8\) are the most commonly used diffusion layer matrix sizes in the literature.
One of the most noteworthy advantages of Hadamard matrices lies in their capability to facilitate the construction of involutory matrices. If the matrix elements are selected such that the first row sums to one, the resultant matrix attains involutory properties [3]. Due to this advantageous characteristic, several block ciphers, such as Anubis [1], Khazad [2] and CLEFIA [7], have incorporated Hadamard involutory MDS matrices into their diffusion layers.
In this paper, the primary focus is to enumerate Hadamard MDS matrices of order \(4\) within the field \(\mathbb{F}_{2^{r}}\) and propose a conjecture regarding the count of involutory Hadamard MDS matrices of order \(4\) in the same field. Subsequently, we provide counts for both order \(2\) MDS matrices and order \(2\) involutory MDS matrices in the field \(\mathbb{F}_{2^{r}}\). Finally, by leveraging these counts of order \(2\) matrices, we establish an upper limit for the number of all involutory MDS matrices of order \(4\) over \(\mathbb{F}_{2^{r}}\).
## 2 Definition and Preliminaries
Let \(\mathbb{F}_{2}=\{0,1\}\) be the finite field of two elements, \(\mathbb{F}_{2^{r}}\) be the finite field of \(2^{r}\) elements and \(\mathbb{F}_{2^{r}}^{*}\) be the multiplicative group of \(\mathbb{F}_{2^{r}}\). The set of vectors of length \(n\) with entries from the finite field \(\mathbb{F}_{2^{r}}\) is denoted by \(\mathbb{F}_{2^{r}}^{n}\).
A matrix \(D\) of order \(n\) is said to be diagonal if \((D)_{i,j}=0\) for \(i\neq j\). Using the notation \(d_{i}=(D)_{i,i}\), the diagonal matrix \(D\) can be represented as \(\mathrm{diag}(d_{1},d_{2},\ldots,d_{n})\). It is evident that the determinant of \(D\) is given by \(\det(D)=\prod_{i=1}^{n}d_{i}\). Therefore, the diagonal matrix \(D\) is nonsingular over \(\mathbb{F}_{2^{r}}\) if and only if \(d_{i}\neq 0\) for \(1\leq i\leq n\).
An MDS matrix offers diffusion properties that find practical applications in the field of cryptography. This concept originates from coding theory, specifically from the realm of maximum distance separable (MDS) codes. An \([n,k,d]\) code is MDS if it meets the Singleton bound \(d=n-k+1\).
Theorem 2.1: _[_5_, page 321]_ _An \([n,k,d]\) code \(C\) with generator matrix \(G=[I\mid M]\), where \(M\) is a \(k\times(n-k)\) matrix, is MDS if and only if every square submatrix (formed from any \(i\) rows and any \(i\) columns, for any \(i=1,2,\ldots,min\{k,n-k\}\)) of \(M\) is nonsingular._
Definition 1: A matrix \(M\) of order \(n\) is said to be an MDS matrix if \([I\mid M]\) is a generator matrix of a \([2n,n]\) MDS code.
Another way to define an MDS matrix is as follows.
**Fact 1**: _A square matrix \(M\) is an MDS matrix if and only if every square submatrix of \(M\) is nonsingular._
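For small orders, Fact 1 can be verified directly by brute force. The following Python sketch is only illustrative: it represents \(\mathbb{F}_{2^{3}}\) by the integers \(0,\ldots,7\) with carry-less multiplication reduced modulo the (arbitrarily chosen) irreducible polynomial \(x^{3}+x+1\), and checks every square submatrix; all function names and the example matrix are choices of the sketch, not of the paper.

```python
from itertools import combinations

R = 3           # work in GF(2^3)
MOD = 0b1011    # irreducible polynomial x^3 + x + 1 (an arbitrary choice)

def gf_mul(a, b):
    """Carry-less multiplication of a and b, reduced modulo MOD."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> R:          # reduce as soon as the degree reaches R
            a ^= MOD
    return res

def gf_det(m):
    """Determinant over GF(2^r) by Laplace expansion (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    det = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        det ^= gf_mul(m[0][j], gf_det(minor))   # characteristic 2: no signs needed
    return det

def is_mds(m):
    """Fact 1: every square submatrix must be nonsingular."""
    n = len(m)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                sub = [[m[i][j] for j in cols] for i in rows]
                if gf_det(sub) == 0:
                    return False
    return True

# Example: the Hadamard matrix Hada(1, 2, 3, 5) over GF(2^3).
a, b, c, d = 1, 2, 3, 5
H = [[a, b, c, d], [b, a, d, c], [c, d, a, b], [d, c, b, a]]
print(is_mds(H))   # prints True for this choice (cf. Lemma 2)
```

The same routine works for any \(r\) after adjusting `R` and `MOD` to another irreducible polynomial.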
One of the elementary row operations on matrices is multiplying a row of a matrix by a nonzero scalar. The MDS property remains invariant under such operations. Thus, we have the following result regarding MDS matrices.
Lemma 1: _[_3_]_ _Let \(M\) be an MDS matrix, then for any nonsingular diagonal matrices \(D_{1}\) and \(D_{2}\), \(D_{1}MD_{2}\) will also be an MDS matrix._
Using involutory diffusion matrices is more beneficial for implementation since it allows the same module to be utilized in both encryption and decryption phases.
Definition 2: An involutory matrix is defined as a square matrix \(M\) that fulfills the condition \(M^{2}=I\) or, equivalently, \(M=M^{-1}\).
Therefore, based on Lemma 1, we can deduce the following result.
Corollary 1: _For any nonsingular diagonal matrix \(D\), the matrix \(DMD^{-1}\) is an involutory MDS matrix if and only if \(M\) is also an involutory MDS matrix._
Definition 3: A matrix \(M\) of size \(2^{n}\times 2^{n}\) in the field \(\mathbb{F}_{2^{r}}\) is called a Finite Field Hadamard matrix, or simply a Hadamard matrix, if it can be represented in the following form:
\[M=\begin{bmatrix}U&V\\ V&U\end{bmatrix}\]
where the submatrices \(U\) and \(V\) are also Hadamard matrices.
For example, a \(2^{2}\times 2^{2}\) Hadamard matrix is:
\[M=\begin{bmatrix}a_{1}&a_{2}&a_{3}&a_{4}\\ a_{2}&a_{1}&a_{4}&a_{3}\\ a_{3}&a_{4}&a_{1}&a_{2}\\ a_{4}&a_{3}&a_{2}&a_{1}\end{bmatrix}.\]
Note that Hadamard matrices are symmetric and can be represented by their first row. For simplicity, we will denote a Hadamard matrix with the first row as \(a_{1},a_{2},\ldots,a_{n}\) as \(\text{Hada}(a_{1},a_{2},\ldots,a_{n})\). Also, it is worth noting that if \(a_{1}+a_{2}+\cdots+a_{n}=1\), then the Hadamard matrix will be involutory [3, page 8].
## 3 Enumeration of \(4\times 4\) Hadamard MDS matrices
In this section, we enumerate \(4\times 4\) Hadamard MDS matrices, including involutory Hadamard MDS matrices, over the finite field \(\mathbb{F}_{2^{r}}\). First, we present the conditions that the Hadamard matrix \(\text{Hada}(a,b,c,d)\) must satisfy to be considered an MDS matrix.
Lemma 2: _The Hadamard matrix \(\text{Hada}(a,b,c,d)\) over \(\mathbb{F}_{2^{r}}\) is MDS if and only if the tuple \((a,b,c,d)\in\mathbb{F}_{2^{r}}^{4}\) satisfies the conditions: (i) \(a,b,c,d\in\mathbb{F}_{2^{r}}^{*}\), (ii) \(a\neq b\neq c\neq d\), (iii) \(d\neq a^{-1}bc\), (iv) \(d\neq ab^{-1}c\), (v) \(d\neq abc^{-1}\), and (vi) \(d\neq a+b+c\)._
Proof: The set of minors of \(M=\operatorname{Hada}(a,b,c,d)\) is given by:
\[\begin{array}{l}\{a,b,c,d,a^{2}+b^{2},bc+ad,ac+bd,c^{2}+d^{2},a^{2}+c^{2},ab+cd, b^{2}+d^{2},a^{2}+d^{2},b^{2}+c^{2},\\ a^{3}+ab^{2}+ac^{2}+ad^{2},a^{2}b+b^{3}+bc^{2}+bd^{2},a^{2}c+b^{2}c+c^{3}+cd^{2},a^{2}d+b^{2}d+c^{2}d+d^{3},\\ a^{4}+b^{4}+c^{4}+d^{4}\}.\end{array}\]
These minors have factors given by:
\[T=\{a,b,c,d,a+b,bc+ad,ac+bd,c+d,a+c,ab+cd,b+d,a+d,b+c,a+b+c+d\}.\]
Therefore, \(M\) is an MDS matrix if and only if each element in the set \(T\) is nonzero. This condition is satisfied if and only if \((a,b,c,d)\in\mathbb{F}_{2^{r}}^{4}\) satisfies the conditions: (i) \(a,b,c,d\in\mathbb{F}_{2^{r}}^{\star}\), (ii) \(a\neq b\neq c\neq d\), (iii) \(d\neq a^{-1}bc\), (iv) \(d\neq ab^{-1}c\), (v) \(d\neq abc^{-1}\), and (vi) \(d\neq a+b+c\). This completes the proof.
Theorem 2.2: _The count of \(4\times 4\) Hadamard MDS matrices over the finite field \(\mathbb{F}_{2^{r}}\) is given by \((2^{r}-1)(2^{r}-2)(2^{r}-4)(2^{r}-7)\)._
Proof: According to Lemma 2, the number of \(4\times 4\) Hadamard MDS matrices over \(\mathbb{F}_{2^{r}}\) is equal to the cardinality of the set \(S\), defined as:
\[\begin{array}{l}S=\{(a,b,c,d)\in(\mathbb{F}_{2^{r}}^{\star})^{4}:\ a\neq b \neq c\neq d\text{ and }d\neq a^{-1}bc,\ d\neq ab^{-1}c,\ d\neq abc^{-1},\\ d\neq a+b+c\}.\end{array}\]
Since \(a\in\mathbb{F}_{2^{r}}^{\star}\), there are \(2^{r}-1\) possible choices for \(a\). Furthermore, with \(b\in\mathbb{F}_{2^{r}}^{\star}\) and \(a\neq b\), there are \(2^{r}-2\) possible choices for \(b\). Similarly, for \(c\), there are \(2^{r}-3\) possible choices.
Now, we will demonstrate that \(a^{-1}bc\not\in\left\{b,c,ab^{-1}c,abc^{-1},a+b+c\right\}\) for any choice of \(a,b\) and \(c\).
**Case 1:**\(a^{-1}bc=b\).
In this case, \(a^{-1}bc=b\), which implies \(a=c\). However, this contradicts our assumptions.
**Case 2:**\(a^{-1}bc=c\).
In this case, \(a^{-1}bc=c\), which implies \(a=b\), and this is a contradiction.
**Case 3:**\(a^{-1}bc=ab^{-1}c\).
Now,
\[\begin{array}{l}a^{-1}bc=ab^{-1}c\\ \implies a^{2}=b^{2}\\ \implies a=b\text{ [Since characteristic of $\mathbb{F}_{2^{r}}$ is 2]},\end{array}\]
which is a contradiction.
**Case 4:**\(a^{-1}bc=abc^{-1}\).
Now,
\[\begin{array}{l}a^{-1}bc=abc^{-1}\\ \implies a^{2}=c^{2}\\ \implies a=c\text{ [Since characteristic of $\mathbb{F}_{2^{r}}$ is 2]},\end{array}\]
which is a contradiction.
**Case 5:**\(a^{-1}bc=a+b+c\).
Now,
\[a^{-1}bc=a+b+c\] \[\implies a^{2}+ab+ac=bc\] \[\implies (a+b)(a+c)=0\] \[\implies a=b\text{ or }a=c.\]
This again leads to a contradiction.
Therefore, we have \(a^{-1}bc\not\in\big{\{}b,c,ab^{-1}c,abc^{-1},a+b+c\big{\}}\).
However, when \(c=a^{2}b^{-1}\) (which is not equal to \(a\) and not equal to \(b\)), we have \(a^{-1}bc=a\). In this case, there are \(2^{r}-7\) choices for \(d\). For any other choices of \(a,b\) and \(c\), there are \(2^{r}-8\) choices for \(d\).
Similarly, we can show the following:
1. \(ab^{-1}c\not\in\big{\{}a,c,a^{-1}bc,abc^{-1},a+b+c\big{\}}\) and when \(c=b^{2}a^{-1}\) (which is not equal to \(a\) and not equal to \(b\)), we have \(ab^{-1}c=b\). In this case, there are \(2^{r}-7\) choices for \(d\). For any other choices of \(a,b\) and \(c\), there are \(2^{r}-8\) choices for \(d\).
2. \(abc^{-1}\not\in\big{\{}a,b,a^{-1}bc,ab^{-1}c,a+b+c\big{\}}\). Since the characteristic of \(\mathbb{F}_{2^{r}}\) is \(2\), \(x\mapsto x^{2}\) is an isomorphism over \(\mathbb{F}_{2^{r}}\). Hence, there exists a unique element \(\alpha\in\mathbb{F}_{2^{r}}\) such that \(\alpha^{2}=ab\). Now for \(c=\alpha\) (which is not equal to \(a\) and not equal to \(b\)), we have \(abc^{-1}=c\). In this case, there are \(2^{r}-7\) choices for \(d\). For any other choices of \(a,b\) and \(c\), there are \(2^{r}-8\) choices for \(d\).
3. \(a+b+c\notin\big{\{}a,b,c,a^{-1}bc,ab^{-1}c,abc^{-1}\big{\}}\). However, \(a+b+c\) may equal zero, and when \(c=a+b\), we have \(a+b+c=0\). In this case, there are \(2^{r}-7\) choices for \(d\). For any other combinations of \(a\), \(b\), and \(c\), there are \(2^{r}-8\) choices.
Therefore, there are \(4\) choices of \(c\) for which \(d\) has \(2^{r}-7\) choices, whereas for any of the remaining \(2^{r}-7\) choices of \(c\), \(d\) has \(2^{r}-8\) choices. Hence, we have
\[|S| = (2^{r}-1)(2^{r}-2)[4\cdot(2^{r}-7)+(2^{r}-7)(2^{r}-8)]\] \[= (2^{r}-1)(2^{r}-2)(2^{r}-7)(4+2^{r}-8)\] \[= (2^{r}-1)(2^{r}-2)(2^{r}-4)(2^{r}-7).\]
Hence, the number of \(4\times 4\) Hadamard MDS matrices over the finite field \(\mathbb{F}_{2^{r}}\) is equal to \((2^{r}-1)(2^{r}-2)(2^{r}-4)(2^{r}-7)\).
It is worth noting that the Hadamard matrix \(\operatorname{Hada}(a,b,c,d)\) over \(\mathbb{F}_{2^{r}}\) achieves involutory property if and only if the condition \(a+b+c+d=1\) is satisfied. Therefore, \(\operatorname{Hada}(a,b,c,d)\) is an involutory MDS matrix if and only if it satisfies the following conditions:
(i) \(a,b,c,d\in\mathbb{F}_{2^{r}}^{\star}\), (ii) \(a\neq b\neq c\neq d\), (iii) \(d\neq a^{-1}bc\), (iv) \(d\neq ab^{-1}c\), (v) \(d\neq abc^{-1}\), (vi) \(d\neq a+b+c\), and (vii) \(d=a+b+c+1\).
Therefore, the number of \(4\times 4\) involutory Hadamard MDS matrices over \(\mathbb{F}_{2^{r}}\) is equal to the cardinality of the set \(T\), defined as:
\[T=\{(a,b,c,d)\in(\mathbb{F}_{2^{r}}^{*})^{4}:\ d=a+b+c+1,\text{and }a \neq b\neq c\neq d,\ d\neq a^{-1}bc,\] \[\ d\neq ab^{-1}c,\ d\neq abc^{-1}\}.\]
**Conjecture 1**: _The count of \(4\times 4\) involutory Hadamard MDS matrices over the finite field \(\mathbb{F}_{2^{r}}\) is given by \((2^{r}-2)(2^{r}-4)(2^{r}-7)\)._
We have empirically verified this conjecture for various finite fields \(\mathbb{F}_{2^{r}}\) with \(3\leq r\leq 7\). However, a proof of its validity is currently lacking.
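Such an empirical check is straightforward: for small \(r\), one can exhaustively enumerate the conditions of Lemma 2 and compare against the closed-form counts. The sketch below does this for \(r=3\); the field representation (integers with reduction polynomial \(x^{3}+x+1\)) and all function names are arbitrary choices of the sketch, not prescribed by the paper. Both printed pairs should agree with the closed forms (168 and 24 for \(r=3\)).

```python
from itertools import product

R = 3
MOD = 0b1011                 # x^3 + x + 1, one possible irreducible polynomial
FIELD = range(1, 1 << R)     # nonzero elements of GF(2^3)

def gf_mul(a, b):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> R:
            a ^= MOD
    return res

def gf_inv(a):
    # a^(2^r - 2) = a^{-1} for nonzero a, via square-and-multiply
    res, e = 1, (1 << R) - 2
    while e:
        if e & 1:
            res = gf_mul(res, a)
        a = gf_mul(a, a)
        e >>= 1
    return res

def hadamard_mds(a, b, c, d):
    """Conditions of Lemma 2 for Hada(a, b, c, d) to be MDS."""
    if len({a, b, c, d}) < 4:
        return False
    forbidden = {
        gf_mul(gf_mul(gf_inv(a), b), c),   # a^{-1} b c
        gf_mul(gf_mul(a, gf_inv(b)), c),   # a b^{-1} c
        gf_mul(gf_mul(a, b), gf_inv(c)),   # a b c^{-1}
        a ^ b ^ c,                         # a + b + c
    }
    return d not in forbidden

mds = sum(hadamard_mds(a, b, c, d) for a, b, c, d in product(FIELD, repeat=4))
inv_mds = sum(hadamard_mds(a, b, c, d) for a, b, c, d in product(FIELD, repeat=4)
              if a ^ b ^ c ^ d == 1)       # involutory: a + b + c + d = 1

q = 1 << R
print(mds, (q - 1) * (q - 2) * (q - 4) * (q - 7))   # Theorem count
print(inv_mds, (q - 2) * (q - 4) * (q - 7))         # conjectured count
```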
## 4 \(2n\times 2n\) involutory MDS matrices
In this section, we first present a general form for all \(2n\times 2n\) involutory MDS matrices over \(\mathbb{F}_{2^{r}}\). Then, we determine the exact count of \(2\times 2\) MDS and involutory MDS matrices over \(\mathbb{F}_{2^{r}}\). Using these counts, we establish an upper bound for the enumeration of all \(4\times 4\) involutory MDS matrices over \(\mathbb{F}_{2^{r}}\).
Lemma 3: _Let \(M=\begin{bmatrix}A_{1}&A_{2}\\ A_{3}&A_{4}\end{bmatrix}\) be a \(2n\times 2n\) involutory MDS matrix, where \(A_{i}\) are \(n\times n\) matrices. Then \(M\) can be expressed in the form \(M=DHD^{-1}\), where \(H=\begin{bmatrix}A_{1}&I_{n}+A_{1}\\ I_{n}+A_{1}&A_{1}\end{bmatrix}\) and \(D\) is the block diagonal matrix \(\operatorname{diag}(I_{n},A_{3}(I_{n}+A_{1})^{-1})\)._
Proof: Since \(M\) is involutory we have
\[A_{1}^{2}+A_{2}A_{3}=I_{n},\quad A_{1}A_{2}+A_{2}A_{4}=\mathbf{0},\] \[A_{3}A_{1}+A_{4}A_{3}=\mathbf{0},\quad A_{3}A_{2}+A_{4}^{2}=I_{n}.\] \[\implies A_{2}=(I_{n}+A_{1}^{2})A_{3}^{-1}\quad\text{and}\quad A_{4}=A_{3}A_{1}A_{3}^{-1}.\]
Now, as \(M\) is an MDS matrix, it cannot contain any zero entries. Hence, we must have \(A_{1}\neq I_{n}\). Therefore, \(M\) can be expressed as:
\[M =\begin{bmatrix}A_{1}&(I_{n}+A_{1}^{2})A_{3}^{-1}\\ A_{3}&A_{3}A_{1}A_{3}^{-1}\end{bmatrix}\] \[=\begin{bmatrix}I_{n}&\mathbf{0}\\ \mathbf{0}&A_{3}(I_{n}+A_{1})^{-1}\end{bmatrix}\begin{bmatrix}A_{1}&I_{n}+A_{ 1}\\ I_{n}+A_{1}&A_{1}\end{bmatrix}\begin{bmatrix}I_{n}&\mathbf{0}\\ \mathbf{0}&(I_{n}+A_{1})A_{3}^{-1}\end{bmatrix}\] \[=DHD^{-1},\]
where \(H\) is the block matrix \(\begin{bmatrix}A_{1}&I_{n}+A_{1}\\ I_{n}+A_{1}&A_{1}\end{bmatrix}\) and \(D\) is the block diagonal matrix \(\operatorname{diag}(I_{n},A_{3}(I_{n}+A_{1})^{-1})\).
Lemma 4: _The count of \(2\times 2\) MDS matrices over the finite field \(\mathbb{F}_{2^{r}}\) is given by \((2^{r}-1)^{3}(2^{r}-2)\)._
Proof: Let \(M=\begin{bmatrix}a&b\\ c&d\end{bmatrix}\) be an MDS matrix over \(\mathbb{F}_{2^{r}}\). Note that \(a\), \(b\), \(c\), and \(d\) all belong to the nonzero elements of the finite field \(\mathbb{F}_{2^{r}}\). Also, \(\det(M)=ad+bc\) must be nonzero. This implies that \(d\neq a^{-1}bc\). Consequently, each of the values \(a\), \(b\), and \(c\) can be chosen from \(2^{r}-1\) possibilities, as they are drawn from the nonzero elements of \(\mathbb{F}_{2^{r}}\). On the other hand, \(d\) can be selected from \(2^{r}-2\) possibilities because it cannot be equal to \(a^{-1}bc\). Hence, the total number of \(2\times 2\) MDS matrices can be calculated as \((2^{r}-1)^{3}(2^{r}-2)\).
Lemma 5: _The count of \(2\times 2\) involutory MDS matrices over the finite field \(\mathbb{F}_{2^{r}}\) is given by \((2^{r}-1)(2^{r}-2)\)._
Proof: According to Lemma 3, any \(2\times 2\) involutory MDS matrix \(M\) can be represented as \(M=DHD^{-1}\), where \(H\) is the \(2\times 2\) Hadamard matrix \(\operatorname{Hada}(\alpha,1+\alpha)\), and \(D\) is a nonsingular diagonal matrix \(\operatorname{diag}(1,(1+\alpha)\beta^{-1})\), with \(\alpha,\beta\in\mathbb{F}_{2^{r}}^{*}\). Also, from Corollary 1, we can say that \(M\) is an involutory MDS if and only if \(H\) is an involutory MDS.
Regarding \(H\), there are \(2^{r}-2\) possible choices since \(\alpha\not\in\{0,1\}\). On the other hand, \(D\) provides \(2^{r}-1\) options. Therefore, the total number of \(2\times 2\) involutory MDS matrices is \((2^{r}-1)(2^{r}-2)\).
In Conjecture 1, we provide the count of \(4\times 4\) involutory Hadamard MDS matrices. Next, we establish an upper bound for the enumeration of all \(4\times 4\) involutory MDS matrices.
Theorem 3.1: _The number of \(4\times 4\) involutory MDS matrices over the finite field \(\mathbb{F}_{2^{r}}\) is upper bounded by \(2^{r}(2^{r}-1)^{3}(2^{r}-2)^{2}(2^{r}-3)(2^{r}-4)\)._
Proof: From Lemma 3, we can deduce that any \(4\times 4\) involutory MDS matrix \(M\) can be expressed as:
\[M=\begin{bmatrix}A_{1}&(I_{n}+A_{1}^{2})A_{3}^{-1}\\ A_{3}&A_{3}A_{1}A_{3}^{-1}\end{bmatrix},\]
where both \(A_{1}\) and \(A_{3}\) are \(2\times 2\) MDS matrices. Additionally, \(A_{1}\) cannot be an involutory matrix. Therefore, by considering both Lemma 4 and Lemma 5, we can conclude that \(A_{1}\) has a total of \((2^{r}-1)^{3}(2^{r}-2)-(2^{r}-1)(2^{r}-2)=2^{r}(2^{r}-1)(2^{r}-2)^{2}\) possible choices.
Each row of \(A_{3}\) must be linearly independent with the rows of \(A_{1}\). Since \(A_{3}\) is a \(2\times 2\) MDS matrix, there are \((2^{r}-1)(2^{r}-3)\) possible choices for the first row of \(A_{3}\) and \((2^{r}-1)(2^{r}-4)\) possible choices for the second row. Thus, \(A_{3}\) has a total of \((2^{r}-1)^{2}(2^{r}-3)(2^{r}-4)\) possible choices. Hence, the number of \(4\times 4\) involutory MDS matrices over the finite field \(\mathbb{F}_{2^{r}}\) is upper bounded by \(2^{r}(2^{r}-1)^{3}(2^{r}-2)^{2}(2^{r}-3)(2^{r}-4)\).
It is essential to emphasize that the upper bound we have derived in Theorem 3.1 is not a precise one. This is because our calculation only considers cases involving \(2\times 2\) submatrices constructed from \(A_{1}\) and \(A_{3}\). To achieve a more precise bound, it is necessary to examine the other \(2\times 2\) submatrices as
well as all \(3\times 3\) submatrices. However, currently, we are unable to exclude these potential cases from our analysis.
Also, as stated in Lemma 3, we can represent any \(4\times 4\) involutory MDS matrix \(M\) as \(M=DHD^{-1}\), where \(H\) is a \(4\times 4\) involutory matrix. However, it is important to note that in this context, \(D\) is not a standard diagonal matrix but rather a diagonal block matrix. Consequently, unlike Lemma 5 for \(2\times 2\) involutory MDS matrices, we cannot directly apply Corollary 1 to assert that \(M\) is MDS if and only if \(H\) is MDS. Furthermore, considering the specific form of \(H\), it cannot be considered an MDS matrix.
## 5 Conclusion
This paper has concentrated on two main objectives. First, we have enumerated Hadamard MDS matrices with an order of \(4\) within the field \(\mathbb{F}_{2^{r}}\) and proposed a conjecture regarding the count of involutory Hadamard MDS matrices over the same field. Additionally, we have established an upper limit for the number of involutory MDS matrices with an order of \(4\) over \(\mathbb{F}_{2^{r}}\). However, it is important to note that the upper bound we have calculated is not a precise one. Consequently, determining the exact count of \(4\times 4\) involutory MDS matrices or achieving a tighter bound is a potential avenue for future research.
|
2310.20340 | Near-Optimal Coverage Path Planning with Turn Costs | Coverage path planning is a fundamental challenge in robotics, with diverse
applications in aerial surveillance, manufacturing, cleaning, inspection,
agriculture, and more. The main objective is to devise a trajectory for an
agent that efficiently covers a given area, while minimizing time or energy
consumption. Existing practical approaches often lack a solid theoretical
foundation, relying on purely heuristic methods, or overly abstracting the
problem to a simple Traveling Salesman Problem in Grid Graphs. Moreover, the
considered cost functions only rarely consider turn cost, prize-collecting
variants for uneven cover demand, or arbitrary geometric regions.
In this paper, we describe an array of systematic methods for handling
arbitrary meshes derived from intricate, polygonal environments. This
adaptation paves the way to compute efficient coverage paths with a robust
theoretical foundation for real-world robotic applications. Through
comprehensive evaluations, we demonstrate that the algorithm also exhibits low
optimality gaps, while efficiently handling complex environments. Furthermore,
we showcase its versatility in handling partial coverage and accommodating
heterogeneous passage costs, offering the flexibility to trade off coverage
quality and time efficiency. | Dominik Michael Krupke | 2023-10-31T10:24:45Z | http://arxiv.org/abs/2310.20340v1 | # Near-Optimal Coverage Path Planning with Turn Costs
###### Abstract
Coverage path planning is a fundamental challenge in robotics, with diverse applications in aerial surveillance, manufacturing, cleaning, inspection, agriculture, and more. The main objective is to devise a trajectory for an agent that efficiently covers a given area, while minimizing time or energy consumption. Existing practical approaches often lack a solid theoretical foundation, relying on purely heuristic methods, or overly abstracting the problem to a simple Traveling Salesman Problem in Grid Graphs. Moreover, the considered cost functions only rarely consider turn cost, prize-collecting variants for uneven cover demand, or arbitrary geometric regions.
In this paper, we describe an array of systematic methods for handling arbitrary meshes derived from intricate, polygonal environments. This adaptation paves the way to compute efficient coverage paths with a robust theoretical foundation for real-world robotic applications. Through comprehensive evaluations, we demonstrate that the algorithm also exhibits low optimality gaps, while efficiently handling complex environments. Furthermore, we showcase its versatility in handling partial coverage and accommodating heterogeneous passage costs, offering the flexibility to trade off coverage quality and time efficiency.
## 1 Introduction
Coverage path planning is an important problem for various applications such as aerial surveillance [14], cleaning [13], milling [37], mowing [30], pest control [9], and more. It has already received a considerable amount of attention, mostly from a practical perspective, but also with some theoretical results. The problem is provably hard to solve on multiple levels, as it contains NP- and PSPACE-hard problems such as the Traveling Salesman Problem (TSP), Covering, and the Piano Mover Problem.
The simplest theoretical abstraction of the problem is the TSP in Grid Graphs. Here, we simply place a grid, with a cell size matching the agent's coverage capabilities, over the area and compute the shortest tour on it. Because the TSP appears in many applications, it is one of the most well-researched optimization problems, such that there are highly capable solvers despite its proven hardness. The Concorde solver [3] is able to solve instances with tens of thousands of vertices to proven optimality [5] and there are other algorithms that can compute good solutions for much larger instances. Concorde is also used to optimize coverage paths, e.g., by Bormann et al. [13].
Although solving the TSP in Grid Graphs aims to minimize tour length, which is an important factor in energy consumption, this narrow optimization criterion can lead to unintended consequences. In applications such as multicopters, straighter flight paths are generally more energy-efficient [15, 39]. An objective focused solely on minimizing the length of a coverage tour often encourages wavy routes, as this approach enables, e.g., covering two lanes in a single pass. Consequently, these ostensibly shorter tours can actually be more expensive to execute.
This issue is addressed in the problem Milling with Turn Costs, which not only minimizes the length but also the sum of turn angles the tour performs through the grid [7]. While still not capturing all the dynamics, it serves as a more realistic approximation for various scenarios and mitigates the shortcomings of focusing solely on length minimization. Unfortunately, turn costs increase the complexity of the problem such that not only the problem itself but already its cycle cover relaxation becomes NP-hard [24]. While the optimally solvable problem size increased from less than 100 vertices [20] to over 1000 vertices [25], the still large difference to classical TSP shows the limits of computing optimal solutions for realistic dynamic models, even for strongly simplified environments.
Besides complex dynamics, we sometimes do not need to cover the whole area. A true 100 % coverage is in many cases even not achievable because the tool simply does not fit into every corner. Instead, we have a feasible area that allows us to move in, and a smaller subset of it that is actually 'valuable'. A vacuum robot can move within the whole room, but often there are dirt-prone areas and cleaner areas, which do not need to be cleaned every time. A harvester can move along the whole field, but crop yield can be heterogeneous; the harvester does not need to harvest everything, rather only most of the harvest. For aerial supervision, there are areas of higher and lower interest. Additionally, there may be areas that are harder to pass than others, e.g., wind fields for UAVs [53] and difficult terrain or inclinations [30] for ground-based vehicles.
Fekete and Krupke [24, 25] proposed a constant-factor approximation algorithm for the Milling with Turn Costs problem on grid graphs, which is also able to handle partial coverage via skipping-penalties. In this paper, we generalize this algorithm to work on arbitrary meshes obtained from polygonal environments and heterogeneous costs, which allows us to compute efficient trajectories based on a theoretical foundation for real-world applications, see Fig. 1. We show in our evaluation that the algorithm is able to compute solutions that are on average close to optimum (\(10\,\%\) to \(15\,\%\)) on the mesh representation. While the constant-factor approximation guarantee may be lost for arbitrary meshes, this paper shows how a theoretical algorithm for coverage path planning on square grids can be generalized for real-world applications.
### Related Work
Planning a trajectory for a tool to cover an area, e.g., mowing a field or vacuuming a room, is known as the _Coverage Path Planning_ problem (CPP). The CPP already enjoyed a lot of attention for different applications, models (e.g., multi-robot), constraints, and objectives, as can be seen in multiple surveys [17, 27, 13, 14]. There are multiple approaches, the two most prominent being: (1) decomposing the larger area into simpler areas that can be covered using spiraling or zigzag patterns ([42, 19, 18]) and (2) applying a (regular square) grid onto the area, where each grid cell roughly represents the coverage area, converting the geometric coverage problem into a discrete touring problem on grid graphs ([12, 40, 54, 50, 39]). In this paper, we use the second approach but generalized to arbitrary meshes that can adapt better to the area than strict grids, as a well-fitting mesh can drastically improve the achievable tours. When only considering the length of the trajectory, the problem becomes the famous Traveling Salesman Problem (TSP), which is NP-hard even in square grids [32], but can be solved well in practice due to extensive algorithm engineering [4]. To account for the non-negligible dynamics, we need to incorporate turn costs, which makes the problem significantly harder. Even previously simple relaxations become NP-hard [24] but constant-factor approximations are available [6, 7, 24]. On grid graphs, instances with around 1000 vertices could be solved to optimality, and approximation algorithms have been applied to instances with up to \(300\,000\) vertices [25]. For general points in the plane, the problem is known as the Angular Metric TSP, and only a logarithmic approximation is known [1]. A further generalization to abstract graphs is the Quadratic TSP, which plays an important role, e.g., in bioinformatics [26]. Of these problems, only instances with less than 100 vertices can be expected to be solved to optimality in reasonable time [33, 45, 2]. On the practical side, the CPP has been considered on models with distance and turn costs in various degrees, such as only minimizing the number of turns [34], the sum of turn angles [12, 43, 39] (like this paper), or even model- and experiment-based cost functions [42, 15]. Heterogeneous cost functions have also been considered for CPP, e.g., [54, 30], and for simple path planning [38, 52, 46].
Another aspect of this paper is the ability to selectively cover the area, based on some value distribution. There are a few papers that also consider partial coverage path planning. Papachristos et al. [43] and Ellefsen, Lepikson, and Albiez [23] consider partial inspection of three-dimensional structures with distance and turn costs. Jensen et al. [34] and Soltero et al. [51] perform coverage without a fixed radius, but minimize the distance of (weighted) points of interest to the trajectory. Murtaza et al. [40] compute a full-coverage of the area, but prioritize subareas based on a probability distribution to find targets quickly. Sharma et al. [50] also compute a full-coverage of the area, but with a limited budget, resulting in multiple tours that try to efficiently cover as much as possible. However, all of these problems have significant differences to our problem. On the theoretical side, there are the _Penalty_ and _Budget TSP_, which allow skipping vertices at a penalty or try to cover as much as possible within a budget. An overview of such problems is given by Ausiello et al. [8].
### Contributions
In this paper, we make the following contributions:
Figure 1: A complex polygonal instance (a) is discretized using a meshing algorithm in which the trajectory (b) is computed. Green indicates important areas, red indicates increased passage costs, and blue indicates the covered area of the black trajectory. We see that the trajectory minimizes the turn costs and focuses on the important areas while the expensive areas are avoided.

* We generalize an approximation algorithm for coverage tours in regular grid graphs to work on more realistic polygonal instances by using meshing algorithms, paving the way to compute more efficient coverage tours with a robust theoretical foundation for real-world applications.
* We support heterogeneous passage costs, i.e., areas that are more expensive to pass and should, but do not have to, be avoided.
* We investigate partial coverage by using a penalty for missed coverage, which allows trading off coverage quality and time efficiency. The area can be weighted to target important areas with the tour.
* We locally improve the tour by using a large neighborhood search (LNS), which is able to improve the tour by a few percent.
* We evaluate the optimality gap of the implementation on over 500 instances, which were semi-automatically generated to mimic real-world scenarios. Data and code are provided.
We do _not_ maintain the approximation factor of the original algorithm, but we show that the implementation is still able to compute good solutions on arbitrary meshes by using sound lower bounds. Due to a lack of real-world instances and models, the evaluation is done only on synthetic instances, which were semi-automatically generated trying to mimic agricultural areas, locations with multiple buildings, and complex architecture. A comparison to the geometric model without the restriction to a mesh is not performed, as strong lower bounds are difficult to obtain. However, a comparison of the achievable solution quality of different grids and meshes was performed and the best meshing strategy was used for the evaluation. We noticed that focusing on the coverage of the edges rather than on the coverage of the points improves results when choosing a mesh resolution. Using hexagonal grids instead of square ones also shows beneficial, especially with higher turn costs. Furthermore, it is important to note that not all meshing algorithms are well-suited for addressing our specific problem. The corresponding study has been attached in Appendix C.
### Preliminaries
We are given a graph \(G=(P,E)\), where \(P\subset\mathbb{R}^{2}\) is a set of waypoints that span the potential trajectories, and \(E\) is a set of segments connecting pairs of waypoints. Additionally, we are given a value function \(val:P\rightarrow\mathbb{R}^{+}\), which assigns a value to each waypoint, and a cost function \(cost:P^{3}\rightarrow\mathbb{R}^{+}\), which assigns a cost to each consecutive triple \(u,v,w\) of waypoints with \(uv,vw\in E\). We call such a triple a _passage_ through the middle point \(v\). The goal is to find a tour \(T=p_{0},p_{1},\ldots,p_{|T|-1},p_{0}\) with \(p_{i}p_{i+1}\in E\) for all \(i\in\{0,\ldots,|T|-1\}\) (indices taken modulo \(|T|\)) that minimizes the objective
\[\min_{T}\underbrace{\sum_{i=0}^{|T|-1}cost(p_{i-1},p_{i},p_{i+1})}_{\text{Touring cost}}+\underbrace{\sum_{p\in P,p\notin T}val(p)}_{\text{ Coverage loss}} \tag{1}\]
We define the cost of using a passage \(uvw\) by a linear combination of the lengths of the two segments and their turn angle, weighted by \(\tau\in\mathbb{R}^{+}\). It may additionally be scaled by a local factor \(\alpha_{v}\).
\[\text{cost}(u,v,w)=\alpha_{v}\cdot\left(\frac{d(u,v)+d(v,w)}{2}+\tau\cdot \text{turn}(u,v,w)\right)\]
The distance is halved to avoid double charging the edges. See Appendix A for a more extensive discussion.
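A minimal Python sketch of this cost function, assuming waypoints are given as planar coordinate pairs and that \(\text{turn}(u,v,w)\) is the deviation from going straight at \(v\) (so a straight passage has zero turn cost); the function names and the example values are assumptions of the sketch.

```python
import math

def turn_angle(u, v, w):
    """Deviation from going straight at v, in radians (0 for straight, pi for a U-turn)."""
    a1 = math.atan2(v[1] - u[1], v[0] - u[0])   # heading of segment u -> v
    a2 = math.atan2(w[1] - v[1], w[0] - v[0])   # heading of segment v -> w
    d = abs(a2 - a1) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def passage_cost(u, v, w, tau=1.0, alpha_v=1.0):
    """cost(u, v, w) = alpha_v * ((d(u,v) + d(v,w)) / 2 + tau * turn(u, v, w))."""
    length = (math.dist(u, v) + math.dist(v, w)) / 2
    return alpha_v * (length + tau * turn_angle(u, v, w))

# A 90-degree turn between two unit-length segments with tau = 2:
print(passage_cost((0, 0), (1, 0), (1, 1), tau=2.0))   # 1.0 + 2 * pi/2, roughly 4.14
```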
## 2 Generalized Algorithm
In this section, we show how to adapt the algorithm of Fekete and Krupke [24, 25] to solve polygonal instances including expensive and valuable areas. More precisely, we show how to approximate the area using an embedded graph, adapt the previous algorithm to work on arbitrary embedded graphs, and add optimizations.
The generalized algorithm has seven steps: (1) Convert the polygonal instances to a discrete graph of waypoints. (2) Compute a fractional solution in this graph using linear programming. (3) Select atomic strips using the fractional solution. This step is more complicated for general meshes than for square grids. (4) Perform a matching on the atomic strips and obtain a cycle cover. (5) Improve the cycle cover. (6) Connect the cycles to form a tour. (7) Improve the tour. Steps (1), (3), (5), and (7) significantly differ from the original algorithm, and we will describe them in detail. However, if we are given a regular square grid for (1) and disable the local optimization steps (5) and (7), the behavior of the algorithm is nearly identical to the original algorithm.
The resulting trajectories are shown in Fig. 2.
### Step 1: Discretization
We apply the meshing algorithm _dmsh_ (v0.2.17) [49] with additional smoothing by _optimesh_ [48] onto the polygon to obtain a nicely fitting mesh, as can be seen in Fig. 3(a). The optimal distance between two waypoints is set to \(0.95\cdot\nicefrac{4}{\sqrt{3}}\cdot r\), where \(r\) represents the coverage radius, assuming the tool to have a circular coverage. We set \(r=1\) in the examples, but the algorithm works for any \(r\). The distance \(\nicefrac{4}{\sqrt{3}}\cdot r\approx 2.31\cdot r\) in a triangular grid, which
Figure 2: Examples of instances and solutions. The weight for the turn costs varies, resulting in different tour characteristics (high turn costs lead to a higher redundancy and longer straight lines). The trajectories are smoothed in post-processing using Bézier curves.
is approximated by the mesh, leads to parallel lines being a perfect \(2\cdot r\) apart. As _dmsh_ prefers vertices to be too far apart over too close, we counter this by reducing the distance by \(5\,\%\). Tours on this sparse grid will miss some area on turns, but we minimize turns and the missed coverage can be compensated by slightly enlarging the turns in post-processing. The coverage value is estimated by the area covered by the Voronoi-cell of the waypoint, see Fig. 3(b). We could also use the coverage of the agent at the waypoint, but this is less accurate because the coverage primarily happens when moving along the edges. Getting a mesh that yields good tours is non-trivial, and a considerable number of experiments were necessary to find a good meshing algorithm and parameters. Many aspects of the discretization also struggle with the infamous numeric issues of geometric operations, which have to be handled carefully. Instead of _dmsh_ also the _Packing of Parallelograms_-algorithm of _gmsh_ [28] can be used to obtain similarly good meshes. _gmsh_ is faster and more robust, but has more outliers regarding the quality than _dmsh_. There are many other meshing algorithms, but most of them are not suitable for our purpose as they will not allow smooth trajectories and equally sized cells but focus on different qualities. More details can be found in Appendix C.
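The basic meshing step can be reproduced with a few lines of _dmsh_, following its documented polygon-plus-edge-length interface. The snippet below is a simplified sketch of Step 1 (a single rectangular boundary, no holes, no optimesh smoothing) rather than the exact pipeline used in the paper; the boundary coordinates are made up for illustration.

```python
import dmsh

r = 1.0                                   # coverage radius of the tool
edge_length = 0.95 * 4 / (3 ** 0.5) * r   # target distance between waypoints

# outer boundary of the (here rectangular) area to cover
geo = dmsh.Polygon([[0.0, 0.0], [20.0, 0.0], [20.0, 12.0], [0.0, 12.0]])
points, cells = dmsh.generate(geo, edge_length)

# 'points' are the waypoints P; the triangle 'cells' induce the edge set E.
edges = {tuple(sorted((c[i], c[(i + 1) % 3]))) for c in cells for i in range(3)}
print(len(points), len(edges))
```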
### Step 2: Linear Relaxation
Given the graph \(G=(P,E)\), we can obtain a fractional solution for a cycle cover by using linear programming. We work on passages \(uvw=wvu\) that cover a waypoint \(v\in P\) coming from or going to the neighboring waypoints \(u,w\in N(v)\). For every passage \(uvw\), the variable \(x_{uvw}\geq 0\) denotes how often the passage is used. Additionally, we use the variable \(s_{v}\geq 0\) that denotes skipping the waypoint and paying for its coverage loss.
\[\min\ \sum_{v\in P}\Big(\text{val}(v)\cdot s_{v}+\sum_{u,w\in N(v)}\text{cost}(u,v,w)\cdot x_{uvw}\Big) \tag{2.2}\]
\[\text{s.t.}\quad\sum_{u,w\in N(v)}x_{uvw}+s_{v}\geq 1\qquad\forall v\in P \tag{2.3}\]
\[2\cdot x_{wvw}+\sum_{u\in N(v),\,u\neq w}x_{uvw}\ =\ 2\cdot x_{vwv}+\sum_{u\in N(w),\,u\neq v}x_{vwu}\qquad\forall vw\in E \tag{2.4}\]
The objective in Eq. (2.2) simply minimizes the missed coverage value and touring costs. Equation (2.3) enforces a waypoint either to be covered or skipped, and Eq. (2.4) enforces a consistent flow, i.e., every edge is used equally from both sides. Examples for fractional solutions covering the whole area or for partial coverage are given in Fig. 4(a) resp. Fig. 4(b).
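The relaxation can be written down almost verbatim with an LP modeling library. The following sketch uses PuLP and assumes that waypoints are integer indices, that `neighbors` is an adjacency dict, and that `val` and `cost` are callables as in the preliminaries; these names and the choice of the default CBC solver are assumptions of the sketch, not part of the paper.

```python
import itertools
import pulp

def solve_fractional_cycle_cover(points, neighbors, val, cost):
    """LP relaxation (2.2)-(2.4): fractional passage variables x and skip variables s."""
    prob = pulp.LpProblem("fractional_cycle_cover", pulp.LpMinimize)
    x, s = {}, {}
    for v in points:
        s[v] = pulp.LpVariable(f"s_{v}", lowBound=0)
        # one variable per unordered passage u-v-w (u == w is a U-turn at v)
        for u, w in itertools.combinations_with_replacement(sorted(neighbors[v]), 2):
            x[u, v, w] = pulp.LpVariable(f"x_{u}_{v}_{w}", lowBound=0)

    def passages_at(v):
        return [(u, w) for (u, vv, w) in x if vv == v]

    # objective (2.2)
    prob += pulp.lpSum(val(v) * s[v] for v in points) + pulp.lpSum(
        cost(u, v, w) * x[u, v, w] for (u, v, w) in x)

    # coverage constraints (2.3)
    for v in points:
        prob += pulp.lpSum(x[u, v, w] for u, w in passages_at(v)) + s[v] >= 1

    # flow-consistency constraints (2.4): every edge is used equally from both sides
    def usage(v, w):   # how often passages through v use the edge vw
        terms = []
        for a, b in passages_at(v):
            mult = (a == w) + (b == w)      # a U-turn w-v-w uses the edge twice
            if mult:
                terms.append(mult * x[min(a, b), v, max(a, b)])
        return pulp.lpSum(terms)

    for v in points:
        for w in neighbors[v]:
            if v < w:                       # one constraint per undirected edge
                prob += usage(v, w) == usage(w, v)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return ({k: var.value() for k, var in x.items()},
            {v: var.value() for v, var in s.items()})
```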
### Step 3: Atomic Strips
In the next step, we want to compute a cycle cover, using the fractional solution of the previous step as a hint. If the costs depended only on the distance, the cycle cover could efficiently be computed by a minimum-weight perfect matching. For this, we would replace every waypoint by two vertices and connect them to all other vertices with the corresponding distance, efficiently computable by Dijkstra's algorithm. To implement partial coverage, we would add an edge with the corresponding value of the coverage loss between the two vertices of a waypoint. The minimum-weight perfect matching would then either enforce every waypoint to have an incoming and an
Figure 4: Fractional solutions in red for full-coverage (a) and partial coverage (b). The thickness indicates the fractional values.
Figure 3: To convert a polygonal instance into a graph, we first mesh the polygon (a) and use the coverage value of the Voronoi cells (b) to approximate the coverage value of each waypoint.
outgoing trajectory, i.e., be in a cycle, or only use the internal edge and skip the waypoint.
With turn costs, the cycle cover problem gets NP-hard, but Fekete and Krupke [24] showed that we can use the fractional solution of the previous step to estimate in which orientation we go through a waypoint, and move the corresponding turn costs to the edge weights. In square grids, this technique can be shown to yield a 4-approximation, and a 6-approximation in triangular grids. This can be imagined as replacing every waypoint by an epsilon-length segment, as in Fig. 5, whose orientation is most used in the fractional solution. We call these epsilon-length segments _atomic strips_. Computing a minimum-weight perfect matching on the endpoints yields the optimal cycle cover that includes all these segments. If the segments have been chosen correctly (which is NP-hard), the minimum-weight perfect matching actually corresponds to an optimal cycle cover on the waypoints.
Meshes make the selection of these atomic strips more complicated, as there can be more than just two or three sensible orientations. A useful property of the atomic strips is that the larger the turn is, the more orientations are optimal. For a U-turn, every orientation is optimal. The straighter a passage, the more important a good orientation becomes; but often these cases are easy to guess from the fractional solution. Therefore, it is sensible to limit the potential orientations to the orientations of incident edges, i.e., neighbors. We weight each orientation by how well the passages of the fractional solution fit to it and choose the one with the highest sum.
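One plausible way to implement this orientation choice is sketched below: every direction towards a neighbor is a candidate orientation, and each candidate is scored by the fractional passage values weighted by a simple alignment measure. The particular fit measure (average absolute cosine with the passage's entry and exit directions) and all names are choices of the sketch; the paper does not fix a specific weighting.

```python
import math

def unit(p, q):
    """Unit vector pointing from p to q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    n = math.hypot(dx, dy)
    return dx / n, dy / n

def strip_orientation(v, neighbors, frac_x, coord):
    """Pick the atomic-strip orientation at waypoint v from the fractional solution.

    Candidates are the directions towards the neighbors of v. Each candidate is
    scored by summing, over all fractional passages u-v-w, the passage value
    weighted by how well the candidate aligns with entry and exit directions.
    """
    best, best_score = None, -1.0
    for n in neighbors[v]:
        o = unit(coord[v], coord[n])
        score = 0.0
        for (u, vv, w), value in frac_x.items():
            if vv != v or value <= 0:
                continue
            d_in = unit(coord[u], coord[v])
            d_out = unit(coord[v], coord[w])
            fit = (abs(o[0] * d_in[0] + o[1] * d_in[1]) +
                   abs(o[0] * d_out[0] + o[1] * d_out[1])) / 2
            score += value * fit
        if score > best_score:
            best, best_score = o, score
    return best
```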
Connecting all waypoints with each other results in a quadratic number of edges, whose weights are non-trivial to compute. Fekete and Krupke [25] noted that it is more efficient to only connect the waypoints with their neighbors (also making the weights easy to compute), and to allow for optional atomic strips to deal with potentially necessary overlapping trajectories. An optional atomic strip can be implemented by simply adding an edge with zero weight between its endpoints, allowing it to be neutralized without additional costs. Arkin et al. [7] showed that in a square grid, every vertex is visited at most four times, limiting the number of necessary optional atomic strips. For triangular grids, the number of necessary visitations can be linear, as shown in Fig. 6, destroying the approximation factor when using this optimization. However, this is an artificial instance, and in our instances, every waypoint is usually only covered once or twice. A further challenge is that the optional atomic strips also have to match the original trajectory of the longer edges to reconstruct the actual costs. Otherwise, connecting two waypoints via optional atomic strips could be more expensive than connecting them directly. Adding a number of optional strips for any orientation would solve this problem, but it would also increase the computational complexity. Therefore, we limit the number of atomic strips per waypoint to a constant \(k\), and additionally allow every waypoint \(p\in P\) at most one atomic strip per neighbor \(n\in N(p)\). This keeps the complexity of the auxiliary graph in \(O(|P|\cdot k^{2})\).
An example for different \(k\) can be seen in Fig. 7 and the detailed implementation is described in Appendix B.
### Step 4: Matching
We are left with a weighted graph on the endpoints of the atomic strips, and we want to compute a minimal matching. There are edges between any endpoints of atomic strips belonging to neighboring waypoints in the grid. The weight corresponds to the touring costs between the two waypoints, with the corresponding orientation at the endpoints. Additionally, each atomic strip has an edge between its two endpoints. For the mandatory atomic strip, the
Figure 5: Replacing every waypoint by an atomic strip (black segments) converts the problem into a matching problem without losing the turn costs. The orientation of each atomic strip needs to be guessed from the fractional solution (Fig. 4(a)), and wrong guesses can degrade the solution.
Figure 6: Optimal tours with turn costs in a regular triangular grid can require a linear amount of passages through some waypoints (red).
weight corresponds to the opportunity loss, i.e., the assigned coverage value, when not covering it. For all others, the cost is zero to allow skipping them without additional costs. Let \(k\) be the maximal number of atomic strips at a waypoint, then the number of vertices and edges in the matching instance is in \(O(|P|\cdot k^{2})\).
We solve the corresponding minimum-weight perfect matching instance with the Blossom V algorithm of Kolmogorov [35]. The author states a worst-case complexity of \(O(n^{3}m)\), which would be prohibitive, but in practice it proves to be sufficiently fast even for large instances. Connecting the atomic strips via the matched endpoints yields a set of cycles, see Fig. 9(a), that we can connect in Step 6.
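For experimentation, the minimum-weight perfect matching can also be computed with NetworkX by negating the edge weights and asking for a maximum-weight matching of maximum cardinality. This is only a slow drop-in sketch for small instances, not the Blossom V implementation used here; the edge list and node naming are made up for illustration.

```python
import networkx as nx

def min_weight_perfect_matching(edges):
    """edges: iterable of (endpoint_a, endpoint_b, weight) on atomic-strip endpoints.

    Negating the weights turns the minimum-weight perfect matching into a
    maximum-weight matching of maximum cardinality, which NetworkX can solve.
    """
    g = nx.Graph()
    for a, b, w in edges:
        g.add_edge(a, b, weight=-w)
    matching = nx.max_weight_matching(g, maxcardinality=True)
    return {tuple(sorted(e)) for e in matching}

# Tiny example: two mandatory strips s0, s1 whose skip edges cost 10 each,
# and cheap connections between them, so the matching forms one cycle.
edges = [
    (("s0", 0), ("s0", 1), 10.0), (("s1", 0), ("s1", 1), 10.0),
    (("s0", 0), ("s1", 0), 1.0), (("s0", 1), ("s1", 1), 1.5),
    (("s0", 0), ("s1", 1), 2.0), (("s0", 1), ("s1", 0), 2.0),
]
print(min_weight_perfect_matching(edges))   # picks the two cheap connection edges
```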
### Step 5: Local Optimization
Before we continue to connect the cycles to a single tour, we can optimize the cycle cover. For this, we select a small but expensive part of the solution and compute a (nearly) optimal solution via mixed integer programming. This can be repeated multiple times until a satisfying solution is obtained, see Fig. 8. Note that it is possible to solve many instances with 1000 vertices in regular square grids to optimality, as described in [41, 25]. Also, for irregular grids, small instances with less than 100 vertices can usually be solved within seconds. We denote the desired number of vertices for local optimization by \(t\).
We select the expensive area to be optimized by choosing an expensive root and selecting the first \(t\) vertices of a breadth-first search. The expense of a waypoint in a solution is defined as the cost of the passages covering it, or the corresponding opportunity loss if it is not used. To make the selection more robust, we also include the expenses of all direct neighbors by summing them.
By simply replacing the fractional variables with integral variables, the linear program in Section 2.2 yields a corresponding MIP. In this MIP we fix all variables of the given solution except the variables corresponding to the \(t+1\) selected waypoints. Of course, we do not need to include the fixed waypoints in this MIP at all but only need to place the corresponding constants into Eq. (2.4). This ensures that the local solution remains consistent with the fixed exterior solution. After optimizing the local MIP, we replace the part in the solution and exclude the root and its neighbors from being selected as root in further iterations. This is necessary because the expensive parts can already be optimal (within their local area) and should not be optimized again.
A useful property of the MIP is that the optimization process usually is faster, if our (local) solution is already (nearly) optimal. If we provide the MIP-solver with the corresponding start solution, it only has to find a matching lower bound. Using the running time and the actual improvements, one could improve the selection of the next area, or dynamically increase it.
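The region selection itself is a plain breadth-first search from the most expensive waypoint. A small sketch, assuming per-waypoint `expense` values, a `neighbors` adjacency dict, and an `excluded` set as inputs (all of these names are choices of the sketch):

```python
from collections import deque

def select_region(expense, neighbors, excluded, t):
    """Pick roughly t waypoints around the most expensive non-excluded root via BFS."""
    # smooth the expense by adding the expenses of the direct neighbors
    smoothed = {v: expense[v] + sum(expense[u] for u in neighbors[v]) for v in expense}
    candidates = [v for v in smoothed if v not in excluded]
    if not candidates:
        return set()
    root = max(candidates, key=smoothed.get)
    region, queue = {root}, deque([root])
    while queue and len(region) < t:
        v = queue.popleft()
        for u in neighbors[v]:
            if u not in region:
                region.add(u)
                queue.append(u)
                if len(region) >= t:
                    break
    return region
```

After the local MIP has been solved, the chosen root and its neighbors would be added to `excluded` so that the same area is not selected again.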
Figure 8: By optimizing local areas (red) of cycle covers (blue) with mixed integer programming, we can improve the initial cycle cover. The solution provided by the previous approach without optimizations is shown in (a). We then select an expensive area (b) and optimize it to near optimality, resulting in (c). After five such iterations, we end with a visibly improved solution (d).
Figure 7: Example of atomic strip selection for different \(k\). The atomic strips are displayed in yellow (optional) and red (mandatory). The grid is displayed in black and the fractional solution in blue.
By choosing disjoint areas, this optimization approach also allows efficient parallelization. However, we leave such optimizations to future work, and simply perform \(i\) iterations for a fixed area size \(t\).
### Step 6: Connecting Cycles
Now, we only need to connect the cycles to form a tour. For adjacent cycles, this is quite simple and involves only minimal extra costs: simply go through every edge that connects two cycles and perform a merge via the least expensive one, see Fig. 9(b). A simple optimization would be to use two parallel edges once, instead of one edge twice, but this is also done automatically in Section 2.7. Things get more complicated if the cycles are farther apart. It could be that the connection costs actually outweigh the touring costs of the corresponding cycles. If the area covered by the cycle is not valuable enough, we are better off simply removing the cycle, see Fig. 10.
To decide which cycles to keep, we first need to know how much each cycle is worth. We estimate the value of a cycle by the sum of the values of its covered waypoints. If a waypoint occurs in different cycles, only the first cycle gets its value. This can happen if two cycles cross and cannot be connected due to turn costs. Because this rarely happens, the estimated cycle values are accurate if the values of the waypoints are accurate. Otherwise, the value of a cycle can be underestimated and result in a slightly lower solution quality.
Next, we need to know how expensive it is to connect any two cycles. This can be achieved with a Dijkstra-variant on the edge graph. Working on the edge graph of the grid allows us to include not only the distance of the path, but also the turn costs between any two edges. To make things simpler, we use a directed version where we also include the direction through which we pass the edge, as can be seen in Fig. 11.
The distance cost of using an edge can now simply be assigned to the outgoing arc in the edge graph. If we let \(k\) be the maximum degree in the grid, then we have at most \(O(|P|\cdot k)\) vertices and \(O(|P|\cdot k^{2})\) edges in the auxiliary graph. Using Dijkstra's algorithm, we can compute the least expensive path between any two edges (ignoring possibly collected coverage value) in \(O(|P|\cdot k^{2}\log|P|)\). The costs are symmetric, so the path is optimal in both directions. Still missing are the costs involved in merging a (doubled) path with a cycle. It would be expensive to check all combinations of edges incident to the two cycles. Instead, we can select one of the cycles and initialize all incident edges in the Dijkstra algorithm with the final connection costs to it. We now only have to find the least expensive incident edge to the target cycle using the distances already computed by Dijkstra's algorithm.
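The following sketch illustrates this Dijkstra variant on the directed edge graph; `edge_cost`, `turn_cost` and the state encoding are our own illustrative choices:

```python
import heapq

def cheapest_connections(grid, edge_cost, turn_cost, sources):
    """Dijkstra on the directed edge graph.  A state (u, v) means the tool is
    currently traversing the edge u -> v; continuing onto v -> w costs the
    length of vw plus the turn cost at v.  `sources` maps seed states (e.g.
    the edges incident to one cycle, initialized with their final connection
    costs) to their starting distances."""
    dist = dict(sources)
    heap = [(d, s) for s, d in sources.items()]
    heapq.heapify(heap)
    while heap:
        d, (u, v) = heapq.heappop(heap)
        if d > dist[(u, v)]:
            continue                          # stale heap entry
        for w in grid[v]:
            nd = d + edge_cost(v, w) + turn_cost(u, v, w)
            if nd < dist.get((v, w), float("inf")):
                dist[(v, w)] = nd
                heapq.heappush(heap, (nd, (v, w)))
    return dist
```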
With these two pieces of information, we can compute a prize-collecting Steiner tree (PCST) on the cycles and their connections. The resulting tree corresponds to the worthwhile cycles and how to connect them. Computing an optimal PCST is \(\mathsf{NP}\)-hard, but
Figure 11: Converting the grid (gray) into a directed edge graph to compute a shortest path with turn costs inside. The distance and turn costs are assigned to the blue arcs.
Figure 10: If the valuable areas (green) are more distanced, the cycles (blue) should only be connected if the value is high enough in relation to the costs. In (a) the right area’s value is not high enough and its cycle gets removed. In (b) the value is increased and the cycle gets connected.
Figure 9: The matching of the atomic strips of Fig. 5 yields a set of cycles (a). In this case, a red and a blue cycle. It can also directly decide not to cover some waypoints, but in this case the coverage values are very high. This cycle cover is then connected to a tour (b) via an edge (red).
the cycle covers obtained here are usually small enough to be solved optimally using integer programming. Otherwise, an implementation [31] of the 2-approximation by Goemans and Williamson [29] can be used. If there are some zero or negative connection costs, we can directly connect the corresponding cycles before we compute the PCST. Using a PCST instead of just greedily connecting cycles potentially also integrates cycles that are not valuable enough on their own but become valuable in combination with other cycles.
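As a sketch, the selection could for instance be done with the `pcst_fast` package (we do not claim this is the implementation of [31]); the interface shown here, as well as all variable names, is an assumption on our side:

```python
import numpy as np
from pcst_fast import pcst_fast  # assumed interface of the pcst_fast package

def worthwhile_cycles(cycle_values, connection_costs):
    """cycle_values[i]: estimated value (prize) of cycle i.
    connection_costs: {(i, j): cost of the cheapest doubled connecting path}.
    Returns the kept cycles and the chosen connections."""
    pairs = list(connection_costs.items())
    edges = np.array([ij for ij, _ in pairs], dtype=np.int64)
    costs = np.array([c for _, c in pairs], dtype=np.float64)
    prizes = np.array(cycle_values, dtype=np.float64)
    # root = -1 (unrooted), one output component, Goemans-Williamson pruning
    kept_cycles, used_edges = pcst_fast(edges, prizes, costs, -1, 1, "gw", 0)
    return kept_cycles, [pairs[e][0] for e in used_edges]
```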
Using the PCST, we now iteratively merge cycles (using the doubled paths computed with the Dijkstra approach) in a depth-first search starting from an arbitrary cycle in the PCST. Whenever we merge two cycles, the path creates additional docking points that may be cheaper than the originally computed connecting paths. However, we do not need to recompute the whole Dijkstra tree, but can simply reduce the costs for the corresponding edges and let the reduced costs propagate. Caveat: when joining a cycle with the doubled path, some passages of the cycle are actually replaced. The shortest paths originating from such removed passages become invalid. As this rarely occurs and can be detected, recomputation should only be performed if such an invalid shortest path is about to be used.
### Step 7: Local Optimization
After connecting the cycles to form a tour, the connecting parts are often highly redundant, as can be seen in Fig. 12. Luckily, we can extend the local optimization approach of Section 2.5 to connected tours. The challenge is to make sure that the tour remains connected after local optimizations. The MIP used does not enforce connectivity and may disconnect the tour again. A naive approach is to only accept local improvements that preserve the connectivity and discard all others. This is of course quite restrictive, and we can do better.
Subtour elimination in the MIP is more difficult than for, e.g., the Traveling Salesman Problem: not only are all visitations optional, but two tours can cross without being connected. Simply enforcing that two edges have to leave a connected component, therefore, does not yield the desired result. In [41] we actually have a corresponding MIP. Because we already start with a tour and know that we have to connect an interior solution (inside the small area to be optimized) to the fixed exterior solution, we can devise a simpler separation constraint.
There are two types of subtours: those that are completely within the area and those that are only partially within the area. We can only get an infeasible solution with subtours of the second type if the local solution connects to the fixed exterior solution incorrectly. However, both types can be handled in the same way.
We either want a subtour \(C\) to dissolve or to become part of the connected tour. For this, either a vertex passage of the subtour has to become unused, or a vertex passage leaving the subtour has to be used. We select an arbitrary vertex passage of the first kind and demand that the sum over the passages of the second kind is at least as large as it. Note that this assumes the existence of an external, fixed solution, and is otherwise not exact.
Let \(X_{A}\) be the vertex passage variables that are contained in the area \(A\) and can be modified by the local optimization. This includes all variables \(x_{uvw}\) with \(u,v,w\in A\). If \(u\) or \(w\) is not in \(A\), the edge \(vu\) resp. \(vw\) must be used in the solution, i.e., the edge connects the changeable interior solution to the fixed exterior solution. All other variables are fixed.
Let \(X_{C}\) be the vertex passage variables that are used by the subtour \(C\). Let \(X^{\prime}_{C}\) be the vertex passage variables that share one edge with the subtour \(C\) but are not in \(X_{C}\). These are the vertex passages that leave the trajectory of \(C\). We can now state a constraint that eliminates \(C\), if it has been created by an optimization on \(X_{C}\).
\[\sum_{x\in X^{\prime}_{C}\cap X_{A}}x\geq x_{c}\quad x_{c}\in X_{C}\cap X_{A},C\text{ is subtour} \tag{2.5}\]
There exist more efficient options for connecting, e.g., more distant subtours, but this hardly applies when optimizing only small areas. In case the MIP does not yield a connected solution for \(A\) within a fixed number of iterations, we discard the infeasible solution and do not change \(A\) in this iteration. Applying this approach multiple times can significantly improve the solution, as can be seen in the example in Fig. 13.
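A sketch of how such a cut could be added with gurobipy (variable names are ours, and the membership test for \(X_A\) is simplified to passages lying fully inside the area):

```python
from gurobipy import quicksum

def add_subtour_cut(model, x, area, subtour_passages, leaving_passages):
    """Add Eq. (2.5) for a detected subtour C: pick one of its vertex passages
    inside the optimization area and require the passages leaving C (within
    the area) to sum to at least that variable, so C either dissolves or gets
    connected.  The cut is added and the local MIP is re-optimized; the
    experiments below allow at most 10 such cutting plane iterations."""
    area = set(area)
    inside = lambda p: all(v in area for v in p)   # simplified test for X_A
    x_c = next(p for p in subtour_passages if inside(p))
    model.addConstr(
        quicksum(x[p] for p in leaving_passages if inside(p)) >= x[x_c])
```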
Figure 12: Especially due to the connection approach of subtours, a lot of redundant coverages (red) can be created, which we aim to minimize.
## 3 Evaluation
In this section, we evaluate the performance of the algorithm on a set of benchmark instances. We first evaluate the influence of the new optimizations and then the overall performance on the benchmark instances.
We generated 500 random instances for our benchmark using unions and differences of reasonably simple and thick random polygons. The generation was supervised and the parameters were manually adjusted to create instances that mimic complex agricultural fields, architecture, groups of buildings, and other real-world scenarios. The valuable areas and areas with increased costs were also chosen by randomly placing thick polygons, with possible overlaps whose effects were summed. Examples of these instances can be seen in Fig. 2. The selection of these examples was random and, thus, should reflect the distribution in the 500 instances.
All experiments were run on Ubuntu workstations with AMD Ryzen 7 5800X (\(8\times 3.8\,\)GHz) CPU and 128 GB of RAM. The code is run with Python 3.8.8 and uses Gurobi 9.1.2. More details can be found in the repository [https://github.com/d-krupke/ALENEX24-partial-coverage-path-planning](https://github.com/d-krupke/ALENEX24-partial-coverage-path-planning).
### Local Optimization
In the first experiment, we evaluate the local optimization steps that were not considered in the original algorithm. The important questions are: (1) How much can we improve the solution using this optimization? We have to make sure that the improvement is worth the additional complexity. (2) Should we focus on optimizing the cycle covers or the tours? While the tours are the final result, the cycle covers are less expensive to optimize. (3) How much influence do the number of iterations and the size of the area have? The runtime increases linearly with the number of iterations but exponentially with the size of the area. However, the \(\mathsf{NP}\)-hard nature of the problem also implies that a larger area cannot be fully substituted by more iterations.
To answer these questions, we computed solutions that performed a local optimization on either cycle cover or tour with 0, 10, 25, 50, 100 and 200 iterations and an area of 50 vertices. Additionally, we computed solutions that performed 50 iterations of the local optimization on either cycle cover or tour, but with a varying area of 0, 10, 25, 50, 75 and 100 vertices.
The results in Fig. 14(a) show that the optimizations with an area size of 50 vertices yield a visible improvement for partial coverage in both steps. 10 iterations on the cycle cover already reduce the optimality gap (in comparison to the lower bound) by around 10 %. The further iterations lose effectiveness, as can be expected because we prioritize the expensive areas, but an improvement remains visible. While the optimization is successful on cycle covers, it is even more impressive on tours. Here, the first 10 iterations lower the optimality gap by more than 20 %. The further iterations also
Figure 14: Influence of the number of iterations (a) and the optimization area (b) for the local optimization depending on whether it is applied on cycle covers (Step 5) or tours (Step 7). The 0-bars are the baseline without local optimization and show the initial gap to the fractional solution (upper bound on optimality gap), the other bars show the optimality gap after the local optimization. This is the average over all 500 instances, the error bars show the standard deviation.
Figure 13: Multiple steps of the tour optimization. The optimized area and the changed parts are highlighted in red. In some steps, no changes are made because the solution is (locally) optimal in the area.
remain stronger than for cycle cover, but their improvements still decline quickly. This implies that the cycle covers are already nearly optimal, but the connection of the cycles to a tour is not very efficient. The local optimization on tours can easily find (locally) suboptimal parts in the connected solution and improve them visibly.
The results in Fig. 14(b) for varying area are surprisingly similar: doubling the area has a similar effect as quadrupling the number of iterations. One difference is that for optimizing cycle covers, the larger areas are more important than for tours. Optimizing only small areas with 10 vertices barely improves the solution. For tours, on the other hand, such small areas can already make a significant difference. This is very useful to know because optimizing 10 vertices is extremely fast and could still be done by brute force. Thus, we can do many iterations with such small areas in a short time. Larger areas still have their advantage, and 50 iterations of size 100 are roughly as effective as 200 iterations of size 50.
The runtime differences for the number of iterations and the size of the area can be seen in Fig. 15. Surprisingly, the runtime for larger areas looks nearly linear (caveat: the x-axis for the number of iterations is exponentially scaled, while the one for the area size is almost linear). However, this data should be used with caution because it can be skewed. The implementation is only optimized for quality, not for runtime. The connectivity detection, necessary to make sure that we did not accidentally disconnect the tour and have to insert constraints, is especially inefficient. Instead of only analyzing the changed part, it always checks the whole solution with a procedure written in pure Python. This gives the tour variant a significant overhead which could be eliminated. The tour variant will still remain slower because the solution frequently gets disconnected and needs to be reconnected using additional constraints. These constraints become less efficient for larger areas because the solution develops more options to evade them. For larger optimization areas, additional constraints should be developed and used.
For the next experiment, we use 25 iterations of size 50 for both steps. For tours, we use at most 10 cutting plane iterations.
### Optimality Gap
To evaluate the overall performance of the algorithm, we again run the algorithm on the 500 instances. The plot in Fig. 16(a) shows how the quality of the solution develops with the size of the instances. The quality is again measured by the difference of the objective value to the lower bound provided by the fractional solution (see Section 2.2). We can see that the objective is around 10 % to 15 % above the fractional solution, as we have already seen in the previous experiments. However, we make the new observation here that the quality slightly degrades for larger instances. Based on the tool radius of 1.0, the larger (graph) instances have several thousand vertices. This degradation could be converging, but the data is relatively noisy and covers too small a range to draw firm conclusions. The gap is generally smaller for lower turn costs, but this is not surprising because the turn costs make the problem combinatorially more complex. This influences at least the quality of the fractional solution, which provides us with the lower bound. Whether the actual solutions indeed have a larger optimality gap cannot be determined from this data alone.
### Runtime
The primary focus of this paper is on the quality of the solutions, but the runtime is also an important factor. The original algorithm was able to solve instances with over 300 000 vertices, though this could take several hours and require a powerful workstation. The instances considered in this paper only have a few thousand vertices, as the implementation is only optimized for quality and not for runtime. Despite being relatively small, the instances are still non-trivial, as seen in Fig. 2. These instances require a runtime of a few minutes, as can be seen in Fig. 16(b). Improving the efficiency of the prototype is possible in multiple places. However, there are inherent challenges when compared to the original algorithm. First, the original algorithm benefits from the simplicity of square grids, which have only three types of passages. Second, it utilizes basic integer arithmetic, while the algorithm in this paper requires floating-point arithmetic, potentially affecting convergence behavior.
Figure 15: Runtime for more iterations or larger areas in the local optimization. Shown as the mean runtime in seconds over all 500 instances.
## 4 Conclusion
In this paper, we showed how to adapt a constant-factor approximation algorithm for coverage tours on grid graphs to arbitrary meshes derived from intricate, polygonal environments. While the approximation factor may be lost in the process (if the mesh does not happen to be a perfect square grid), we demonstrated that the algorithm still yields low optimality gaps in practice. Furthermore, we showcased its versatility in handling partial coverage and accommodating heterogeneous passage costs, offering the flexibility to trade off coverage quality and time efficiency. This adaptation paves the way to compute efficient coverage paths with a robust theoretical foundation for real-world robotic applications.
Potential future work includes multi-robot variants of the problem, in which a fixed number of robots may be used. The current approach should be extendable by only adapting the connection step (Step 6) if only the overall sum of costs is of interest. If the individual costs are of interest, the proposed approach could be generalized by not only deciding on an orientation (Step 3) based on the linear relaxation, but extending the linear relaxation to multiple robots (essentially copying it for every robot), and additionally deciding which robot should be used. A practically relevant but algorithmically challenging variant is to maximize the coverage quality for a given budget. One problem in our approach, among others, is the reliance on the fractional relaxation, which is known to be weak for budget constraints. However, the linear relaxation could potentially be improved by additional constraints or by performing some branching steps.
## Acknowledgments
This work has been supported by the German Research Foundation (DFG), project "Computational Geometry: Solving Hard Optimization Problems" (CG:SHOP), grant FE407/21-1.
|
2309.14186 | Value-transforming financial, carbon and biodiversity footprint
accounting | Transformative changes in our production and consumption habits are needed to
enable the sustainability transition towards carbon neutrality, no net loss of
biodiversity, and planetary well-being. Organizations are the way we humans
have organized our everyday life, and much of our negative environmental
impacts, also called carbon and biodiversity footprints, are caused by
organizations. Here we show how the financial accounts of any organization can
be exploited to develop an integrated carbon and biodiversity footprint
account. As a metric we utilize spatially explicit potential global loss of
species which, we argue, can be understood as the biodiversity equivalent, the
utility of which for biodiversity is similar to what carbon dioxide equivalent
is for climate. We provide a global Biodiversity Footprint Database that
organizations, experts and researchers can use to assess consumption-based
biodiversity footprints. We also argue that the current integration of
financial and environmental accounting is superficial, and provide a framework
for a more robust financial value-transforming accounting model. To test the
methodologies, we utilized a Finnish university as a living lab. Assigning an
offsetting cost to the footprints significantly altered the financial value of
the organization. We believe such value-transforming accounting is needed in
order to draw the attention of senior executives and investors to the negative
environmental impacts of their organizations. | S. El Geneidy, S. Baumeister, M. Peura, J. S. Kotiaho | 2023-09-25T14:47:28Z | http://arxiv.org/abs/2309.14186v1 | # Value-transforming financial, carbon and biodiversity footprint accounting
###### Abstract
Transformative changes in our production and consumption habits are needed to enable the sustainability transition towards carbon neutrality, no net loss of biodiversity, and planetary well-being. Organizations are the way we humans have organized our everyday life, and much of our negative environmental impacts, also called carbon and biodiversity footprints, are caused by organizations. Here we show how the financial accounts of any organization can be exploited to develop an integrated carbon and biodiversity footprint account. As a metric we utilize spatially explicit potential global loss of species which, we argue, can be understood as the biodiversity equivalent, the utility of which for biodiversity is similar to what carbon dioxide equivalent is for climate. We provide a global Biodiversity Footprint Database that organizations, experts and researchers can use to assess consumption-based biodiversity footprints. We also argue that the current integration of financial and environmental accounting is superficial, and provide a framework for a more robust financial value-transforming accounting model. To test the methodologies, we utilized a Finnish university as a living lab. Assigning an offsetting cost to the footprints significantly altered the financial value of the organization. We believe such value-transforming accounting is needed in order to draw the attention of senior executives and investors to the negative environmental impacts of their organizations.
## Main Text
### Introduction
Biodiversity loss is directly driven by human land and sea use and their changes, direct exploitation of nature, climate change, pollution, and introduction of invasive alien species(1). These direct drivers result from various underlying indirect root causes such as human population dynamics, consumption patterns, trade, and governance, which are in turn underpinned by societal values and behaviours(1-3). Managing the direct drivers of biodiversity loss alone will not produce sustained outcomes sufficient to bend the curve of biodiversity loss(4, 5). Instead, we must direct our efforts to the root causes such as consumption and trade.
Everyday life and the economics of societies are organized through organizations, be they private businesses, public services, or non-governmental organizations. The negative environmental impacts of nearly any organization extend through international trade and supply chains to all over the planet(6-8). While carbon footprint assessments are abundant(9-11) and a few biodiversity footprint assessments have been attempted(12-15), we take the approach a significant step further by showing how the financial accounts of any organization, coupled with global trade databases and a spatially explicit global biodiversity footprint indicator, can be used to estimate the potential global loss of species. We argue that this indicator can be understood as the biodiversity equivalent, the utility of which for biodiversity is similar to what CO\({}_{2}\) equivalent is for climate. The approach we have developed here allows financial and environmental accounting to be integrated to the extent that with some adjustments to public policy(16) (e.g. taxation or mandatory offsetting of the footprints) the financial value of the accounts can be transformed based on the environmental impacts.
### Financial accounting
Decision-making in organizations is ultimately guided by information obtained from financial accounts(17-21). The International Accounting Standards Board defines the objective of financial reporting to be to provide financial information to management, investors, regulators and the general public(22). Financial accounting links the company activities with performance and distils all this information into a single unit of account: money(20). Financial accounts define what are included and excluded in assets and liabilities and how profit and loss are calculated, which consequently defines the size, structure and performance of the organization(18). Unfortunately,
conventional financial accounting neglects the complex web of societal and environmental impacts organizations have beyond their socially constructed, and thus only presumed, boundaries([18, 20]).
In the economics literature, these neglected impacts are called externalities([23]). Externalities are something that happen to a seemingly uninvolved third party, such as the environment, when actions are taken to meet the needs of the so-called true stakeholders, such as the shareholders. Conventional financial accounting overlooks environmental externalities([21, 24, 25]) and is therefore ill-equipped to be conductive to the sustainability transition. To facilitate a transformative change to more sustainable production and consumption patterns in organizations, we need to reconfigure the financial accounting to internalize the environmental impacts.
### Environmental accounting
Environmental accounting should be a fundamental part of organizational decision-making. Unfortunately, environmental accounting seems to remain isolated within organizations and even when it is integrated with other reporting practices like financial reports it can still remain unexploited in management decisions([17, 19, 21, 24]). It has even been argued that the integrated reports of companies merely exploit the concept of sustainability in order to buttress the dominant financial discourses of development and growth([26]).
Basic principles for environmental accounting have been set by several standards such as the Sustainability Reporting Standards of the Global Reporting Initiative([27]) and the International Financial Reporting Standards' Sustainability Disclosure Standard([28]). Some standards are set to provide guidance for specific dimensions of environmental accounting, for example the Greenhouse Gas Protocol([29]) or the Natural Capital Protocol([30]). In addition, the International Integrated Reporting Framework([31]) has developed a framework that brings financial, social and environmental information under a single report.
The qualitative characteristics set by the different sustainability reporting frameworks somewhat align with the basic principles of conventional financial accounting standards([22, 32]). However, it seems that scrutiny of the latter is still much more profound than of the former([24, 33]). For management decisions to be truly conductive to sustainability transition, the scrutiny of the two should be equal([19]).
### Integrating financial and environmental accounting and the results from the living lab
To integrate financial and environmental accounts we developed a five-step framework for value-transforming integrated financial-environmental accounting that can be replicated in any organization with financial accounts. We will focus on how environmental impacts, more specifically carbon and biodiversity footprints, can be estimated, communicated and prioritized by utilizing financial accounts. While the first steps towards integrating environmental information into financial accounts have already been taken([21, 24, 34, 35, 36, 37]), generalizable applications remain to be articulated.
We demonstrate the utility of the framework by assessing the carbon and biodiversity footprint of our living lab, the University of Jyvaskyla in Finland, and construct a value-transforming integrated financial-environmental impact statement. In each step of the framework, we present general principles and then apply them to the living lab.
### STEP 1: Choose the report of financial accounts
In environmental impact assessment through financial accounts, the boundaries of the assessment are set by the financial accounts. Thus, the first step is to choose an appropriate report of the financial accounts. Since we are interested in the environmental impacts of consumption, we focus
on financial expenses exclusively and disregard revenues and other financial flows. The revenue of the organization might be of interest, however, if the analysis is expanded to consider handprints, that is, potential positive environmental impacts(38) that the organization produces. Some expenses are deemed not relevant regarding environmental impacts, for example staff salaries.
Reports with varying level of detail can be produced from the financial accounts. While a more detailed financial report might reduce error and provide more granulated data, using a more cursory financial report can limit the necessary work, especially during the harmonization of accounts (step 3), and makes future automated annual calculation more feasible. The most important consideration is that the chosen report provides account classification that retains enough detail to remain fit for the purpose. A typical financial impact statement where expenses are provided in very broad categories, for example in materials and services, is not sufficiently detailed for the evaluation of environmental impacts.
Here we utilize financial reports of the University of Jyvaskyla containing 123 different expense categories for the years 2019-2021. The reports were procured from the university administration.
### STEP 2: Choose environmental accounting methods and indicators
The hybrid EEIO-LCA methodology combines environmentally extended input-output (EEIO) analysis with life cycle assessment (LCA) and can be utilized to account the environmental impacts of organizations(36, 39, 40, 41). For this paper, it is enough to state that EEIO analysis connects the inputs an organization needs (measured as financial consumption revealed by the financial reports) with the environmental impacts of those inputs upstream in the supply chain. For certain financial accounts, such as energy and travel-related accounts, the LCA can reveal the environmental impacts more accurately by utilizing process-based impact factors obtained from service providers or from scientific literature. Hybrid EEIO-LCA combines the strengths of EEIO analysis and LCA approaches, and we anticipate that in the future we will see a stronger merger of the two.
Of the direct drivers of biodiversity loss(3), the EEIO and LCA databases generally cover land and water use (i.e. water stress), pollution, and greenhouse gas emissions. There are several sub-categories within each of the included drivers in the databases. For example, land use is divided into several land use types. The quantity of each of the drivers alone is not sufficient for the evaluation of the biodiversity footprint. However, by further integrating the EEIO or LCA analysis with other existing databases and frameworks, such as LC-IMPACT(42) or ReCiPe(43), the quantity of the driver can be converted to biodiversity loss.
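In our own shorthand notation (not that of the databases), this conversion can be summarized as

\[\mathrm{BDe}\;=\;\sum_{c}\sum_{d} q_{c,d}\,\mathrm{CF}_{c,d},\]

where \(q_{c,d}\) is the quantity of driver \(d\) (e.g. km\(^{2}\) of a land use type, m\(^{3}\) of water consumed, or kg of a pollutant) exerted in country \(c\), and \(\mathrm{CF}_{c,d}\) is the corresponding characterization factor (potentially disappeared fraction of species per unit of pressure) taken from a framework such as LC-IMPACT.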
Carbon footprints are generally expressed in carbon dioxide equivalents (CO\({}_{2}\)e). Emissions other than carbon dioxide such as methane, nitrous oxide and fluorinated gases are converted into CO\({}_{2}\)e based on their global warming potential(44). Biodiversity footprints can be measured with several indicators(14, 45, 46, 47). We opted for the global potentially disappeared fraction of species(42) for one specific reason: as an indicator, it has desirable characteristics much like CO\({}_{2}\)e in that it provides a common currency for measuring biodiversity loss across the planet. For this reason we refer to the indicator as biodiversity equivalent (BDe). In essence, BDe tells what fraction of the species of the world are at risk of going extinct globally if for example 1 km\({}^{2}\) of land is continuously exploited by a specific driver of biodiversity loss, such as land use for intensive forestry(42), in any given country. The same amount of area occupied by the same driver causes less global biodiversity loss in relatively species poor areas than what it causes in relatively species rich areas. On the other hand, if both areas experienced a loss of the same amount of BDe, this would indicate both areas experienced the same global biodiversity loss. Different species would be lost in different parts of the world, but the fraction of globally potentially lost species would be the same.
Climate change and biodiversity loss are interconnected and thus should be solved together(1, 48, 49). In this regard the methodology we describe here is convenient: As climate change is one of
the drivers of biodiversity loss, assessing the carbon footprint becomes an obligatory intermediate step when assessing the biodiversity footprint.
To assess the carbon and biodiversity footprint of the consumption of the University of Jyvaskyla we utilized a hybrid EEIO-LCA methodology. We derived emission impact factors (CO\({}_{2}\)e/€) directly from the EEIO database EXIOBASE(50), amended with data from some of the service providers and with the LCA methodology (SI Appendix Dataset S5, S6). To obtain spatially explicit biodiversity loss impact factors (BDe/€) we combined EXIOBASE with LC-IMPACT (SI Appendix Table S5, S6). We provide the full dataset in [https://doi.org/10.5281/zenodo.8369650](https://doi.org/10.5281/zenodo.8369650).
### STEP 3: Harmonize the accounts
The categorization of the financial accounts of the organization is usually not directly compatible with the EEIO economic activity categorization, and the account categorizations must be harmonized. Determining a suitable match from the EEIO categorization for all financial accounts of the organization can be onerous but it helps when the chosen EEIO database has high sectorial detail. The harmonization can be done based on the chart of accounts containing information about all accounts in the general ledger of the organization.
There are generally two further key transformation operations needed: inflation adjustment and conversion of the purchaser prices in the financial accounts of the organization to the basic prices in the EEIO databases.
In the living lab we opted for the EEIO database EXIOBASE because it has relatively high sectorial detail, allowing the University of Jyvaskyla's accounts to be harmonized with it. Inflation adjustment and price conversions were calculated according to the equations presented in the Methods section (see also SI Appendix Table S7 and Dataset S3).
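A generic sketch of these two transformations is given below; the field names and index variables are illustrative, and the exact equations applied in the living lab are those in the Methods section:

```python
import pandas as pd

def harmonize(spend: pd.DataFrame, cpi_base: float, cpi_year: float,
              margin_share: pd.Series) -> pd.Series:
    """spend: one row per financial account with columns 'exiobase_sector'
    and 'eur' (purchaser prices of the reporting year).  cpi_base and
    cpi_year: price index of the EXIOBASE base year and of the reporting
    year.  margin_share: trade, transport and tax margins as a share of the
    purchaser price, per sector.  Returns spend per sector in basic prices
    of the base year."""
    deflated = spend["eur"] * (cpi_base / cpi_year)        # inflation adjustment
    margins = margin_share.reindex(spend["exiobase_sector"]).values
    basic = deflated * (1.0 - margins)                     # purchaser -> basic prices
    return basic.groupby(spend["exiobase_sector"]).sum()
```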
### STEP 4: Calculate results
For the carbon footprint assessment, the monetary consumption (€) in each of the account categories of the organization is first multiplied by the category-specific emission impact factor (CO\({}_{2}\)e/€) derived from EXIOBASE. Carbon footprints that have been assessed by the service providers or with the LCA methodology can be directly imported into the specific account category. The total carbon footprint is then calculated by summing across all the account categories.
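A minimal sketch of this calculation (illustrative names; the actual computation follows the Methods section):

```python
def carbon_footprint(spend_eur, co2e_per_eur, lca_overrides=None):
    """spend_eur: {account: consumption in €}.  co2e_per_eur: {account: kg
    CO2e per €} EEIO impact factors.  lca_overrides: {account: kg CO2e} for
    accounts assessed directly by service providers or with LCA."""
    lca_overrides = lca_overrides or {}
    total = 0.0
    for account, eur in spend_eur.items():
        if account in lca_overrides:
            total += lca_overrides[account]        # process-based result
        else:
            total += eur * co2e_per_eur[account]   # EEIO-based result
    return total
```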
The biodiversity footprint is first calculated for each driver of biodiversity loss individually by multiplying the monetary consumption (€) in each of the account categories of the organization with the category-specific biodiversity footprint impact factor (BDe/€) derived from the merger of EXIOBASE and LC-IMPACT, and then by summing the biodiversity footprint across the categories within each of the three impacted ecosystem types: terrestrial, freshwater and marine ecosystems. Finally, to arrive at a single BDe value for the organization, the biodiversity footprints in different ecosystem types are merged by taking a species-weighted average of biodiversity footprints over ecosystem types (see
Methods section). The complete process flowchart depicted in Fig. 1 illustrates the sequence of the calculations.
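A corresponding sketch for the biodiversity footprint, with the species weights left as an input because the exact weighting is described in the Methods section (all names are illustrative):

```python
def biodiversity_footprint(spend_eur, bde_per_eur, species_share):
    """spend_eur: {account: consumption in €}.  bde_per_eur: {(ecosystem,
    account): BDe per €}, already summed over the drivers of biodiversity
    loss.  species_share: {ecosystem: share of the world's species}, assumed
    to sum to one and used for the species-weighted average."""
    per_ecosystem = {}
    for (eco, account), factor in bde_per_eur.items():
        per_ecosystem[eco] = (per_ecosystem.get(eco, 0.0)
                              + spend_eur.get(account, 0.0) * factor)
    # species-weighted average over terrestrial, freshwater and marine BDe
    return sum(species_share[eco] * bde for eco, bde in per_ecosystem.items())
```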
To illustrate the results, we aggregated the consumption information of the University of Jyvaskyla to 12 broad consumption categories and calculated the relative importance of each to the carbon and biodiversity footprints (the carbon footprint and biodiversity footprints for each of the 123 accounts are tabulated in SI Appendix Dataset S4). The total annual carbon footprint decreased by 16% from 16 150 t CO2e in 2019 to 13 570 t CO2e in 2020 (SI Appendix Table S1). Similarly, the total biodiversity footprint decreased by 19% from 4.17E-08 BDe in 2019 to 3.38E-08 BDe in 2020 (SI Appendix Table S2). However, as the biodiversity footprint is not cumulative over the years, we averaged the three years: on average 0.0000037% of the species of the world are potentially globally lost due to the operations of the University of Jyvaskyla, if no action is taken to reduce the pressures, i.e. the consumption continues as is over time(42). The global biodiversity footprint impact factors we have calculated are provided for further research and applications in SI Appendix Datasets S1 and S2. Disaggregated results by ecosystem type can be found in SI Appendix Fig. S1 and SI Appendix Table S3.
The decrease of the total annual carbon and biodiversity footprints were both largely driven by a decrease in business travel and related services (Fig. 2a). From Fig. 2a we can also see that energy and water consumption had the highest overall carbon footprints while IT supplies, licenses and services, and machinery, equipment and supplies had the highest overall biodiversity footprints. As the chosen time interval coincides with the outbreak of the COVID-19 pandemic, some of the greatest annual variations are likely caused by signatures of the pandemic. Most obvious is the plummeting of the carbon and biodiversity footprints attributable to business travel and related services since 2019. Other clear changes are the increased footprints due to IT supplies and machinery and the decreased footprints due to food and related services. Both of these were likely caused by the increase in remote working practices due to the pandemic.
The annual share of terrestrial biodiversity footprint from land use, climate change and pollution was on average 47%, 46% and 7% respectively while the annual share of freshwater biodiversity
Fig. 1: **Process flowchart for calculation of the biodiversity and carbon footprints from financial accounts.** An explanation of each of the steps is provided in the main text and further details of the calculations in the Methods.
footprint from water stress, climate change and pollution was 55%, 42% and 3%. In marine ecosystems pollution is the only driver that can currently be incorporated to the assessment (SI Appendix Table S4).
Assessing the carbon and biodiversity footprints simultaneously allowed us to see that the consumption categories had similar relative impacts on both. This similarity can be seen from Fig. 2b where we have plotted the relative carbon footprint of each consumption category against those of the relative biodiversity footprint. As was alluded to above nearly half of the biodiversity footprint was due to climate change in terrestrial and freshwater ecosystems, and therefore this similarity is easy to understand. These results illustrate that there are clear synergies to be obtained in combating climate change and biodiversity loss simultaneously. However, the disaggregated results by ecosystem type (SI Appendix Fig. S1 and Table S3) illustrate that there were also some residual impacts beyond climate change on biodiversity footprints that may need separate focus.
The approach we have developed is spatially explicit (at a country level), and thus we were able to determine the geographical location of the carbon and biodiversity footprints of the University of Jyvaskyla. In terms of the carbon footprint, the largest share of the emissions was generated in Finland, Russia and China (Fig. 3a). The largest threats to biodiversity (Fig. 3b) can be observed in Estonia, United Arab Emirates, Palestinian Territory, Italy, Indonesia, Finland, and in several small island states (e.g. Guam and Seychelles) that cannot be distinguished on the map. It is notable that 66 % of the carbon footprint and 98 % of the biodiversity footprint are situated outside of Finland. Furthermore, the data illustrates that the spatial analysis of the direct drivers of biodiversity loss
Figure 2: **The composition of the carbon and biodiversity footprint of the University of Jyvaskyla**. The relative contribution (%) of different consumption categories of the University of Jyvaskyla during 2019-2021 for the carbon and biodiversity footprints (**a**) and a scatterplot of the relative carbon footprint of each consumption category on the relative biodiversity footprint of the corresponding consumption category in 2021 (**b**). Small numbers in the scatter plot of panel b refer to the consumption categories in panel a.
produces a different outcome from the consequential global biodiversity footprint they cause (Fig. 3c-f).
### STEP 5: Assemble the value-transforming financial-environmental impact statement
In financial accounting, the relevant information is generally compiled in an income statement and a balance sheet. For carbon and biodiversity footprint analysis it is the income statement which
Fig. 3: **Geographical analysis of the carbon and biodiversity footprints of the University.** The geographical location of the University’s carbon footprint (tCO2e) (panel a), biodiversity footprint (BDe) (panel b), land use (ha) and biodiversity footprint (BDe) due to land use (panels c and d respectively) and freshwater pollution (kg) and biodiversity footprint (BDe) due to freshwater pollution (panels e and f respectively). Small island states that are not visible in the map were excluded from the scales of the map. Although in the analysis the carbon footprint contains all greenhouse gases, in this figure, only CO2 is depicted. Detailed data for each country, including the small island states’, is provided in SI Appendix Dataset S7. Analysis was done in R.
contains most of the information needed, that is, the incomes and expenses of the organization. The balance sheet, which contains information about the organization's assets, could be used in natural capital(34) and handprint(38) analyses, but these fall outside the scope of our current paper.
To transform the financial value, the carbon and biodiversity footprints need to have a cost that is visible in the income statement. One way to do this is to purchase offsets matching the footprints. To evaluate the offsetting cost of the carbon footprint, we used the World Bank's carbon pricing statistics for the European Union, which varied between 24.51 $/tCO2e in 2019 and 49.78 $/tCO2e in 2021(51). As no such statistics are available for biodiversity footprints, we developed one to demonstrate the idea.
As stated above, a desirable characteristic of the BDe is that it provides a common currency for measuring biodiversity loss across the planet. While we first used BDe to measure biodiversity loss due to factors like continued land use, here we reverse the logic and use the same land use biodiversity impact factors to estimate avoided loss(52), that is, the biodiversity gain achieved if the continuous exploitation is ceased for the purpose of offsetting biodiversity loss. For the sake of the example, here we only consider the biodiversity footprint in the year 2021. Potential leakage of the benefits is taken into account with a multiplier, as explained in the Methods section. Using the LC-IMPACT database, we calculated the area of land used for intensive forestry that should be permanently removed from use in Finland or in Brazil to offset the global biodiversity footprint of the University of Jyvaskyla. To offset the 3.66E-08 BDe caused by the consumption of the university, altogether 574 000 or 6 800 ha should permanently be removed from intensive use in Finland or in Brazil, respectively. By multiplying the area with the average price of forest land in Finland (6524 $/ha(53)) or Brazil (901 $/ha(54)) (see Methods for details), we arrived at the total cost of 3 747 743 k€ in Finland or 6 117 k€ in Brazil to be transferred to the income statement. If the cost is distributed across 30 years, similar to the depreciation of large investments, the annual cost would be around 125 000 k€ if the offset was completed in Finland and 204 k€ if it was completed in Brazil.
Finally, building on earlier research(16, 34), we compiled a financial-environmental impact statement. By amending the statement with the carbon and biodiversity footprint offset values, we arrived at the value-transforming integration of financial and environmental accounts (Table 1). In financial accounts, net income is generally the deduction of expenses from revenue. By adopting the same logic, the net carbon and biodiversity footprints are obtained by deducting the respective offsets from the footprints. The integrated financial-environmental
impact statement can be used to quickly deduce the economic and environmental position of the organization.
## Discussion
The value-transforming integration of financial and environmental accounting presented here is motivated by the observation that environmental accounting has remained isolated and unexploited in management decisions(17, 19, 21, 24). While earlier research on linked financial and environmental accounting(16, 34-36, 55) has been pioneering, discussion about the implications of the integration for accounting itself(35-37, 41) or its wider societal importance(16, 17, 21) has remained scant. We think that extensive adoption of value-transforming integration is
\begin{table}
\begin{tabular}{l l l l} \hline & Financial & Carbon & Biodiversity \\ & footprint & footprint & footprint \\ & (k€) & (tCO\({}_{2}\)e) & (pBDe) \\ \hline Revenue & & & \\ _Government funding_ & 148 826 & - & - \\ _Other revenue from operations_ & 67 881 & - & - \\ Expenses / Footprints & & & \\ _Staff expenses_ & 152 868 & 224 & 797 \\ _Depreciation_ & 2 281 & 799 & 2 409 \\ _Grants_ & 2 768 & 436 & 1 365 \\ _Raw materials, equipment, and goods_ & 11 802 & 3 984 & 12 008 \\ _Services_ & 13 613 & 3 146 & 12 059 \\ _Rents_ & 25 575 & 4 795 & 4 865 \\ _Travel_ & 1 094 & 366 & 1 259 \\ _Other_ & 9 700 & 747 & 1 887 \\ Total Expenses / Footprints & **219 701** & **14 498** & **36 649** \\ Losses and Gains & & & \\ _Fundraising_ & 4 768 & - & - \\ _Investment gains and losses_ & 31 666 & - & - \\ _Appropriation_ & -4 328 & - & - \\ Internal impact pricing & & & \\ _Carbon offsets_ & 673 & -14 498 & - \\ _Biodiversity offsets if in Finland_ & 125 000 & - & -36 649 \\ _Biodiversity offsets if in Brazil_ & 204 & - & -36 649 \\ Net Income / Footprints & & & \\ _Footprints without offsets_ & **29 112** & **14 498** & **36 649** \\ _Footprints with offsets if in Finland_ & **-95 888** & **0** & **0** \\ _Footprints with offsets if in Brazil_ & **28 908** & **0** & **0** \\ \hline \end{tabular}
\end{table}
Table 1: The financial-environmental impact statement of the University of Jyväskylä in 2021. As units we use thousands of euros (k€), tonnes of carbon dioxide equivalents (tCO\({}_{2}\)e) and pico (10\({}^{-12}\)) biodiversity equivalents (pBDe).
essential in order to influence decision-making in organizations and to facilitate the much-needed transformative change in our production and consumption practices in support of planetary well-being(2, 56).
Adoption of the new accounting system is, however, not only a technical accounting issue; it is also a public policy issue(16). The mere existence of the framework does not guarantee that the value-transforming integration of financial and environmental accounting is adopted. Some forerunner corporations have called for mandatory assessment and disclosure of their impacts on nature(57) and mandatory reporting might indeed be a more effective strategy compared to voluntary reporting(58, 59, 60, 61).
The introduction of mandatory offsetting is one policy intervention that would transform financial values of the accounts of the organizations. Taxes or subsidies based on the environmental footprints might be another(62), and internal pricing (or so-called internal offsetting(41)) of environmental impacts could be yet another. In internal pricing a cost is set for environmental impacts based on an agreed internal valuation scheme. The money is then placed in an internal fund to support activities that mitigate the footprint or enhance the handprint of the organization. Previously, it has been stressed that value-transforming economic instruments to protect biodiversity, including biodiversity offset programs, do not and most likely cannot operate without robust regulation and government involvement(63, 64, 65, 66, 67). Therefore, the value-transforming integration of financial and environmental accounting should be made mandatory for all organizations with financial disclosure obligations.
A massive 98% of the biodiversity footprint caused by the University of Jyvaskyla's consumption is exported outside Finland through complex supply chains. As assessment of the biodiversity footprint of consumption is not yet mainstream, also the question of how to offset these exported biodiversity impacts has remained unexplored. We open the debate by arguing that as BDe provides a common currency for measuring biodiversity loss across the planet, it may also provide a location-independent common currency for offsetting the loss. While biodiversity is different from place to place, BDe focuses on the contribution of any activity anywhere on the planet to global species loss. As such, it measures biodiversity loss potential similarly to how the location independent CO\({}_{2}\)e measures the global warming potential. To highlight this point, we provided a rough example of how the biodiversity footprint of the University of Jyvaskyla, the majority of which is causing biodiversity loss outside Finland, could nevertheless be offset by protecting forests in Finland or in Brazil. Ideally, of course, the offsetting should be made in the countries and ecosystems where the biodiversity loss actualizes. From the global biodiversity perspective Finland is relatively species poor and much larger areas need to be protected as offsets than would be needed if the offsets were completed in relatively more species-rich areas such as in Brazil. Optimally locating the global offsets would therefore have an impact on the cost of offsetting, as our rough comparison between offsetting the biodiversity loss in Finland or Brazil clearly illustrated. Further supportive argument for the global offsetting comes from our finding that nearly half of the biodiversity footprint is actually driven by climate change, which may be challenging to offset locally.
As climate change is a major driver of biodiversity loss, it is easy to understand that the consumption categories had similar relative impacts on both. This observation is nevertheless important and confirms that environmentally informed prioritization of actions can yield synergies and thus cost savings when mitigating the negative climate and biodiversity impacts. A further interesting observation is that carbon footprint assessment is indeed an obligatory intermediate stage in biodiversity footprint assessment. Although currently the independent analysis of carbon footprints is common, we may see a merger of carbon and biodiversity footprint assessments in the future.
Setting boundaries between different organizations, and determining how their financial-environmental impact statements might interact with each other, will need some further development and conventions.
This is because the environmental impacts caused by consumption are simultaneously the environmental impacts of production along the supply chain. This is something that needs to be considered if environmental taxation, subsidies, or offsetting schemes are designed based on the value-transforming integration of financial and environmental accounting presented here. Indeed, if all organizations globally were to offset their own direct footprints and transfer the cost of offsetting to the supply chain, the environmental accounting of supply chain impacts would become redundant. However, such a transformation needs time, and the methodologies presented here are an important, albeit perhaps only temporary, phase in our quest to stop biodiversity loss and climate change.
## Materials and Methods
### About the Living Lab, University of Jyvaskyla
The University of Jyvaskyla is a research and teaching institution that brings together education and psychology, natural sciences, humanities and social sciences, sport and health sciences, and business and economics. Finnish-language teacher education began here in 1863, and today the university is still Finland's largest teacher education provider. The university has 14 300 degree students, 2 800 staff members and 220 million euros in turnover(68).
### Detailed step-by-step methods for the framework
### Step 1. Choose the report of financial accounts
We selected a financial report containing 123 different expense accounts and conducted the analysis separately for three consecutive years 2019-2021. The reports of the financial accounts were procured from the university administration.
A common trait of financial accounting in organizations is depreciation value. Depreciation of goods is customarily applied on an annual basis, which means a fraction of the cost of the depreciated goods is visible in the financial accounts each year until their purchase value is zero. Depreciation accounts can be calculated annually like any other cost account, but it is worth noting that depreciation will distribute the environmental impact of the goods over several years like it does for the cost of the goods. If depreciated goods are purchased continuously across the years with approximately the same annual budget, depreciation has no great impact on the footprints of any given year.
### Step 2. Choose environmental accounting methods and indicators
EEIO databases can be used to assess the environmental impacts of financial consumption. Fundamentally, input-output methodology assesses the inputs an economic sector needs to produce its goods and services and the outputs an economic sector provides to other sectors or to final consumption(69, 70). Environmentally extended input-output (EEIO) databases, such as EXIOBASE, Eora, GTAP and WIOD, connect environmental impacts, such as greenhouse gas emissions, land use and water pollution, with economic activities and transactions, thus aiming to reveal both direct and indirect environmental flows associated with downstream consumption of products and services by organizations, the public sector, households and final consumers(69, 70). One of the strengths of EEIO databases, especially in terms of biodiversity footprints, is that they allow modelling the location of supply chain environmental impacts. The impact factors of different product categories need to be extracted from the EEIO database for each country being analysed (place of consumption). For example, EXIOBASE provides readily calculated monetary impact factors for carbon footprints and for many of the direct drivers of biodiversity footprints(71). Pymrio is an open-source tool that can be used for calculating the environmental impact factors
(impact/euro) of some EEIO databases if the impact factors are not readily available(72).
Furthermore, Pymrio can be used to analyse the location of the environmental impacts in the EEIO databases by modelling the structure of supply chains.
In the case study we used the EEIO database EXIOBASE(50) to calculate environmental impacts of financial consumption. EXIOBASE is suitable for assessing the financial accounts of organizations (as presented before(36)) because it has relatively high sectorial detail, namely, 200 different product categories (an advantage when harmonizing EEIO categories with financial accounts), and because it is open access. The latest version 3.8.2(71) was used in this study to gain access to the most up-to-date data. Nevertheless, the data utilized is derived from the year 2019 in terms of impact factors and 2011 in terms of the location of the drivers of biodiversity loss. One of the currently unavoidable downsides of EEIO databases is that the data is accumulating retroactively.
The assessment of carbon and biodiversity footprints based on financial consumption also has some other shortcomings. The categories in EXIOBASE and similar databases in general are relatively limited and only provide a snapshot of the numerous consumption activities of organizations. It is also currently not possible to distinguish between the footprints of two different products in the same sector. This will limit the possibilities for organizations to track the impact of their positive actions on the footprint, especially when actions are taken within a specific sector, for example by procuring more sustainable hardware. Nevertheless, with the currently available methodologies it is very difficult and time consuming to get accurate data about the life cycle impacts of many consumption activities, for example by using the life cycle assessment method (LCA). There is a clear need for more research on the methodologies and databases, as some recent evidence points out that LCA and EEIO databases may produce different results for the same activities(73). Even with these shortcomings, footprints derived from hybrid EEIO-LCA methodology provide valuable information on what sectors an organization should primarily focus on when mitigating its footprints.
In our living lab case, the hybrid EEIO-LCA approach meant that to calculate the carbon footprint we applied LCA approaches to obtain process-based impact factors for five accounts: electricity, heating, water, travel services and travel grants. The carbon footprint of these accounts was calculated based on non-monetary consumption information (e.g. MWh of electricity consumption by electricity generation type and kilometres travelled by different modes of transportation) collected during the preliminary screening of the footprints of the University of Jyvaskyla(13, 74). The biodiversity footprint of these accounts was nevertheless calculated with the EEIO methodology because the LCA methodology does not currently offer the opportunity to determine all the environmental impacts needed for biodiversity footprint analysis or the location of the impacts in the supply chain. We used the knowledge from the carbon footprint assessment about the share of different energy production and travel methods to enhance the accuracy of the analysis and assumed that the costs would be distributed similarly. Nevertheless, differences in calculation methodologies between carbon and biodiversity footprints could explain the differences in the relative importance of energy and water consumption footprints to the total footprint, when looking at the results.
For the carbon footprint of financial consumption, we use the indicator recommended by The International Reference Life Cycle Data System (ILCD)(75), global warming potential during a period of 100 years, which is readily available in EXIOBASE. For the carbon footprint of non-monetary consumption (energy, water, travel), we used impact factors provided by the stakeholders responsible for producing those services (SI Appendix Dataset S5 and S6). For travel grants, we calculated the emissions by utilizing the impact factor (t CO2e/euro) of travel services, which was in turn calculated with process-based impact factors.
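To make this hybrid step concrete, the following minimal sketch combines process-based factors with a euro-based factor for travel grants. All activity amounts, factor values and account names in the sketch are hypothetical placeholders, not the case-study values.

```python
# Hypothetical illustration of the hybrid carbon accounting described above.
process_based = {
    # account: (activity amount, t CO2e per activity unit)
    "electricity_MWh": (1_000.0, 0.12),
    "district_heat_MWh": (800.0, 0.15),
    "travel_services_km": (2.0e6, 1.5e-4),
}
carbon_tco2e = {name: amount * factor for name, (amount, factor) in process_based.items()}

# Travel grants are converted with the euro-based factor of travel services
# (t CO2e per euro), derived from the process-based result and the spend.
travel_services_spend_eur = 1_000_000.0
travel_factor = carbon_tco2e["travel_services_km"] / travel_services_spend_eur
carbon_tco2e["travel_grants"] = 150_000.0 * travel_factor  # hypothetical grant spend

print(carbon_tco2e)
```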
We built the biodiversity footprint assessment on estimating the impact of the direct drivers of biodiversity loss, including land use, direct exploitation (water stress), climate change and
pollution. We combined indicators of direct drivers of biodiversity loss from EXIOBASE(50) with the LC-IMPACT life cycle assessment database(42, 76) (SI Appendix Table S5) to calculate the biodiversity footprints of financial accounts, similar to what has been previously done(77). The indicator of biodiversity loss in LC-IMPACT is the potentially disappeared fraction of species(42), which we describe in this paper as the biodiversity equivalent (BDe) because it has similar characteristics to the carbon dioxide equivalent indicator (CO\({}_{2}\)e). Previous studies on the biodiversity footprints of organizations have mostly used regional indicators of biodiversity loss(12, 15, 42). While it is important to look at both regional and global species loss to cover different viewpoints on biodiversity loss(42), regionally lost species do not necessarily translate to global extinctions. Furthermore, in this context, where we have assessed global supply chains, it is important that we are able to unify the loss of species in different parts of the world under a single indicator that can be used to compare global supply chains with each other. Next, we explain the methodology for calculating the biodiversity footprint of financial accounts.
EXIOBASE contains impact factors (i.e. what is the amount of the driver of biodiversity loss per unit of consumption, such as euro) for land use, blue water consumption (water stress), pollution and greenhouse gas (GHG) emissions associated with the financial consumption of products and services, while the share of the world's species that potentially will go extinct globally if the pressure continues over time is provided by LC-IMPACT. The most recent EXIOBASE datasets can be extracted from the Zenodo repository(71). The impact factors can be found in the satellite accounts folder and multipliers datasheet. However, to determine the share of the world's species that potentially will go extinct globally associated with the direct drivers of biodiversity loss that are driven by consumption (in this case Finnish consumption), the countries of origin where the land use and pollution occur need to be identified. The open-source tool Pymrio can be used to assess the country of origin in the EEIO databases(72).
Following the code provided in Pymrio, we first calculated a global matrix for the country of origin of a driver of biodiversity loss (\(DR_{origin}\)):
\[DR_{origin}=\left(\begin{array}{cccc}DR_{1,1,1}&DR_{1,2,2}&\cdots&DR_{1,j,k}\\ DR_{2,1,1}&DR_{2,2,2}&\cdots&DR_{2,j,k}\\ \vdots&\vdots&\ddots&\vdots\\ DR_{i,1,1}&DR_{i,2,2}&\cdots&DR_{i,j,k}\end{array}\right)\]
Each cell of the matrix describes the amount of the driver of biodiversity loss (DR) that occurs in region \(i\) (referred to as impact region) and is driven by consumption in region \(j\) (referred to as consumption region), product sector \(k\) (for further clarification see SI Appendix Table S6). The data is from 2011 because running the analysis on data from more recent years, for example 2019, produced implausible results, especially in terms of pollution. This might be due to errors in the EXIOBASE satellite account datasets. However, impact factors (impact/euro) from 2019 were used. For the biodiversity footprint assessment, we do not identify the country of origin for climate change because there is no regionalized biodiversity impact data in LC-IMPACT for climate change(42). However, we do assess the country of origin for carbon dioxide emissions in the carbon footprint assessment. The several blue water consumption (water stress) accounts in EXIOBASE were aggregated using the aggregation function in Pymrio. We used the general version of EXIOBASE, with limited land use types and country resolution, rather than the higher-resolution data, as it allowed us to include climate change and pollution as biodiversity pressures alongside land use. This somewhat limits the accuracy of the analyses, since it increases the use of averages when connecting EXIOBASE with LC-IMPACT, especially in terms of regional level of detail. In any case, it seems the level of detail is sufficient for the purpose of providing a means to influence decision-making in organizations.
As we know the impact and consumption region (in this case Finland) of each driver of biodiversity loss, we can then identify the share of a driver of biodiversity loss in each region (\(DR_{share}\)):
\[DR_{share}=\frac{DR_{origin}}{\sum_{i=1}^{n}DR_{i,j,k}}\]
The cells of the new matrix contain the share of the driver of biodiversity loss (\(DR\)) in impact region \(i\) from the total amount of the driver that is driven by consumption in consumption region \(j\), product sector \(k\).
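A minimal numerical sketch of this normalization (two impact regions, two consumption region–sector columns, hypothetical driver amounts) could read as follows.

```python
import numpy as np

# Hypothetical DR_origin: rows = impact regions i, columns = (consumption region j, sector k).
dr_origin = np.array([
    [4.0, 1.0],   # driver occurring in impact region 1
    [6.0, 3.0],   # driver occurring in impact region 2
])

# DR_share: normalize each column by the total driver amount over all impact regions.
dr_share = dr_origin / dr_origin.sum(axis=0, keepdims=True)
print(dr_share)  # every column now sums to 1
```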
Then we need to harmonize the regional classification between EXIOBASE and LC-IMPACT. EXIOBASE contains 44 countries and five 'rest of the world' regions(50), while LC-IMPACT contains a highly detailed list of the world's countries. The missing countries from EXIOBASE can be harmonized by using the five 'rest of the world' regions. Once the harmonization was done, we allocated the share of the driver of biodiversity loss (\(DR_{share}\)) to each respective region. Then we looked into how one unit of a driver of biodiversity loss (\(DR_{unit}\), e.g., 1 kg or 1 m\({}^{2}\)) is divided between each impact region \(i\).
\[\text{DR}_{unit,i,j,k}=\text{DR}_{share,i,j,k}\ /\ R_{i}\]
Here \(R\) represents the frequency of the impact region \(i\) after harmonization with LC-IMPACT (e.g. EXIOBASE region 'Rest of the World Europe' has been allocated to 23 countries in LC-IMPACT). Given the lack of information on 'rest of the world' regions, we were forced to assume that the drivers of biodiversity loss were shared equally between all countries representing those regions.
At this stage we calculated the impact factors of the driver of biodiversity loss (\(DR_{factor}\)) for each impact region \(i\) driven by consumption in consumption region \(j\), product sector \(k\).
\[\text{DR}_{factor,i,j,k}=\text{DR}_{unit,i,j,k}\times\text{DR}_{\text{exiobase},j,k}\]
\(DR_{\text{exiobase}}\) represents the monetary impact factors of the driver of biodiversity loss (impact per euro) from EXIOBASE for consumption region \(j\), product sector \(k\). Finally, we calculated the biodiversity equivalent factors for the driver of biodiversity loss (\(BDe\)) for each impact region \(i\), driven by consumption in consumption region \(j\) and product sector \(k\), by combining the previous matrix with the biodiversity equivalent factors for each driver of biodiversity loss (\(DR_{\text{lc-impact}}\)) for each impact region \(i\) from LC-IMPACT(42, 76):
\[\text{BDe}_{i,j,k}=\text{DR}_{factor,i,j,k}\times\text{DR}_{\text{lc-impact},i}\]
Total biodiversity equivalent factors (\(BDe_{factor}\)) for each consumption region \(j\) and product sector \(k\) were derived by summing up the biodiversity equivalent factors of each impact region \(i\) in consumption region \(j\), product sector _k_:
\[BDe_{factor,j,k}=\sum_{i=1}^{n}BDe_{i,j,k}\]
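Continuing the toy example from above (all inputs hypothetical, and with the rows taken to be the already harmonized LC-IMPACT regions), the chain from the driver shares to the monetary biodiversity equivalent factors can be sketched as follows.

```python
import numpy as np

# Hypothetical inputs; rows = harmonized impact regions, columns = consumption (region, sector).
dr_share = np.array([[0.4, 0.25],
                     [0.6, 0.75]])
r_freq = np.array([1.0, 23.0])              # countries per original EXIOBASE region (R_i)
dr_exiobase = np.array([2.0e-3, 5.0e-4])    # driver amount per euro for each consumption column
dr_lcimpact = np.array([3.0e-16, 1.0e-15])  # BDe per driver unit for each impact region

dr_unit = dr_share / r_freq[:, None]          # DR_unit = DR_share / R_i
dr_factor = dr_unit * dr_exiobase[None, :]    # DR_factor = DR_unit x DR_exiobase
bde = dr_factor * dr_lcimpact[:, None]        # BDe = DR_factor x DR_lc-impact
bde_factor = bde.sum(axis=0)                  # BDe_factor: BDe per euro, per consumption column
print(bde_factor)
```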
The biodiversity footprint of each financial account was then calculated by simply multiplying the biodiversity equivalent factor (\(BDe\)/\(euro\)) with the harmonized financial accounts (see Step 3). In terms of the biodiversity impacts of climate change, we take into account carbon dioxide, methane, fossil methane and nitrous oxide. We chose impact factors that take all effects into account for a period of 100 years for both terrestrial and aquatic ecosystems(42). With the spatial component missing from the climate change biodiversity impact analyses, we then multiplied the biodiversity impact factor of each gas with its respective counterpart factor in EXIOBASE. Then we summed the results to derive a total biodiversity footprint factor of climate change for both terrestrial and aquatic ecosystems.
We calculated biodiversity footprint results for each pressure individually first and then merged the results into three impacted ecosystem types: terrestrial, freshwater and marine ecosystems. We then combined the biodiversity footprints of the three ecosystem types by taking a weighted average of biodiversity footprints over ecosystem types. As weights we used the estimated share of all plant and animal species that exist in each habitat type(78). The merged biodiversity footprint (\(\text{BF}_{\text{total}}\)) can then be calculated with the equation:
\[\text{BF}_{\text{total}}=\text{BF}_{\text{terrestrial}}\times 0.801+\text{BF}_{\text{freshwater}}\times 0.096+\text{BF}_{\text{marine}}\times 0.102\]
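This weighting can be expressed as a small helper function; the sketch below simply restates the equation in code.

```python
def merged_biodiversity_footprint(bf_terrestrial: float, bf_freshwater: float, bf_marine: float) -> float:
    # Weights are the estimated shares of all plant and animal species per habitat type.
    return bf_terrestrial * 0.801 + bf_freshwater * 0.096 + bf_marine * 0.102
```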
The Biodiversity Footprint Database can be accessed in [https://doi.org/10.5281/zenodo.8369650](https://doi.org/10.5281/zenodo.8369650).
### Step 3. Harmonize the accounts
EXIOBASE product classification is based on the Statistical Classification of Economic Activities in the European Community, the so-called NACE classification(50, 79). The financial accounts of the University of Jyvaskyla were harmonized with EXIOBASE (SI Appendix Dataset S4), except in the case of two accounts that are general cost accounts ("Compensation of cooperation costs" and "Other costs"), which were considered to represent an average of other cost accounts (excluding depreciation accounts), and in the case of five accounts that were imported as external environmental accounts (heat, electricity, water, travel services and travel grants, see Step 5 for further information). In total, 123 financial accounts were analysed, out of which 12 were excluded because it was not possible to identify their environmental impacts with the current methodologies (e.g. tax-related accounts). Regarding rental accounts, we excluded some space rentals to avoid double-counting of the energy-related environmental accounts.
In the case study, price adjustment due to inflation had to be made only for the financial account data from 2020 because environmental impact multipliers for the year 2019 were used. Prices were adjusted by using the Consumer Price Index from Statistics Finland (2021). For the basic price conversion factors (SI Appendix Dataset S3), we used EXIOBASE supply and use tables(71) for the Finnish economy in the year 2019 (data is nowcasted based on 2016 data points). Value-added tax (VAT) was excluded from calculations because it is invoiced separately in the university accounts (as it is in most Finnish organizations) and thus has already been deducted from the purchaser price. However, if VAT were to be included in the financial account prices, it should be deducted as shown by the formulae in the SI Appendix Table S7.
One of the inevitable limitations of using EEIO data is that it is accumulating retroactively. Thus, inflation between the baseline year of the EEIO database and the financial account data needs to be taken into account. Prices can be adjusted by using national Consumer Price Index data, showing the relative increase of inflation in a given year in relation to a baseline year (i.e. Inflation factor):
\[\text{IAP}=\text{FAP}-(\text{FAP}\times\text{INF})\]
where _IAP_ is the inflation-adjusted price, _FAP_ is the financial account price and _INF_ is the inflation factor. Furthermore, in order to use the impact factors (Step 2) of the EEIO database, financial account prices (i.e. purchaser prices) need to be converted to basic prices, the general unit used in EEIO databases. The System of National Accounts(80) define producer price (_PRP_) as:
\[\text{PRP}=\text{BP}+\text{TAX}-\text{SUB}\]
where _BP_ is the basic price, _TAX_ is the amount of taxes on products excluding invoiced VAT, and _SUB_ is the amount of subsidies on products. Consequently, purchaser price (_PUP_) is defined as:
\[\text{PUP}=\text{PRP}+\text{TTM}+\text{VAT}\]
where _TTM_ refers to the trade and transport margins and _VAT_ to the value-added tax not deductible by the purchaser. Finally, the purchaser price (_PUP_) can be defined as:
\[\text{PUP}=\text{BP}+\text{TAX}-\text{SUB}+\text{TTM}+\text{VAT}\]
Then a basic price conversion factor (_BPCF_) can be calculated for each product sector \(i\) by calculating the share of taxes less subsidies, value-added tax and trade and transport margins from the total supply (_SUP_) values per product sector \(i\) of the EEIO database (in basic prices):
\[\text{BPCF}_{i}=\left(\text{TAX}_{i}-\text{SUB}_{i}+\text{VAT}_{i}+\text{TTM}_{i}\right)/\left(\text{SUP}_{i}+\text{TAX}_{i}-\text{SUB}_{i}+\text{VAT}_{i}+\text{TTM}_{i}\right)\]
The required values on taxes less subsidies (excluding VAT) and _TTM_s can be found from national supply and use tables, generally contained within the EEIO database repositories. Harmonized prices (_HP_), including inflation adjustment and basic price conversion, can be calculated with the equation:
\[\text{HP}=\text{IAP}-\left(\text{IAP}\times\text{BPCF}\right)\]
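Putting the price harmonization together, a minimal sketch could look like the code below. The input values are hypothetical; in practice the BPCF is computed once per product sector from the supply and use tables.

```python
def inflation_adjusted_price(fap: float, inf: float) -> float:
    # IAP = FAP - (FAP x INF)
    return fap - fap * inf

def basic_price_conversion_factor(tax: float, sub: float, vat: float, ttm: float, sup: float) -> float:
    # BPCF = (TAX - SUB + VAT + TTM) / (SUP + TAX - SUB + VAT + TTM)
    margins_and_taxes = tax - sub + vat + ttm
    return margins_and_taxes / (sup + margins_and_taxes)

def harmonized_price(fap: float, inf: float, bpcf: float) -> float:
    # HP = IAP - (IAP x BPCF)
    iap = inflation_adjusted_price(fap, inf)
    return iap - iap * bpcf

# Hypothetical example: a 10 000 EUR purchase, 2 % inflation, a sector-level BPCF of 0.18.
print(harmonized_price(10_000.0, 0.02, 0.18))
```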
The conversion formulae and their explanations are summarized in SI Appendix Table S7.
### Step 4. Calculate results
This step can be seen as an optional mid-point step to gain more in-depth insights about the environmental accounts before Step 5, where the results are condensed to meet the financial impact statement criteria. The impact factors from EEIO databases and the footprints of accounts that were calculated with non-monetary impact factors (Step 2) should be assigned to their respective financial account categories (Step 3) and multiplied with the harmonized prices, with the exception of those non-monetary accounts whose results can be directly imported into the accounting scheme.
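In code, this step is essentially an element-wise multiplication of harmonized prices with the matched sector impact factors; the sketch below uses hypothetical account names and factor values.

```python
import pandas as pd

# Hypothetical harmonized prices (EUR, basic prices) per financial account.
harmonized_eur = pd.Series({"Office supplies": 120_000.0, "IT services": 340_000.0})

# Hypothetical impact factors of the matched EXIOBASE sectors.
carbon_factor = pd.Series({"Office supplies": 2.1e-4, "IT services": 1.3e-4})  # t CO2e per euro
bde_factor = pd.Series({"Office supplies": 6.0e-16, "IT services": 3.5e-16})   # BDe per euro

carbon_footprint = harmonized_eur * carbon_factor       # t CO2e per account
biodiversity_footprint = harmonized_eur * bde_factor    # BDe per account
print(carbon_footprint.sum(), biodiversity_footprint.sum())
```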
### Step 5. Assemble the value-transforming financial-environmental impact statement
We made a rough pricing scheme for the purpose of illustrating the principle of how environmental accounts can be used to transform the financial value in the financial-environmental impact statement. To evaluate the offsetting cost of the carbon footprint we used the World Bank carbon pricing statistics for the European Union(51). We converted prices to euros with a currency converter(81). Thus, we multiplied the converted pricing factor by the University of Jyvaskyla's carbon footprint.
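As a rough order-of-magnitude check, this pricing step can be reproduced with a few lines of code; the exchange rate below is a hypothetical placeholder for the converter we used.

```python
# Carbon offset cost for 2021, following the pricing scheme described above.
carbon_footprint_tco2e = 14_498       # total carbon footprint 2021 (Table 1)
price_usd_per_tco2e = 49.78           # World Bank EU carbon price, 2021
usd_to_eur = 0.93                     # hypothetical placeholder exchange rate

offset_cost_keur = carbon_footprint_tco2e * price_usd_per_tco2e * usd_to_eur / 1_000
print(round(offset_cost_keur))        # roughly reproduces the carbon offset entry of Table 1
```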
To estimate the offsetting value of the biodiversity footprint, more assumptions were needed. We used the LC-IMPACT database to determine the biodiversity footprint of intensive forestry land use in Finland and in Brazil(42). By dividing the total biodiversity footprint of the organization (3.66E-08 BDe in 2021) with the characterization factors of intensive forestry land use in Finland (2.65E-17 BDe/m\({}^{2}\))(42) and in Brazil (2.24E-15 BDe/m\({}^{2}\))(42), we assessed how much intensive forestry land should be permanently removed from use if we were to preserve an equivalent amount of global biodiversity (BDe). This resulted in 138 423 ha in Finland and 1 636 ha in Brazil. However, protecting an ecosystem from economic demand does not necessarily mean that the demand ends; rather, the economic activity is often shifted elsewhere. To account for this so-called leakage, we derived a correction factor from an existing biodiversity offsetting case report, which calculated the amount of additional forest biodiversity offsets that need to be done when leakage is considered(82). Multiplying this factor (4.15) with the amount of land that needs to be preserved to avoid the BDe loss, we conclude that the total amount of conserved
forest land in Finland needs to be 574 455 ha. As we could not find an estimate of potential leakage for Brazil, we utilized the Finland-specific multiplier also for Brazil and conclude that the total amount of conserved forest land in Brazil needs to be 6789 ha.
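The biodiversity offset sizing and cost can likewise be reproduced with a short script; the sketch below restates the figures given in the text (land prices are in $ per hectare and the cost is annualized over 30 years). Small differences from the published 574 455 ha and 6 789 ha stem from rounding of the characterization factors.

```python
# Biodiversity offset area and cost, following the steps described above.
footprint_bde = 3.66e-08   # total biodiversity footprint in 2021 (BDe)
leakage = 4.15             # leakage correction multiplier

# LC-IMPACT characterization factors for intensive forestry land use (BDe per m2).
cf = {"Finland": 2.65e-17, "Brazil": 2.24e-15}
land_price_per_ha = {"Finland": 6524, "Brazil": 901}

for country in cf:
    area_ha = footprint_bde / cf[country] / 10_000 * leakage   # m2 -> ha, then leakage
    annual_cost = area_ha * land_price_per_ha[country] / 30    # distributed over 30 years
    print(country, round(area_ha), round(annual_cost))
```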
## Acknowledgments
We thank the Strategic Research Council at the Academy of Finland (Kotiaho 345267), Green Carbon Finland Ltd, The Finnish Innovation Fund Sitra and SOK Corporation for funding the development of the methodologies. We thank Maris Grunskis for the illustrations, Andrew Pattison and Annukka Sailo for correcting the language, the administration at the University of Jyvaskyla for giving access to the consumption data, the biodiversity footprint team members Veera Vainio and Krista Pokkinen as well as Ulla Helimo, Matti Toivonen, Janne Peljo and Hanna-Leena Pesonen for feedback.
## References
* [1] IPBES, "Summary for policymakers of the global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services." (2019).
* [2] S. Diaz, _et al._, Pervasive human-driven decline of life on Earth points to the need for transformative change. _Science_**366**, eaax3100 (2019).
* [3] I. J. Visseren-Hamakers, _et al._, Transformative governance of biodiversity: insights for sustainable development. _Curr. Opin. Environ. Sustain._**53**, 20-28 (2021).
* [4] G. M. Mace, _et al._, Aiming higher to bend the curve of biodiversity loss. _Nat. Sustain._**1**, 448-451 (2018).
* [5] D. Leclere, _et al._, Bending the curve of terrestrial biodiversity needs an integrated strategy. _Nature_**585**, 551-556 (2020).
* [6] C. Hong, _et al._, Land-use emissions embodied in international trade. _Science_**376** (2022).
* [7] D. Presberger, T. Bernauer, Economic and political drivers of environmental impact shifting between countries. _Glob. Environ. Change_**79**, 102637 (2023).
* [8] A. Marques, _et al._, Increasing impacts of land use on biodiversity and carbon sequestration driven by population and economic growth. _Nat. Ecol. Evol._**3** (2019).
* [9] G. P. Peters, Carbon footprints and embodied carbon at multiple scales. _Curr. Opin. Environ. Sustain._, 245-250 (2010).
* [10] S. Shi, J. Yin, Global research on carbon footprint: A scientometric review. _Environ. Impact Assess. Rev._**89**, 106571 (2021).
* [11] R. Chen, R. Zhang, H. Han, Where has carbon footprint research gone? _Ecol. Indic._**120**, 106882 (2021).
* [12] J. W. Bull, _et al._, Analysis: the biodiversity footprint of the University of Oxford. _Nature_**604**, 420-424 (2022).
* [13] S. El Geneidy, _et al._, "Sustainability for JYU: Jyvaskylan yliopiston ilmasto- ja luontohaitat" (2021).
* [14] J. Lammerant, K. Driesen, J. Verhelst, J. De Ryck, "ASSESSMENT OF BIODIVERSITY MEASUREMENT APPROACHES FOR BUSINESSES AND FINANCIAL INSTITUTIONS" (EU Business @ Biodiversity Platform, 2022).
* [15] I. Taylor, _et al._, Nature-positive goals for an organization's food consumption. _Nat. Food_**4**, 96-108 (2023).
* [16] J. A. Nicholls, Integrating financial, social and environmental accounting. _Sustain. Account. Manag. Policy J._**11**, 745-769 (2020).
* [17] E. Bracci, L. Maran, Environmental management and regulation: Pitfalls of environmental accounting? _Manag. Environ. Qual. Int. J._**24**, 538-554 (2013).
* [18] R. D. Hines, Financial accounting: In communicating reality, we construct reality. _Account. Organ. Soc._**13**, 251-261 (1988).
* [19] K. Saravanmuthu, What is measured counts: Harmonized corporate reporting and sustainable economic development. _Crit. Perspect. Account._**15**, 295-302 (2004).
* [20] S. Schaltegger, R. Burritt, _Contemporary Environmental Accounting: Issues, Concepts and Practice_ (Greenleaf Publishing, 2000).
* [21] J. Veldman, A. Jansson, Planetary Boundaries and Corporate Reporting: The Role of the Conceptual Basis of the Corporation. _Account. Econ. Law Conviv._, 1-18 (2020).
* [22] International Accounting Standards Board, Conceptual Framework for Financial Reporting (2018).
* [23] Alfred Endres, _Environmental Economics : Theory and Policy_ (Cambridge University Press, 2011).
* [24] K. Maas, S. Schaltegger, N. Crutzen, Integrating corporate sustainability assessment, management accounting, control, and reporting. _J. Clean. Prod._**136**, 237-248 (2016).
* [25] M. Laine, M. Scobie, M. Sorola, H. Tregidga, Special Issue Editorial: Social and Environmental Account/Ability 2020 and Beyond. _Soc. Environ. Account. J._**2245** (2020).
* [26] F. Zappettini, J. Unerman, 'Mixing' and 'Bending': The recontextualisation of discourses of sustainability in integrated reporting. _Discourse Commun._**10**, 521-542 (2016).
* [27] GRI, Consolidated Set of the GRI Standards (2023).
* [28] IFRS, Exposure Draft. IFRS Sustainability Disclosure Standard. (2022).
* [29] WBCDS, WRI, "The Greenhouse Gas Protocol. A Corporate Accounting and Reporting Standard" (2012).
* [30] Capitals Coalition, "Natural Capital Protocol" (2016).
* [31] IIRC, International Integrated Reporting Framework (2021).
* [32] J. Unerman, J. Bebbington, B. O'dwyer, Corporate reporting and accounting for externalities. _Account. Bus. Res._**48**, 497-522 (2018).
* [33] F. Hartmann, P. Perego, A. Young, Carbon Accounting: Challenges for Research in Management Control and Performance Measurement. _Abacus_**49**, 539-563 (2013).
* [34] J. Houdet, H. Ding, F. Quetier, P. Addison, P. Deshmukh, Adapting double-entry bookkeeping to renewable natural capital: An application to corporate net biodiversity impact accounting and disclosure. _Ecosyst. Serv._**45**, 101104 (2020).
* [35] S. Alvarez, M. Blanquer, A. Rubio, Carbon footprint using the Compound Method based on Financial Accounts. the case of the School of Forestry Engineering, Technical University of Madrid. _J. Clean. Prod._**66**, 224-232 (2014).
* [36] H. N. Larsen, J. Pettersen, C. Solli, E. G. Hertwich, Investigating the carbon footprint of a university - The case of NTNU. _J. Clean. Prod._ **48**, 39-47 (2013).
* [37] M. Thurston, M. J. Eckelman, Assessing greenhouse gas emissions from university purchases. _Int. J. Sustain. High. Educ._**12**, 225-235 (2011).
* [38] T. Pajula, _et al._, "Carbon handprint guide. V. 2.0. Applicable for environmental handprint" (VTT Technical Research Centre of Finland Ltd. LUT University., 2021).
* [39] A review. _J. Clean. Prod._ **172**, 1273-1288 (2018).
* [40] _The Complete World of Life Cycle Assessment_, M. Finkbeiner, Ed. (Springer Netherlands, 2016), pp. 219-291.
* [41] S. El Geneidy, S. Baumeister, V. M. Govigli, T. Orfanidou, V. Wallius, The carbon footprint of a knowledge organization and emission scenarios for a post-COVID-19 world. _Environ. Impact Assess. Rev._**91**, 106645 (2021).
* [42] F. Verones, _et al._, LC-IMPACT: A regionalized life cycle damage assessment method. _J. Ind. Ecol._**24**, 1201-1219 (2020).
* [43] M. A. J. Huijbregts, _et al._, ReCiPe2016: a harmonised life cycle impact assessment method at midpoint and endpoint level. _Int. J. Life Cycle Assess._**22**, 138-147 (2017).
* [44] T. Wiedmann, J. Minx, A Definition of "Carbon Footprint." _Science_**1**, 1-11 (2007).
* [45] E. Crenna, A. Marques, A. La Notte, S. Sala, Biodiversity Assessment of Value Chains: State of the Art and Emerging Challenges. _Environ. Sci. Technol._**54**, 9715-9728 (2020).
* [46] A. Marques, F. Verones, M. T. Kok, M. A. Huijbregts, H. M. Pereira, How to quantify biodiversity footprints of consumption? A review of multi-regional input-output analysis and life cycle assessment. _Curr. Opin. Environ. Sustain._**29**, 75-81 (2017).
* [47] D. Parkes, G. Newell, D. Cheal, Assessing the quality of native vegetation: The 'habitat hectares' approach. _Ecol. Manag. Restor._**4**, S29-S38 (2003).
* [48] IPCC, "Climate Change 2022: Impacts, Adaptation and Vulnerability" (Cambridge University Press. Cambridge University Press, Cambridge, UK and New York, NY, USA, 2022).
* [49] H.-O. Portner, _et al._, "Scientific outcome of the IPBES-IPCC co-sponsored workshop on biodiversity and climate change" (Zenodo, 2021) [https://doi.org/10.5281/zenodo.5101125](https://doi.org/10.5281/zenodo.5101125) (March 31, 2023).
* [50] K. Stadler, _et al._, EXIOBASE 3: Developing a Time Series of Detailed Environmentally Extended Multi-Regional Input-Output Tables. _J. Ind. Ecol._**22**, 502-515 (2018).
* [51] The World Bank, Carbon Pricing Dashboard (2023).
* [52] A. Moilanen, J. Laitila, FORUM: Indirect leakage leads to a failure of avoided loss biodiversity offsetting. _J. Appl. Ecol._**53**, 106-111 (2016).
* [53] Land purchases by the state, private nature reserves and fixed-term nature reserves (2023).
* [54] F. D. F. Silva, _et al._, The Cost of Forest Preservation in the Brazilian Amazon: The "Arc of Deforestation" (2019) [https://doi.org/10.22004/AG.ECON.292328](https://doi.org/10.22004/AG.ECON.292328) (May 17, 2023).
* [55] M. Thurston, M. J. Eckelman, Assessing greenhouse gas emissions from university purchases. _Int. J. Sustain. High. Educ._**12**, 225-235 (2011).
* [56] T. Korttemaki, _et al._, Planetary well-being. _Humanit. Soc. Sci. Commun._**8**, 1-8 (2021).
* [57] Business for Nature, Capitals Coalition, CDP, "Make It Mandatory: the case for mandatory corporate assessment and disclosure on nature" (2022).
* [58] E. Perrault Crawford, C. Clark Williams, Should corporate social reporting be voluntary or mandatory? Evidence from the banking sector in France and the United States. _Corp. Gov. Int. J. Bus. Soc._**10**, 512-526 (2010).
* [59] J. Wu, B. A. Babcock, The Relative Efficiency of Voluntary vs Mandatory Environmental Regulations. _J. Environ. Econ. Manag._**38**, 158-175 (1999).
* [60] R. Gray, Thirty years of social accounting, reporting and auditing: what (if anything) have we learnt? _Bus. Ethics Eur. Rev._**10**, 9-15 (2001).
* [61] D. A. Koehler, The Effectiveness of Voluntary Environmental Programs--A Policy at a Crossroads? _Policy Stud. J._**35**, 689-722 (2007).
* [62] P. Dasgupta, "The Economics of Biodiversity: The Dasgupta Review" (London: HM Treasury, 2021).
* [63] V. Boisvert, Conservation banking mechanisms and the economization of nature: An institutional analysis. _Ecosyst. Serv._**15**, 134-142 (2015).
* [64] A. Vatn, Markets in environmental governance. From theory to practice. _Ecol. Econ._**117**, 225-233 (2015).
* [65] N. S. Koh, T. Hahn, C. Ituarte-Lima, Safeguards for enhancing ecological compensation in Sweden. _Land Use Policy_**64**, 186-199 (2017).
* [66] H. Kujala, _et al._, Credible biodiversity offsetting needs public national registers to confirm no net loss. _One Earth_**5**, 650-662 (2022).
* [67] N. S. Koh, T. Hahn, W. J. Boonstra, How much of a market is involved in a biodiversity offset? A typology of biodiversity offset policies. _J. Environ. Manage._**232**, 679-691 (2019).
* [68] University of Jyvaskyla, Annual report 2021 (2021).
* [69] J. Kitzes, An introduction to environmentally-extended input-output analysis. _Resources_**2**, 489-503 (2013).
* [70] W. Leontief, Environmental Repercussions and the Economic Structure: An Input-Output Approach. _Rev. Econ. Stat._ **52**, 262-271 (1970).
* [71] K. Stadler, _et al._, EXIOBASE 3 (3.8.2) [Data set] (2021).
* [72] K. Stadler, Pymrio - multi regional input output analysis in python (2022).
* [73] B. Steubing, A. de Koning, S. Merciai, A. Tukker, How do carbon footprints from LCA and EEIOA databases compare? A comparison of ecoinvent and EXIOBASE. _J. Ind. Ecol._**26**, 1406-1422 (2022).
* [74] V. Vainio, S. El Geneidy, Sustainability for JYU: Jyvaskylan yliopiston limasto- ja luontohaitat 2020. _JYU Rep._, 1-39 (2021).
* [75] S. Fazio, _et al._, "Supporting information to the characterisation factors of recommended EF Life Cycle Impact Assessment methods" (European Commission, 2018) (June 4, 2023).
* [76] F. Verones, LC-IMPACT1.3 (2021) [https://doi.org/10.5281/zenodo.6200606](https://doi.org/10.5281/zenodo.6200606) (June 4, 2023).
* [77] E. L. Bjelle, _et al._, Adding country resolution to EXIOBASE: impacts on land use embodied in trade. _J. Econ. Struct._**9** (2020).
* [78] C. Roman-Palacios, D. Moraga-Lopez, J. J. Wiens, The origins of global biodiversity on land, sea and freshwater. _Ecol. Lett._**25**, 1376-1386 (2022).
* [79] Eurostat, "NACE Rev. 2: Statistical classification of economic activities in the European Community" (European Communities, 2008).
* [80] European Commission, International Monetary Fund, Organisation for Economic Cooperation and Development, United Nations, World Bank, "System of National Accounts 2008."
* [81] Xe, Xe Currency Converter.
* [82] A. Moilanen, J. S. Kotiaho, "Vapaaehtoinen ekologinen kompensaatio AA Sakatti Mining Oy:n mahdolliselle Sakatin kaivokselle. Liite ymparistovaikutusten arvioinitin." (2020).
**Supporting Information Text**
**Supporting Methods**
**Programming information:**
Analyses were done with the Spyder IDE:
* Spyder version: 5.1.5
* Python version: 3.7.6 64-bit
* Qt version: 5.9.7
* PyQt5 version: 5.9.2
* Operating System: Windows 10
**Code for finding country of origin for the direct drivers of biodiversity loss, using Pymrio (1)**
import pymrio
import pandas
exio3 = pymrio.parse_exiobase3(path="FILE LOCATION")
#Diagonalize specific stressor account, e.g. et1_diag = exio3.satellite.diag_stressor(("Cropland -- Cereal grains nec"))
et1_diag = exio3.satellite.diag_stressor(("DRIVER NAME"))
#Connect back to the system
exio3.et1_diag = et1_diag
exio3.calc_all()
#Aggregate to the source drivers
exiostressor = exio3.et1_diag.D_cba.groupby(level="region", axis=0).sum()
#Save as a csv-file to given location
exiostressor.to_csv(path_or_buf="FILE LOCATION")
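A possible follow-up step, not part of the original script, is to turn the exported per-region driver totals into the DR_share values used in the main text. The sketch below assumes the CSV written above, with impact regions as rows, a two-level (consumption region, sector) column header, and "FI" as the EXIOBASE code for Finnish consumption.

```python
import pandas as pd

# Read back the exported stressor totals (impact regions x consumption region/sector).
stressor = pd.read_csv("FILE LOCATION", index_col=0, header=[0, 1])

# Keep the columns driven by Finnish consumption and normalize each column by its
# total over all impact regions, which gives the DR_share matrix of the main text.
finnish_cols = [c for c in stressor.columns if c[0] == "FI"]
dr_share = stressor[finnish_cols] / stressor[finnish_cols].sum(axis=0)
dr_share.to_csv("FILE LOCATION")
```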
**Code for aggregating drivers (in this study, blue water consumption), using Pymrio (1)**
import pymrio
import pandas
exio3 = pymrio.parse_exiobase3(path="FILE LOCATION")
#Forming the aggregated group(s).
groups = exio3.satellite.get_index(as_dict=True, grouping_pattern = {"Water Consumption Blue.*": "Water Consumption Blue -- Total"})
exio3.satellite_agg = exio3.satellite.copy(new_name="Aggregated blue water consumption accounts")
for df_name, df in zip(exio3.satellite_agg.get_DataFrame(data=False, with_unit=True, with_population=False),
                       exio3.satellite_agg.get_DataFrame(data=True, with_unit=True, with_population=False)):
    if df_name == "unit":
        exio3.satellite_agg.__dict__[df_name] = df.groupby(groups).apply(lambda x: " & ".join(x.unit.unique()))
    else:
        exio3.satellite_agg.__dict__[df_name] = df.groupby(groups).sum()
#Diagonalize specific stressor account, e.g. et1_diag = exio3.satellite.diag_stressor(("Cropland -- Cereal grains nec"))
et1_diag = exio3.satellite_agg.diag_stressor(("Water Consumption Blue -- Total"))
#Connect back to the system
exio3.et1_diag = et1_diag
exio3.calc_all()
#Aggregate to the source drivers
exiostressor = exio3.et1_diag.D_cba.groupby(level="region", axis=0).sum()
#Save as a csv-file to given location
exiostressor.to_csv(path_or_buf="FILE LOCATION")
#### Supporting Discussion
A challenge that remains to be solved when financial and environmental accounts are integrated is that financial accounting entries do not always include all the relevant information for making an environmental footprint assessment. Thus, in future developments of financial accounting, it would be valuable to consider the needs of environmental accounting. Financial accounting entries should be as detailed as possible, revealing the type of the product or service consumed (e.g., travel type: flight vs. train, or energy type: coal vs. wind electricity). While more detailed information could be found from individual receipts of purchasing activities, analysis of such information might be cumbersome. Digitalization of receipts would already allow such detailed information to be stored. Second, financial accounting entries could be adjusted to also include physical consumption information, e.g., kilometres travelled or kilograms of product consumed. Physical consumption information can be found, but it is generally scattered around different units of an organization.

Another interesting avenue for further research in the integrated accounting system would be the use of double-entry bookkeeping in environmental accounting and reporting. Double-entry bookkeeping is a common feature of financial accounting used to track financial transactions by recording where money was taken from and for what purpose it was used. In other words, every financial transaction has equal and opposite effects in two different accounts(2). In the future, environmental accounting could take up a similar practice by recording the flows of negative (footprints) and positive (handprints) impacts (impact statement) and considering their accumulation over time (balance sheet). This approach has been mainly discussed in terms of natural capital accounting(3) but could be extended to cover general environmental accounting principles. Double-entry bookkeeping combined with the presented hybrid EEIO-LCA methodology would also allow real-time and automated tracking of carbon and biodiversity footprints in organizations, if environmental accounts were made at the point of purchasing events rather than at the end of the year, as was done in the case of the University of Jyvaskyla.

The integration of financial and environmental accounts is one of the important steps to transforming value in organizations. It is high time for equality of accounting. Environmental and financial accounting should be handled with the same level of rigor. To achieve this, changes are needed not only in environmental accounting but also in financial accounting practices and policy, which play an important role in how organizations currently operate.
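As a purely illustrative sketch of what such bookkeeping could look like (a hypothetical, simplified design, not an existing system), each purchase could be booked simultaneously against a financial account and against the corresponding environmental accounts:

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentalLedger:
    # Minimal, hypothetical ledger: every purchase is booked as a financial expense
    # and, in parallel, as carbon and biodiversity footprint entries.
    entries: list = field(default_factory=list)

    def record_purchase(self, account: str, eur: float, tco2e_per_eur: float, bde_per_eur: float) -> None:
        self.entries.append(("expense_eur", account, eur))
        self.entries.append(("carbon_tco2e", account, eur * tco2e_per_eur))
        self.entries.append(("biodiversity_bde", account, eur * bde_per_eur))

    def total(self, kind: str) -> float:
        return sum(value for entry_kind, _, value in self.entries if entry_kind == kind)

ledger = EnvironmentalLedger()
ledger.record_purchase("IT services", 10_000.0, 1.3e-4, 3.5e-16)
print(ledger.total("expense_eur"), ledger.total("carbon_tco2e"), ledger.total("biodiversity_bde"))
```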
**Fig. S1. The composition of the carbon and biodiversity footprints of the University divided by ecosystem types.** The relative contribution (%) of different consumption categories of the University of Jyvaskyla during 2019-2021 for the carbon and biodiversity footprints (**a**) and scatterplots of the relative carbon footprint of each consumption category on the relative biodiversity footprints of the corresponding consumption category in 2021 (**b**). Small numbers in the scatter plot of panel b refer to the consumption categories in panel a.
**Table S1.** The carbon footprint (t CO\({}_{2}\)e) of the 12 aggregated consumption categories of the University of Jyvaskyla 2019-2021.
**Table S2.** The biodiversity footprint (BDe) of the 12 aggregated consumption categories of the University of Jyvaskyla 2019-2021.
**Table S3.** The biodiversity footprint (BDe) of the 12 aggregated consumption categories of the University of Jyvaskyla 2019-2021 in terrestrial, freshwater and marine ecosystems.
\begin{tabular}{l c c c c c|c c c c} \hline & \multicolumn{3}{c|}{**Terrestrial**} & \multicolumn{3}{c|}{**Freshwater**} & \multicolumn{3}{c}{**Marine**} \\
**Consumption** & **2019** & **2020** & **2021** & **2019** & **2020** & **2021** & **2019** & **2020** & **2021** \\
**category** & & & & & & & & & \\ \hline
**Unidentified** & 9.24E-10 & 1.25E-09 & 1.00E-09 & 2.88E-10 & 3.89E-10 & 3.10E-10 & 8.02E-10 & 1.15E-09 & 1.02E-09 \\
**products and** & & & & & & & & & \\ \multicolumn{1}{c}{**services**} & & & & & & & & & \\ \multicolumn{1}{c}{**Paper**} & 7.96E-10 & 8.64E-10 & 8.08E-10 & 3.16E-10 & 3.44E-10 & 3.23E-10 & 7.86E-11 & 8.37E-11 & 7.74E-11 \\
**products** & & & & & & & & & \\ \multicolumn{1}{c}{**Maintenance**} & 1.27E-09 & 1.45E-09 & 1.27E-09 & 3.65E-10 & 4.13E-10 & 3.63E-10 & 1.58E-10 & 1.64E-10 & 1.49E-10 \\
**and** & & & & & & & & & \\ \multicolumn{1}{c}{**construction**} & & & & & & & & & \\ \multicolumn{1}{c}{**Laboratory**} & 1.60E-09 & 1.56E-09 & 2.01E-09 & 5.12E-10 & 5.16E-10 & 6.60E-10 & 2.35E-09 & 8.24E-10 & 1.60E-09 \\
**equipment** & & & & & & & & & \\ \multicolumn{1}{c}{**and services**} & & & & & & & & & \\ \multicolumn{1}{c}{**Fuels and**} & 1.84E-09 & 1.94E-09 & 1.86E-09 & 1.11E-09 & 1.18E-09 & 1.12E-09 & 8.91E-11 & 9.54E-11 & 9.06E-11 \\
**Table S4.** The contribution of the direct drivers of biodiversity loss to the biodiversity footprint (BDe) of the University of Jyvaskyla 2019-2021 in terrestrial, freshwater and marine ecosystems.
\begin{tabular}{l c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{Terrestrial} & \multicolumn{3}{c|}{Freshwater} & \multicolumn{3}{c}{Marine} \\
**Driver type** & **2019** & **2020** & **2021** & **2019** & **2020** & **2021** & **2019** & **2020** & **2021** \\ \hline
**Land use** & 1.62E-08 & 1.52E-08 & 1.67E-08 & - & - & - & - & - & - \\
**Climate change** & 1.49E-08 & 1.49E-08 & 1.62E-08 & 4.64E-09 & 4.64E-09 & 5.05E-09 & - & - & - \\
**Pollution** & 2.28E-09 & 2.15E-09 & 2.34E-09 & 3.59E-10 & 3.18E-10 & 3.54E-10 & 1.12E-08 & 8.51E-09 & 9.82E-09 \\
**Water stress** & - & - & - & 6.35E-09 & 6.03E-09 & 6.56E-09 & - & - & - \\ \hline \end{tabular}
**Table S5.** Biodiversity footprint impact categories in EXIOBASE and connecting impact category in LC-IMPACT. In terms of land use, average effects from LC-IMPACT.
\begin{table}
\begin{tabular}{l l l l l} \hline & **Region A Sector 1** & **Region A Sector 2** & **Region B Sector 1** & **Region B Sector 2** \\ \hline Region A & Impact in Region A driven by consumption in Region A - Sector 1 & Impact in Region A driven by consumption in Region A - Sector 2 & Impact in Region A driven by consumption in Region B - Sector 1 & Impact in Region A driven by consumption in Region B - Sector 2 \\ Region B & Impact in Region B driven by consumption in Region A - Sector 1 & Impact in Region B driven by consumption in Region A - Sector 2 & Impact in Region B driven by consumption in Region B - Sector 1 & Impact in Region B driven by consumption in Region B - Sector 2 \\ \hline \end{tabular}
\end{table}
Table S6: Illustration of the data matrix derived from pymrio analysis of stressor (impact) sources. Regions in the column headers indicate the location of the environmental impact. Regions and sectors in row headers indicate the place of consumption.
**Table S7.** Summary of the different operations needed to harmonize purchaser prices (financial account prices) with basic prices (EEIO database prices). Equations that could not be recovered from the source are left blank.

\begin{tabular}{l l l} \hline
**Description** & **Equation** & **Legend** \\ \hline
Harmonizing financial account prices to take into account inflation between EEIO database baseline year and financial accounting year. & \(IAP=FAP-(FAP\times IF)\) & IAP = Inflation adjusted price; FAP = Financial account price; IF = Inflation factor \\
Definition of producer price. & \(PRP=BP+TAX-SUB\) & PRP = Producer price; BP = Basic price; TAX = Taxes on products excluding invoiced VAT; SUB = Subsidies on products \\
Definition of purchaser price. & \(PUP=PRP+TTM+VAT\) & PUP = Purchaser price; PRP = Producer price; TTM = Trade and transport margins; VAT = VAT not deductible by the purchaser \\
Purchaser price expressed in basic prices. & \(PUP=BP+TAX-SUB+TTM+VAT\) & PUP = Purchaser price; BP = Basic price; TAX = Taxes on products excluding invoiced VAT; SUB = Subsidies on products; TTM = Trade and transport margins; VAT = VAT not deductible by the purchaser \\
Definition of the basic price conversion factor. &  & BPCF = Basic price conversion factor; TAX = Taxes on products excluding invoiced VAT; SUB = Subsidies on products; VAT = VAT not deductible by the purchaser; TTM = Trade and transport margins; SUP = Total supply per sector \\
Definition of the harmonized price. &  & HP = Harmonized price; IP = Inflation adjusted price; FAP = Financial account price; BPCF = Basic price conversion factor \\ \hline \end{tabular}
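For reproducibility, a minimal numeric sketch of the two operations in Table S7 whose equations are given explicitly (the inflation adjustment and the producer-price definition) is shown below; the input values are illustrative placeholders, not figures from the underlying financial accounts.

```python
def inflation_adjusted_price(fap, inflation_factor):
    """IAP = FAP - (FAP x IF): deflate a financial-account price to the
    EEIO database baseline year."""
    return fap - fap * inflation_factor

def producer_price(basic_price, taxes, subsidies):
    """PRP = BP + TAX - SUB: producer price from the basic price."""
    return basic_price + taxes - subsidies

# Illustrative values only (euros), not figures from the underlying accounts
print(inflation_adjusted_price(fap=1000.0, inflation_factor=0.05))     # 950.0
print(producer_price(basic_price=800.0, taxes=120.0, subsidies=20.0))  # 900.0
```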
## Dataset
The full dataset can be accessed in [https://doi.org/10.5281/zenodo.8369650](https://doi.org/10.5281/zenodo.8369650).
|
2309.05935 | Dynamic relationship between XRP price and correlation tensor spectra of
the transaction network | The emergence of cryptoassets has sparked a paradigm shift in the world of
finance and investment, ushering in a new era of digital assets with profound
implications for the future of currency and asset management. A recent study
showed that during the bubble period around the year 2018, the price of the
cryptoasset XRP has a strong anti-correlation with the largest singular values
of the correlation tensors obtained from the weekly XRP transaction networks.
In this study, we provide a detailed analysis of the method of correlation
tensor spectra for XRP transaction networks. We calculate and compare the
distribution of the largest singular values of the correlation tensor using the
random matrix theory with the largest singular values of the empirical
correlation tensor. We investigate the correlation between the XRP price and
the largest singular values for a period spanning two years. We also uncover
the distinct dependence between XRP price and the singular values for bubble
and non-bubble periods. The significance of time evolution of singular values
is shown by comparison with the evolution of singular values of the reshuffled
correlation tensor. Furthermore, we identify a set of driver nodes in the
transaction networks that drives the market during the bubble period using the
singular vectors. | Abhijit Chakraborty, Tetsuo Hatsuda, Yuichi Ikeda | 2023-09-12T03:18:58Z | http://arxiv.org/abs/2309.05935v1 | # Dynamic relationship between XRP price and correlation tensor spectra of the transaction network
###### Abstract
The emergence of cryptoassets has sparked a paradigm shift in the world of finance and investment, ushering in a new era of digital assets with profound implications for the future of currency and asset management. A recent study showed that during the bubble period around the year 2018, the price of the cryptoasset XRP has a strong anti-correlation with the largest singular values of the correlation tensors obtained from the weekly XRP transaction networks. In this study, we provide a detailed analysis of the method of correlation tensor spectra for XRP transaction networks. We calculate the distribution of the largest singular values of the correlation tensor using random matrix theory and compare it with the largest singular values of the empirical correlation tensor. We investigate the correlation between the XRP price and the largest singular values for a period spanning two years. We also uncover the distinct dependence between the XRP price and the singular values for bubble and non-bubble periods. The significance of the time evolution of singular values is shown by comparison with the evolution of singular values of the reshuffled correlation tensor. Furthermore, we identify a set of driver nodes in the transaction networks that drives the market during the bubble period using the singular vectors.
## Introduction
Cryptoassets have emerged as a new asset class that has gained enormous popularity and attention from investors worldwide. The rapid growth and widespread adoption of cryptoassets have led to a surge in prices and market capitalization recently. However, high volatility in the cryptoasset market has led to concerns about its stability and reliability as an investment. One of the major challenges in the cryptoasset market is the presence of anomalies in price movements. An anomaly refers to an observation that deviates significantly from the expected or normal pattern. Anomalies in the cryptoasset market can occur due to various factors, including market manipulation, insider trading, regulatory changes, or technical issues. Cryptoassets use a decentralized digital ledger called a blockchain to store transaction data. The blockchain is a distributed ledger that contains a record of every transaction that has ever occurred on the network. Each block in the blockchain contains a set of transactions, and these blocks are linked together in a chronological chain. The blockchain provides a secure and transparent way to store transaction data without the need for a central authority or intermediary.
Cryptoasset transaction data are typically publicly available on a blockchain. The public availability of a complete history of different cryptoasset transaction data allows researchers to inspect various aspects of the cryptoasset market [1]. Among the various cryptoassets, Bitcoin, Ethereum and XRP are popular and have large market capitalizations. L. Kristoufek studied the relationship of the Bitcoin price with search queries on Google Trends and visit frequency on the Wikipedia page on Bitcoin [2]. An early study [3] investigated the topological structure of the Bitcoin transaction network. D. Kondor _et al._ demonstrated that linear preferential attachment is the key mechanism for the growth of the Bitcoin transaction network [4]. Ethereum transaction data have also been investigated, and it is found that the transaction network exhibits a power-law degree distribution, disassortativity, the absence of the rich-club phenomenon, and the small-world phenomenon [5, 6]. The cryptoasset transaction networks are growing and evolving with time. Application of principal component analysis to the structural change of the Bitcoin transaction network shows a connection with the Bitcoin price [7]. It is also found that the out-degrees of the Bitcoin transaction network provide a connection to the price changes [8]. Recently, transaction data of the cryptoasset XRP have also been studied. The structural properties, such as the heavy-tailed nature of the degree distribution and triangular motifs, have been analyzed for the XRP transaction network [9]. Considering outgoing and incoming flows for both XRP and Bitcoin transaction networks, the key nodes have been shown to be classified into three different groups [10]. Remittance transactions recorded on the XRP ledger have also been studied recently [11].
In this article, we focus on XRP, which was created by Ripple Labs in 2012. It is used for transferring value on the Ripple payment protocol and is also the native digital asset of the XRP ledger. Using the XRP transaction data, we have recently
introduced a method of correlation tensor spectra of transaction networks to detect price bursts in the XRP price [12]. The method of the correlation tensor is inspired by the cross-correlation method for stock price time series [14, 15]. The theory of random matrices [16, 17, 18] is crucial to understanding the structure of empirical correlation matrices. The method is very useful for separating noise from signal by excluding the components that arise due to randomness. While this method is generally applied to price time series data, the correlation tensor method adapts this technique for transaction network snapshots.
The method of correlation tensor spectra was applied to the period of October 2017 to March 2018, which was the most significant bubble period in the XRP price history [12]. In this article, we study the robustness of the correlation tensor spectra method over a two-year period between January 2020 and December 2021. We uncover the behaviour of the correlation tensor spectra in both bubble and non-bubble periods. The different periods for the daily XRP price are shown in figure 1. The periods \(AB\), \(CG\), \(CD\) and \(EF\) represent the periods October 2, 2017 to March 4, 2018, January 6, 2020 to December 26, 2021, January 6, 2020 to November 1, 2020, and February 1, 2021 to August 1, 2021, respectively. The most significant bubble period for the XRP price was observed in 2018, denoted by the period \(AB\) within the vertical lines A and B. In this article, we study the method of correlation tensor spectra on the period \(CG\) within the vertical lines \(C\) and \(G\), which covers a period of 103 weeks. We further delve into two sub-periods: a non-bubble period \(CD\) within the vertical lines \(C\) and \(D\), and a bubble period \(EF\) within the vertical lines \(E\) and \(F\). From the singular vectors of the empirical correlation tensor, we identify a small set of nodes that drives the XRP market.
## 1 Materials and methods
### Data
We have collected the data from the Ripple API. In this study, we mainly focus on the transaction data from January 06, 2020 to December 26, 2021, which is shown as the period \(CG\) in figure 1. The entire data set is grouped into 103 weeks. For each week, we construct a weekly weighted directed network of XRP transactions. The nodes of the networks are wallets. A link is formed from the source wallet to the destination wallet if there is at least one transaction between them. The link weight represents the total transaction volume between the wallets for that week. Therefore, the links indicate the flow of XRP between wallets. The network statistics are shown in SI Text 1.
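As an illustration, a minimal sketch of this weekly network construction is given below, assuming the transaction records have already been downloaded from the Ripple API and reduced to (source wallet, destination wallet, XRP amount) tuples; the function and variable names are ours, not part of the Ripple API.

```python
import networkx as nx

def weekly_network(transactions):
    """Build one weekly weighted directed XRP transaction network.
    `transactions` is an iterable of (source_wallet, destination_wallet,
    xrp_amount) tuples for a single week."""
    g = nx.DiGraph()
    for src, dst, amount in transactions:
        if g.has_edge(src, dst):
            g[src][dst]["weight"] += amount   # link weight = total weekly volume
        else:
            g.add_edge(src, dst, weight=amount)
    return g

# Toy example with three transactions in one week
g = weekly_network([("rA", "rB", 100.0), ("rA", "rB", 50.0), ("rB", "rC", 30.0)])
print(g["rA"]["rB"]["weight"])   # 150.0
```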
### Network embedding
We utilized the well-known node2vec [19] algorithm to embed each of the weekly weighted directed networks into a \(D\)-dimensional space. In the node2vec algorithm, we have used the return parameter \(p=1\) and the in-out parameter \(q=1\), which correspond to the
Figure 1: The XRP/USD daily closing price is recorded from May 05, 2017 to October 13, 2022. The blue horizontal line segments between different pairs of blue vertical lines represent different periods, which are explained in the main text.
utilization of unbiased random walks. This results in a \(D\)-dimensional vector, denoted as \(V_{i}^{\alpha}\), for every node in the networks. Here, we use \(i\) and \(j\) as node indices, and \(\alpha\) and \(\beta\) as indices of the vector components in the \(D\)-dimensional space.
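A minimal sketch of this embedding step is shown below, assuming the `node2vec` Python package; the walk length, number of walks and word2vec window are not specified in the text, so the values used here are placeholders.

```python
import networkx as nx
import numpy as np
from node2vec import Node2Vec   # assumes the `node2vec` PyPI package

# Toy weekly network standing in for one of the 103 weekly XRP networks
g = nx.DiGraph()
g.add_weighted_edges_from([("rA", "rB", 150.0), ("rB", "rC", 30.0), ("rC", "rA", 10.0)])

# D = 32 dimensions and unbiased walks (p = q = 1) as in the main text;
# walk_length, num_walks and window are placeholders.
D = 32
n2v = Node2Vec(g, dimensions=D, p=1, q=1, walk_length=20, num_walks=10, workers=1)
model = n2v.fit(window=5, min_count=1)

# V[i, alpha]: embedding vector V_i^alpha of node i for this week
nodes = list(g.nodes())
V = np.array([model.wv[str(node)] for node in nodes])
print(V.shape)   # (3, 32)
```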
### Correlation tensor
The method of the correlation tensor and its diagonalization using double Singular Value Decomposition (SVD) were introduced in [12]. Here, we briefly discuss the method.
In the weekly networks of XRP transactions, we identify \(N\) nodes that carry out at least one transaction every week during the period under investigation. We refer to these nodes as regular nodes. Each regular node in the embedding space is represented by a time series of D-dimensional vectors, denoted \(V_{i}^{\alpha}(t)\). Here, \(i\) ranges from 1 to \(N\), \(t\) ranges from 1 to \(T\), and \(\alpha\) ranges from 1 to \(D\).
The correlation tensor between regular node components is given by:
\[M_{ij}^{\alpha\beta}(t)=\frac{1}{2\Delta T}\sum_{t^{\prime}=t-\Delta T}^{t+ \Delta T}\frac{[V_{i}^{\alpha}(t^{\prime})-\overline{V_{i}^{\alpha}}][V_{j}^{ \beta}(t^{\prime})-\overline{V_{j}^{\beta}}]}{\sigma_{V_{i}^{\alpha}}\sigma_{ V_{j}^{\beta}}}, \tag{1}\]
Figure 2: Comparison of singular values for different correlation tensors for the week January 06-12, 2020. (a) The plot shows the comparison of the singular values, \(\rho_{k}^{\gamma}\), for the empirical correlation tensor (black filled circle) and the reshuffled correlation tensor (red filled triangle) for all values of \(k\). (b) The comparison of the singular values, \(\rho_{k}^{\gamma}\), for the empirical correlation tensor and the reshuffled correlation tensor, considering all values of \(k\) and \(\gamma>1\). (c) The simulated singular values, \(\rho_{k}^{1}\), for the Gaussian random correlation tensor (green open square), along with the corresponding analytic curve following equation 5 (solid purple line). (d) The simulated singular values, \(\rho_{k}^{\gamma}\), for the Gaussian random correlation tensor, considering all values of \(k\) and \(\gamma>1\).
In this equation, we take the sum over five weekly networks at times \(t^{\prime}=\{t-2,t-1,t,t+1,t+2\}\) with a time window of \((2\Delta T+1)\) with \(\Delta T=2\) for our analysis. The values of \(\overline{V}_{i}^{\alpha}\) and \(\sigma_{V_{i}^{\alpha}}\) represent the mean and standard deviation of \(V_{i}^{\alpha}\) over a time window of \((2\Delta T+1)=5\) weekly networks at times \(\{t-2,t-1,t,t+1,t+2\}\). It is important to note that a smaller value of \(\Delta T\) results in more noise in the correlation tensor. However, we cannot choose a large value for \(\Delta T\) as we are studying the detailed temporal evolution of the networks. For this analysis we have chosen the dimension \(D=32\). The dependence of the correlation tensor on the window size \((2\Delta T+1)\) and dimension \(D\) can be found in [12].
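A direct NumPy sketch of equation 1 is given below, assuming the embedded regular-node vectors have been stacked into an array `V` of shape \((T,N,D)\); the prefactor and window follow the definition above.

```python
import numpy as np

def correlation_tensor(V, t, dT=2):
    """Correlation tensor M[i, j, alpha, beta](t) of equation 1.

    V  : array of shape (T, N, D) with the embedded regular-node vectors
         V_i^alpha(t') for every week t'.
    t  : central week index; the window covers t-dT, ..., t+dT.
    dT : half-width of the time window (Delta T = 2 in the main text).
    """
    W = V[t - dT:t + dT + 1]          # (2*dT+1, N, D) window of weekly vectors
    mean = W.mean(axis=0)             # mean over the time window
    std = W.std(axis=0)               # standard deviation over the time window
    Z = (W - mean) / std              # standardized components
    # sum over t' with the 1/(2*dT) prefactor of equation 1
    return np.einsum('tia,tjb->ijab', Z, Z) / (2 * dT)

# Toy example: T = 10 weeks, N = 6 regular nodes, D = 4 dimensions
V = np.random.default_rng(0).normal(size=(10, 6, 4))
M = correlation_tensor(V, t=4)
print(M.shape)   # (6, 6, 4, 4)
```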
### Double singular value decomposition
To determine the spectrum of the correlation tensor, we use a double SVD approach as follows: First, we diagonalize \(M_{ij}^{\alpha\beta}\) successively by a bi-unitary transformation, also known as SVD, in terms of the \((ij)\)-index and then the \((\alpha\beta)\)-index. The first step involves expressing \(M_{ij}^{\alpha\beta}\) as a sum of matrices, using the SVD method:
\[M_{ij}^{\alpha\beta}=\sum_{k=1}^{N}L_{ik}\sigma_{k}^{\alpha\beta}R_{kj}. \tag{2}\]
The second step is to further decompose each singular value \(\sigma_{k}^{\alpha\beta}\) as a sum of matrices, using SVD:
\[\sigma_{k}^{\alpha\beta}=\sum_{\gamma=1}^{D}\mathcal{L}^{\alpha\gamma}\rho_{k }^{\gamma}\mathcal{R}^{\gamma\beta}. \tag{3}\]
Finally, we combine these steps to obtain the following expression for \(M_{ij}^{\alpha\beta}\):
\[M_{ij}^{\alpha\beta}=\sum_{k=1}^{N}\sum_{\gamma=1}^{D}\rho_{k}^{\gamma}(L_{ik} R_{kj})(\mathcal{L}^{\alpha\gamma}\mathcal{R}^{\gamma\beta}). \tag{4}\]
Here, \(\rho_{k}^{\gamma}\) represents the \(N\times D\) generalized singular values, which are real and positive due to the fact that \(M\) is a real correlation tensor.
## 2 Results
We investigate the period, \(CG\), which spans 103 weeks. It consists of 265 regular nodes. Following equation 1, we calculate the correlation tensor between the components of regular nodes for different weeks. Note that with 103 weekly networks,
Figure 3: The comparison between the daily XRP/USD price with the singular values and spectral gap for the period, \(EF\), February 01, 2021 to August 1, 2021. The black curves in the graph show the daily XRP/USD price, while the blue curves represent (a) the largest singular value \(\rho_{1}^{1}\), (b) the second largest singular value \(\rho_{2}^{1}\), and (c) the spectral gap \((\rho_{1}^{1}-\rho_{2}^{1})\) of correlation tensors for different weeks. The dotted grey vertical lines indicate the weekly windows.
we get 99 weekly correlation tensors following equation 1. A weekly correlation tensor has \(N\times N\times D\times D\) elements. To extract the crucial information from the correlation tensor, we diagonalize it by the double SVD as described in the methods section. The double SVD is an extension of the SVD, which applies to a matrix. Applying the double SVD to the weekly correlation tensor \(M_{ij}^{\alpha\beta}(t)\), we get the singular values \(\rho_{k}^{\gamma}(t)\).
The significance of the empirical correlation tensor is measured by comparing it with the reshuffled correlation tensor. To calculate the reshuffled correlation tensor, we reshuffle the components of the embedded regular node vector \(v_{i}^{\alpha}\) within the time window \((2\Delta T+1)\). Using the reshuffled embedded regular node vectors, we calculate the reshuffled correlation tensor following equation 1. We also calculate and simulate singular values of a Gaussian random correlation tensor using random matrix theory [16, 20, 21, 22, 23] for comparison. The Gaussian correlation tensor elements \(G_{ij}^{\alpha,\beta}\) are sampled from a Gaussian distribution with a mean of zero and a standard deviation of \(\sigma_{G}=0.5\), where \((i,j=1,\ldots,N)\) and \((\alpha,\beta=1,\ldots,D)\). We choose \(\sigma_{G}=0.5\) to match the standard deviation of our empirical correlation tensor. The probability distribution function of the largest singular values \((\tilde{\rho}_{k}^{1})\) of the Gaussian random correlation tensor, for all \(k\), is given by
\[P(\tilde{\rho}_{k}^{1})=\frac{1}{\pi\sigma_{G}^{2}}\sqrt{(\tilde{\rho}_{1}^{1 })^{2}-(\tilde{\rho}_{k}^{1})^{2}}, \tag{5}\]
where \(\tilde{\rho}_{1}^{1}=2\sigma_{G}D\sqrt{N}\) is the largest singular value for \(k=1\). The derivation is shown in SI text 3.
We show the singular values \(\rho_{k}^{\gamma}(t)\) of the empirical, reshuffled and Gaussian random correlation tensors \(M(t)\) for the week \(t=\) January 06-12, 2020 in figure 2. Figure 2 (a) shows the singular values \(\rho_{k}^{\gamma}\) for all \(k\in[1,N]\) and \(\gamma=1\), along with the singular values for the reshuffled correlation tensor. We observe that only the largest singular value \(\rho_{1}^{1}\) for the empirical correlation tensor lies above the largest singular value for the reshuffled correlation tensor. Similarly, figure 2 (b) shows the comparison for the other singular values \(\rho_{k}^{\gamma}\) for all \(k\in[1,2,3,\ldots,N]\) and \(\gamma\in[2,3,4,\ldots,D]\). Here we observe that several singular values of the empirical correlation tensor lie above the largest of these singular values for the reshuffled correlation tensor. However, these singular values are much smaller than \(\rho_{1}^{1}\). Therefore, these singular values have a relatively small contribution to the correlation tensor. We further compare the empirical singular values with the singular values of the Gaussian random correlation tensor, whose elements are drawn from a normal distribution. We have calculated the singular values of the Gaussian random correlation tensor in equation 5 using RMT. Figure 2 (c) shows that the simulated singular values, \(\rho_{k}^{1}\), of the Gaussian correlation tensor fit nicely with the analytic curve given by equation 5. Figure 2 (d) shows the spectrum, \(\tilde{\rho}_{k}^{\gamma}\), of the Gaussian random correlation tensor for all \(k\) and
Figure 4: The correlation between the weekly XRP/USD price \(\overline{\text{XRP}/\text{USD}}(t+t_{0})\) and the largest singular value \(\rho_{1}^{1}(t)\) as a function of time lag \(t_{0}\). (a) The Pearson correlation coefficient \(r\) and (b) the associated p-value are plotted with lag \(t_{0}\). The curves with black circles and red triangles represent the bubble periods \(EF\) and \(AB\) in figure 1, respectively. The horizontal red dotted line in (b) indicates the significance level for p-values \(<0.05\).
\(\gamma\in[2,3,4,\ldots,D]\). It illustrates that these singular values are extremely small. Here we observe that the singular values for the Gaussian correlation tensor are much smaller than the singular values for the empirical correlation tensor as well as for the reshuffled correlation tensor. It is observed that the singular values of the reshuffled correlation tensor approach the singular values of the Gaussian correlation tensor when the time window \(\Delta T\) becomes much larger than \(N\) [12]. Although we have shown the results only for the week January 06-12, 2020, our results remain qualitatively the same for any other week.
Figure 5: The comparison between the daily XRP/USD price with the singular values and spectral gap for the period, \(CD\) January 06, 2020 to November 01, 2020. The black curves in the graph show the daily XRP/USD price, while the blue curves represent (a) the largest singular value \(\rho_{1}^{1}\), (b) the second largest singular value \(\rho_{2}^{1}\), and (c) the spectral gap \((\rho_{1}^{1}-\rho_{2}^{1})\) of correlation tensors for different weeks. The dotted grey vertical lines indicate the weekly windows.
Figure 6: The comparison of the daily XRP/USD price with the correlation \(r(t)\) between the weekly XRP/USD price \(\overline{\text{XRP}/\text{USD}}(t)\) and the largest singular value \(\rho_{1}^{1}(t-1)\), using a moving window of 9 weeks, for three different periods - (a) \(AB\), October 2, 2017 - March 4, 2018, (b) \(CD\), January 6, 2020 - November 1, 2020 and (c) \(EF\), February 1, 2021 - August 1, 2021. The black curve represents the daily XRP/USD closing price. The blue curve with green and red triangles represents the correlation \(r(t)\), where the green triangles indicate significant correlations (p-value \(<0.05\)) and the red triangles indicate non-significant correlations (p-value \(>0.05\)). The three lower panels show the p-values for the corresponding Pearson correlations. The dotted grey vertical lines represent the weekly windows.
The period \(CG\) contains bubble and non-bubble periods. To further explore how the relationship between the weekly XRP/USD price \(\overline{\mathrm{XRP}/USD}(t)\) and the singular values \(\rho_{k}^{\gamma}(t)\) changes during non-bubble and bubble periods, we separately study the following two sub-periods: January \(06,2020\) - November \(01,2020\) and February \(01,2021\) - August \(01,2021\), which are indicated as \(CD\) and \(EF\), respectively, in figure 1. The weekly XRP/USD price \(\overline{\mathrm{XRP}/USD}(t)\) is calculated as the average daily XRP/USD price for the week. The analysis for the period \(CG\) is given in SI Text 2.
The sub-period \(EF\) was a bubble period for the XRP/USD price. In this period, we find that the weekly networks of XRP transactions have \(753\) regular nodes. Taking into account these regular nodes, we calculate the weekly correlation tensors for this period. The two largest singular values and the spectral gap of the correlation tensors are shown together with the daily XRP prices in figure 3. We observe a strong anti-correlation of the weekly XRP/USD price \(\overline{\mathrm{XRP}/USD}(t+1)\) with the largest singular value \(\rho_{1}^{1}(t)\) (\(r=-0.515\) and p-value \(=0.014\)) and the spectral gap \((\rho_{1}^{1}(t)-\rho_{2}^{1}(t))\) (\(r=-0.517\) and p-value \(=0.014\)). A strong correlation is found between \(\overline{\mathrm{XRP}/USD}(t+1)\) and the second largest singular value \(\rho_{2}^{1}(t)\) (\(r=0.512\) and p-value \(=0.015\)).
We further show the Pearson correlation between the weekly XRP price \(\overline{\mathrm{XRP}/USD}(t+t_{0})\) and the largest singular value \(\rho_{1}^{1}(t)\) with different time lags in figure 4 for the periods \(AB\) and \(EF\), respectively. The anti-correlation between the weekly XRP price and the largest singular value is found to be maximal when the lag is either one week or two weeks. The correlation between \(\overline{\mathrm{XRP}/USD}(t+3)\) and \(\rho_{1}^{1}(t)\) is found to be \(r=-0.448\) with p-value \(=0.041\) for the period \(EF\). This is an indication that the largest singular value \(\rho_{1}^{1}(t)\) can give an early warning for the XRP price burst, which is consistent with [12].
The sub-period \(CD\) shows relatively steady values of the XRP/USD price, in the range \(0.15-0.30\). This period has \(465\) regular nodes for the weekly networks. With these regular nodes, we again calculate the weekly correlation tensors. The two largest singular values and the spectral gap of the weekly correlation tensors are shown together with the daily XRP/USD price in figure 5. We observe that the weekly XRP/USD price \(\overline{\mathrm{XRP}/USD}(t+1)\) has no significant correlation with the largest singular value (\(r=0.193\) and p-value \(=0.238\)) or the spectral gap (\(r=0.259\) and p-value \(=0.111\)). It only shows a weak anti-correlation between \(\overline{\mathrm{XRP}/USD}(t+1)\) and the second largest singular value (\(r=-0.333\) and p-value \(=0.039\)). Therefore, we observe a distinct behaviour of the weekly XRP price and the largest singular value for the period \(CD\), in stark contrast to its strong anti-correlation behaviour during the bubble periods \(AB\) and \(EF\).
To show the temporal relationship between the XRP price and the correlation tensor spectra, we calculate the Pearson correlation between the weekly XRP/USD price \(\overline{\mathrm{XRP}/USD}(t)\) and the largest singular value \(\rho_{1}^{1}(t-1)\) using a moving time window of length
Figure 7: The variation of singular values and spectral gaps of empirical correlation tensor and reshuffled correlation tensor for different weeks for the period \(CG\). (a) Variation of the largest singular value for empirical \(\rho_{1}^{1}\) and reshuffled \(\rho_{1}^{\prime 1}\) correlation tensors. (b) Variation of the second largest singular value for empirical \(\rho_{2}^{1}\), reshuffled \(\rho_{2}^{\prime 1}\) correlation tensors. (c) Variation of spectral gap for empirical \((\rho_{1}^{1}-\rho_{2}^{1})\) and reshuffled \((\rho_{1}^{\prime 1}-\rho_{2}^{\prime 1})\) correlation tensors. The error bars on the graphs represent the standard deviation. The data is an average of \(40\) different network embeddings.
9 weeks. The Pearson's correlation coefficient \(r(t)\) is given by
\[r(t)=\frac{1}{2\Delta\tau}\sum_{t^{\prime}=t-\Delta\tau}^{t+\Delta\tau}\frac{[ \overline{\mathrm{XRP/USD}}(t^{\prime})-\langle\overline{\mathrm{XRP/USD}} \rangle][\rho_{1}^{1}(t^{\prime}-1)-\langle\rho_{1}^{1}\rangle]}{\sigma_{ \overline{\mathrm{XRP/USD}}}\sigma_{\rho_{1}^{1}}}, \tag{6}\]
where we have taken \(\Delta\tau=4\). Here \(\sigma\) and \(\langle\cdot\rangle\) represent the standard deviation and mean of the quantities within the time window \((2\Delta\tau+1)\), respectively. We investigate the temporal correlation separately for the three different periods \(AB\), \(CD\) and \(EF\). The numbers of regular nodes for the three periods are 71, 465 and 753, respectively. The temporal variation of the correlation \(r(t)\) between the weekly XRP prices \(\overline{\mathrm{XRP/USD}}(t)\) and the largest singular values \(\rho_{1}^{1}(t-1)\) of the weekly correlation tensors is shown along with the daily XRP/USD price in figure 6. It is observed that the anti-correlation is strongest and significant during the period \(AB\), and is strong and significant during the period \(EF\). The anti-correlation is mostly non-significant during the period \(CD\). This reflects the fact that the formation of a large bubble in the XRP price is indicated by a strong anti-correlation with the largest singular value.
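A minimal sketch of this moving-window analysis is given below, using `scipy.stats.pearsonr` for the windowed coefficient rather than evaluating equation 6 explicitly; the array names are ours.

```python
import numpy as np
from scipy.stats import pearsonr

def moving_window_correlation(price, rho11, d_tau=4):
    """r(t) of equation 6: windowed Pearson correlation between the weekly
    price XRP/USD(t') and the one-week-lagged largest singular value
    rho_1^1(t'-1), over a window of 2*d_tau + 1 = 9 weeks centred at t."""
    weeks, r_vals, p_vals = [], [], []
    for t in range(d_tau + 1, len(price) - d_tau):
        x = price[t - d_tau:t + d_tau + 1]      # XRP/USD(t') in the window
        y = rho11[t - d_tau - 1:t + d_tau]      # rho_1^1(t'-1) in the window
        r, p = pearsonr(x, y)
        weeks.append(t)
        r_vals.append(r)
        p_vals.append(p)
    return np.array(weeks), np.array(r_vals), np.array(p_vals)

# Toy example with synthetic weekly series
rng = np.random.default_rng(1)
price, rho11 = rng.random(40), rng.random(40)
weeks, r, p = moving_window_correlation(price, rho11)
print(r[:3], p[:3])
```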
The significance of the evolution of the singular values of the empirical correlation tensor is measured by comparing it with the singular values of the reshuffled correlation tensor. The two largest singular values and the spectral gap of the empirical correlation tensor for the period \(CG\) are compared to those of the reshuffled correlation tensor in figure 7. It shows that only the largest singular value of the empirical correlation tensor is larger than its reshuffled counterpart. Moreover, we observe that while the singular values for the reshuffled correlation tensor remain approximately constant, those for the empirical correlation tensor exhibit non-trivial variation over the investigated time period.
Up to this point, our focus has primarily centered on the characteristics of the singular values. Moving forward, our attention turns to the singular vectors. We provide a comparison of the distribution of the components of the singular vectors for the empirical correlation tensor with that of the Gaussian random correlation tensor. We mainly focus on the largest left singular vectors \(L_{i1}^{\alpha,\beta}\) and the largest right singular vectors \(R_{i1}^{\alpha,\beta}\) to identify nodes \(i\) that play a key role in the transaction networks. The comparison of the distributions of the singular vectors' components between the Gaussian random correlation tensor and the empirical correlation tensor is shown in figure 8. We can observe differences in the nature of the peaks in the distributions. The distributions of both the left and right largest singular vector components for the random Gaussian correlation tensor follow the Gaussian distribution. For the empirical correlation tensor they are bimodal and thus far from Gaussian in nature.
Nodes with larger values of \(L_{i1}^{\alpha,\beta}\) hold more significance in the correlation tensor. We have observed that the distribution of \(L_{i1}^{\alpha,\beta}\) falls within the range of \((-0.1,0.1)\). To determine which node indices are overrepresented among the large values
Figure 8: The comparison of the distribution of the components of (a) left singular vectors \(L_{i1}^{\alpha,\beta}\) and (b) right singular vectors \(R_{i1}^{\alpha,\beta}\) between the Gaussian random correlation tensor and the empirical correlation tensor. Here \(i\in[1,2,3,\ldots,N]\), and \(\alpha,\beta\in[1,2,3,\ldots,D]\). The distributions of the components of \(L_{i1}^{\alpha,\beta}\) and \(R_{i1}^{\alpha,\beta}\) fit nicely with a normal distribution for the Gaussian random correlation tensor. The mean and standard deviation of the fitted normal distributions are taken as \(6.0\times 10^{-5}\), \(0.036\) for \(L_{i1}^{\alpha,\beta}\) and \(5.6\times 10^{-5}\), \(0.036\) for \(R_{i1}^{\alpha,\beta}\), respectively. The empirical correlation tensor is taken for the week April \(5-11\), 2021, in the bubble period.
of \(L_{i1}^{\alpha,\beta}\), we have set a threshold of 0.05. Any value of \(|L_{i1}^{\alpha,\beta}|\) exceeding this threshold is considered significant. We then calculate the total count, denoted by \(N_{c}\), of \(L_{i1}^{\alpha,\beta}\) values that surpass 0.05. The expected frequency of occurrence for a particular node index \(j\) within \(N_{c}\) is evaluated by dividing \(N_{c}\) by the total number of nodes, denoted by \(N\). We consider a node index to be overrepresented in \(N_{c}\) if its frequency surpasses \((N_{c}/N+10)\). This threshold is deliberately set slightly higher than the expected frequency by random chance \(N_{c}/N\). Similarly, we identify the node indices that are overrepresented in the range \(L_{i1}^{\alpha,\beta}<-0.05\). For the week April 5-11, 2021 (in the bubble period), our analysis reveals that there are 112 node indices that are overrepresented for \(L_{i1}^{\alpha,\beta}>0.05\), while 189 node indices are overrepresented for \(L_{i1}^{\alpha,\beta}<-0.05\). Combining these two sets, we find a total of 268 unique node indices that are considered important based on our criteria. To gain further insights, we compare the total transaction volume, mean inflow and outflow of XRP for these 268 important nodes (referred to as the "driver set") with the remaining regular nodes, which amount to 485 in total (753 nodes minus the 268 driver set nodes). This comparison is illustrated in figure 9. In figure 9 (a), we observe that the total transaction volume between the driver set of nodes increases during the bubble period; however, such a change in transaction volume is not detected for the remaining set of nodes. Notably, the mean inflow and outflow exhibit similar variations for both sets of nodes. However, what distinguishes the driver set is a noticeable jump in the mean inflow and outflow during the bubble period, suggesting a distinct behavior among these nodes [figure 9 (b)]. We also observe two sudden peaks in the mean inflow of the remaining set of nodes during the weeks March 16 - 22, 2020 and September 28 - October 04, 2020, respectively, due to a large inflow \((\sim 10^{11}\) XRP\()\) to a distinct wallet. Figure 9 (c) shows the temporal variation of the total number of nodes of these two sets in the weekly transaction networks.
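A sketch of the driver-set selection criterion described above is given below, assuming the components \(L_{i1}^{\alpha,\beta}\) are available as an array of shape \((N,D,D)\).

```python
import numpy as np

def driver_node_indices(L1, threshold=0.05, margin=10):
    """Driver-set selection described in the text: node indices that are
    over-represented among components of the largest left singular vectors
    exceeding +threshold or falling below -threshold.

    L1 : array of shape (N, D, D) holding L_{i1}^{alpha,beta}.
    """
    N = L1.shape[0]
    driver = set()
    for sign in (+1, -1):
        mask = (sign * L1) > threshold               # components beyond +/- 0.05
        Nc = mask.sum()                              # total count of such components
        counts = mask.reshape(N, -1).sum(axis=1)     # occurrences per node index
        expected = Nc / N                            # frequency expected by chance
        driver |= set(np.where(counts > expected + margin)[0])
    return sorted(driver)

# Toy example: N = 50 nodes, D = 8 embedding dimensions
L1 = np.random.default_rng(2).normal(0.0, 0.04, size=(50, 8, 8))
print(driver_node_indices(L1))
```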
We conducted a similar analysis for the largest right singular vector, denoted as \(R_{i1}^{\alpha,\beta}\), which yielded comparable results. For the bubble period of April 5-11, 2021, we identified 159 node indices that were overrepresented for \(R_{i1}^{\alpha,\beta}>0.05\), while 167 node indices were overrepresented for \(R_{i1}^{\alpha,\beta}<-0.05\). Combining these two sets, we found a total of 247 unique node indices that met our criteria.
To gain further insights, we compare the total transaction volume, the mean inflow and outflow of XRP for these 247 driver set nodes with the remaining regular nodes, which amounted to 506 in total (753 nodes minus the 247 driver set nodes). This comparison is depicted in figure 10.
Taking into account both the left and right largest singular vectors, we identified a total of 313 unique node indices in the driver set. These findings suggest that these particular nodes exhibit noteworthy behavior and warrant closer examination in the context of our analysis.
Figure 9: Comparative analysis of XRP transaction volumes, flows, and node numbers between the driver set of nodes and the remaining regular set of nodes for the largest left singular vectors, \(L_{i1}^{\alpha,\beta}\) (\(t=\text{April }5-11,2021\)), over the period \(CG\). (a) Comparison of the weekly XRP transaction volumes (in millions) between the driver set of nodes (red) and the remaining regular set of nodes (green). (b) Comparison of the mean inflow and outflow of XRP (in millions) between the driver set of nodes (black and red) and the remaining regular set of nodes (green and purple). (c) Number of nodes present in the weekly networks for the driver set of nodes (black circles) and the remaining regular set of nodes (green triangles).
## 3 Conclusion
The analysis of time series data by the cross-correlation matrix [14, 15] provides useful information. In the cross-correlation method, the correlation is measured for different time-dependent variables, such as stock price data, foreign exchange rate data or even medical data. However, in this study, we have used snapshots of weekly weighted directed XRP transaction networks. To measure the correlations between the regular nodes, we have considered their embedded vectors. Correlations between the components of the regular nodes are represented as the correlation tensor. A double SVD has been used to obtain the singular values of the correlation tensor.
We have studied the temporal variation of the correlation between the XRP price and the correlation tensor spectra of the transaction networks. It is observed that the dependence between the XRP price and the largest singular value of the correlation tensor changes with the market situation. For the non-bubble period, we found that there is no significant correlation between the XRP price and the largest singular value. In stark contrast, for the bubble period, we observe a strong anti-correlation between the XRP price and the largest singular value. Moreover, the significance of the empirical singular values is shown by a comparison with the singular values of the reshuffled correlation tensor. We have also provided a theoretical expression for the singular values of the Gaussian random correlation tensor using random matrix theory. We also showed how the distributions of the singular vector components deviate from the normal distribution followed in the case of the Gaussian random correlation tensor. From the singular vectors, we identified a small subset of nodes that drive the XRP market during the bubble period.
|
2302.14592 | Noise-assisted digital quantum simulation of open systems | Quantum systems are inherently open and susceptible to environmental noise,
which can have both detrimental and beneficial effects on their dynamics. This
phenomenon has been observed in bio-molecular systems, where noise enables
novel functionalities, making the simulation of their dynamics a crucial target
for digital and analog quantum simulation. Nevertheless, the computational
capabilities of current quantum devices are often limited due to their inherent
noise. In this work, we present a novel approach that capitalizes on the
intrinsic noise of quantum devices to reduce the computational resources
required for simulating open quantum systems. Our approach combines quantum
noise characterization methods with quantum error mitigation techniques,
enabling us to manipulate and control the intrinsic noise in a quantum circuit.
Specifically, we selectively enhance or reduce decoherence rates in the quantum
circuit to achieve the desired simulation of open system dynamics. We provide a
detailed description of our methods and report on the results of noise
characterization and quantum error mitigation experiments conducted on both
real and emulated IBM Quantum computers. Additionally, we estimate the
experimental resource requirements for our techniques. Our approach holds the
potential to unlock new simulation techniques in Noisy Intermediate-Scale
Quantum (NISQ) devices, harnessing their intrinsic noise to enhance quantum
computations. | José D. Guimarães, James Lim, Mikhail I. Vasilevskiy, Susana F. Huelga, Martin B. Plenio | 2023-02-28T14:21:43Z | http://arxiv.org/abs/2302.14592v3 | # Noise-Assisted Digital Quantum Simulation of Open Systems
###### Abstract
Quantum systems are inherently open and subject to environmental noise, which can have both detrimental and beneficial effects on their dynamics. In particular, noise has been observed to enable novel functionalities in bio-molecular systems, making the simulation of their dynamics an important target for digital and analog quantum simulation. However, current quantum devices are typically noisy, limiting their computational capabilities. In this work, we propose a novel approach that leverages the intrinsic noise of a quantum device to reduce the quantum computational resources required for simulating open quantum systems. We achieve this by combining quantum noise characterization methods with quantum error mitigation techniques, which allow us to transform and control the intrinsic noise in a quantum circuit. Specifically, we selectively enhance or reduce decoherence rates in the quantum circuit to achieve the desired simulation of open system dynamics. We describe our methods in detail and report on the results of noise characterization and quantum error mitigation on real and emulated IBM Quantum computers. We also provide estimates of the experimental resource requirements for our techniques. We believe that this approach can pave the way for new simulation techniques in Noisy Intermediate-Scale Quantum (NISQ) devices, where their intrinsic noise can be harnessed to assist quantum computations.
## I Introduction
The dynamics of a closed quantum system in complete isolation from its surroundings is governed by the Schrodinger equation of the system degrees of freedom only and, due to the properties of the Hilbert space description of quantum many particle systems, displays a level of complexity that renders its simulation on a classical computer inefficient. Thus, the founding fathers of the field of quantum computation posited that a programmable but otherwise closed quantum device, a quantum computer, should be able to simulate this dynamics by harnessing the increased complexity of controlled quantum systems to our advantage [1; 2]. Thanks to the sustained research and development effort over the last 30 years we are now reaching a situation where the construction of ever more complex quantum information processors is becoming a reality. However, in practice interactions with uncontrolled environmental degrees of freedom, a.k.a. noise, are unavoidable and render present day quantum information processors unsuitable in principle for the efficient simulation of the pure state dynamics of isolated quantum systems. In fact, they are so noisy that even quantum error correction with an arbitrary overhead in physical resources would not yet be able to remove the effect of this noise.
Under the influence of intense noise, classical correlations tend to prevail, allowing for an accurate and efficient classical description. But for a moderate level of noise, such as that present in today's quantum information processors, the situation is different because the full density matrix formalism needs to be adopted to achieve an accurate description of the open quantum system. While the effect of some environments may be captured accurately by Markovian master equations [3; 4], the situation is even more challenging for complex environments that display temporal correlations as now environmental degrees of freedom need to be accounted for in some detail making the full system extremely large and challenging to simulate [5; 6; 7; 8; 9; 10; 11].
This added complexity suggests that under suitable conditions and control, environmental noise on quantum systems may confer additional benefits, thus adding motivation to the development of methods to simulate the properties of noisy quantum systems. The effect of environmental degrees of freedom on the static and dynamical properties of a quantum system is especially important in cases, where the time and energy scales involved are likely to make interactions between the system and the surrounding environment a key actor in their own right in the physics at play. Indeed, it was recognised early on in the development of quantum information processing that noise and dissipation may serve as a resource for robust state preparation [12; 13]. More recently, the recognition that quantum dynamics may play an important role in certain processes of life [14] led to the discovery that the interplay of coherent quantum dynamics and environmental noise has the potential to impart fundamental advantages on processes of life [15; 16; 17; 18]. The electronic properties of bio-molecular complexes are exceedingly difficult to compute and it is well-recognized that such quantum chemistry challenges represent a promising application for quantum computers [19]. However, these algorithms typically treat closed static systems while capturing the properties of environmentally assisted quantum dynamics requires the extension of these methods to open systems dynamics on a quantum information processor. This implies considerable, potentially crippling, overheads when pursued in the standard approaches using fully error corrected and thus noise-free devices.
These considerations raise the natural question whether it might be advantageous to subject a quan
tum information processor to carefully controlled internal noise sources in order to increase its efficiency in simulating a desired open system dynamics. Indeed, very first steps in this direction have been taken in analog quantum simulators [20] where added noise increases the efficiency of the quantum simulation of open system dynamics compared to standard approaches [21]. Nevertheless, until now noise in the field of digital quantum computation and simulation has been regarded as _detrimental_, to be fought by quantum error correction, and the question whether it can be leveraged for a particular quantum application has not been considered nor have methods been developed to achieve it. Here we address this challenge by combining the capability of a Noisy Intermediate-Scale Quantum (NISQ) computer to execute noisy quantum gate sequences with methods from error mitigation and controlled addition of errors to reshape the noise intrinsic to the NISQ device in such a manner that it models accurately the desired environmental noise of an open quantum system model of practical interest (see Fig. 1 for an overview of our scheme).
As a major benefit of this approach, the unavoidable decoherence of the quantum bits and quantum gates in a NISQ device is used to replace some of the quantum computational resources, i.e. qubits and two-qubit quantum gates, that would otherwise be necessary to simulate faithfully the effect of noise in the desired open quantum system model. Thus, the overall resource requirements for achieving the desired open system simulation are reduced and the capabilities of the same device for executing more complex applications are extended. We exemplify our approach of turning a bug into a feature, with the explicitly worked out example of the _simulation of the time-evolution of open quantum systems_. We show that it is possible to leverage noise processes, intrinsic and added noise, in a current NISQ device when the action of the desired environment to be simulated onto the open system can be approximated by a _Markovian stochastic Pauli noise channel_[3], but we stress that our methods are not restricted to this specific case.
This paper is organized as follows. In Sec. II, we discuss the implementation of the unitary time-evolution operator on a quantum circuit via the Trotter-Suzuki product formula and the intrinsic noise of quantum devices. We briefly explain how the intrinsic noise can be transformed to stochastic Pauli noise channels by using Randomized Compiling [22; 23] and how the corresponding error probabilities can be estimated by Cycle Benchmarking [24] and Error Reconstruction [25] techniques. We demonstrate that the Pauli noise channels with estimated error probabilities enable one to simulate open quantum system dynamics under a Lindblad-type noise on NISQ devices in such a way that only the open-system degrees of freedom are encoded in the qubits. We show that the open-system dynamics computed by NISQ devices can be well-described by classical solutions of the Lindblad equation constructed based on the estimated error probabilities of the Pauli noise channels. In Sec. III, we discuss the conventional Probabilistic Error Cancellation technique [26; 27], developed to fully mitigate noise in a quantum circuit, and demonstrate that it can also be employed to partially mitigate the error probabilities of stochastic Pauli noise channels in a controlled manner. We show that our approach can be used to implement various Lindblad models with desired decoherence rates on NISQ devices.
## II Encoding of the time evolution in a quantum circuit
For simulations of closed system dynamics, various methods have been developed, such as hybrid classical-quantum variational approaches [28; 29; 30; 31], quantum tensor networks [32; 33; 34], and quantum signal processing techniques [35; 36]. In this work, we consider the Trotter-Suzuki product formula [37; 38] suitable for closed-system simulations on NISQ devices [39; 40]. We demonstrate that it can also be used for efficient quantum simulations of open-system dynamics on a noisy quantum device.
### Trotter-Suzuki product formula
The dynamics of a quantum system is governed by a Hamiltonian \(\hat{H}\). We consider a Hamiltonian decomposed into \(N\) components represented by the tensor products of Pauli matrices \(\hat{X},\hat{Y},\hat{Z}\) of multiple qubits
\[\hat{H}=\sum_{j=1}^{N}\hat{H}_{j},\quad\hat{H}_{j}=\alpha_{j}\hat{P}_{j}, \tag{1}\]
where \(\alpha_{j}\in\mathbb{R}\) and \(\hat{P}_{j}=\{\hat{X},\hat{Y},\hat{Z}\}^{\otimes n_{j}}\) is a Pauli string acting on \(n_{j}\) qubits. As an example, we consider the Hamiltonian of a linear chain consisting of \(n\) molecules
\[\hat{H}= -\sum_{m=1}^{n}\frac{E_{m}}{2}\hat{Z}_{m} \tag{2}\] \[+\sum_{m=1}^{n-1}\frac{J_{m,m+1}}{2}(\hat{X}_{m}\hat{X}_{m+1}+\hat {Y}_{m}\hat{Y}_{m+1}),\]
where each qubit describes a two-level molecule consisting of ground and excited states, encoded by \(|0\rangle\) and \(|1\rangle\), respectively, with an energy-gap of \(E_{m}\). The inter-qubit couplings \(J_{m,m+1}\) describe a coherent excitation transfer between nearest-neighbour molecules. This model has been widely used to study energy transfer dynamics in various molecular systems, such as photosynthetic complexes and organic solar cells, but a more general form of Hamiltonian can be considered in a noise-assisted digital quantum simulation, including spin and fermionic models, such as the Heisenberg [41; 42] and Fermi-Hubbard models [39; 43], respectively.
The product formula describing the time evolution of a quantum system over time \(t\) is formally expressed as
\[e^{-i\hat{H}t}\approx\prod_{d=1}^{D}\hat{U}_{k}(\Delta t), \tag{3}\]
where \(\hat{U}_{k}(\Delta t)\) represents the unitary time evolution over a finite Trotter time-step \(\Delta t\), described by the \(k\)-th order Trotter-Suzuki product formula; the two lowest-order examples are given by
\[\hat{U}_{1}(\Delta t) =\prod_{j=1}^{N}e^{-i\hat{H}_{j}\Delta t}, \tag{4}\] \[\hat{U}_{2}(\Delta t) =\left[\prod_{j=1}^{N}e^{-i\hat{H}_{j}\Delta t/2}\right]\left[ \prod_{j^{\prime}=N}^{1}e^{-i\hat{H}_{j^{\prime}}\Delta t/2}\right]. \tag{5}\]
The total time evolution is then described by \(D=t/\Delta t\) Trotter layers where \(\hat{U}_{k}(\Delta t)\) is considered in each layer.
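As an illustration, a minimal Qiskit sketch of a single first-order Trotter layer for the molecular-chain Hamiltonian in Eq. (2) is shown below; the Trotter time-step and parameter values are placeholders, and the decomposition into RZ, RXX and RYY rotations is one possible choice rather than the exact compilation used on the IBMQ devices.

```python
from qiskit import QuantumCircuit

def first_order_trotter_layer(E, J, dt):
    """One first-order Trotter layer U_1(dt) for the chain Hamiltonian of
    Eq. (2), using RZ, RXX and RYY rotations.

    E : on-site energies E_m (length n); J : couplings J_{m,m+1} (length n-1).
    """
    n = len(E)
    qc = QuantumCircuit(n)
    # exp(+i E_m dt Z_m / 2) = RZ(-E_m dt), since RZ(theta) = exp(-i theta Z / 2)
    for m in range(n):
        qc.rz(-E[m] * dt, m)
    # exp(-i (J/2)(XX + YY) dt) = RXX(J dt) RYY(J dt); XX and YY commute
    for m in range(n - 1):
        qc.rxx(J[m] * dt, m, m + 1)
        qc.ryy(J[m] * dt, m, m + 1)
    return qc

# Two-qubit example with E_m = 122 - 0.5 m and J = 0.5 (dimensionless values
# from Fig. 2); dt = 0.05 is an assumed placeholder.
layer = first_order_trotter_layer(E=[121.5, 121.0], J=[0.5], dt=0.05)
print(layer.draw())
```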
An estimate for the number of implemented Trotter layers \(D=t/\Delta t\) required to obtain a Trotter decomposition error \(||\hat{U}_{k}^{D}(\Delta t)-e^{-i\hat{H}t}||=O(\varepsilon_{\text{Tot}})\) can be given as follows [37],
\[D=O\left(\frac{\alpha_{\text{comm}}^{1/k}t^{1+1/k}}{\varepsilon_{\text{Tot}} ^{1/k}}\right) \tag{6}\]
for a \(k\)th-order Trotter-Suzuki product formula, where,
\[\alpha_{\text{comm}}=\sum_{l_{1},l_{2},\ldots,l_{k+1}=1}^{N}||[\hat{H}_{l_{k +1}},\ldots[\hat{H}_{l_{2}},\hat{H}_{l_{1}}]\ldots]||,\]
and \(||\cdot||\) denotes the spectral norm. \(\hat{H}_{l_{k}}\) denotes one of the \(N\) Pauli string terms into which the Hamiltonian is decomposed (see Eq. (1)).
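A simple numeric sketch of Eq. (6) is given below, with the implicit \(O(1)\) constant set to one and illustrative input values; it is meant only as an order-of-magnitude estimate of the required number of Trotter layers.

```python
def trotter_layer_estimate(alpha_comm, t, eps_trot, k):
    """Order-of-magnitude estimate of Eq. (6) for the number of Trotter layers
    D, with the implicit O(1) constant set to one."""
    return alpha_comm ** (1.0 / k) * t ** (1.0 + 1.0 / k) / eps_trot ** (1.0 / k)

# Illustrative numbers: second-order formula (k = 2), total time t = 1,
# commutator norm alpha_comm = 10, target Trotter error 1e-2.
print(trotter_layer_estimate(alpha_comm=10.0, t=1.0, eps_trot=1e-2, k=2))
```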
Figure 1: Overview of the noise-assisted technique for digital quantum simulation of open system dynamics proposed in this work. The \(k\)-th order Trotter-Suzuki decomposition of a time-evolution operator \(\exp(-i\hat{H}t)\) is implemented on a quantum circuit for a given open-system Hamiltonian \(\hat{H}\), leading to \(D\) Trotter layers \(\hat{U}(\Delta t)\) with a Trotter time-step \(\Delta t=t/D\). The intrinsic noise of quantum devices is transformed to stochastic Pauli noise channels \(\mathcal{E}\) by using Randomized Compiling. In the noise characterization step, the Pauli error probabilities \(\epsilon_{k}\) are estimated once for a noisy Trotter layer \(\mathcal{U}(\Delta t)\) by using Cycle Benchmarking and Error Reconstruction techniques. In the digital quantum simulation of open system dynamics implemented by \(D\) Trotter layers, the stochastic Pauli noise induces decoherence effects on the open system dynamics. In the noise control step, the stochastic Pauli noise is partially mitigated in a controlled manner by using a Probabilistic Error Cancellation (PEC) technique, leading to reduced error probabilities \(\epsilon_{k}^{\text{(mit)}}=\epsilon_{k}(1-r_{k})\) with mitigation factors \(r_{k}\in[0,1]\). With an optimal choice of the Trotter time-step \(\Delta t\), this enables one to implement a Lindblad equation with target decoherence rates \(\gamma_{k}=\epsilon_{k}^{\text{(mit)}}/\Delta t\) on NISQ devices.
We note that variants of Trotter-Suzuki product formulae can also be considered in our approach to reduce simulation error, such as symmetry-protected formulae [42], random formulae [44], and implementation of a specific Trotter sequence of Hamiltonian terms that preserves the locality of the simulated system [37].
### Noise characterization
On NISQ devices, the implementation of each unitary gate of a Trotterized time-evolution operator, such as \(e^{-i\hat{H}_{j}\Delta t}\) in Eq. (4), suffers from noise and dissipation. As a result, the time evolution of a quantum system encoded in the qubits does not only depend on the Hamiltonian implemented on a quantum circuit, but also on the parameters that characterize the intrinsic noise of the quantum device. When its noise can be well-described by a Markovian theory, as demonstrated in previous studies on superconducting quantum computing platforms [45; 46], the dynamics of the density matrix \(\hat{\rho}(t)\) of the open system encoded in the qubits is described by a Markovian quantum master equation in the form
\[\frac{d\hat{\rho}(t)}{dt}=\mathcal{L}[\hat{\rho}(t)]=-i[\hat{H},\hat{\rho}(t)] +\mathcal{D}_{\rm intrinsic}[\hat{\rho}(t)], \tag{7}\]
where \(\mathcal{D}_{\rm intrinsic}[\hat{\rho}(t)]\) represents a Lindblad dissipator describing the intrinsic noise of NISQ devices.
In this work, we aim to implement the decoherence of open quantum systems modelled by a Lindblad equation by harnessing the intrinsic noise of NISQ devices as a resource, rather than encoding environmental degrees of freedom on qubits. Therefore, to simulate open-system dynamics under various decoherence models of interest, we need to characterize the noise channels present in NISQ devices and control the corresponding noise rates. To that end, we apply a noise characterization technique to the noisy implementation \(\mathcal{U}\) of the Trotter layer on a quantum circuit (we omitted the \(k\)-th order and \(\Delta t\) dependence of \(\mathcal{U}_{k}(\Delta t)\) for simplicity), prior to the implementation of the digital quantum simulation of open-system dynamics. We employ the Randomized Compiling technique [22; 23] to transform the intrinsic coherent noise in a NISQ device, described by the Kraus operators in the form of \(\sum_{j\neq k}\epsilon_{jk}\hat{P}_{j}\hat{\rho}\hat{P}_{k}\), to stochastic Pauli noise channels \(\sum_{k}\epsilon_{k}\hat{P}_{k}\hat{\rho}\hat{P}_{k}\), where \(\hat{P}_{k}\) are Pauli strings, including identity operator, and \(\epsilon_{k}\) are the corresponding error probabilities satisfying \(\sum_{k}\epsilon_{k}=1\) and \(\epsilon_{k}\geq 0\). Randomized Compiling is implemented in both noise characterization and digital quantum simulations of open-system dynamics on real IBMQ devices, as explained below.
We employ the Cycle Benchmarking [24], one of the Randomized Benchmarking methods [47], together with the Error Reconstruction technique [25] to estimate the error probabilities \(\epsilon_{k}\) of quantum circuits on NISQ devices, as recently demonstrated in Ref. [23; 45; 46]. This enables the characterization of \(K\)-qubit Pauli noise channels acting on \(K\) nearest-neighbour qubits. For instance, when \(K=1\), the estimated stochastic Pauli noise channels acting on a single qubit \(m\) are expressed as
\[\mathcal{E}_{m}^{(1)}(\hat{\rho})=\epsilon_{0}\hat{\rho}+\epsilon_{X}\hat{X}_ {m}\hat{\rho}\hat{X}_{m}+\epsilon_{Y}\hat{Y}_{m}\hat{\rho}\hat{Y}_{m}+ \epsilon_{Z}\hat{Z}_{m}\hat{\rho}\hat{Z}_{m}. \tag{8}\]
For \(K=2\), the stochastic Pauli noise channels acting on two nearest-neighbour qubits \(m\) and \(m+1\) consist of the single-qubit noise in Eq. (8), independently acting on each qubit, and the correlated two-qubit noise acting on two qubits at the same time
\[\mathcal{E}_{m,m+1}^{(2)}(\hat{\rho}) =\mathcal{E}_{m}^{(1)}(\hat{\rho})+\mathcal{E}_{m+1}^{(1)}(\hat{ \rho}) \tag{9}\] \[\quad+\epsilon_{XX}\hat{X}_{m}\hat{X}_{m+1}\hat{\rho}\hat{X}_{m} \hat{X}_{m+1},\] \[\quad+\epsilon_{XY}\hat{X}_{m}\hat{Y}_{m+1}\hat{\rho}\hat{X}_{m} \hat{Y}_{m+1}+\cdots,\]
where the latter takes into account all possible combinations of the Pauli operators of qubits \(m\) and \(m+1\). Formally, the \(K\)-qubit noise channels are expressed as
\[\mathcal{E}^{(K)}(\rho)=\sum_{k=0}^{4^{K}-1}\epsilon_{k}\hat{P}_{k}\hat{\rho} \hat{P}_{k}, \tag{10}\]
with \(\hat{P}_{k}\) denoting Pauli strings. For a quantum circuit consisting of \(n\) qubits, when \(K=n\), all possible Pauli strings are considered in the noise characterization. When \(K<n\), the \(K\)-qubit Pauli noise channels can be characterized independently for each subgroup consisting of \(K\) nearest-neighbour qubits.
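A minimal NumPy sketch of the \(K\)-qubit stochastic Pauli channel of Eq. (10) acting on a density matrix is given below; the error probabilities used in the example are illustrative placeholders rather than values estimated by Cycle Benchmarking.

```python
import numpy as np
from functools import reduce

PAULIS = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
          "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.array([[1, 0], [0, -1]])}

def pauli_string(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZ' -> X (x) Z."""
    return reduce(np.kron, [PAULIS[c] for c in label])

def stochastic_pauli_channel(rho, error_probs):
    """K-qubit channel of Eq. (10): rho -> sum_k eps_k P_k rho P_k.
    `error_probs` maps Pauli-string labels to probabilities summing to one."""
    out = np.zeros_like(rho, dtype=complex)
    for label, eps in error_probs.items():
        P = pauli_string(label)
        out += eps * (P @ rho @ P)   # Pauli strings are Hermitian
    return out

# Example: two-qubit channel dominated by the identity (illustrative numbers)
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0                                    # |00><00|
probs = {"II": 0.97, "XI": 0.01, "IX": 0.01, "ZZ": 0.01}
print(np.real(np.diag(stochastic_pauli_channel(rho, probs))))
```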
We remark that the Cycle Benchmarking technique can be employed to characterize the noise acting on a quantum circuit \(\hat{U}\) satisfying \(\hat{U}^{m}=\hat{I}\) for several integer values \(m\). The Trotter layer \(\hat{U}_{k}(\Delta t)\) constructed based on an open-system Hamiltonian, however, does not satisfy this condition in general. In this work, we modified the parameters of single-qubit gates \(\hat{R}_{Z}(\theta)=e^{-i\theta\hat{Z}/2}\) of \(\hat{U}_{k}(\Delta t)\), specifically \(\theta=J_{m,m+1}\Delta t\) to \(\theta=\pi\) and \(\theta=-E_{m}\Delta t\) to \(\theta=0\) (see Eq. (2)), while maintaining its two-qubit gate structure. This yields a Clifford circuit \(\hat{V}\), so that the Cycle-Benchmarking condition \(\hat{V}^{m}=\hat{I}\) is satisfied for every even integer \(m\). Since the degree of noise of two-qubit gates is approximately two orders of magnitude higher than that of single-qubit gates on superconducting quantum devices, this approach enables one to estimate the error probabilities \(\epsilon_{k}\) of stochastic Pauli noise acting on the original Trotter layer \(\hat{U}_{k}(\Delta t)\) in an accurate manner, as demonstrated below.
### Digital quantum simulation of Lindblad model
To demonstrate that the estimated error probabilities \(\epsilon_{k}\) of the stochastic Pauli noise channels can be used to investigate open-system dynamics under a Lindblad-type noise, we consider a Markovian quantum master equation in the form
\[\frac{d\hat{\rho}(t)}{dt}=\mathcal{L}[\hat{\rho}(t)]=-i[\hat{H},\hat{\rho}(t)] +\mathcal{D}_{\rm stochastic}[\hat{\rho}(t)], \tag{11}\]
where the Lindblad dissipator \(\mathcal{D}_{\text{stochastic}}[\hat{\rho}(t)]\) is modelled by
\[\mathcal{D}_{\text{stochastic}}[\hat{\rho}(t)] =\sum_{k=0}^{4^{K}-1}\gamma_{k}\left(\hat{P}_{k}\hat{\rho}(t)\hat{P }_{k}-\hat{\rho}(t)\right), \tag{12}\] \[\gamma_{k} =\epsilon_{k}/\Delta t. \tag{13}\]
Here the decoherence rates \(\gamma_{k}\) are defined as a function of the error probabilities \(\epsilon_{k}\) and the Trotter time-step \(\Delta t\).
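For reference, a classical integration of Eq. (11)-(13) can be sketched in a few lines of numpy. The snippet below is our own illustration (not the authors' solver); the Hamiltonian, the noise terms and the error probabilities are placeholders rather than values from this work.

```python
import numpy as np

def lindblad_rhs(rho, H, gammas, paulis):
    """d rho / dt = -i [H, rho] + sum_k gamma_k (P_k rho P_k - rho), Eq. (11)-(12)."""
    drho = -1j * (H @ rho - rho @ H)
    for g, P in zip(gammas, paulis):
        drho += g * (P @ rho @ P - rho)
    return drho

def rk4_step(rho, dt, H, gammas, paulis):
    """One fourth-order Runge-Kutta step of the Lindblad equation."""
    k1 = lindblad_rhs(rho, H, gammas, paulis)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, gammas, paulis)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, gammas, paulis)
    k4 = lindblad_rhs(rho + dt * k3, H, gammas, paulis)
    return rho + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Two-qubit example with a placeholder Hamiltonian and a few Pauli noise terms.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
paulis = [np.kron(X, I2), np.kron(Z, I2), np.kron(Z, Z)]
eps = np.array([0.004, 0.006, 0.003])   # hypothetical characterized probabilities
dt = 0.1
gammas = eps / dt                       # Eq. (13)
H = 0.5 * np.kron(X, X)                 # placeholder for the Hamiltonian of Eq. (2)
rho = np.zeros((4, 4), dtype=complex)
rho[2, 2] = 1.0                         # initial state |1, 0>
for _ in range(100):
    rho = rk4_step(rho, dt, H, gammas, paulis)
populations = np.real(np.diag(rho))     # populations after t = 10
```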
In the following, we demonstrate by means of an example that the dynamics of a reduced system density matrix \(\hat{\rho}_{c}(t)\) simulated by using Eq. (11)-(13) on classical computers is well-matched to its counterpart \(\hat{\rho}_{q}(t)\) obtained from real and emulated quantum computers, where the system Hamiltonian is implemented by using the first-order Trotter-Suzuki formula in Eq. (4) and the noise characteristics have been obtained as described above. We note that the Randomized Compiling is applied to every Trotter layer in our approach, so that the stochastic Pauli noise channels, identified by the noise characterization scheme presented in Sec. II.2, are maintained during the digital quantum simulation of open-system dynamics (see Appendix A for more details).
Figure 2: (a) Error probabilities \(\epsilon_{k}\) of stochastic Pauli noise channels estimated for a quantum circuit consisting of two qubits (\(n=2\)) are shown where the real device _ibmq jakarta_ with a circuit structure shown in Fig. 3(a) was employed. The error probabilities of single-qubit noise channels, \(\hat{X}_{m}\), \(\hat{Y}_{m}\), \(\hat{Z}_{m}\), and two-qubit dephasing \(\hat{Z}_{1}\hat{Z}_{2}\) are shown in blue, while the other two-qubit noise channels are displayed in cyan. (b) Population dynamics of qubits computed by the real device _ibmq jakarta_ are shown in dots, while those obtained by classically solving a Lindblad equation with decoherence rates determined by the measured error probabilities in (a) are shown in crosses (see the main text). The negative error probabilities found within error bars (\(\hat{Z}_{1}\hat{Y}_{2}\)) are taken to be zero, and the Lindblad equation was solved by using the first-order Trotter-Suzuki product formula. (c) Real and imaginary parts of inter-qubit coherence dynamics computed by the emulated quantum computer _ibmq jakarta_ are compared with classical solutions of the Lindblad equation obtained by a standard RK4 solver (see solid lines). (d) For a linear chain consisting of four qubits (\(n=4\)), the population dynamics of the reduced density matrix \(\text{Tr}_{1,4}[\hat{\rho}(t)]\) of the second and third qubits are displayed, where the results obtained by the real device _ibmq lagos_ are shown in dots and the classical solutions of the Lindblad equation are shown in crosses. In (a,b) and (d), \(R=30\) and \(R=22\) randomized compiled circuits were used, respectively (see Appendix A). In all simulations, dimensionless parameters of open-system Hamiltonian in Eq. (2) are taken to be \(E_{m}=122-0.5m\) and \(J_{m,m+1}=0.5\), motivated by typical electronic parameters of photosynthetic pigment-protein complexes with site energies \(12200-50m\,\text{cm}^{-1}\) and inter-site electronic coupling strength \(50\,\text{cm}^{-1}\)[14]. The initial state is taken to be \(|1,0,\cdots,0\rangle\) where the first qubit is in \(|1\rangle\), while all the other qubits are in \(|0\rangle\).
In Fig. 2(a) and (b), we consider a quantum system encoded in two qubits (\(n=2\)) modelled by the Hamiltonian in Eq. (2). The structure of the implemented quantum circuit is shown in Fig. 3(a). For a real IBMQ computer, we estimated the error probabilities \(\epsilon_{k}\) of the full stochastic Pauli noise channel (\(K=2\)). As shown in Fig. 2(a), we found that the error probabilities \(\epsilon_{k}\) are not uniform with relatively higher values for \(\hat{P}_{k}\in\{\hat{X}_{1},\hat{Y}_{1},\hat{Z}_{1},\hat{Z}_{2},\hat{X}_{1}\hat {Y}_{2},\hat{Y}_{1}\hat{Z}_{2},\hat{Z}_{1}\hat{Z}_{2}\}\), hinting that qubit 1 is more noisy than qubit 2. As shown in Fig. 2(b), the population dynamics of the reduced system density matrices \(\langle i,j|\,\hat{\rho}_{q,c}(t)\,|i,j\rangle\) simulated by quantum and classical computers are well-matched for all possible values of \(i,j\in\{0,1\}\). It is found that when only single-qubit Pauli noise channels are characterized (\(K=1\)), the quantum and classical results are not matched (not shown here), which can be rationalized based on the fact that the error probabilities of two-qubit Pauli noise channels, such as \(\hat{P}_{k}\in\{\hat{X}_{1}\hat{Y}_{2},\hat{Y}_{1}\hat{Z}_{2},\hat{Z}_{1}\hat {Z}_{2}\}\), are not negligible. Fig. 2(c) shows that the inter-qubit coherence dynamics \(\langle 10|\,\hat{\rho}_{q,c}(t)\,|01\rangle\) are also well-matched, where the emulated noisy quantum device _ibmq jakarta_ was used instead of the real IBMQ device due to the long queue waiting time on the IBMQ platform. In Fig. 2(d), we consider a larger quantum system encoded in four qubits (\(n=4\)) with a circuit structure shown in Fig. 3(b). It is found that the reduced system dynamics \(\hat{\rho}_{q}(t)\) computed by the real IBMQ computer are well-matched to the classical solutions \(\hat{\rho}_{c}(t)\) of the Lindblad equation when also two-qubit stochastic Pauli noise channels are characterized (\(K=2\)). This implies that the Pauli noise channels acting on more than two qubits are negligible in our case due to the short depth of the Trotter layer structure shown in Fig. 3(b). We note that the first-order Trotter decomposition is considered in both quantum and classical simulations to investigate the accuracy of the noise characterization scheme considered in our work, independent of the Trotter decomposition error induced by a finite Trotter time-step \(\Delta t\). We performed all the simulations with a sufficiently small \(\Delta t\), for which classical solutions \(\hat{\rho}_{c}(t)\) can also be obtained by using a standard RK4-based solver. These results demonstrate that the intrinsic noise of NISQ devices can be transformed to the stochastic Pauli noise channels via Randomized Compiling, which can be used as a platform to simulate open-system dynamics under a Lindblad-type noise.
## III Digital control of decoherence in a quantum circuit
So far we have demonstrated that the intrinsic noise of quantum devices can be transformed to stochastic Pauli noise channels and the corresponding error probabilities \(\epsilon_{k}\) can be estimated, enabling one to implement a Lindblad equation on quantum computers with decoherence rates \(\gamma_{k}=\epsilon_{k}/\Delta t\) (see Eq. (11)-(13)). Now we show that the error probabilities of the Pauli noise channels can be controlled independently by using a partial Probabilistic Error Cancellation (PEC) technique [26; 27], leading to reduced error probabilities \(\epsilon_{k}(1-r_{k})\) with independent partial mitigation factors \(r_{k}\in[0,1]\). This approach makes it possible to implement an arbitrary set of target decoherence rates \(\Gamma_{k}\) on NISQ devices, so that one can investigate open-system dynamics described by a Lindblad equation of interest in the following form
\[\mathcal{D}^{(\text{controlled})}_{\text{stochastic}}[\hat{\rho}(t)] =\sum_{k=0}^{4^{K}-1}\Gamma_{k}\left(\hat{P}_{k}\hat{\rho}(t) \hat{P}_{k}-\hat{\rho}(t)\right), \tag{14}\] \[\Gamma_{k} =\epsilon_{k}(1-r_{k})/\Delta t. \tag{15}\]
We note that our method can be modified to implement more general Lindblad noise models beyond the stochastic Pauli noise channels in Eq. (14), such as amplitude damping noise as will be discussed later.
### Probabilistic Error Cancellation
PEC starts by identifying the noise channel \(\mathcal{E}\) acting on an ideal, noiseless circuit \(\mathcal{C}(\hat{\rho})=\hat{U}_{k}(\Delta t)\hat{\rho}\hat{U}_{k}^{\dagger} (\Delta t)\), for instance, describing the Hamiltonian dynamics of a quantum system over a Trotter time-step \(\Delta t\). As discussed in Sec. II.2, one can characterize the noise of a quantum circuit, yielding \(K\)-qubit stochastic Pauli noise channels, \(\mathcal{E}(\hat{\rho})=\sum_{k=0}^{4^{K}-1}\epsilon_{k}\mathcal{P}_{k}(\hat{ \rho})=\sum_{k=0}^{4^{K}-1}\epsilon_{k}\hat{P}_{k}\hat{\rho}\hat{P}_{k}\) with \(\hat{P}_{0}=\hat{I}^{\otimes K}\) denoting identity operator. To fully mitigate the noise, the conventional PEC has considered the
inverted noise channel \(\mathcal{E}^{-1}\) acting on the noisy quantum circuit, namely \(\mathcal{E}^{-1}\) applied to \(\mathcal{U}(\hat{\rho})=\mathcal{E}\cdot\mathcal{C}(\hat{\rho})\). Since \(\mathcal{E}^{-1}\) is not a completely positive (CP) map [26; 27], one cannot physically implement it, hence one cannot perfectly cancel the stochastic Pauli noise channels in \(\mathcal{U}\). However, when the error probabilities \(\epsilon_{k}\) are sufficiently low, the conventional PEC technique can almost fully mitigate the noise (up to first order in \(\epsilon=\sum_{k>0}\epsilon_{k}\)) as \(\mathcal{C}(\hat{\rho})\approx\mathcal{E}^{-1}\mathcal{U}(\hat{\rho})\) by implementing the non-physical inverted noise channel in a probabilistic manner.
When \(K\)-qubit Pauli noise channels act on a quantum circuit consisting of \(K\) qubits (\(n=K\)), the inverted noise channel is written as [26; 27]
\[\mathcal{E}^{-1}=\sum_{k=0}^{4^{K}-1}q_{k}\mathcal{P}_{k}=C_{\text{mit}}\sum_{ k=0}^{4^{K}-1}p_{k}^{\text{(PEC)}}\text{sign}(q_{k})\mathcal{P}_{k}, \tag{16}\]
where \(q_{0}=1+\sum_{k>0}\epsilon_{k}\), \(q_{k>0}=-\epsilon_{k}\), \(p_{k}^{\text{(PEC)}}=|q_{k}|/C_{\text{mit}}\) with \(C_{\text{mit}}=\sum_{k=0}|q_{k}|\) called the mitigation cost. In the probabilistic application of the non-CP map \(\mathcal{E}^{-1}\), one of the Pauli operators \(\hat{P}_{k}\) is randomly chosen based on the probabilities \(p_{k}^{\text{(PEC)}}\) and then applied to a Trotter layer. For a quantum circuit with \(D=t/\Delta t\) Trotter layers, the probabilistic non-CP map is applied \(D\) times with the Pauli operators \(\hat{P}_{k}\) independently sampled for each Trotter layer, as schematically shown in Fig. 4(a).
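The probabilistic implementation of Eq. (16) amounts to sampling one Pauli correction per Trotter layer and tracking the accumulated sign and cost. The sketch below illustrates this bookkeeping (our own example, not the authors' code); the error probabilities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pec_quasiprobabilities(eps):
    """Quasi-probabilities q_k of Eq. (16); eps[k] are the Pauli error
    probabilities for k > 0 (eps[0] is ignored and recomputed)."""
    q = -np.asarray(eps, dtype=float)
    q[0] = 1.0 + eps[1:].sum()
    c_mit = np.abs(q).sum()              # mitigation cost C_mit
    probs = np.abs(q) / c_mit            # sampling distribution p_k^(PEC)
    signs = np.sign(q)
    return probs, signs, c_mit

def sample_layer_corrections(probs, n_layers):
    """Draw one Pauli index per Trotter layer, independently."""
    return rng.choice(len(probs), size=n_layers, p=probs)

eps = np.array([0.0, 0.004, 0.006, 0.003])      # hypothetical eps_k for one qubit
probs, signs, c_mit = pec_quasiprobabilities(eps)
D = 20                                          # number of Trotter layers
ks = sample_layer_corrections(probs, D)
prefactor = c_mit**D * np.prod(signs[ks])       # multiplies the measured outcome
```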
PEC can also be applied to a quantum circuit consisting of \(n\) qubits where \(K\)-qubit Pauli noise channels with \(K<n\) act on several subgroups consisting of \(K\) qubits. Here the noise mitigation can be implemented by considering multiple inverted noise channels acting on different subgroups. Fig. 4(b) shows an example of \(n>K=2\) where the inverted noise channels act on every pair of nearest-neighbour qubits. We note that when different \(K\)-qubit Pauli noise channels act on the qubits in the intersection of different subgroups, one needs to adjust the mitigation probabilities \(p_{k}^{\text{(PEC)}}\) in such a way that the noise acting on the shared qubits is not cancelled multiple times.
In PEC, the outcome of an observable \(\hat{O}\) measured on a noise-mitigated quantum circuit is multiplied by a product of mitigation costs and other prefactors in Eq. (16), \(\prod_{d=1}^{D}\prod_{m}C_{\text{mit}}^{(m)}\text{sign}(q_{k}^{(m,d)})\), where \(m\) describes different subgroups of qubits under the action of \(K\)-qubit Pauli noise channels. The value of \(\text{sign}(q_{k}^{(m,d)})\) depends on which Pauli operator \(\hat{P}_{k}\) is randomly sampled in the \(d\)-th Trotter layer. The total mitigation cost of quantum simulation is defined as
\[C_{\text{tot}}=\prod_{d=1}^{D}\prod_{m}C_{\text{mit}}^{(m)}. \tag{17}\]
For a noise-mitigated density matrix \(\hat{\rho}_{\text{mit}}(t)\), the expectation value \(\text{Tr}[\hat{O}\hat{\rho}_{\text{mit}}(t)]\) of the observable \(\hat{O}\) is obtained by classically averaging the outcomes of the PEC scheme, requiring multiple copies of quantum circuits.
### Decoherence rate control scheme
So far we have discussed the conventional PEC that aims to fully mitigate the stochastic Pauli noise channels [26; 27]. Contrary to the previous studies on PEC, here we aim to partially cancel the noise, so that the controlled error probabilities of the Pauli noise channels can be used as a resource for open-system simulations. To that end, we consider \(q_{0}=1+\sum_{k>0}r_{k}\epsilon_{k}\) and \(q_{k>0}=-r_{k}\epsilon_{k}\) with \(r_{k}\in[0,1]\), renormalizing the probabilities of the PEC scheme, namely \(p_{k}^{\text{(PEC)}}=|q_{k}|/C_{\text{mit}}\) with \(C_{\text{mit}}=\sum_{k=0}|q_{k}|\). The corresponding _partially_ inverted noise channel enables one to reduce the Pauli error probabilities in a controlled manner, \(\epsilon_{k}\rightarrow(1-r_{k})\epsilon_{k}\). The reduced error probabilities make it possible to implement the Lindblad equation in Eq. (14) on NISQ devices with controlled decoherence rates \(\Gamma_{k}=\epsilon_{k}(1-r_{k})/\Delta t\).
For a given set of target decoherence rates \(\Gamma_{k}\), our scheme works as follows. Before our PEC scheme is applied to a quantum circuit, for a given Trotter time-step \(\Delta t\), the decoherence rates \(\gamma_{k}=\epsilon_{k}/\Delta t\) of the stochastic Pauli noise channels may satisfy \(\Gamma_{k}>\gamma_{k}\) for some \(k\). Since the error probabilities \(\epsilon_{k}\) of the Pauli noise channels can be decreased but not increased by our PEC scheme, namely \(\epsilon_{k}\rightarrow(1-r_{k})\epsilon_{k}\) with \(r_{k}\in[0,1]\), the decoherence rates \(\gamma_{k}\) implemented on the quantum device can be increased only by reducing the Trotter time-step \(\Delta t\). Hence we decrease \(\Delta t\) until \(\Gamma_{k}\leq\gamma_{k}=\epsilon_{k}/\Delta t\) is satisfied for all \(k\) (see Fig. 5(a)). Here one can find a range of \(\Delta t\in(0,\Delta t_{\text{max}}]\) satisfying this condition and take the maximum value \(\Delta t_{\text{max}}\) to minimize the number \(D=t/\Delta t_{\text{max}}\) of Trotter layers. After that, for some \(k\) where the target
Figure 4: Probabilistic Error Cancellation technique applied to a quantum circuit consisting of \(n\) qubits and multiple Trotter layers. (a) To mitigate \(K\)-qubit Pauli noise channels with \(K=n\), one of the Pauli strings \(\hat{P}_{k}\) is randomly generated based on a probability distribution \(p_{k}^{\text{(PEC)}}\) for each Trotter layer independently. (b) When \(K=2\), one of the Pauli strings \(\hat{P}_{m,k_{m}}\) acting on only one or two qubits is randomly generated for every pair of nearest-neighbour qubits \(m\) and \(m+1\).
decoherence rates are lower than the implemented decoherence rates, namely \(\Gamma_{k}<\tilde{\Gamma}_{k}=\epsilon_{k}/\Delta t_{\max}\), we apply our PEC scheme to partially mitigate the corresponding error probabilities \(\epsilon_{k}\), so that \(\Gamma_{k}=\epsilon_{k}(1-r_{k})/\Delta t_{\max}\) holds for all \(k\) (see Fig. 5(b)). In this way, one can implement arbitrary target decoherence rates \(\Gamma_{k}\), in principle, on NISQ devices.
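The two-step scheme of Fig. 5 can be summarized by a short routine that, given measured error probabilities \(\epsilon_{k}\) and target rates \(\Gamma_{k}\), returns \(\Delta t_{\max}\) and the mitigation factors \(r_{k}\). The snippet below is a sketch under these assumptions (our own illustration, with hypothetical numbers), ignoring any additional constraint on \(\Delta t\) coming from the Trotter decomposition error.

```python
import numpy as np

def control_parameters(eps, Gamma):
    """Return dt_max and mitigation factors r_k such that
    eps_k * (1 - r_k) / dt_max equals the target rate Gamma_k for every k > 0."""
    eps = np.asarray(eps, dtype=float)
    Gamma = np.asarray(Gamma, dtype=float)
    # Step 1: dt_max is fixed by the most demanding channel, Gamma_k <= eps_k / dt.
    dt_max = np.min(eps[Gamma > 0] / Gamma[Gamma > 0])
    # Step 2: partially mitigate the channels whose implemented rate is too high.
    r = 1.0 - Gamma * dt_max / eps
    return dt_max, np.clip(r, 0.0, 1.0)

eps = np.array([0.004, 0.006, 0.003])    # hypothetical measured probabilities (k > 0)
Gamma = np.array([0.02, 0.01, 0.03])     # hypothetical target decoherence rates
dt_max, r = control_parameters(eps, Gamma)
# With these values, eps * (1 - r) / dt_max reproduces Gamma component-wise.
```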
We note that the error probabilities \(\epsilon_{k}\) of real quantum devices are not uniform, as shown in Fig. 2(a). This implies that even a simple decoherence model with uniform target decoherence rates \(\Gamma_{k}\) requires our _noise-specific_ mitigation scheme with non-uniform values of \(r_{k}\). In addition, the control of the Trotter time-step \(\Delta t\) alone is not sufficient to implement arbitrary target decoherence rates, especially when \(\Gamma_{k}\) are small, as a larger \(\Delta t\) results in a higher Trotter decomposition error. This is indeed the main issue of Ref. [48].
### Implementation
To demonstrate that our partial error mitigation scheme can be used to implement the Lindblad equation in Eq. (14) on NISQ devices with controlled decoherence rates in Eq. (15), here we consider a quantum system encoded in two qubits (\(n=2\)). For simplicity, we consider a uniform mitigation factor \(r=r_{k}\) for all \(k\). We used the emulated noisy IBMQ device _ibmq lagos_ as our testbed.
For two different mitigation factors \(r=0.2\) and \(r=0.8\), respectively, Fig. 6(a) and (b) show the population dynamics \(\left\langle i,j\right|\hat{\rho}_{q}(t)\left|i,j\right\rangle\) with \(i,j\in\{0,1\}\) simulated by the emulated quantum computer, which are well-matched to the solutions \(\hat{\rho}_{c}(t)\) of the Lindblad equation solved on classical computers by a RK4-based solver (coherence dynamics are also well-matched, not shown here). As expected, the open-system dynamics becomes more coherent with a slower decay of oscillations for a larger mitigation factor \(r\). Here the number of samples considered in our generalized PEC scheme is increased until the quantum and classical results are well-matched, and it is found that a larger number of samples is required for a higher mitigation factor \(r\).
To demonstrate the dependence of the sampling cost of our PEC scheme on the mitigation factor \(r\), Fig. 6(c) shows the difference in population dynamics simulated by quantum and classical computers, quantified by
\[\eta(\hat{\rho}_{q}(t),\hat{\rho}_{c}(t))=\frac{1}{4}\sum_{i=0}^{1}\sum_{j=0}^{1}\left|\langle i,j|\,\hat{\rho}_{q}(t)-\hat{\rho}_{c}(t)\,|i,j\rangle\right|, \tag{18}\]
for the mitigation factors \(r=0.2\) and \(r=0.8\). Here the number of samples of our PEC scheme is taken to be independent of the mitigation factor \(r\), and the classical solutions \(\hat{\rho}_{c}(t)\) of the Lindblad equation were obtained by using the Trotter-Suzuki product formula, instead of the standard RK4-based solver, so that the Trotter decomposition error is identical in quantum and classical simulations. It is notable that the average difference \(\eta(\hat{\rho}_{q}(t),\hat{\rho}_{c}(t))\) between quantum and classical results is larger for higher \(r\). In addition, as shown in Fig. 6(d), the variance of \(\eta(\hat{\rho}_{q}(t),\hat{\rho}_{c}(t))\) is also larger for higher \(r\), hinting that the sampling cost of our PEC scheme, required to obtain reliable open-system dynamics, increases as a function of the mitigation factor \(r\), as will be analyzed later in more detail.
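For completeness, the metric in Eq. (18) reduces to the mean absolute difference of the four diagonal (population) elements; a minimal sketch (ours, not the authors' code) is given below.

```python
import numpy as np

def eta(rho_q, rho_c):
    """Mean absolute difference of the diagonal (population) elements, Eq. (18)."""
    return float(np.mean(np.abs(np.diag(rho_q - rho_c).real)))
```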
These results demonstrate that a Lindblad equation with controlled decoherence rates can be implemented on NISQ devices by our partial error mitigation scheme. We note that quantum and classical results are also well-matched when a larger quantum system is considered, encoded in four qubits (\(n=4\)), together with non-uniform mitigation factors \(r_{k}\), as shown in Appendix B.
### Resource scaling of partial noise mitigation
The application of PEC to large quantum circuits faces a bottleneck due to the enhanced statistical fluctuations. For a given observable \(\hat{O}\), when a finite number \(M\) of outcomes is measured in experiments without PEC, the uncertainty in expectation value \(\langle\hat{O}\rangle_{M}\) may be described by its variance \(\Delta\hat{O}_{M}\propto M^{-1}\). It has been shown that when PEC is employed, the variance is approximately increased to \(\Delta\hat{O}_{M}^{(\text{PEC})}\propto C_{\text{tot}}^{2}M^{-1}\) where \(C_{\text{tot}}\geq 1\) is the total mitigation cost in Eq. (17) [49; 26]. Therefore, to maintain the degree of the uncertainty in expectation values, one needs to increase the number of single-shot measurements from \(M\) to \(MC_{\text{tot}}^{2}\) when PEC is employed. For the full mitigation scheme with \(r_{k}=1\) for all \(k\), it can be shown that the total mitigation cost \(C_{\text{tot}}\) increases exponentially as a function of the number \(n\) of qubits, the
Figure 5: Decoherence rate control scheme consisting of two steps. (a) For a given set of Pauli error probabilities \(\epsilon_{k}\), a Trotter time-step \(\Delta t\) is decreased so that \(\gamma_{k}=\epsilon_{k}/\Delta t\) becomes larger than or equal to target decoherence rates \(\Gamma_{k}\). The maximum value of the Trotter time-step satisfying this condition is denoted by \(\Delta t_{\max}\). (b) If \(\tilde{\Gamma}_{k}=\epsilon_{k}/\Delta t_{\max}\) is larger than the target decoherence rate \(\Gamma_{k}\) for some \(k\), the corresponding error probability \(\epsilon_{k}\) is partially mitigated by using Probabilistic Error Cancellation, so that the target decoherence rate \(\Gamma_{k}=\epsilon_{k}(1-r_{k})/\Delta t_{\max}\) is implemented with an optimal mitigation factor \(r_{k}\).
depth of quantum circuits, and the total error probability \(\sum_{k>0}\epsilon_{k}\) of stochastic Pauli noise channels.
For the case that the error probabilities \(\epsilon_{k}\) are partially reduced with the mitigation factors \(r_{k}\in[0,1]\), the total mitigation cost \(C_{\rm tot}\) in Eq. (17) can be computed analytically
\[C_{\rm tot}=\prod_{d=1}^{D}\prod_{m}(1+2\epsilon_{r})=(1+2\epsilon_{r})^{g(n)D}, \tag{19}\]
where \(\epsilon_{r}=\sum_{k>0}\epsilon_{k}r_{k}\) is the sum of the error probabilities \(\epsilon_{k}\) weighted by the corresponding mitigation factors \(r_{k}\), and \(g(n)\) is the number of \(K\)-qubit Pauli noise channels acting on a quantum circuit consisting of \(n\) qubits. For simplicity, we assume that \(\epsilon_{r}\) are identical for all the \(K\)-qubit stochastic Pauli noise channels acting on different subgroups of qubits (\(n>K\)). Practically we are interested in the limit of \(n\gg K\) where a large open-system is encoded in a quantum circuit, and the noise is not strongly correlated amongst many qubits and therefore well-characterized by small \(K\approx 2\). In this case, the number of different \(K\)-qubit Pauli noise channels acting on nearest-neighbour qubits increases linearly as a function of the total number \(n\) of qubits, \(g(n)\propto n\). In the limit of a small total error probability \(\epsilon_{r}\) weighted by the mitigation factors and a sufficiently large number of qubits and/or Trotter layers, namely \(\epsilon_{r}\to 0\) and \(nD\rightarrow\infty\), the total mitigation cost can be approximately described by an exponential function \(C_{\rm tot}\sim e^{2\epsilon_{r}nD}\). For the finite values of \(\epsilon_{r}\), \(n\) and \(D\), now we show that the total mitigation factor can also be well-described by an exponential function in the form
\[C_{\rm tot}\sim e^{\lambda nD\epsilon_{r}}, \tag{20}\]
with a positive constant \(\lambda\) introduced as a fitting parameter of simulated results.
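A quick numerical check of Eq. (19) against the exponential approximation underlying Eq. (20) can be done as follows (our own illustration; the numbers are hypothetical).

```python
import numpy as np

def total_mitigation_cost(eps_r, g_n, D):
    """Exact expression of Eq. (19): C_tot = (1 + 2*eps_r)**(g(n) * D)."""
    return (1.0 + 2.0 * eps_r)**(g_n * D)

eps_r, g_n, D = 0.01, 3, 50                 # weighted error, number of channels, layers
exact = total_mitigation_cost(eps_r, g_n, D)
approx = np.exp(2.0 * eps_r * g_n * D)      # small-eps_r limit behind Eq. (20)
print(exact, approx)                        # ~19.5 vs ~20.1
```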
In Fig. 7(a) and (b), we show the total mitigation cost \(C_{\rm tot}\) in a logarithmic scale, computed based on its definition
Figure 6: (a,b) Population dynamics of an open quantum system encoded in two qubits (\(n=2\)), computed by the emulated noisy IBMQ device with partial noise mitigation scheme. In (a) and (b), uniform mitigation factor \(r=0.2\) and \(r=0.8\) were considered, respectively, where \(160C_{\rm tot}^{2}\) circuits were employed to partially mitigate noise with \(C_{\rm tot}^{2}\) computed by using Eq. (19). As shown in Eq. (20), the total mitigation factor \(C_{\rm tot}\) increases exponentially as a function of the number \(D\) of Trotter layers and the uniform mitigation factor \(r\). This implies that the partial PEC cost increases as a function of time \(t\), and the number \(160C_{\rm tot}^{2}\) of circuits considered in simulations is smaller for \(r=0.2\) than for \(r=0.8\). The simulated results obtained by the emulated noisy IBMQ device are well matched to classical solutions of the Lindblad equation in Eq. (14) with controlled decoherence rates in Eq. (15), shown in solid lines. (c,d) To demonstrate that the number of circuits required for the partial PEC scheme depends on the uniform mitigation factor \(r\), the same number of circuits was considered for \(r=0.2\) and \(r=0.8\) in independent simulations, specifically \(16C_{\rm tot}^{2}\) calculated by using Eq. (19) with \(r=0.8\). To quantify the error introduced by a finite number of circuits considered in the partial PEC scheme, the difference in population dynamics simulated by quantum and classical computers is considered (see the main text), leading to (c) average error and (d) its standard deviation. The Hamiltonian parameters considered in simulations are as in Fig. 2.
in Eq. (17), as a function of the number \(D\) of Trotter layers. The error probabilities \(\epsilon_{k}\), determining the cost function \(C_{\rm tot}\), were obtained from a real IBMQ computer (see Sec. II.2). For simplicity, we consider a uniform mitigation factor \(r\), satisfying \(r=r_{k}\) for all \(k\). The maximum cost supported by current quantum devices is highlighted by horizontal dashed lines. In Fig. 7(a), where \(r=1\), \(C_{\rm tot}\) is displayed for different sizes of quantum circuits, \(n\in\{2,3,4\}\). It is found that the numerical results can be well fitted by an exponential function in the form \(e^{\alpha(n)D}\) with \(\alpha(n)\approx 0.186\,n\), demonstrating the linear dependence of \(\log(C_{\rm tot})\) on the number \(n\) of qubits. In Fig. 7(b), where \(n=4\), \(C_{\rm tot}\) is shown for different uniform mitigation factors \(r\in\{0.25,0.50,0.75,1.00\}\), where numerical results can be well-fitted by \(e^{\beta(r)D}\) with \(\beta\approx 0.678\,r\), revealing the linear dependence of \(\log(C_{\rm tot})\) on the partial mitigation factor \(r\). These results are in line with the approximate form of the total mitigation cost \(C_{\rm tot}\) in Eq. (20).
We note that the total mitigation cost \(C_{\rm tot}\) of our generalized PEC scheme decreases exponentially, as the partial mitigation factor \(r<1\) is reduced, when compared to the conventional full mitigation scheme with \(r=1\). This implies that our PEC scheme can be applied to a larger quantum circuit consisting of an increased number of qubits and/or Trotter layers, hinting that digital quantum simulation of open-system dynamics via our technique may be a promising application to NISQ devices. It is also notable that the total mitigation cost can be exponentially decreased further as the noise probabilities \(\epsilon_{k}\) are reduced in the future quantum devices. To highlight this aspect, in Fig. 7(c), where \(n=4\), we computed \(C_{\rm tot}\) based on randomly generated error probabilities \(\epsilon_{k}\) from Gaussian distributions with equal average \(\langle\epsilon_{k}\rangle\) and FWHM \(\Delta\epsilon_{k}=\frac{1}{2}\langle\epsilon_{k}\rangle\) for all \(k\). It is found that when \(\langle\epsilon_{k}\rangle=0.01\), this model can quantitatively reproduce the total mitigation cost computed based on the error probabilities of a real IBMQ computer (see the case of \(n=4\) in Fig. 7(a)). As the average error probabilities \(\langle\epsilon_{k}\rangle\) are reduced from \(0.01\), via \(0.005\), to \(0.002\), the total mitigation cost decreases exponentially, as shown in Fig. 7(c).
These results demonstrate that our partial noise mitigation scheme can be employed to implement a Lindblad model with arbitrary target decoherence rates \(\Gamma_{k}\) on NISQ devices, but it requires a sufficiently large number of samples that scales with \(C_{\rm tot}^{2}\). In real quantum devices, however, the maximum number \(M_{\rm max}^{\rm(NISQ)}\) of circuits that can be executed within a given period of time is finite. According to Ref. [50], \(M_{\rm max}^{\rm(NISQ)}\sim 10^{8}\) for moderate circuit depths within a day. Therefore, the number of samples required for our technique should satisfy \(C_{\rm tot}^{2}\sim e^{\lambda nD\epsilon_{r}}\lesssim M_{\rm max}^{\rm( NISQ)}\). For a given set of target decoherence rates \(\Gamma_{k}\), this inequality can be expressed as
\[\epsilon\lesssim\frac{\Gamma t}{D}+\frac{\ln\Bigl{(}M_{\rm max}^{\rm(NISQ)} \Bigr{)}}{2\lambda nD^{2}}, \tag{21}\]
where \(\epsilon=\sum_{k>0}\epsilon_{k}\) denotes the total Pauli noise probability, \(\Gamma=\sum_{k>0}\Gamma_{k}\) the total target decoherence rate, and \(t\) the simulation time with a Trotter time-step \(\Delta t\leq\Delta t_{\rm max}\) (see Sec. III.2). The first term in Eq. (21) shows that the total Pauli noise probability \(\epsilon\) scales linearly with the total target decoherence rate \(\Gamma\). On one hand, this implies that when the total target decoherence rate \(\Gamma\) is sufficiently high, the corresponding Lindblad model can be readily implemented on a quantum device even if its
Figure 7: (a,b) For the Pauli error probabilities estimated from real device _ibmq lagos_, (a) the total mitigation cost \(C_{\rm tot}\) is shown as a function of the number \(D\) of Trotter layers for different numbers of qubits, \(n\in\{2,3,4\}\), with a fixed uniform mitigation factor \(r=1\). The total mitigation cost \(C_{\rm tot}\) is well fitted by an exponential function \(e^{\alpha D}\), as shown in solid lines, with the values of \(\alpha\) shown in the inset. The dependence of \(C_{\rm tot}\) on the number \(n\) of qubits can be well described by \(\alpha\approx(0.186\pm 0.007)n\). (b) \(C_{\rm tot}\) is shown as a function of \(D\) for several values of uniform mitigation factor, \(r\in\{0.25,0.50,0.75,1.00\}\), where the number of qubits is taken to be \(n=4\). \(C_{\rm tot}\) can be well fitted by an exponential function \(e^{\beta D}\), and the dependence of \(C_{\rm tot}\) on the uniform mitigation factor \(r\) is well described by \(\beta\approx(0.678\pm 0.013)r\). (c) \(C_{\rm tot}\) is shown as a function of \(D\) for \(n=4\) and \(r=1\) where the Pauli error probabilities were randomly generated from Gaussian distributions with equal average \(\langle\epsilon_{k}\rangle\) and FWHM \(\Delta\epsilon_{k}=\frac{1}{2}\langle\epsilon_{k}\rangle\) for all \(k\). Horizontal dashed lines indicate the maximum cost supported by current quantum devices, defined by the number of circuits that can be executed in \(24\,\mathrm{h}\)[50].
total noise probability \(\epsilon\) is high. On the other hand, when the total target decoherence rate \(\Gamma\) is low, the quantum device should have a sufficiently low total noise probability \(\epsilon\), or the Pauli noise channels should be heavily mitigated, requiring a high sampling cost. The second term in Eq. (21) shows that the total Pauli noise probability \(\epsilon\) allowed by our partial noise mitigation scheme may be linearly increased by exponentially enhancing the capability of a quantum device to run several copies of circuits within a given period of time, quantified by \(M_{\rm max}^{\rm(NISQ)}\). Note that the second term in Eq. (21) is inversely proportional to the number \(n\) of qubits and the square of the number of Trotter layers, \(D^{2}\), implying that a higher \(M_{\rm max}^{\rm(NISQ)}\) is required for digital quantum simulations of a larger open quantum system on a longer time scale.
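As a rough worked example of Eq. (21), with hypothetical values and \(\lambda\) treated simply as a fitted constant, the tolerable total Pauli noise probability can be evaluated directly; only \(M_{\rm max}^{\rm(NISQ)}\sim 10^{8}\) is taken from Ref. [50].

```python
import numpy as np

# Hypothetical values: total target rate, simulation time, layers, qubits,
# fitted constant lambda, and maximum number of circuits per day from Ref. [50].
Gamma, t, D, n = 0.05, 10.0, 100, 4
lam, M_max = 0.7, 1e8
eps_bound = Gamma * t / D + np.log(M_max) / (2 * lam * n * D**2)
print(eps_bound)   # ~5.3e-3: total Pauli noise probability tolerated by the scheme
```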
We remark that the number \(D\) of Trotter layers, required to achieve a desired Trotter decomposition error \(\varepsilon_{\rm Trot}\) in Eq. (6), depends on the structure of the Hamiltonian of a target open quantum system, the simulation time \(t\), and the order \(k\) of the Trotter-Suzuki product formula considered in simulations. As an example, we may consider the first order product formula (\(k=1\)), and a linear chain (or square grid) structure of qubits with a uniform nearest-neighbor coupling strength \(J\) and on-site energy \(E\) (see Eq. (2)). In this case, the number \(D\) of Trotter layers can be expressed as
\[D=O\left(\frac{n^{d}J(J+E)t^{2}}{\varepsilon_{\rm Trot}}\right), \tag{22}\]
where \(d=1\) (or 2) for the linear chain (or two-dimensional square grid) structure (for other classes of Hamiltonian, see Ref. [37]). Using Eq. (22), now we can express Eq. (21) as a function of the Trotter decomposition error \(\varepsilon_{\rm Trot}\)
\[\epsilon\lesssim O\left(\frac{\varepsilon_{\rm Trot}\Gamma}{n^{d}J(J+E)t}+ \frac{\varepsilon_{\rm Trot}^{2}\ln\!\left(M_{\rm max}^{\rm(NISQ)}\right)}{2 \lambda n^{2d+1}J^{2}(J+E)^{2}t^{4}}\right). \tag{23}\]
Note that the total Pauli noise probability \(\epsilon\) allowed by our scheme increases as a function of the Trotter decomposition error \(\varepsilon_{\rm Trot}\), and it is inversely proportional to the parameters \(J\) and \(E\) of the open-system Hamiltonian, and the simulation time \(t\).
### Amplitude damping noise
So far we have demonstrated that the stochastic Pauli noise channels in Eq. (10) can be implemented by using our partial noise mitigation scheme where the intrinsic noise of quantum devices is transformed to stochastic Pauli noise via Randomized Compiling. Here we discuss how the noise model implemented on quantum devices can be generalized beyond the stochastic Pauli noise, such as the amplitude damping noise model that has been widely considered in classical simulations of open-system dynamics [3].
The local amplitude damping of qubit \(m\) is described by the Kraus operators in the form
\[\mathcal{E}_{m}^{\rm(ad)}(\hat{\rho})=w\hat{\sigma}_{m}^{(-)}\hat{\rho}\hat{ \sigma}_{m}^{(+)}+\hat{\sigma}_{m}^{(0)}\hat{\rho}\hat{\sigma}_{m}^{(0)}, \tag{24}\]
with \(\hat{\sigma}_{m}^{(-)}=\frac{1}{2}(\hat{X}_{m}+i\hat{Y}_{m})\), \(\hat{\sigma}_{m}^{(+)}=\frac{1}{2}(\hat{X}_{m}-i\hat{Y}_{m})\) and \(\hat{\sigma}_{0}=\frac{1}{2}(1+\sqrt{1-w})\hat{I}_{m}+\frac{1}{2}(1-\sqrt{1-w} )\hat{Z}_{m}\), where \(w\) represents an incoherent transition probability from \(|1\rangle\) to \(|0\rangle\) of qubit \(m\). The amplitude damping noise model in Eq. (24) cannot be described by the stochastic Pauli noise channels [51] in Eq. (10), thus requiring an approach different from the noise-assisted technique discussed so far. To that end, we introduce an ancilla qubit coupled to qubit \(m\) where the interaction between them is described by a circuit shown in Fig. 8(a), consisting of three CNOT gates and single-qubit rotations \(\hat{R}_{Y}(\pm\theta/2)=e^{\mp i\theta\hat{Y}_{a}/4}\) with a Pauli operator \(\hat{Y}_{a}\) acting on the ancilla qubit [52]. Crucially, the initial state of the ancilla qubit is reset to \(|0\rangle\) in each Trotter layer, as schematically shown in Fig. 8(b). In this way, the amplitude damping probability \(w=\sin^{2}(\theta/2)\) can be controlled by the single-qubit rotations \(\hat{R}_{Y}(\pm\theta/2)\), resulting in an effective amplitude damping rate given by \(w/\Delta t\).
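The Kraus map of Eq. (24) and the relation \(w=\sin^{2}(\theta/2)\) can be verified numerically; the snippet below is our own consistency check (not the authors' implementation of the ancilla circuit).

```python
import numpy as np

def amplitude_damping_kraus(theta):
    """Kraus operators of Eq. (24) for a single qubit, with w = sin^2(theta/2)."""
    w = np.sin(theta / 2.0)**2
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - w)]], dtype=complex)  # sigma^(0)
    K1 = np.array([[0.0, np.sqrt(w)], [0.0, 0.0]], dtype=complex)        # sqrt(w) sigma^(-)
    return K0, K1, w

theta = 0.6
K0, K1, w = amplitude_damping_kraus(theta)
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))  # trace preservation
rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)             # excited state |1><1|
rho_out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
print(w, rho_out[0, 0].real)   # population transferred from |1> to |0> equals w
```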
We note that the source of amplitude damping noise is artificial in the sense that it is induced by the added specific sequence of quantum gates and reset operations in each Trotter layer. This is contrary to the stochastic Pauli noise channels where the faulty implementation of CNOT gates, i.e. their intrinsic noise, is transformed to stochastic Pauli noise via Randomized Compiling.
The additional CNOT and measurement operations, however, can increase the overall error probabilities of the Pauli noise channels. By using the emulated quantum device _ibmq oslo_, we found that for a two-qubit case (\(n=2\)) the amplitude damping and stochastic Pauli noise channels can be implemented at the same time, but the total
Figure 8: (a) Interaction \(\mathcal{E}_{m}^{\rm(ad)}\) between open-system qubit \(m\) and ancilla qubit, devised to introduce amplitude damping of the qubit \(m\). (b) In each Trotter layer, the ancilla qubit is initialized in the state \(|0\rangle\). Followed by the interaction \(\mathcal{E}_{m}^{\rm(ad)}\) with qubit \(m\), the ancilla qubit is reset via a measurement operation, so that it is reused in the next Trotter layer. The amplitude damping probability of qubit \(m\) over a Trotter layer is \(w=\sin^{2}(\theta/2)\).
error probability \(\sum_{k>0}\epsilon_{k}\) of the stochastic Pauli noise channels is increased by several tens of percent. This implies that the total mitigation cost of the Pauli noise channels can be increased (see Eq. (20)) by introducing the amplitude damping circuit. Nevertheless, in this scheme, the mitigation cost does not depend on the amplitude damping probability \(w\), since it is controlled by the single-qubit rotations \(\hat{R}_{Y}(\pm\theta/2)\) and their intrinsic noise is negligible compared to CNOT gates. The amplitude damping channel can be introduced to all the open-system qubits by attaching ancilla qubits individually. The implementation of the amplitude damping noise and its resource scaling deserves a separate investigation and will be presented elsewhere in the near future.
## IV Conclusion
In this work, we have established the concept of intrinsic noise-assisted digital quantum algorithms and demonstrated the principle in real and emulated IBM Quantum computers based on superconducting qubits. We show that the application of intrinsic noise-assisted digital quantum algorithms in real world devices can be achieved using three key steps. First, the coherent system dynamics is decomposed via Trotterization into product formulae with time-step \(\Delta t\) of our choice to gain a first level of control over the effective noise realised in the quantum simulation. Based on this, the second crucial step is the characterization of the system-dependent intrinsic noise in the NISQ device on which the algorithm is run. This can be achieved via Cycle Benchmarking and Error Reconstruction techniques. Building on this intrinsic noise reconstruction, the third and final step uses Probabilistic Error Cancellation to control the noise in the quantum circuit, i.e. the effective decoherence in the simulation model.
The principle of digital quantum simulation assisted by the device's intrinsic noise offers a number of advantages in the field of NISQ computation. First, the noise-assisted digital quantum algorithm does not need to alter the quantum hardware to tune the intrinsic noise but achieves it via results postprocessing. Secondly, it does not require additional _quantum_ computational resources, i.e. additional qubits and CNOT gates, for simulating open quantum systems compared to more standard approaches [53, 54, 55, 56, 57, 58]. The quantum resource reduction is achieved via an additional _classical_ overhead, i.e. an increased number of runs of the algorithm in the device, due to the use of a quantum error mitigation technique to control the noise acting on the qubits. As a result, our work provides guiding principles for the execution of quantum digital simulation of open quantum systems on real world devices, where noise is not detrimental, but leveraged for a more efficient computation.
In future work, we expect that the results can be extended in a variety of fruitful directions. First, it will be interesting to benchmark this technique on, and adapt it to, different quantum hardware technologies beyond superconducting quantum devices, which may exhibit particularly useful intrinsic noise. Secondly, in this spirit, the addition of modest quantum resources may allow for the extension to the efficient digital simulation of non-Markovian environments by combining it with the techniques presented in Refs. [20; 59]. Thirdly, we expect these techniques to find fruitful combinations with Quantum TEDOPA [11], a technique to simulate non-perturbative dynamics of open quantum systems in a quantum computer, by making use of the results of a recent work on Markovian closure [60]. Finally, we expect further efficiency enhancements of our technique to be possible with improvements in tailor-made characterization and error control techniques to give access to general non-perturbative dynamics of open quantum systems.
## Acknowledgements
The authors acknowledge helpful discussions and support by Benjamin Desef. JDG acknowledges funding from the Portuguese Foundation for Science and Technology (FCT) through PhD grant UI/BD/151173/2021. JDG, JL, MBP acknowledge support by the BMBF project PhoQuant (grant no. 13N16110). MIV acknowledges support from the FCT through Strategic Funding UIDB/04650/2020. SFH and MBP acknowledge support by the DFG via the QuantERA project ExtraQt. The authors acknowledge support by the state of Baden-Wurttemberg through bwHPC and the German Research Foundation (DFG) through Grant No. INST 40/575-1 FUGG (JUSTUS 2 cluster) and acknowledge the use of IBMQ devices via the Researcher Program.
## Appendix A Randomized Compiling
The noise we considered in the quantum simulations (see Sec. II.2) is given by stochastic Pauli noise channels defined as \(\sum_{k}\epsilon_{k}\hat{P}_{k}\hat{\rho}\hat{P}_{k}\). The absence of coherent noise \(\sum_{k^{\prime}\neq k}\hat{P}_{k}\hat{\rho}\hat{P}_{k^{\prime}}\) in this stochastic noise model allows us to efficiently cancel it in the quantum circuit by implementing Pauli operators via the PEC scheme (see Sec. III). Therefore, a stochastic Pauli channel is a desirable noise model to have in quantum circuits since it can be straightforwardly cancelled by PEC. In superconducting quantum devices, however, several types of noise may occur in the course of the implementation of a set of gates, specifically coherent errors arising from qubit cross-talk or under/over-rotations of gates, and incoherent ones, such as amplitude damping or stochastic Pauli noise. In view of obtaining a stochastic noise model in our quantum simulations, we applied a quantum error mitigation technique, namely Randomized Compiling [22; 23], that transforms coherent noise into stochastic Pauli noise, both in the noise characterization routines
and in the digital quantum simulations implemented on the real IBMQ devices.
Randomized Compiling consists of creating several copies of the circuit and for each noisy gate \(\hat{G}\) acting on \(Q\) qubits in each copy, a randomly sampled \(Q\)-qubit Pauli string \(\hat{P}_{k}\) is applied as follows,
\[\hat{G}=\hat{P}^{\prime}{}_{k}\hat{G}\hat{P}_{k},\quad\hat{P}^{\prime}{}_{k}= \hat{G}\hat{P}_{k}\hat{G}^{\dagger}. \tag{10}\]
We assumed \(\hat{G}\) to be part of the Clifford group, such that \(\hat{P}^{\prime}_{k}\) is another Pauli string. Note that if \(\hat{G}\) is not part of the Clifford group, \(\hat{P}^{\prime}\) would not be a Pauli string, and potentially additional two-qubit gates would be required to be applied in the circuit [22; 23], hence increasing error probabilities.
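As a small consistency check of the twirling relation above (our own illustration, not the paper's compilation code), one can verify numerically that conjugating a two-qubit Pauli string by a CNOT returns another Pauli string.

```python
import itertools
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
LABELS = {"I": I, "X": X, "Y": Y, "Z": Z}
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def conjugate(label):
    """Return the Pauli-string label and phase of CNOT * P * CNOT^dagger."""
    P = np.kron(LABELS[label[0]], LABELS[label[1]])
    Pp = CNOT @ P @ CNOT.conj().T
    for a, b in itertools.product("IXYZ", repeat=2):
        Q = np.kron(LABELS[a], LABELS[b])
        overlap = np.trace(Q.conj().T @ Pp) / 4.0
        if abs(abs(overlap) - 1.0) < 1e-9:
            return a + b, overlap
    raise ValueError("not a Pauli string")

print(conjugate("XI"))   # ('XX', (1+0j)): X on the control propagates to the target
print(conjugate("IZ"))   # ('ZZ', (1+0j)): Z on the target propagates to the control
```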
CNOT gates are the noisiest gates in a quantum circuit, hence we applied Randomized Compiling to them as exemplified in Fig. 9. We executed \(R\) randomized compiled circuits and then classically averaged the outcomes, thus obtaining an average stochastic Pauli noise process acting on the qubits.
## Appendix B 4-qubit noise-controlled evolution simulations with interaction-specific mitigation
We performed a noise characterization and noise-controlled quantum simulation of a 1D array of \(n=4\) qubits by characterizing and mitigating two-qubit stochastic Pauli channels (i.e. \(K=2\)) on the emulated _ibmq lagos_ device. The structure of the circuit is shown in Fig. 3(b), where we used a first-order Trotter-Suzuki product formula. We implemented non-uniform mitigation factors in the quantum simulations and compared the results with a classically solved Trotterized Lindblad equation where the decoherence rates are given by Eq. (15). The results are shown in Fig. 10. We measured the population terms of the reduced 2-qubit density matrix (second and third qubit in the 1D array) on the emulated IBMQ device.
The quantum simulation results show a good qualitative agreement with the classical simulation outcomes, suggesting that the proposed noise-assisted simulation technique with non-uniform mitigation factors can be applied for \(n>K=2\) digital quantum simulations on IBMQ computers.
Figure 10: Population dynamics of an open system encoded in four qubits (\(n=4\)) simulated by a quantum computer with non-uniform mitigation factors, shown in dots, which are well matched to classical solutions of the corresponding Lindblad equation, shown in crosses. Here the time evolution of the reduced density matrix \(\mathrm{Tr}_{1,4}[\hat{\rho}(t)]\) of the second and third qubits is displayed. The one- and two-qubit dephasing noise channels, namely \(\hat{P}_{k}\in\{\hat{Z}_{m},\hat{Z}_{m}\hat{Z}_{m+1}\}\), were relatively strongly mitigated by \(r_{k}=0.8\), while all the other \(K=2\) stochastic Pauli noise channels \(\hat{P}_{k^{\prime}}\) were weakly mitigated by \(r_{k^{\prime}}=0.1\). Hamiltonian parameters and initial state are as in Fig. 2, and \(48C_{\mathrm{tot}}^{2}\) circuits were considered in PEC.
Figure 9: Example of the application of Randomized Compiling to the operation \(e^{-i\frac{\theta}{2}X\otimes\hat{X}}\), a term that appears when evolving a quantum system via the Hamiltonian in Eq. (2), decomposed into Hadamard (\(H\)), CNOT and \(\hat{R}_{Z}(\theta)\) gates as shown in circuit (a). Randomized Compiling consists of, firstly applying uniformly sampled random two-qubit Pauli strings \(\hat{P}_{k}\) to the CNOT gates in circuit (a) together with the Pauli gates \(\hat{P}^{\prime}_{k}\) as defined in Eq. (10). This procedure is displayed in circuit (b). The next and final step is to compile the circuit, such that the added Pauli strings in the circuit are absorbed into other nearby single-qubit quantum gates. This compilation process creates a new single-qubit gate \(\hat{C}_{k}\) from the previous ones as shown in circuit (c). |
2309.10486 | Infection patterns in simple and complex contagion processes on networks | Contagion processes, representing the spread of infectious diseases,
information, or social behaviors, are often schematized as taking place on
networks, which encode for instance the interactions between individuals. The
impact of the network structure on spreading process has been widely
investigated, but not the reverse question: do different processes unfolding on
a given network lead to different infection patterns? How do the infection
patterns depend on a model's parameters or on the nature of the contagion
processes? Here we address this issue by investigating the infection patterns
for a variety of models. In simple contagion processes, where contagion events
involve one connection at a time, we find that the infection patterns are
extremely robust across models and parameters. In complex contagion models
instead, in which multiple interactions are needed for a contagion event,
non-trivial dependencies on models parameters emerge, as the infection pattern
depends on the interplay between pairwise and group contagions. In models
involving threshold mechanisms moreover, slight parameter changes can
significantly impact the spreading paths. Our results show that it is possible
to study crucial features of a spread from schematized models, and inform us on
the variations between spreading patterns in processes of different nature. | Diego Andrés Contreras, Giulia Cencetti, Alain Barrat | 2023-09-19T09:55:03Z | http://arxiv.org/abs/2309.10486v2 | # Infection patterns in simple and complex contagion processes on networks
###### Abstract
Contagion processes, representing the spread of infectious diseases, information, or social behaviors, are often schematized as taking place on networks, which encode for instance the structure and intensity of the interactions between individuals. While it is well known that the network structure has a fundamental impact on how a spreading process unfolds, the reverse question is less investigated: do different processes unfolding on a given network substrate lead to different infection patterns? How do the infection patterns depend on a model's parameters or on the nature of the contagion processes? Here we address this issue by investigating the infection patterns for a variety of spreading models of both simple and complex contagion processes. Specifically, we measure for each link of the network the probability that it is used in a contagion event and compare how these probabilities depend on the model used to describe the spreading process. In simple contagion processes, where contagion events involve one connection at a time, we find that the infection patterns are extremely robust against modifications of parameters and across models. In complex contagion models, in which multiple interactions are needed for a contagion event, we observe instead non-trivial dependencies with models parameters. When group interactions are taken into account, the infection pattern changes according to the interplay between pairwise and group contagions. In models involving threshold mechanisms moreover, it is sufficient to slightly modify the threshold to significantly impact the paths followed by the spread. Our results improve our understanding of contagion processes on networks, in particular with respect to the ability to study crucial features of a spread from schematized models, and with respect to the variations between spreading patterns in processes of different nature.
## I Introduction
Contagion processes pervade our societies. Examples include the spread of infectious diseases, both through contacts between hosts and following their mobility patterns, but also information diffusion or the propagation of social behavior [1; 2; 3; 4; 5; 6]. Modeling of these processes often includes a description of the interactions among the hosts as a network, in which nodes represent individuals and a link between nodes corresponds to the existence of an interaction along which the disease (or information) can spread. In the resulting field of network epidemiology [4; 6; 7], many results have been obtained for the paradigmatic models of diffusion processes, in which the hosts can only be in a few possible states or compartments, such as susceptible (S, healthy), infectious (I, having the disease/information and able to transmit it), or recovered (R, cured and immunized) [1; 2]. These results concern mainly the context of models aimed at describing the spread of infectious diseases, represented as so-called _simple contagion_ processes: namely, processes in which a single interaction between a susceptible and an infectious individual can lead to a transmission event [1; 6]. In this context, many studies have provided insights into how the structure of the underlying network influences the spread and impacts the epidemic threshold (separating a phase in which the epidemic dies out from one in which it impacts a relevant fraction of the population), and how various containment strategies can mitigate the spread [6].
Fewer results concern the detailed analysis of the processes dynamics and spreading patterns, despite its relevance [8]. In particular, the reverse question of whether different processes lead to different or similar infection patterns has barely been explored. At the population level, a robustness of the shapes of the epidemic curves has been observed for various spreading models [9; 10] and contact networks [11]. In heterogeneous networks, it has also been shown that simple contagion spreading processes first reach nodes with many neighbours, and then cascade towards nodes of smaller degree [12; 13; 14]. Moreover, in the context of metapopulation models, in which each node of the network represents a geographic area and hosts can travel between nodes on the network, possibly propagating a disease, the heterogeneity of travel patterns has been shown to determine dominant paths of possible propagation at the worldwide level [8; 15; 16], allowing for instance to provide predictions for the arrival time of a pandemic in various parts of the world [17].
In addition, while these results concern simple contagion processes, it is now well known that such models might not be adequate to describe some contagion mechanisms, such as social contagion of behaviors. Empirical evidence has led to the definition and study of models of _complex contagion_[3; 18]: in these models, each transmission event requires interactions with multiple infectious hosts. In particular, models involving threshold phenomena [19] or group (higher-order) interactions [20] have been put forward, but results concerning the detail of their propagation patterns are scarce [14; 21].
Overall, as most results on propagation patterns concern simplified models with few compartments (such as the susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR)) and simple contagion processes, several questions naturally arise: how general are the propagation patterns observed in these models, and are they similar in more realistic models with compartments including latent individuals, asymptomatic cases, etc? How well do propagation patterns of simple contagion inform us on complex contagion ones, and do the most important seeds or the nodes most easily reached differ depending on the precise model or type of contagion?
Here, we contribute to tackle such issues by investigating spreading patterns for different types of contagion models on networks and hypergraphs. To this aim, we consider the infection network of a process [8], which gives the probability of a node to be directly infected by another one, averaged over realizations of the process, and generalize it to complex contagion. We compare the resulting patterns within each model as its parameters change, between different models of simple contagion and between different types of contagion processes. We first find an extreme robustness of the contagion patterns across models of simple contagion. These patterns slightly depend on the reproductive number of the spread, but are almost completely determined by the final epidemic size. This indicates also that one can define spreader and receiver indices for each node that quantify their tendency to contaminate or be contaminated by their neighbours: these indices are largely independent of the specific disease model and can thus be computed on simple cases with arbitrary parameters. The situation changes when models of complex contagion are considered. On the one hand, patterns of contagion turn out to be less robust in threshold models. On the other hand, they depend on the interplay between pairwise and group processes for models involving higher-order interactions.
## II Results
### General framework
We consider the context of network epidemiology, i.e., of spreading processes on a weighted network where nodes represent the hosts and weighted links between the hosts correspond to contacts along which a disease can spread, with probability depending on the link weight [6]. Specifically, the weighted networks we will use to perform numerical simulations of spreading processes are empirical networks obtained by temporally aggregating time-resolved data describing contacts between individuals in various contexts [22; 23; 24], where the weight \(W_{ij}\) between two individuals \(i\) and \(j\) is given by their total interaction time (see Methods).
On these networks, we will first consider several models of simple contagion, in which each node can be in several states such as susceptible, latent, infectious, and recovered, and an infectious node can transmit the infection to a susceptible neighbour with a certain probability per unit time. We will consider models with different sets of states, corresponding both to very schematic and to more realistic situations, and both Markovian and non-Markovian processes. On the other hand, we will consider a model of complex contagion that involves higher-order contagion mechanisms, i.e., interactions among groups of nodes [20]: This model describes the fact that the probability of a contagion event can be reinforced by group effects, and is defined on hypergraphs [25] in which interactions can occur not only in pairs but also in larger groups. It has indeed been shown that the inclusion of such effects leads to an important phenomenological change, with the emergence of a discontinuous epidemic transition and of critical mass phenomena. Finally, we will also consider so-called threshold models [19], in which a susceptible node becomes infected when the fraction of its interactions spent with infected neighbors reaches a threshold \(\theta\), to mimic the fact that an individual may adopt an innovation only if enough friends are already adopters. All models and their parameters are described in detail in the Methods section.
For each given spreading model and propagation substrate (network or hypergraph), we perform numerical (Monte Carlo) simulations of the spread at given parameter values, starting from a single infectious seed taken at random in the network, while all other nodes are susceptible (see Methods). The _infection pattern_ of the model is then the weighted and directed graph \(\mathbf{C}\) such that \(C_{ij}\) is the probability (averaged over realizations of the spread) that node \(i\) infected node \(j\)[8; 16]. In practice, it is obtained from the numerical simulations, by counting all the direct infectious events from \(i\) towards \(j\) among all runs, and dividing by the number of runs. The infection pattern hence represents the signature of an epidemic, highlighting the paths that are taken by the contagion process with a higher probability 1. We first note that a non-zero \(C_{ij}>0\) can be obtained if and only if there exists an interaction between \(i\) and \(j\) in the weighted network; moreover, one can expect that the probability \(C_{ij}\) of \(i\) infecting \(j\) depends on the weight \(W_{ij}\) of their connection. However, it also depends on the probability of \(i\) to be infected in the first place, to be infected before \(j\), and of \(j\) not to be infected through another interaction. Overall, one can thus expect \(C_{ij}\) to depend on non trivial properties of the network topology and not only on the weight of the link between \(i\) and \(j\). In particular, even if the interaction weights are symmetric, this is not a priori the case for the infection pattern: the network defined by the matrix \(C_{ij}\) is directed. This is shown in Fig. 1 for a toy network, where the largest values of \(C_{ij}\) do not correspond to the largest link weights. Once \(\mathbf{C}\) is defined, we can moreover use it to compute spreader and receiver indices for each node, respectively as \(s_{i}=\sum_{j}C_{ij}\) and \(r_{i}=\sum_{j}C_{ji}\), i.e., as the out-strength and in-strength of each node in the directed network of the infection pattern.
Footnote 1: \(C\) was defined for metapopulation models [8; 16] as the probability for a contagion to arrive in a geographical area from another one. Here we consider the case instead in which nodes represent hosts; moreover, this definition needs to be generalized in the case of complex contagion processes where the contagion of a node originates from several others, as described later.
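To make the construction of \(\mathbf{C}\) concrete, the following minimal Python sketch (not code from this study; the event-list format and the function name are assumptions) accumulates the infection pattern and the spreader and receiver indices from the who-infected-whom events recorded in a set of simulation runs.

```python
import numpy as np

def infection_pattern(event_lists, n_nodes):
    """Estimate C[i, j]: probability (over runs) that i directly infected j.

    event_lists: one list per simulation run, each containing
                 (infector, infectee) pairs for every direct contagion event.
    """
    C = np.zeros((n_nodes, n_nodes))
    for events in event_lists:
        for i, j in events:
            C[i, j] += 1.0
    C /= len(event_lists)          # average over runs
    spreader = C.sum(axis=1)       # s_i = sum_j C_ij (out-strength)
    receiver = C.sum(axis=0)       # r_i = sum_j C_ji (in-strength)
    return C, spreader, receiver

# toy usage: two runs on a 3-node network
runs = [[(0, 1), (1, 2)], [(0, 2)]]
C, s, r = infection_pattern(runs, n_nodes=3)
print(C)          # e.g. C[0, 1] = 0.5, C[0, 2] = 0.5, C[1, 2] = 0.5
print(s, r)
```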
It is worth noting here that \(\mathbf{C}\), and as a consequence also the spreader and receiver indices, depend both on the specific model of spread and on its parameters. We will explore these dependencies in detail in the following sections. We will show in the main text the results obtained with a network describing contact data collected in a French primary school [26]. We have also considered data on contacts between individuals collected in a conference [27], a hospital [28], a workplace [27] and a high school [29], and we show the corresponding results in the supplementary material (SM).
### Simple contagion
We consider several models of simple contagion, characterized by different sets of possible states for the hosts and various types of dynamics between states. The simplest is the Susceptible-Infected-Recovered (SIR) model, in which a susceptible individual \(i\) (S) can become infected (I) with rate \(\beta W_{ij}\) when linked with another infected individual \(j\) by an edge of weight \(W_{ij}\). Infected individuals then spontaneously become recovered (R) with rate \(\mu_{I}\) and cannot participate in the dynamics anymore. The most studied extension of this model is the SEIR one, in which susceptible individuals become exposed (E, not yet contagious) with rate \(\beta\) upon contact with an I individual, before becoming infected. In both SIR and SEIR, we consider on the one hand fixed rates of transition from the I to the R state and from the E to the I state; the times that an individual spends in the E and I states, resp. \(\tau_{E}\) and \(\tau_{I}\), are then exponentially distributed random variables (with averages given by the inverses of the transition rates). A more realistic dynamical process is obtained by a non-Markovian dynamics between these states, in which \(\tau_{E}\) and \(\tau_{I}\) are random variables taken from Gamma distributions with given mean and standard deviation. As both SIR and SEIR remain generic models, we also consider a more elaborate model designed to represent the propagation of COVID-19, in which individuals can be exposed and not contagious, pre-symptomatic but already infectious, infectious but asymptomatic, or infectious and symptomatic [30; 13]. These models and their parameters are described in more detail in the Methods section.
For each model and network, once the parameters of the spontaneous transitions are fixed, it is possible to adjust the contagion rate \(\beta\) to obtain a specific value of the reproductive number \(R_{0}\), defined as the expected number of cases directly generated by one initial infected individual in a population where all other individuals are susceptible to infection [1]. For each model and parameter value, we compute the infection pattern \(C\) and the spreader and receiver indices of each node as explained above.
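As an illustration of how such simulations can record the infection events that feed into \(\mathbf{C}\), here is a hedged sketch of a discrete-time approximation of the weighted SIR dynamics; it is not the simulation code used in this work, and the time step, function name and toy network are assumptions. The infection rate \(\beta\) can then be tuned (e.g. by bisection on the mean number of cases directly generated by the seed) to reach a target \(R_{0}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_run(W, beta, mu_I, dt=0.1, seed=0):
    """Discrete-time approximation of a weighted SIR spread.

    W:    symmetric weight matrix (contact times), beta: infection rate,
    mu_I: recovery rate.  Returns the list of (infector, infectee) events
    and the number of cases directly generated by the seed.
    """
    n = W.shape[0]
    state = np.zeros(n, dtype=int)      # 0 = S, 1 = I, 2 = R
    state[seed] = 1
    events = []
    while (state == 1).any():
        infected = np.flatnonzero(state == 1)
        for i in infected:
            # each susceptible neighbour j is infected with prob ~ beta*W_ij*dt
            for j in np.flatnonzero((state == 0) & (W[i] > 0)):
                if rng.random() < 1 - np.exp(-beta * W[i, j] * dt):
                    state[j] = 1
                    events.append((i, j))
            if rng.random() < 1 - np.exp(-mu_I * dt):
                state[i] = 2            # spontaneous recovery
    r0_seed = sum(1 for i, _ in events if i == seed)
    return events, r0_seed

# toy weighted network and a single run
W = np.array([[0, 2, 1], [2, 0, 3], [1, 3, 0]], dtype=float)
events, r0 = sir_run(W, beta=0.5, mu_I=0.25)
print(events, r0)   # beta can then be adjusted to reach a target mean R0
```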
As expected and anticipated in the toy example of Figure 1, we find that the matrix \(\mathbf{C}\) is asymmetric, and we show the similarity of its elements with the weighted adjacency matrix of the underlying network in the SM, Section S1. We then compare in Figure 2(left) the infection patterns \(\mathbf{C}\) obtained in different simple contagion models, calibrated so as to correspond to the same value of \(R_{0}\). The comparison is performed by computing the cosine similarity between the lists of elements of the matrices \(\mathbf{C}\) obtained in the various cases. Even at fixed \(R_{0}\), each model entails a different time evolution of the epidemic (see SM, Section S2) with a different spreading velocity, and also different compartments, and so corresponds to a different general process. One could hence suppose that the infection pattern could also be largely different from one model to the next. However, Figure 2(left) highlights how the infection patterns are actually extremely similar across models, with similarity values above 0.98. Hence, the probability for each network link of being used for a contagion event is largely independent of the specific contagion model considered (at given \(R_{0}\)), despite the differences in their temporal evolution. Contagion paths are thus not only stable within one model [16] but also across models. In the following analysis, we will thus focus on the simplest SIR model.
Figure 2(center) reports the cosine similarity between matrices \(\mathbf{C}\) obtained with the SIR model at varying \(R_{0}\). Interestingly, although the similarity values are very large, they are lower than between models at fixed \(R_{0}\), revealing a weak dependency of the infection patterns on \(R_{0}\). To understand this point further, it is worth recalling that, while \(R_{0}\) largely determines the initial velocity of the spread, the contagion process remains stochastic, and simulations with a fixed \(R_{0}\) can lead to different final attack rates, i.e., final values of the density of recovered individuals once the spreading process is over (once no contagion can take place any longer). We thus consider the infection patterns at different values of \(R_{0}\) but at fixed final attack rate, and we report in Figure 2(right) the analogue of Fig. 2(center), but where the matrices \(\mathbf{C}\) have been computed taking into account only the simulations with a final attack rate between 0.75 and 0.85. The similarity values become again larger than 0.99. This suggests that the infection pattern of a spreading model mostly depends on its average final attack rate.
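The comparison itself reduces to a cosine similarity between the flattened lists of matrix elements; a minimal sketch (the function name is an assumption) is:

```python
import numpy as np

def pattern_similarity(C_a, C_b):
    """Cosine similarity between two infection patterns,
    computed on the flattened lists of matrix elements."""
    a, b = C_a.ravel(), C_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy usage with two random non-negative 'patterns'
rng = np.random.default_rng(1)
C1, C2 = rng.random((5, 5)), rng.random((5, 5))
print(pattern_similarity(C1, C1))   # 1.0
print(pattern_similarity(C1, C2))
```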
Figure 1: **Simple contagion.** Toy network illustrating the asymmetry of the infection pattern and its dissimilarity with the adjacency matrix. The upper sketch shows the weighted adjacency matrix (links' width proportional to their weights, nodes' size proportional to their weighted degree). The lower sketch represents the infection pattern for a simple SIR contagion with \(R_{0}=2\) (averaged over 500 simulations). For each connection only the direction with higher probability of infection is shown and the arrows' width is proportional to the probability. The nodes are colored according to their spreader index.

This result moreover leads us to an additional investigation, based on two simple points: (i) the final attack rate is an increasing function of \(R_{0}\) and (ii) for a given \(R_{0}\), the average attack rate is a continuously increasing function of time, which thus passes through the values of the final attack rates obtained with lower values of \(R_{0}\). At given \(R_{0}\) and at each time, we can also compute a time-dependent \(\mathbf{C}(t)\) built as above but using only the contagion events up to time \(t\). The top panel of Figure 3 displays the similarity between this time-dependent infection pattern for an SIR model with \(R_{0}=4\) and the final infection patterns obtained with lower values of \(R_{0}\). Each such similarity goes through a maximum (with large values above 0.98) as a function of time, and this maximum is obtained when the time-dependent attack rate of the \(R_{0}=4\) process is almost equal to the final attack rate of the process at lower \(R_{0}\). More precisely, at given \(R_{0}=4\) and at each time, we obtain a distribution of attack rates (see bottom panels of Fig. 3): these distributions are typically bimodal, and we plot in the middle panel of Fig. 3 the temporal evolution of the location of the non-zero mode; the colored dots correspond instead to the non-zero modes of the distributions of final attack rates for the lower \(R_{0}\) values (full distributions shown in the bottom panels). The similarity does not reach exactly 1 because the distributions of time-dependent and final attack rates do not coincide completely even when their non-zero modes do.
In other words, at each time step the infection pattern (which describes the contagion probability of each connection until that time) is almost completely determined by the attack rate reached at that specific time. This also means that the infection patterns of processes with lower \(R_{0}\) can be approximated extremely well by using a single process at large \(R_{0}\) (and thus a large final average attack rate) and computing its time-dependent infection pattern.
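A time-dependent pattern \(\mathbf{C}(t)\) can be obtained from the same event lists by simply truncating them at time \(t\); the sketch below (illustrative only) assumes that each recorded event carries a timestamp.

```python
import numpy as np

def time_dependent_pattern(event_lists, n_nodes, t):
    """C(t): infection pattern built from contagion events up to time t.

    event_lists: one list per run of (time, infector, infectee) tuples.
    """
    C = np.zeros((n_nodes, n_nodes))
    for events in event_lists:
        for t_ev, i, j in events:
            if t_ev <= t:
                C[i, j] += 1.0
    return C / len(event_lists)

# toy usage: two runs, events recorded as (time, infector, infectee)
runs = [[(0.5, 0, 1), (2.0, 1, 2)], [(1.2, 0, 2)]]
print(time_dependent_pattern(runs, 3, t=1.0))
```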
We finally also show in the SM that the range of values of the spreader and receiver indices depends on the reproductive number \(R_{0}\), but the ranking of nodes by these indices is very robust across models and across values of \(R_{0}\). Moreover, when fixing the attack rate, the ranges of values become equivalent even for different \(R_{0}\), and the ranking of nodes becomes almost independent of \(R_{0}\), showing that this ranking is also almost completely determined by the attack rate, and in any case very robust across parameter values. Overall, our results indicate an extreme robustness of the infection patterns across different models of simple contagion, despite their diversity in the sets of possible states for the hosts and of dynamical transition rules. Moreover, while the infection pattern does depend (very) slightly on the model parameters, it is almost completely determined by the final attack rate of the process. This result is not valid for complex contagion processes, as we will see in the next sections.

Figure 3: **Simple contagion.** Comparison, for the SIR model, between a reference \(R_{0}=4\) and five testing parameter values (\(R_{0}\) from 1.5 to 3.5). Each curve in the upper panel represents the similarity in time between the temporal infection pattern \(\mathbf{C}_{ref}\) of the reference (\(\mathbf{C}_{ref}\) evolving with time, see text) and the infection pattern \(\mathbf{C}_{R_{0}}\) of each testing parameter, with \(\mathbf{C}_{R_{0}}\) obtained at the end of the simulation. The lower panel shows the temporal evolution of the non-zero mode of the distributions of attack rates of the reference spread (black curve), and the non-zero mode of the final attack rate distribution for the testing parameters (colored dots). Below we show the corresponding attack rate distributions.

Figure 2: **Simple contagion.** Left: Cosine similarity between the infection patterns of different models of simple contagion, simulated with the same \(R_{0}=2.5\) (see Methods for the description of the models). Center: Cosine similarity between the infection patterns obtained at varying \(R_{0}\) for the SIR model of simple contagion. Right: Cosine similarity between infection patterns at varying \(R_{0}\) for the SIR model, with fixed attack rate between 0.75 and 0.85.
### Simplicial contagion
Let us now consider a model of complex contagion in which the propagation can occur both on the links of the network, as in the case of simple contagion, but also on higher order (group) interactions, namely the simplicial contagion model [20], generalized here to weighted hypergraphs. As in [20], we limit ourselves for simplicity to contagion processes on first and second order interactions (pairs and triads), neglecting structures of higher orders, which will only appear as decomposed into links and triangles. We consider an SIR model, where a susceptible host \(i\) can receive the infection (i) with rate \(\beta_{\downarrow}W_{ij}\) when sharing a link of weight \(W_{ij}\) with an infected host \(j\), and (ii) with rate \(\beta_{\Delta}W_{ikl}^{\Delta}\) when part of a group \(i,k,l\) of three interacting nodes such that both \(k\) and \(l\) are infected (\(W_{ikl}^{\Delta}\) being the weight of the hyperedge \((i,k,l)\), see Methods). As in simple contagion models, infected nodes recover spontaneously - we consider here a fixed recovery rate \(\mu_{I}\).
As contagion events can occur both through links and triads, we here need to generalize the computation of \(\mathbf{C}\) by defining the number of infection events from \(i\) to \(j\), \(n_{i\to j}\), as follows: if \(j\) is infected by \(i\) in a pairwise interaction, \(n_{i\to j}\) is incremented by one; if instead \(j\) is infected through a triadic interaction with \(i\) and \(l\) who are both infected, both \(n_{i\to j}\) and \(n_{l\to j}\) are incremented by \(1/2\). \(C_{ij}\) is finally the ratio of \(n_{i\to j}\) to the number of numerical simulations considered.
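A minimal sketch of this generalized bookkeeping (the event encoding is an assumption, not the paper's implementation):

```python
import numpy as np

def simplicial_pattern(event_lists, n_nodes):
    """Infection pattern for simplicial contagion.

    Each event is ('pair', i, j) for a pairwise infection of j by i, or
    ('triad', i, l, j) for an infection of j via the hyperedge (i, l, j)
    with both i and l infected: i and l each receive credit 1/2.
    """
    C = np.zeros((n_nodes, n_nodes))
    for events in event_lists:
        for ev in events:
            if ev[0] == 'pair':
                _, i, j = ev
                C[i, j] += 1.0
            else:                      # 'triad'
                _, i, l, j = ev
                C[i, j] += 0.5
                C[l, j] += 0.5
    return C / len(event_lists)

# toy usage: one run with one pairwise and one triadic infection
runs = [[('pair', 0, 1), ('triad', 0, 1, 2)]]
print(simplicial_pattern(runs, 3))
```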
While there is a one-to-one correspondence between \(R_{0}\) and the infection rate \(\beta\) in the case of simple contagion (the other parameters being fixed), a given \(R_{0}\) could here correspond to various pairs \((\beta_{\downarrow},\beta_{\Delta})\). We thus compare the infection patterns obtained when varying both parameters in Fig. 4(a), going from a situation in which the contagion events occur mostly on triads to one in which they occur mostly on links (as shown in Fig. 4(b)). These different ratios between the two parameters \(\beta_{\downarrow}\) and \(\beta_{\Delta}\), since they yield different relative abundances of the two types of infection (simple vs complex), can be expected to give rise to different infection patterns. The similarity values obtained remain however high, even between the most extreme cases (very different relative values of the numbers of infections in pairs and triads) (see footnote 2). This can be explained by the observation that, in social networks, higher order interactions and pairwise ones largely overlap, i.e., nodes connected in groups with large weights are typically also connected by links with large weights (see Section S3 in the SM). The infection patterns on pairwise links and on triads thus also overlap. In fact, the similarity between the infection patterns of the simple SIR contagion process and the simplicial one, shown in Fig. 4(e) at varying \(R_{0}\) (of the simple contagion) and parameters \((\beta_{\downarrow},\beta_{\Delta})\), is also high, especially when the pairwise contagion events dominate in the simplicial model.
Footnote 2: We show in the SM results concerning the receiver and spreader indices and the subsequent ranking of nodes: similarly to the case of simple contagion, the ranking of nodes is very robust across parameter values, even if the range of values taken by the indices changes.
An interesting distinction with the case of simple contagion is however revealed in Fig. 4. Namely, while the infection pattern of a simple contagion process is almost completely determined when fixing its final attack rate (see Fig. 2), this is not the case for the simplicial one. We show indeed in Fig. 4(c) the similarity between infection patterns at different values of the spreading rates, but when these patterns are computed using only simulations with a given final attack rate. In contrast to the case of simple contagion, constraining the attack rate does not change the similarity values, which remain similar to the ones observed in Fig. 4(a). This is clearly due to the fact that the same attack rate can be obtained through very different relative numbers of pairwise and higher order infection events (Fig. 4(d)). The differences between simplicial contagion infection patterns at different parameters measured in Fig. 4(a) are thus mostly due to the differences in the combination between the two competing processes at work in this model (first-order vs. second-order contagions).
The simple and simplicial models entail fundamentally different contagion mechanisms, leading to different physics and different types of phase transitions, including critical mass phenomena [20; 25]. Here indeed, the differences in infection patterns are driven by the differences between pairwise and higher order contagions. However, the resulting infection patterns remain very similar in our simulations, which is probably largely due to the fact that, in the empirical data we consider, links and higher order hyperedges largely overlap, with correlated weights (see SM and [31]) so that both simple and higher order mechanisms tend to use the same infection routes.
### Threshold contagion
We finally investigate the infection patterns resulting from a model of complex contagion driven by threshold effects on a network: in this model [19], a susceptible node can become infected (deterministically) only if the fraction of its neighbors that are infected exceeds a certain threshold \(\theta\), the parameter of the process. In the generalization of this model to weighted networks, a susceptible node becomes infected when the weight of its connections with infected nodes divided by the total weight of its connections exceeds the threshold. We moreover introduce a recovery parameter \(\mu_{I}\) as in the previous cases, in order to obtain an SIR model as well. As in the simplicial model, the infection of a node \(i\) is typically due to more than one other node. We thus generalize the computation of the infection pattern \(\mathbf{C}\) similarly to the previous case: if \(i\) becomes infected because \(k\) of its neighbours \(i_{1}\), \(i_{2}\),... \(i_{k}\) are infected, each \(C_{i_{a}i}\) is incremented by \(W_{i_{a}i}/\sum_{b=1}^{k}W_{i_{b}i}\), i.e., by the relative contribution of \(i_{a}\) to the infection event.
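The following sketch illustrates one deterministic infection sweep of this weighted threshold rule together with the proportional crediting of \(\mathbf{C}\); the function name and the synchronous update scheduling are assumptions made for illustration only.

```python
import numpy as np

def threshold_infection_step(W, state, theta):
    """One deterministic infection sweep of the weighted threshold model.

    A susceptible node becomes infected when the weight of its links to
    infected neighbours, divided by its total link weight, exceeds theta.
    Returns the contributions to C: each infected neighbour i_a of a newly
    infected node i is credited W[i_a, i] / sum_b W[i_b, i].
    """
    n = W.shape[0]
    dC = np.zeros((n, n))
    infected = (state == 1)
    newly = []
    for i in np.flatnonzero(state == 0):
        w_inf = W[infected, i].sum()
        w_tot = W[:, i].sum()
        if w_tot > 0 and w_inf / w_tot > theta:
            newly.append(i)
            dC[infected, i] += W[infected, i] / w_inf   # proportional credit
    state[newly] = 1
    return dC

# toy usage: node 0 is the seed on a small weighted network
W = np.array([[0, 2, 1], [2, 0, 3], [1, 3, 0]], dtype=float)
state = np.array([1, 0, 0])
print(threshold_infection_step(W, state, theta=0.3))
```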
We compare the infection patterns of this model at various values of the parameter \(\theta\) in Fig. 5(a). Interestingly, the values of the cosine similarity between patterns are still high, but typically much lower than in the previous cases, suggesting that the parameter \(\theta\) plays a stronger role in determining the infection pattern than \(\beta\) (or \(R_{0}\)) in simple contagion processes (see SM for results on the receiver and spreader indices). This can be understood by the following argument: in simple contagion, all existing paths on the network can potentially support a contagion; on the other hand, changing the value of \(\theta\) corresponds to allowing some infection patterns and impeding others, as it can change the number of infected neighbors needed to infect a given node. Smaller values of \(\theta\) imply an easier and faster infection of nodes, while larger values only allow contagion of nodes connected with many infected, thus constraining infection to follow more specific patterns.
In Fig. 5(b) we also compare the infection patterns of the threshold contagion model with the ones of simple contagion, showing that the two processes are characterized by rather different infection patterns. The similarity is higher for larger values of \(\theta\): as \(\theta\) becomes large, the condition needed for infection of a node \(i\) becomes stricter and can be fulfilled only if the neighbours \(j\) to which \(i\) is linked by its largest weights are infected. Thus, the infection pattern becomes closer to the one of a simple process.
We finally note that the values of the similarity between infection patterns at different parameter values, and comparing simple and threshold-based contagion, remain high and typically above 0.7: even if the parameter dependency is higher than in the previous cases, it remains low. Moreover, the similarity of infection patterns between processes remains large, due to the fact that in all cases the infection patterns largely depend on (and are correlated with) the underlying weighted adjacency matrix (see SM section Section S1).
Figure 4: **Simplicial contagion.** (a): Cosine similarity between infection patterns for different combinations of \(\beta_{\downarrow}\) and \(\beta_{\Delta}\). (b): Number of contagions taking place via first and second order simplices in the simulations of the previous panel. \(\mathbf{C}_{1}\) is the infection pattern matrix obtained considering only infections taking place via pairwise links and \(\mathbf{C}_{2}\) is the analogue for triadic infections, with \(\mathbf{C}_{1}+\mathbf{C}_{2}=\mathbf{C}\). In the plot we report the sum of all elements of the matrices \(\sum_{ij}(\mathbf{C}_{1})_{ij}\) and \(\sum_{ij}(\mathbf{C}_{2})_{ij}\), which give the respective fractions of contagion events of each type. (c): Cosine similarity between infection patterns for different combinations of \(\beta_{\downarrow}\) and \(\beta_{\Delta}\), when computing the infection patterns using only simulations with attack rate between 0.6 and 0.7. (d): Number of contagions taking place via first and second order simplices in the simulations of the previous panel. (e): Cosine similarity between infection patterns of simplicial contagion (for the same range of values of \(\beta_{\downarrow}\) and \(\beta_{\Delta}\)) and simple contagion (for different values of \(R_{0}\)).

## III Discussion

We have here investigated the infection patterns of various models of contagion processes on networks, using as substrate several empirical networks of contacts between individuals. In particular, while it is well known that the network structure impacts the spreading patterns, the question of how these patterns depend on a spread's parameters or on the type of spreading process considered (i.e., simple vs. complex contagion) has been much less considered. This issue can be tackled quantitatively by defining the infection patterns, in the case of simple contagion processes, as measuring for each connected pair of nodes of the network the probability that an infection event occurs from one to the other [8; 16]. We have shown that these patterns are extremely robust across models of simple contagion with different sets of possible disease states and model parameters; in particular, within one model they slightly depend on the reproductive number but are almost fully determined by the final attack rate. The infection patterns also allow us to define receiver and spreader indices for each node, which give a ranking of nodes according to their relative risk of becoming infected during the spread and their propensity to spread to other nodes. The corresponding ranking of nodes is also very robust across models and parameters. Overall, the spreading patterns and node indices can thus be computed using arbitrary models, including very simplified ones with arbitrary spreading parameters, and still provide information on the patterns that would be obtained with a different model describing a different disease, or different parameters.
We have also generalized the infection patterns to complex contagion processes in which each contagion event can involve several infecting nodes. We have observed that the infection patterns are then less robust; in models where simple and complex contagion events can co-exist, the robustness of patterns and their similarity to the case of simple processes depends on the ratio between events of simple and complex contagions. In a threshold-based model, patterns differ more across parameter values. We also note that even if it has been shown that observing the propagation patterns of single processes makes it possible to distinguish between processes based on simple contagion, higher-order contagion, or threshold processes [14], the similarity between the averaged infection patterns we have discussed here remains in all cases rather high, due to the fact that these patterns are correlated with the matrix of link weights describing the network.
Our work has limitations worth mentioning, which also open some avenues for future work. The set of networks on which we have performed our investigation corresponds to diverse contexts of empirical contacts and thus entails a variety of complex interaction patterns, but remains limited. It would be interesting to extend our study to synthetic (hyper)networks where the distributions of degrees and of group sizes and the overlap between dyads and triads could be controlled. Our work also deals with static networks, and could be extended to temporal networks, especially as the propagation paths and infection risk might then be measured during a certain period while the propagation could then take place at another time [32; 33]. Finally, the infection patterns could also be studied for other models of complex contagion (including contagion events in groups of arbitrary sizes [34]).
## IV Methods
### Models of simple contagion
We consider three different epidemic processes, all of them agent-based compartmental models, i.e., in which each agent (represented by a node of the network) can pass through a finite set of possible compartments describing the evolution of a disease.
In the SIR model, a susceptible node \(i\) (in compartment S) can become infected (changing compartment to I) by contact with one of its neighbors \(j\). This transition takes place with rate \(\beta W_{ij}\), where \(\beta\) is the infection rate, a free parameter of the model, and \(W_{ij}\) is the weight of the connection between \(i\) and \(j\). Each node will then recover (becoming R) at rate \(\mu_{I}\), another free parameter.
The SEIR model is similar to the previous one with the addition of one state: exposed (E). Newly infected individuals first enter the exposed (non-infectious) state and, with a rate \(\mu_{E}\), they transition to the infectious state. Again, they will recover at rate \(\mu_{I}\). We consider three versions of SEIR models: SEIRe1, SEIRe4, and SEIRi4, which only differ by the values of their parameters, which are given in Table 1.
In both SIR and SEIR, the recovery rate \(\mu_{I}\) and the exposed-to-infected rate \(\mu_{E}\) are constant, implying that the times spent by an agent in the infected and exposed states are random variables drawn from exponential distributions with respective averages \(\tau_{I}=1/\mu_{I}\) and \(\tau_{E}=1/\mu_{E}\) (which are thus gamma distributions with standard deviations \(\sigma_{X}=\tau_{X}\) with \(X=I,E\)). Instead of constant rates, we can also consider times in the E and I states distributed according to gamma distributions with averages \(\tau_{E}=1/\mu_{E}\) and \(\tau_{I}=1/\mu_{I}\) and standard deviations \(\sigma_{X}=\eta_{X}\tau_{X}\) with \(\eta\neq 1\), thus obtaining non-Markovian models. We consider the extension of the three versions of the SEIR model (SEIRe1, SEIRe4, and SEIRi4) to this non-Markovian framework, namely SEIRe1v025, SEIRe4v025, and SEIRi4v025, where \(\tau_{I}\) and \(\tau_{E}\) are drawn from gamma distributions with \(\eta_{I}\) and \(\eta_{E}\) equal to 0.25 (see Table 1).
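For reference, the specification of a residence time by its mean and standard deviation maps onto the usual shape and scale parameters of the gamma distribution as in the sketch below; this is a standard result, not code from this study, and the example values are taken from the model table.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_residence_times(tau, eta, size):
    """Draw residence times with mean tau and standard deviation eta*tau
    from a gamma distribution: shape k = 1/eta**2, scale = tau*eta**2
    (eta = 1 recovers the exponential case of the Markovian models)."""
    k = 1.0 / eta**2
    scale = tau * eta**2
    return rng.gamma(k, scale, size)

# e.g. infectious periods for SEIRi4v025: tau_I = 1/mu_I = 4, eta_I = 0.25
t_I = gamma_residence_times(tau=4.0, eta=0.25, size=10000)
print(t_I.mean(), t_I.std())   # ~4.0 and ~1.0
```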
Figure 5: **Threshold contagion.** (a): Cosine similarity between infection patterns at varying \(\theta\). (b): Cosine similarity between infection patterns of threshold contagion (for different values of \(\theta\)) and simple contagion (for different values of \(R_{0}\)).

We also consider the COVID model describing the propagation of SARS-CoV-2 used in [13; 30]. In this model, when a susceptible agent is contaminated it transitions to an exposed state followed by a pre-symptomatic infectious state, remaining in these states for times extracted from gamma distributions with respective averages \(\tau_{E}\) and \(\tau_{p}\), and standard deviations \(\sigma_{E}\) and \(\sigma_{p}\). Then individuals can either evolve into a sub-clinical infection or manifest a clinical infection, with respective probabilities \(1-p_{c}\) and \(p_{c}\). The duration in the infectious state is extracted from a gamma distribution with average \(\tau_{I}\) and standard deviation \(\sigma_{I}\). An individual \(i\) in the infected states (pre-symptomatic, sub-clinical or clinical) can transmit the disease to a susceptible individual \(j\) when in contact with it, with respective rates of transmission \(r_{p}\beta W_{ij}\), \(r_{sc}\beta W_{ij}\), and \(\beta W_{ij}\).
Table 1 shows the values for the different parameters used in these models. Moreover, in all cases, the parameter \(\beta\) is tuned to obtain a desired specific value for the basic reproductive number \(R_{0}\), as detailed in the next section.
### Reproductive number and calibration
The reproductive number, \(R_{0}\), is defined as the number of cases directly generated by one infected individual in a population where all the others are susceptible. In detail, each simulation begins with one random infected node \(i\) and to obtain \(R_{0}\) we count all the neighbors of \(i\) that are directly infected by it until \(i\) becomes recovered. Each stochastic simulation entails a different value of \(R_{0}\), so the values that are reported in the results correspond to the values averaged over 1000 simulations for each set of parameters.
### Data sets
We use high-resolution empirical face-to-face contact data collected using wearable sensors in different settings. The data sets are publicly available on the website [http://www.sociopatterns.org/datasets](http://www.sociopatterns.org/datasets). Data sets are available as lists of contacts over time (with a temporal resolution of 20 s) between anonymized individuals. We obtain the networks for the simulations by temporally aggregating each data set into a single weighted graph, where the weight of a link between two nodes is given by the average daily time in contact between the corresponding individuals. The considered data sets are:
* **Primary school**, which describes the contacts among 232 children and 10 teachers in a primary school in Lyon, France, during two days of school activity in 2009 [35]. The school is composed of 5 grades, each of them comprising 2 classes, for a total of 10 classes.
* **Workplace**, gathering the contacts among 214 individuals, measured in an office building in France during two weeks in 2015 [27].
* **Hospital**, which describes the interaction among 42 health care workers (HCWs) and 29 patients in a hospital ward in Lyon, France, gathered during three days in 2010 [36].
* **High school**, describing the contacts among 324 students of "classes préparatoires" in Marseille, France, during one week in 2013 [37].
* **Conference**, which describes the interactions of 405 participants to the 2009 SFHH conference in Nice, France [38].
### Weighted hypergraphs
The data sets describe temporally resolved interactions between individuals, as explained above. To obtain the weighted hypergraphs used in the higher-order spreading model, we aggregate the data in time as follows. First, the weighted network is generated by considering each interaction between two individuals as a pairwise link of a static network with weight given by the number of times (consecutive or not) that the connection has appeared in the temporal data, and we then normalize each weight by the maximum observed weight.
To build the weighted hyperedges of larger size, we restrict ourselves here to second order, i.e. to triads. To this aim, we consider all the simultaneous interactions forming cliques of at least three nodes in the temporally resolved data set. These include triads of nodes but also larger groups, which we decompose into all the possible groups of three nodes. The weight of each triad is then taken as the number of times that it has appeared in the data set, and we also normalize here by the maximum observed value.
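A hedged sketch of this aggregation step (the snapshot format and the function name are assumptions; the real data are lists of dyadic and group contacts per 20 s window):

```python
from collections import Counter
from itertools import combinations

def build_weighted_hypergraph(snapshots):
    """Aggregate temporally resolved contacts into weighted links and triads.

    snapshots: iterable over time windows; each window is a list of groups
    (cliques) of nodes simultaneously in contact.  Link weights count the
    windows in which a pair interacts; triad weights count the windows in
    which a group of >= 3 nodes interacts (larger cliques are decomposed
    into all their triads).  Both are normalised by their maximum.
    """
    links, triads = Counter(), Counter()
    for groups in snapshots:
        for g in groups:
            g = sorted(g)
            for pair in combinations(g, 2):
                links[pair] += 1
            if len(g) >= 3:
                for tri in combinations(g, 3):
                    triads[tri] += 1
    if links:
        m = max(links.values())
        links = {k: v / m for k, v in links.items()}
    if triads:
        m = max(triads.values())
        triads = {k: v / m for k, v in triads.items()}
    return links, triads

# toy usage: two time windows
snaps = [[{1, 2, 3}], [{1, 2}, {3, 4}]]
W, W_tri = build_weighted_hypergraph(snaps)
print(W)       # {(1, 2): 1.0, (1, 3): 0.5, (2, 3): 0.5, (3, 4): 0.5}
print(W_tri)   # {(1, 2, 3): 1.0}
```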
Table 1: Model parameters.

| **SIR model** | \(\mu_{I}\) |
| :-- | :-- |
| SIR | 0.25 |

| **Markovian SEIR models** | \(\mu_{E}\) | \(\eta_{E}\) | \(\mu_{I}\) | \(\eta_{I}\) |
| :-- | :-- | :-- | :-- | :-- |
| SEIRe1 | 1 | 1 | 1 | 1 |
| SEIRe4 | 0.25 | 1 | 1 | 1 |
| SEIRi4 | 1 | 1 | 0.25 | 1 |

| **Non-Markovian SEIR models** | \(\mu_{E}\) | \(\eta_{E}\) | \(\mu_{I}\) | \(\eta_{I}\) |
| :-- | :-- | :-- | :-- | :-- |
| SEIRe1v025 | 1 | 0.25 | 1 | 0.25 |
| SEIRe4v025 | 0.25 | 0.25 | 1 | 0.25 |
| SEIRi4v025 | 1 | 0.25 | 0.25 | 0.25 |

| **COVID model** | \(\tau_{E}\pm\sigma_{E}\) | \(\tau_{p}\pm\sigma_{p}\) | \(\tau_{I}\pm\sigma_{I}\) | \(p_{c}\) | \(r_{p}\) | \(r_{sc}\) |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| COVID | \(4\pm 2.3\) | \(1.8\pm 1.8\) | \(5\pm 2.0\) | 0.5 | 0.55 | 0.55 |
## Acknowledgments
This work was supported by the Agence Nationale de la Recherche (ANR) project DATAREDUX (ANR-19-CE46-0008).
|
2302.14586 | Implementing RFI mitigation in Radio Science | This paper presents an overview of methods for mitigating radio frequency
interference (RFI) in radio science data. The primary purpose of mitigation is
to assist observatories to take useful data outside frequency bands allocated
to the Science Services (RAS and EESS): mitigation should not be needed within
Passive bands. Mitigation methods may be introduced at a variety of points
within the data acquisition system in order to lessen the RFI intensity and to
limit the damage it does. These methods range from proactive methods to change
the local RFI environment by means of regulatory manners, to pre- and
post-detection methods, to various pre-processing methods, and to methods
applied at or post-processing. | Willem A. Baan | 2023-02-28T14:13:58Z | http://arxiv.org/abs/2302.14586v1 | # Implementing RFI mitigation in Radio Science
###### Abstract
This paper presents an overview of methods for mitigating radio frequency interference (RFI) in radio science data. The primary purpose of mitigation is to assist observatories to take useful data outside frequency bands allocated to the Science Services (RAS and EESS): mitigation should not be needed within Passive bands. Mitigation methods may be introduced at a variety of points within the data acquisition system in order to lessen the RFI intensity and to limit the damage it does. These methods range from proactive methods to change the local RFI environment by regulatory means, to pre- and post-detection methods, to various pre-processing methods, and to methods applied at or after processing.
Methods: observational - Techniques: interferometric, spectroscopic, miscellaneous - Radio Frequency Interference (Published in JAI, Vol. 08, No. 01, 2019).
## 1 Introduction
Radio Frequency Interference (RFI) has become a significant factor in our ability to observe natural phenomena on Earth, in our solar system, and in the nearby and distant universe. From the radio scientist's point of view, RFI is considered to be any unwanted addition to the wanted signal that has the potential to degrade or prevent the successful conduct of an observation. The chance of encountering RFI has been increasing steadily because of the ability to observe increasingly weaker signals with more sensitive detection systems that operate over larger bandwidths. In addition, because spectral lines have fixed fundamental (rest) frequencies, radio astronomers looking at weaker emission lines from more distant sources in the universe must observe lines that are redshifted outside the science service bands into bands allocated to active radio services. And while active spectral usage is intensifying and the active service bands are moving to higher frequencies, the chance of finding strong interfering signals in the science data is becoming larger.
Unlike thermal noise, which has stable temporal stochastic properties (white noise) and can be dealt with through radiometric detection (i.e. longer integration times and on-source minus off-source subtraction), an RFI signal is temporally, spatially or spectrally structured and can obscure a wanted signal or produce a false positive detection.
The Radio Regulations issued by the Radiocommunication Sector of the International Telecommunication Union (ITU-R) assign Primary and secondary users to allocated spectral bands, such that secondary users are not allowed to interfere with the Primary use of the band [ITU-R Radio Regulations, 2018]. This regulatory system works well in general, except that some users do not respect these rules. In 'passive' bands, the science services to which the bands are allocated are Primary and all emissions are prohibited. However, because this requirement is 'too difficult to adhere to' for neighbouring active services, RFI may need to be tolerated in those bands for 2% of time for a single network or 5% for an aggregate situation. In addition, one may encounter out-of-band and spurious emissions resulting from emissions in adjacent bands, and there may also be some illegal transmissions. In bands shared between science services and other services, how RFI issues need to be coordinated depends on the status of the allocated services. Outside the bands allocated to the science services, science users have no rights of protection and may operate in these bands on a 'non-interference' basis relative to the Primary users of the band.
Traditionally much of the data corrupted by RFI has been discarded. However, technology and computing advances have made it possible to mitigate - to lessen in intensity and damage - the effect of the RFI signals in scientific data. Yet RFI mitigation has often been an afterthought and retro-active implementation of mitigation options at observatories has been very slow. Often the severity of the anticipated long-term RFI conditions has been underestimated by the users. Furthermore, the complexity of integrating RFI mitigation systems into existing observing systems was too high because these systems were not yet fully digital. Finally, the users preferred (old-fashioned) hands-on flagging methods over the adaptation of black-box procedures.
Changes in technology have made it possible to implement RFI mitigation in existing systems as well as new instrument projects. Rather than simply complaining about the presence of RFI in the data and about the damage it does, it is possible to do something about the situation. In this paper we discuss methods of mitigation that have been tried and successfully tested.
Previous reviews covering many of the methods and issues presented in this paper are found in [14, 15, 16, 17] and ITU-R Report 2126 [RAS Report 2126, 2018].
## 2 Managing radio frequency interference
RFI always degrades the quality of the data and the working assumption for most observations has been that RFI-corrupted data are partially or completely unusable. The most common method for dealing with RFI is to excise spectral or temporal segments of a data set that are known to be corrupted. There are powerful motivations to move beyond throwing away data. Methods that remove or mitigate RFI, thus enabling scientific usage of data that would otherwise be discarded, are becoming more and more essential and feasible. In addition, the dramatic increase in the size of data sets make automated procedures ever more necessary.
In practical terms and from the point of view of the science user, any external signal that disturbs the wanted scientific signals can be considered as RFI, whether it is strong and showing in the instrument bandpass or weak and only showing in integrated data. The ITU-R has defined the 'power flux density' (pfd) levels that are considered detrimental interference; these levels are well within the noise of the observing instruments. For radio astronomy they are given in Recommendation ITU-R RA.769 and are based on 10% of the system noise of the instrument. The percentage of permissible data loss in Passive bands resulting from emissions above these thresholds is specified in Recommendation ITU-R RA.1513 [RAS Recommendations, 2018]. In the exclusive Primary bands for the passive Radio Astronomy Service (RAS) and the Earth Exploration-Satellite Service (EESS), which are listed in RR No. 5.340, all emissions are prohibited. In the other radio astronomy bands, listed in RR No. 5.149, administrations are urged to take all practicable steps to protect the radio astronomy service from harmful interference [ITU-R Radio Regulations, 2018]. For Remote Sensing (RS) and the EESS the interference levels are discussed in Recommendations ITU-R RS.1029 for passive sensors, ITU-R RS.1166 for active sensors and ITU-R RS.1263 for MetAids [EESS Recommendations, 2018]. The important Resolution 750 (WRC-15) presents compatibility studies between EESS (passive) and relevant active services and discusses hard limits for OOB emissions into EESS bands [Res. 750, 2018].
The aim of mitigation efforts is to exploit the increased availability of signal processing hardware and algorithms and to enable science observations to be conducted in densely occupied bands and heavily used radio environments. As a result, RFI mitigation requires more complexity in order to deal with the growing number of wireless communications services and the increased need for observations outside protected/allocated bands.
A guiding principle for the implementation of RFI Mitigation should be to remove interfering signals as early as possible in the data chain and at the highest possible time and spectral resolution. Since the removal of RFI signals is generally limited by the instantaneous signal-to-noise of the signal in the system, the strongest and most damaging RFI signals may be mitigated at the sampling speed in the system. Weaker signals that are still buried in the sampling noise can only be removed after time-integration later in the signal path. Mitigation of the RFI as early as possible in the data chain minimises the damage to the data and keeps the data loss at a minimum level.
Mitigation at high time resolution is particularly beneficial for time-variable RFI and will reduce data loss due to time-smearing. Higher spectral resolution reduces the occupation of spectral channels by the RFI and limits the damage of RFI to the data due to spectral-smearing.
In the following sections, methods of mitigation are discussed for a more general structure of receiving systems. Most of this discussion is geared towards radio astronomy instruments but many methodologies will also be applicable for EESS systems for both terrestrial and space applications. However, specific applications for space-based Remote Sensing applications will not be discussed in this review.
## 3 Implementation in observing systems - a multi-layer strategy
Effective mitigation of RFI signals needs to be multi-layered, using different technical methods at different locations in the signal path of the system, and can be subdivided into three types of activity: 1) Preventing radio frequency interference (RFI) signals from entering the science data, including both the reduction of the observatory's vulnerability to RFI signals and the elimination of in-house signals. 2) Removing the strongest and/or time-variable RFI signals from the data in real time at one or more locations in the data chain. 3) Off-line removal or reduction of the impact of RFI.
Multi-layer mitigation can be introduced at different stages in the data path and can be grouped into different categories (Figure 1). Naturally, the ability to apply these mitigation protocols will depend on the structure of the observing system, and some methods cannot be applied to all systems. Certain methods can be applied at and before each individual observing station, others in the signal chain of the stations before station data processing or before array central processing, others apply to the individual/central processing site, and lastly some methods may be applied after the data has been processed. The ability of all methods to excise or subtract RFI signals from the incoming data is limited by the Interference to Noise Ratio (INR) and they can reduce the RFI to a level of INR \(\approx\)1. Therefore strong RFI signals can be dealt with early in the data path, while weaker RFI signals can only be addressed after integration.
The multi-layer mitigation scheme presented in Figure 1 includes an 'adaptive control' component. After a 'training process' this unit should be able to recognize the specific type of interference encountered and adjust the system to invoke the optimal mitigation algorithm. The long-term goal for this unit is to install an autonomous RFI mitigation system that can handle all kinds of interference environments.
Figure 1: Stages and methods of RFI Mitigation along the signal path in a radio observing instrument.
Another function to be integrated into this control unit is the accurate bookkeeping on 'what has been done where'.
## 4 Station - Pro-active and pre-detection
_Cleaning the Site_ - Pro-active and pre-detection actions are aimed at preventing RFI signals from entering the detector systems or reducing their strength. In the first place, locally generated RFI from inside and outside the observatory needs to be eliminated or reduced to levels that are below RA.769 values at the entry of the sensor or telescope. This form of Electromagnetic Compatibility (EMC) should address all radio frequency noise generated by electrical and electronic equipment and requires shielding/screening or placement in Faraday cages. This effort would also include laptops and mobile communication devices in the building and on the site. In the case of distributed buildings at an observatory site, the radio noise generated in such buildings should also be reduced to RA.769 levels at the location of the telescope or sensor. The observatory should not be its own (worst) enemy.
_Quiet and Coordination Zones_ - Working closely with governmental spectrum and regulatory agencies, it is necessary to establish a Quiet Zone (QZ) around the observatory. It is generally not possible to make such a quiet zone actually radio quiet, but it will prevent new transmitters from being located inside the zone and help remove some old ones. In addition, a Coordination Zone (CZ) should be established that facilitates coordination with the operators about nearby transmitters in order to reduce or eliminate the signal strength in the direction towards the observatory. Many radio astronomy observatories have established quiet/coordination zones. Large-scale zones have been documented for several sites, such as the Mid West Radio Quiet Zone in Western Australia [ARQZWA, 2007], the National Radio Quiet Zone around Green Bank [NRQZ, 1958] and the Puerto Rico Coordination Zone around the Arecibo Observatory [PRCZ, 1998]. Information about their structure and characteristics has been described in ITU-R Report RA.2259. Quiet zones for Earth exploration systems are realistic for terrestrial applications. However, RS space applications will mostly depend on coordination with terrestrial interferers.
Transmitters existing inside a Quiet Zone at the time of its establishment, or installed afterwards, need to be identified and quieted or relocated with the help of the national spectrum agency. Friendly coordination with transmitting stations existing in a Coordination Zone may result in lowering their power, changing their emission pattern or changing channel in order to reduce the signal strength at the observing frequencies of the observatory. In general, the operators of existing stations are sensitive to the needs of an observatory. New stations in a coordination zone need to be coordinated before the installation takes place.
## 5 Station Mitigation at the detection system
Mitigation at the detection system should be applied to standalone stations as well as to all stations that are part of an array. The use of robust receivers for observations that can withstand the presence and effects of strong RFI is essential. The suppression of second-order intermodulation products and enough 'headroom' to avoid gain suppression are guiding principles for the construction of such receivers.
Spectral filtering and band-edge filtering have been standard methods to prevent strong signals from entering the band. However, filtering comes at a price in gain performance. For in-band filtering, super-conducting notch filters have been developed. These costly devices will slightly degrade the performance of the receiver, and their tuning is not very flexible.
Edge filtering or band selection to avoid band sections with strong signals may be applied for continuum studies, but observers enjoy less flexibility with regard to spectral line studies and redshifted surveys. Application of software defined radio techniques will aid in finding clean spectrum and avoiding RFI signals.
## 6 Station Mitigation before correlation and processing
### Excision at baseband
_Time and frequency analysis_ - A powerful way of removing strong RFI signals that have entered the receiver is real-time excision at the Intermediate Frequency (IF, at baseband) at the sampling speed. Signals that have significant SNR at the sampling frequency are also most damaging to the surrounding spectral data and should be removed from the data before further processing. Successful applications of excising RFI from the data in the IF stage have been based on thresholding and kurtosis schemes. Real-time thresholding simply applies a threshold of \(n\) times the root mean square (rms) of the noise for removing IF signals above this level in the frequency and/or time domains. A successful implementation of this and other methods has been documented for the Westerbork Synthesis Radio Telescope [WSRT; Baan et al., 2004, 2010]. Recently, a real-time application of the Median Absolute Deviation (MAD) estimator [Fridman, 2008] for RFI thresholding has been implemented (pre-correlation) within the GMRT wideband backend in order to target time-variable power-line RFI, particularly at frequencies less than 700 MHz [Buch et al., 2018].
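As an illustration of the underlying idea (not the GMRT or WSRT implementation), a MAD-based threshold can be applied per spectral channel as in the following sketch; the choice of five robust standard deviations is an arbitrary assumption.

```python
import numpy as np

def mad_flag(power, n_sigma=5.0):
    """Flag time-frequency samples whose power deviates from the per-channel
    median by more than n_sigma robust standard deviations, where the robust
    std is estimated as 1.4826 * MAD (median absolute deviation)."""
    med = np.median(power, axis=0)                       # per-channel median
    mad = np.median(np.abs(power - med), axis=0)
    robust_std = 1.4826 * mad
    return np.abs(power - med) > n_sigma * robust_std    # boolean flag mask

# toy usage: 1000 spectra x 256 channels with injected bursty narrow-band RFI
rng = np.random.default_rng(0)
power = rng.exponential(1.0, size=(1000, 256))
power[200:260, 40] += 30.0                               # RFI burst in channel 40
flags = mad_flag(power)
print(flags[:, 40].sum(), flags[:, 41].sum())            # many vs. few flags
```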
Similarly a dynamical excision threshold may be based on a _Kurtosis analysis_ in order to identify and excise statistical outliers. Kurtosis is a measure of the 'tailedness' of the probability distribution using a scaled version of the fourth moment of the data such that a higher kurtosis number results from infrequent extreme deviations. This statistical method has been applied successfully for removing signals with kurtosis \(>3\) from solar data [Nita et al., 2007, Gary et al., 2010]. Kurtosis excision at baseband is also applied to pulsar observations at the Parkes telescope [Hobbs, 2018] and works well for RFI signals that are significantly stronger than the pulsed signals of the pulsar. Caution should be applied for pulsar searching observations because the pulses for these sources are still unknown.
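A minimal sketch of block-wise kurtosis flagging of baseband voltages (illustrative only; the block length and the tolerance on the Gaussian value of 3 are assumptions):

```python
import numpy as np

def kurtosis_flag(voltages, block=1024, tol=0.5):
    """Flag blocks of baseband voltage samples whose (Pearson) kurtosis
    deviates from the Gaussian value of 3; non-Gaussian RFI typically
    pushes the kurtosis above 3."""
    n_blocks = voltages.size // block
    x = voltages[:n_blocks * block].reshape(n_blocks, block)
    x = x - x.mean(axis=1, keepdims=True)
    m2 = (x**2).mean(axis=1)
    m4 = (x**4).mean(axis=1)
    kurt = m4 / m2**2
    return kurt, np.abs(kurt - 3.0) > tol

# toy usage: Gaussian noise with an impulsive RFI burst in one block
rng = np.random.default_rng(0)
v = rng.normal(size=1024 * 64)
v[10 * 1024:10 * 1024 + 50] += 20.0        # strong impulsive contamination
kurt, flags = kurtosis_flag(v)
print(np.flatnonzero(flags))               # block 10 is flagged
```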
In particular, time-variable interference can be addressed optimally by these methods, although they will result in a certain percentage of data loss that depends on the bandwidth and duty cycle of the RFI signal. However, not removing these signals would result in even more damage and data loss after integration at a later stage. As an example, the out-of-band emissions from the uplink-downlink of a Mobile Satellite System with a 50% duty cycle would result in data loss in the adjacent RAS 1612 MHz band in the range from 50-100%. Real-time thresholding at baseband with the RFIMS observing system at Westerbork Observatory made it possible to only sample the uplink time segment of Iridium with good results, but still with a data loss of about 50% (Figure 2).
Thresholding and kurtosis analysis may be used to clip the power of strong RFI signals and replace these with some smaller values. This 'data loss' changes the noise properties in the affected data and may also affect data containing wanted signals in (sometimes) unknown locations in the spectrum. Accurate bookkeeping of the 'when and where' is therefore an important part of the process.
Figure 2: WSRT observations of pulsar PSR 0332+5434 under severe RFI conditions at 1625 MHz. (Left) A period folded spectrum without RFI mitigation. (Right) The period folded spectrum with thresholding excision clearly shows the pulsar profile. Figures are courtesy of Petr Fridman and Ben Stappers.

Implementation of mitigation and excision at baseband requires the (simple) inclusion of processing power in the IF path of the instrument before further processing or correlation. For IF processing in PAFs, the compute power required to process all elements will be significant, and processing after beamforming can suffice if the RFI does not affect the beamforming process.
_Time Blanking_ - In the presence of strong periodic signals such as radar systems, a blanking scheme may be adopted where the backend and data taking are interrupted starting at the leading edge (in time) of the radar pulse. The blanking window will have to take into account both the pulse and the multi-path signals caused by the intervening terrain. This method has been applied at the Arecibo Observatory for certain radars at the San Juan (SJU) airport, which allows observations inside the radar band with a certain data loss that would not have been possible otherwise. Thresholding at baseband (discussed above) would achieve the same results as blanking the radar at the backend and with a similar amount of data loss.
### Removing the RFI signature
A number of _waveform subtraction_ methods and methods that employ the statistical properties of the data have been used to cancel and remove the added RFI power from the spectral data in the time domain with minimal or no damage to the data itself. However, the effectiveness and the achieved suppression are limited by the quality of the estimate of the interference received by the instrument.
_Autoclean_ - Temporally spread and strongly correlated RFI can be suppressed using cancellation techniques based on estimating the RFI waveform and subtracting it from the _signal + RFI_ mixture. The RFI waveform can be estimated using available filtering techniques (spline-smoothing, wavelet analysis, Wiener filtering, parametric estimation). In this _autoclean procedure_ the estimate is then subtracted from the input data in the temporal or frequency domain (see Fig. 3a). In principle, these methods do little or no damage to the data and the signal of interest.
Figure 3: Methods of RFI cancellation using autoclean filtering (frame a) and RFI excision by filtering using a reference channel (frame b). Image obtained from Fridman & Baan (2001).

An application of the autoclean procedure has been described employing a WSRT telescope and comparing the result with that of an 'uncleaned' antenna [Fridman & Baan, 2001]. An example of a parametric approach to this type of RFI cancellation has been presented for the strong interfering signal of a GLONASS satellite operating within the OH 1612 MHz band [Ellingson et al., 2001]. The RFI parameters Doppler frequency, phase code and complex amplitude were determined for each data segment, to be used to calculate the RFI waveform for subtraction from the _RFI + noise signal_ mixture. A recent parametric application is described for the removal of a broadband digital video broadcast (DVB) signal from the data [Steeb et al., 2018].
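The principle of parametric estimation and subtraction can be illustrated for the simplest case of a narrow-band tone at a known frequency; real GLONASS or DVB cancellers also estimate the Doppler drift and the modulation (e.g. the phase code), so the following sketch is only a schematic illustration with assumed parameter values, not the published implementation.

```python
import numpy as np

def subtract_tone(x, f_rfi, fs):
    """Estimate the complex amplitude of a narrow-band RFI tone at a known
    frequency by least squares and subtract its waveform from the data."""
    n = np.arange(x.size)
    model = np.exp(2j * np.pi * f_rfi / fs * n)        # unit-amplitude tone
    amp = np.vdot(model, x) / x.size                   # LS amplitude estimate
    return x - amp * model, amp

# toy usage: weak complex 'sky' noise plus a strong tone
rng = np.random.default_rng(0)
fs, f_rfi = 1.0e6, 1.23e5
noise = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
x = noise + 30.0 * np.exp(2j * np.pi * f_rfi / fs * np.arange(4096))
clean, amp = subtract_tone(x, f_rfi, fs)
print(abs(amp))                          # ~30
print(np.var(x), np.var(clean))          # most of the RFI power is removed
```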
_Adaptive Noise Cancellation and Reference antennas_ - A separate dedicated reference channel may be used to obtain an independent estimate of the RFI signal. This technique of _adaptive noise cancellation_ (ANC) is used actively in digital processing for communication and military technology. Two data channels as shown in the block diagram of Figure 3b are a 'main channel' radio telescope and an 'auxiliary or reference channel' radio telescope, both pointing 'on-source' and containing the RFI signal. Since the RFI signal in both channels is not exactly identical because of different propagation paths and radio receivers, an adaptive filter is used to reduce the error signal and apply the result to the main channel. The basic principle of this procedure, which can be applied both in the temporal domain (_adaptive filtering_) and in the frequency domain, is in the latter case to make a fast Fourier transform (FFT) of the incoming data, perform an adaptation operation on the frequency bins, and then return to the temporal domain via an inverse FFT. This method, based on Wiener filtering, works for interfering signals with a significant INR, i.e. when the RFI dominates the system noise, and the suppression of the interfering signal can be equal to its instantaneous INR. Examples of the use of reference channels are described in [Fridman & Baan, 2001].
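A minimal least-mean-squares (LMS) sketch of such an adaptive canceller with a reference channel is given below; the filter length, step size and toy signals are assumptions and this is not an observatory implementation.

```python
import numpy as np

def lms_cancel(main, ref, n_taps=16, mu=1e-3):
    """LMS adaptive filter: the reference channel (RFI-dominated) is filtered
    and subtracted from the main channel, so the output error converges to
    the RFI-free astronomical signal plus noise."""
    w = np.zeros(n_taps)
    out = np.zeros_like(main)
    for n in range(n_taps, main.size):
        u = ref[n - n_taps:n][::-1]          # most recent reference samples
        y = w @ u                            # RFI estimate in the main channel
        e = main[n] - y                      # cleaned output sample
        w += 2 * mu * e * u                  # LMS weight update
        out[n] = e
    return out

# toy usage: main = weak signal + delayed RFI, ref = RFI with high INR
rng = np.random.default_rng(0)
N = 20000
rfi = 5.0 * np.sin(2 * np.pi * 0.05 * np.arange(N))
signal = 0.1 * rng.normal(size=N)
main = signal + 0.8 * np.roll(rfi, 3)        # RFI enters main via a delayed path
out = lms_cancel(main, rfi)
print(np.var(main), np.var(out[5000:]))      # RFI power strongly reduced
```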
A variation on adaptive filtering is to subtract a reference data-channel from a signal data-channel by comparing _on-source plus RFI_ and _off-source plus RFI_ signals. Separate reference antennas pointed at the interfering source have been successfully used at the GBT and at Parkes [Barnbaum & Bradley, 1998, Briggs et al., 2000, 1998].
Adaptive filters are effective for spectral line observations, where the RFI and the signal-of-interest occupy the same frequency domain, and when spectral information is unimportant, such as in pulsar [Kesteven, 2005] and continuum studies. However, these methods should preferably be applied before data processing but may also be applied afterwards, depending on the SNR of the external signal. In principle, the signatures for multiple interfering sources may be added before subtraction from the wanted signal.
_Higher-order statistics and Probability Analysis_ - The voltages of the system noise and a radio source generally have a Gaussian signal distribution with a zero mean. Fourier transformation of such an ideal signal also gives real and imaginary components in every spectral bin that are Gaussian random variables with zero mean. On the other hand, the instantaneous power spectrum (the square of the magnitude of the complex spectrum) has an exponential distribution that is described as a chi-squared distribution with two degrees of freedom. The presence of an RFI signal modifies the ideal input signal and yields a change of its statistics, giving a power spectrum with a non-central chi-squared distribution with two degrees of freedom. Real-time analysis and DSP processing of the distribution before any averaging will allow a separation of the two signal components, such as an RFI signal superposed on a known spectral line [Fridman & Baan, 2001].
Higher-order statistics methods and determining the moments of the probability distribution of the signal power can be used both for spectral line and continuum observations, and it will be useful to introduce this procedure into new detection systems [Fridman, 2001]. However, reliable estimates of these higher moments (or cumulants) will require more averaging than for the first moment (the mean), which is necessary for mitigation of the weak RFI. On the other hand, large averaging intervals will smooth the variability of the RFI and will yield estimates with a considerable bias. Therefore, there are limitations on the detection and excision of weak RFI signals, which is equally true for all other RFI mitigation methods.
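As a toy illustration of such moment-based detection: for pure Gaussian noise the instantaneous channel power is exponentially distributed, so its variance equals its squared mean, and channels whose variance-to-squared-mean ratio departs from unity over many short spectra can be flagged as RFI candidates. The threshold below is an illustrative assumption and, as noted above, the estimate only becomes reliable with sufficient averaging.

```python
import numpy as np

def flag_by_power_moments(spectra, threshold=0.3):
    """Toy higher-order-statistics detector.

    `spectra` is an (M, n_chan) array of instantaneous power spectra
    (|FFT|^2 of M short data segments).  For pure Gaussian noise each
    channel's power is exponentially distributed, so its variance equals
    its squared mean; channels whose ratio deviates from 1 by more than
    `threshold` are flagged as RFI candidates.
    """
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=0)
    var = spectra.var(axis=0)
    ratio = var / mean**2
    return np.abs(ratio - 1.0) > threshold
```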
_Multi-Element and Phased-Array Systems_ - In addition to station processing using thresholding or kurtosis techniques, adaptive filtering is particularly useful for multi-feed single-dish radio telescopes and array instruments, where one array element is being used as a reference channel. For an array instrument each strong and distinct source of RFI requires a separate reference antenna.
_Spatial filtering_ is widely used in _smart antennas_ in (multi-element) radar and communication systems. The applicability of spatial filtering for sparse arrays may be limited because of their sparseness and because of offline processing of long integrations during which RFI sources may also be moving. A variety of specific algorithms including maximum SNR, subspace projection, Wiener filtering, and multiple sidelobe cancelling have been studied for application to radio astronomical observing [Boonstra, 2005, Boonstra & van der Tol,
2005, Ellingson, 2003, Ellingson & Hampson, 2002, Jeffs et al., 2005, Fridman & Baan, 2001, Leshem et al., 2000, van der Tol & van der Veen, 2005].
Multi-layer real-time _adaptive/spatial-nulling_ techniques may be applied for telescopes with phased-array technology but also for densely populated sensor stations, such as for recent low-frequency systems or new generation instruments [van Ardenne et al., 2000, Bentum et al., 2008, Black et al., 2015, Boonstra, 2005, Bregman, 2000]. The use of a reference antenna with a direct look at the interferer with a higher SNR will greatly improve the nulling performance [Briggs et al., 2000, Jeffs et al., 2005, Sardarabadi et al., 2016].
In order to fully exploit these techniques, multi-element systems should have computer control of the antenna phases and their amplitudes [Fridman & Baan, 2001]. Adaptive filtering using a beam-forming algorithm requires a high INR and is limited to a small number of RFI targets to be tracked during an observation. The RFI sources also need to remain stable and predictable through an observation. Spatial filtering in beam-forming mode for a limited number of RFI sources generally does not degrade the image generated by the main beam.
_Subspace Projections for Multi-Element Systems_ - _Subspace projection_ for array null-formation identifies the interference in terms of correlations between array elements, which can be used to determine beamforming coefficients that result in patterns that reject the interference with little or no effect on the main lobe characteristics. Subspace projection using outstanding RFI properties has significant advantages for radio astronomy [Raza et al., 2002] but does not help with poor detection and localisation performance for the interference [Ellingson & Hampson, 2002, Leshem et al., 2000].
Beam pattern distortions when subspace nulling a (rapidly) moving source of interference and for narrowband signals can be reduced with deeper nulls when time-integrated data is stored and processed [Jeffs & Warnick, 2008, Landon et al., 2011]. However, the cost of digital/transport infrastructure and a possible auxiliary antenna is an obstacle for implementation and there are limits in nulling below the noise floor [Jeffs & Warnick, 2013]. There is also the need for a good science case. In general, null-forming is most applicable to mitigation of RFI from satellites, and can be expected to be somewhat less effective against terrestrial RFI with intervening terrain.
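A minimal sketch of covariance-based subspace projection is shown below: the strongest eigenvectors of the array covariance matrix are taken to span the interference subspace (a reasonable identification only when the RFI dominates the system noise), and the covariance is projected onto the orthogonal complement. The assumed number of interferers is an input that in practice must come from a detection or model-order estimation step.

```python
import numpy as np

def subspace_projection(R, n_rfi=1):
    """Project an array covariance (visibility) matrix onto the complement
    of the estimated interference subspace.

    R      : (n_ant, n_ant) Hermitian covariance matrix of element voltages
    n_rfi  : assumed number of dominant interferers
    Returns the projected covariance matrix and the projector itself.
    """
    R = np.asarray(R, dtype=complex)
    # Eigenvectors sorted by descending eigenvalue; the strongest ones span
    # the interference subspace when the RFI dominates the system noise.
    eigval, eigvec = np.linalg.eigh(R)
    order = np.argsort(eigval)[::-1]
    U = eigvec[:, order[:n_rfi]]
    P = np.eye(R.shape[0]) - U @ U.conj().T
    return P @ R @ P.conj().T, P
```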
## 7 Station and Central - Mitigation at correlation and post-correlation
As part of the correlation process, digitised data are generally integrated over time intervals ranging from the sampling time up to seconds, which significantly raises the INR. In consequence, persistent but weak RFI, that could not be treated in real-time, and weak (spectral) remnants of earlier mitigation operations become accessible for processing. Therefore, a second layer of (automatic) real-time excision can be applied on the accumulated data records that would be complementary to off-line flagging of the data.
Flagging and excising interference in the frequency or time domain has been the standard procedure used by radio scientists during post-correlation processing. Flagging of baselines and antennas in array observations also identifies and eliminates system problems. Post-correlation excising is performed on integrated/averaged and correlated data, and can result in considerable data loss because even for time- and frequency-variable RFI whole time-slots, whole baselines, and/or whole antennas need to be flagged. This differs from antenna-based flagging/excising of IF baseband data, which results in less data loss overall.
_On-line or off-line processing_ - Anti-coincidence protocols may be incorporated at the processing stage in order to identify the RFI components, as well as digital mitigation processing and the integration of a reference antenna during (software) correlation. However, the implementation of these algorithms into pre-existing hardware backends requires the addition of both special hardware and software.
Automated flagging and excision of calibrated/processed and integrated data records has been proposed and implemented for single-dish systems or for each baseline of an interferometry system [Kalberla, 2010, Keating et al., 2010, Middelberg, 2006, Offringa et al., 2010, Sirothia et al., 2009]. Recent developments on Recurrent Neural Network algorithms and Deep Learning systems may provide different options for recognizing/identifying RFI in the data [Akeret et al., 2017, Burd et al., 2017]. Systems incorporating the SumThreshold algorithm have been implemented for the low-frequency LOFAR observatory and higher
frequency MERLIN interferometer (Offringa et al., 2012; Peck & Fenech, 2013). New generation software correlators permit the integration of threshold or kurtosis-based flagging applications before and after FX (Fourier Transform before multiplication) correlation and stacking protocols (Deller, 2010). A reference antenna implemented at the post-correlation stage can remove the signal from a well-defined RFI source using the available closure relations (Briggs et al., 2000).
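For illustration only, a greatly simplified one-dimensional variant of a SumThreshold-style flagger is sketched below; the base threshold, the scaling factor rho, and the window sizes are illustrative assumptions, and the published algorithm additionally alternates between the time and frequency directions inside an iterative smoothing pipeline.

```python
import numpy as np

def sumthreshold_1d(power, chi1, rho=1.5, max_m=16):
    """Greatly simplified SumThreshold-style flagger for a 1-D sequence of
    channel (or time-slot) powers.

    Windows of M = 1, 2, 4, ... consecutive samples are flagged whenever their
    mean exceeds chi1 / rho**log2(M); samples flagged at smaller window sizes
    are replaced by the current threshold so that bright RFI does not mask
    fainter neighbouring samples.
    """
    power = np.asarray(power, dtype=float)
    flags = np.zeros(len(power), dtype=bool)
    m = 1
    while m <= max_m:
        thr = chi1 / rho ** np.log2(m)
        values = np.where(flags, thr, power)
        for start in range(len(power) - m + 1):
            if values[start:start + m].mean() > thr:
                flags[start:start + m] = True
        m *= 2
    return flags
```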
_Fringe stopping and delay compensation_ - Array instruments employ fringe-stopping and delay-compensation techniques to keep a zero fringe rate at the central observing position during observations. As a result the stationary (terrestrial) and satellite RFI components in data distinguish themselves by fringing faster than components from astronomical sources. This distinctive (relative) motion allows the off-line identification and elimination of stationary RFI sources from both the correlated data and the image plane without causing data loss (Cornwell et al., 2004; Wijnholds et al., 2004). The coding for this operation originating at the GMRT is now incorporated into AIPS task UVRFI (Athreya, 2009).
_Sub-space processing_ - In addition, more sophisticated statistical or sub-space processing can be implemented to remove the RFI component with a minimum of data loss. _Subspace filtering methods_ may also be implemented in a digital correlation system to search for a particular signature in the RFI power component of data in order to identify and remove it. A particularly successful application is the search for cyclo-stationarity within the data, which works well for digitally modulated RFI signals (Feliachi et al., 2009). For array applications these methods depend on the estimation of the RFI spatial signature using the diagonalization of the correlation matrix or the cyclic correlation matrix of the array (Hellbourg et al., 2012). In the case of a reference antenna, the evolution with time for a multiple RFI scenario requires a subspace tracking approach using the covariance data where the reference antenna supports a faster convergence (Hellbourg et al., 2014).
## 8 Conclusions
Both on-line and off-line data processing have been successful in mitigating the RFI environment of radio astronomy observatories. While there is an increasing variety of viable mitigation options, the choice of method depends strongly on the RFI characteristics, the type of radio instrument, and the type of observation. In particular, on-line real-time data processing may be preferred for a variable RFI environment, while special measures such as reference antennas and spatial filtering may be preferred for known and fixed sources of RFI. In addition to these factors, the absence of human involvement may also render automated on-line processing a more attractive option.
No universal method exists for mitigating RFI in astronomical data and no method can identify or remove RFI within the noise of the system. The effective suppression of RFI depends on the INR and its temporal and spectral characteristics. A quantitative evaluation of the method used is not always possible because mitigation algorithms are generally non-linear processes that also affect the noise characteristics and the gain calibration. The toxicity of the method used, i.e. the negative effect invoked on the data by the deployed method, and the amount of data loss resulting from the method are other guiding factors for evaluating the various methods.
Multiple methods need to be applied to deal with a more general RFI environment. Because RFI characteristics change after each mitigation step and with increasing integration of the data, the cumulative effect of RFI mitigation at subsequent stages is not a linear sum of what each method can do, but rather the sum of what is practically possible at each step.
The cost and capabilities of computing hardware and digital applications at radio astronomy observatories are rapidly changing. Upgrades of existing observing facilities and newly constructed instrumentation provide ample opportunity to implement and use automated RFI mitigation algorithms. These changing capabilities also result in increased observing bandwidths, higher time resolution, and higher spectral resolution. The resulting increasingly large data volumes will force the introduction of automated data reduction pipelines and automated mitigation procedures. While the traditional user community of radio observatories accepts automated mitigation implementations only reluctantly, if at all, it will be forced to do so in the future by the sheer volume of the data being handled.
New telecommunication and broadcasting technologies are reaching the market place, and many of these involve unlicensed mobile devices. Their movement is difficult to control and they will rapidly affect observatory operations. Algorithmic research is needed in order to eliminate the signals of these devices from the data. In particular, spread spectrum (ultra-wide band) devices will pose problems for passive services because their digital modulation schemes do not respect the boundaries of spectrum allocations. Current estimates suggest that the number of transmitting devices used by each person is set to increase dramatically and many of these devices will rely on dynamic spectrum access. The discovery space for radio astronomy is determined to a significant degree by the technical characteristics of the observing system and by limiting factors such as the RFI environment. While new generation instruments seek out the most pristine environments, existing facilities need to coexist with their local environment. In order to prevent the RFI environment becoming the limiting factor for each of the existing observatories, spectrum management, both internal and external, as a method to control this environment must remain a very high priority. Both observatory management and astronomers should be taking RFI issues seriously.
|
2305.00554 | Breaking Blockchain Rationality with Out-of-Band Collusion | Blockchain systems often rely on rationality assumptions for their security,
expecting that nodes are motivated to maximize their profits. These systems
thus design their protocols to incentivize nodes to execute the honest protocol
but fail to consider out-of-band collusion. Existing works analyzing
rationality assumptions are limited in their scope, either by focusing on a
specific protocol or relying on non-existing financial instruments. We propose
a general rational attack on rationality by leveraging an external channel that
incentivizes nodes to collude against the honest protocol. Our approach
involves an attacker creating an out-of-band bribery smart contract to motivate
nodes to double-spend their transactions in exchange for shares in the
attacker's profits. We provide a game theory model to prove that any rational
node is incentivized to follow the malicious protocol. We discuss our approach
to attacking the Bitcoin and Ethereum blockchains, demonstrating that
irrational behavior can be rational in real-world blockchain systems when
analyzing rationality in a larger ecosystem. We conclude that rational
assumptions only appear to make the system more secure and offer a false sense
of security under the flawed analysis. | Haoqian Zhang, Mahsa Bastankhah, Louis-Henri Merino, Vero Estrada-Galiñanes, Bryan Ford | 2023-04-30T19:10:08Z | http://arxiv.org/abs/2305.00554v1 | # Breaking Blockchain Rationality
###### Abstract
Blockchain systems often rely on rationality assumptions for their security, expecting that nodes are motivated to maximize their profits. These systems thus design their protocols to incentivize nodes to execute the honest protocol but fail to consider out-of-band collusion. Existing works analyzing rationality assumptions are limited in their scope, either by focusing on a specific protocol or relying on non-existing financial instruments. We propose a general rational attack on rationality by leveraging an external channel that incentivizes nodes to collude against the honest protocol. Our approach involves an attacker creating an out-of-band bribery smart contract to motivate nodes to double-spend their transactions in exchange for shares in the attacker's profits. We provide a game theory model to prove that any rational node is incentivized to follow the malicious protocol. We discuss our approach to attacking the Bitcoin and Ethereum blockchains, demonstrating that irrational behavior can be rational in real-world blockchain systems when analyzing rationality in a larger ecosystem. We conclude that rational assumptions only appear to make the system more secure and offer a false sense of security under the flawed analysis.
## 1 Introduction
Blockchain systems often rely on rationality assumptions to ensure their security by providing financial incentives for adhering to the honest protocol. For example, in Proof-of-Work, miners are incentivized to work on the longest chain as it increases their expected chances of having their blocks accepted in the blockchain. Similarly, in Proof-of-Stake, such as the one recently adopted by Ethereum [4], validators are disincentivized from malicious behavior, such as signing two blocks with the same height, due to the loss of part of their deposits. These incentive mechanisms seem to secure these systems as any entity deviating from the honest protocol would have a lower or negative expected return.
However, as many previous works demonstrated [6, 12, 1, 8, 5, 10], those mechanisms might not be incentive-compatible, _i.e._, there exists a more profitable alternative strategy that deviates from the honest protocol. For instance, selfish mining is a strategy to increase miners' expected return by deviating from the longest-chain rule expected by the Bitcoin mining protocol [6]. Whale attacks
incentivize miners to fork the chain to include an off-the-blockchain transaction with a substantial transaction fee [12].
Whereas those previous works focus on specific protocols within individual blockchain systems, we question the incentive mechanism at a meta-level: Are those blockchain systems that rely on rationality assumptions secure in general? We try to answer this research question by considering attacks beyond their ecosystem, taking into account the broader influences of the outside world on the system. What is considered irrational behavior within their ecosystem might be rational when analyzing rationality in the context of a larger ecosystem.
We demonstrate that rationality assumptions can be defeated by attacks driven by rationality. Specifically, an attacker creates an out-of-band bribery smart contract that incentivizes nodes to double-spend the attacker's transactions. In return, the attacker can then share the profits from the double-spending with colluded consensus nodes, offering a financial incentive for them to commit the attack in the first place.
A closely related work by Ford and Bohme [7] also offers a general rational attack on rationality. However, their attack method relies on financial instruments that are either non-existent or not well-established in the cryptocurrency markets. We, on the other hand, eliminate the need for non-existent financial instruments and significantly relax the requirements to launch the attack.
To prove that out-of-band collusion breaks blockchain systems' rationality assumptions, we propose a game theory model and use it to analyze a blockchain system before and after launching our attack. We find that in the absence of the attack, following the honest protocol is a strict Nash equilibrium that discourages nodes from deviating; however, in the presence of our attack, the honest protocol becomes a weakly dominated strategy. In particular, we identify a finite sequence of deviations from the honest protocol where each deviating node obtains at least the same reward as before the deviation. This sequence ultimately leads to a state where all the nodes follow our attack. Furthermore, we prove that following our attack is a strict Nash equilibrium, thus disincentivizing further deviation.
We provide an outline of the steps required to break the longest-chain rule in Bitcoin and the deposit-slashing protocol in Ethereum. Our work implies that rationality assumptions only appear to make the system more secure and provide a false sense of security.
## 2 Assumptions Underlying the Attack
This section introduces the following assumptions for our attack model:
**Assumption 1:** We consider the target system \(S\) to be an open financial payment network operating on blockchain rails, where any client can initiate a transaction. \(S\) is maintained by a set of rational nodes \(\mathcal{N}=\{1,2,\ldots,n\}\) who seek to maximize their profits. We assume that each node, \(i\in\mathcal{N}\), has the power of \(v_{i}\), _i.e._, the voting power to decide the next block in the blockchain system. For
example, the voting power in a Proof-of-Work blockchain is the nodes' computational power and the voting power in a Proof-of-Stake blockchain is the nodes' stake amount, whereas the voting power in a practical Byzantine Fault Tolerance (PBFT) blockchain is the existence of an approved node. We normalize the power distribution such that the sum of all the nodes' power is equal to 1: \(\sum_{i=1}^{n}v_{i}=1\). For simplicity, we assume that the number of nodes and their power distribution remains constant; however, our model also applies to the dynamic number of nodes with smooth power changes.
#### 2.0.1 Assumption 2:
We assume the existence of an open system \(S^{\prime}\) that supports smart contracts and has access to a perfect oracle mechanism \(\mathcal{O}\) that can access real-time state information on \(S\) without manipulation. To avoid \(S^{\prime}\) and \(\mathcal{O}\) being attacked by the same rational attack, we assume that \(S^{\prime}\) and \(\mathcal{O}\) do not rely on any rationality assumption, and their security assumptions hold. For example, \(S^{\prime}\) could be a PBFT-styled blockchain, where at most \(f\) of \(3f+1\) nodes can fail or misbehave, and \(\mathcal{O}\) can solely rely on trusted hardware [3] to provide truthful information from \(S\).
#### 2.0.2 Assumption 3:
The system \(S\) leverages, in some fashion, rationality assumptions to incentivize nodes to follow the \(S\)-defined honest protocol \(\mathcal{P}_{h}\). Mathematically, we assume there is a well-known power threshold \(t\) such that, within a time period, if \(\mathcal{N}_{h}\subset\mathcal{N}\) with \(\sum_{i\in\mathcal{N}_{h}}v_{i}>t\) follows the honest protocol \(\mathcal{P}_{h}\), each \(i\in\mathcal{N}_{h}\) expects to receive a reward of \(\mathcal{R}_{h,i}>0\), and each \(i\notin\mathcal{N}_{h}\) expects to obtain a reward \(\mathcal{R}_{d,i}\). We assume that \(\forall i\in\mathcal{N},\mathcal{R}_{d,i}<\mathcal{R}_{h,i}\). \(\mathcal{R}_{d,i}\) can be negative, _i.e._, a node receives punishment for deviating from \(\mathcal{P}_{h}\).
#### 2.0.3 Assumption 4:
We assume the existence of a malicious protocol \(\mathcal{P}_{m}\) that differs from the expected behavior such that, within the same time period, if \(\mathcal{N}_{m}\subset\mathcal{N}\) with \(\sum_{i\in\mathcal{N}_{m}}v_{i}>t\) follows the malicious protocol \(\mathcal{P}_{m}\), each \(i\in\mathcal{N}_{m}\) can expect to receive a reward of \(\mathcal{R}_{m,i}\), and each \(i\notin\mathcal{N}_{m}\) can expect to obtain a reward of \(\mathcal{R}_{d^{\prime},i}\). We assume that \(\forall i\in\mathcal{N},\mathcal{R}_{d^{\prime},i}<\mathcal{R}_{m,i}\) and \(\mathcal{R}_{m,i}>\mathcal{R}_{h,i}\) as the malicious protocol is only worthwhile for attackers if it provides them with greater rewards. In Section 5.1, we show that there always exists a malicious protocol capable of double-spending attacks to satisfy this assumption in real-world blockchain systems.
#### 2.0.4 Assumption 5:
We assume that the underlying consensus requires \(t\geq\frac{1}{2}\) to avoid nodes split into two independent functional subsets. We also assume that no single node can abuse the system, meaning that \(\forall i\in\mathcal{N},v_{i}<t\). For simplicity, we assume that if neither \(\mathcal{P}_{h}\) nor \(\mathcal{P}_{m}\) has enough nodes to execute, \(S\) loses liveness, and nobody gets any reward.
**Algorithm 1** Bribery smart contract to incentivize collusion
## 3 Rational Attack on Rationality
This section presents our attack on rationality at a high level. We begin by demonstrating that no rational node would execute \(\mathcal{P}_{m}\) without collusion. We then introduce an attacker who creates a _Bribery Smart Contract_ on \(S^{\prime}\) that incentivizes the nodes on \(S\) to launch the attack.
**Without Collusion:** In the absence of collusion between nodes, each node is incentivized to follow the honest protocol \(\mathcal{P}_{h}\); no single rational node will deviate from \(\mathcal{P}_{h}\) as the expected reward is lower than that of following \(\mathcal{P}_{h}\) (\(\mathcal{R}_{d,i}<\mathcal{R}_{h,i}\) in Assumption 3). Therefore, when there is no collusion, \(S\) is secure under the rationality assumption (we present a game theory analysis in Section 4.1). However, one cannot optimistically assume that such collusion will not exist.
**Magnate-Coordinated Collusion:** When an \(S^{\prime}\) exists, an attacker (referred to as a _magnate_) can use it to coordinate collusion between nodes (Assumption 2). To defeat \(S\), the magnate can create a bribery smart contract to attract nodes (referred to as _minions_ and denoted by \(\mathcal{N}_{m}\)).
We use the double spending attack induced by the magnate as an example to illustrate a possible malicious protocol \(\mathcal{P}_{m}\). The magnate needs to use a bribery smart contract to specify the transaction to be reverted, and order minions to work on a fork that allows the magnate to double-spend the transaction. To
ensure the attack's success, the magnate must guarantee that each node can expect a higher reward, _i.e._, \(\mathcal{R}_{m}>\mathcal{R}_{h}\). In the case of this double-spending attack, each node can still expect to receive the rewards that a node executing \(\mathcal{P}_{h}\) would typically get, such as block rewards and transaction fees. However, nodes can now expect to receive a share of the profits obtained by the magnate through double-spending by having the nodes execute \(\mathcal{P}_{m}\). Therefore, the magnate has successfully produced a reward \(\mathcal{R}_{m}\) strictly greater than \(\mathcal{R}_{h}\). Note that the double spending attack is just one example of a malicious protocol. As long as the malicious protocol \(\mathcal{P}_{m}\) produces a higher reward, _i.e._, \(\mathcal{R}_{m}>\mathcal{R}_{h}\), it works in our model to defeat rationality.
We outline the design of the bribery smart contract (Algorithm 1) on \(S^{\prime}\) that would enable the magnate to execute the attack successfully. To ensure a successful attack in practice, all parties must be held accountable if any party defects. During the creation of the smart contract, the magnate thus deposits \(\mathcal{D}_{m}\) to be shared among the nodes if the attack is successful. In addition, when joining the bribery smart contract, each minion is required to deposit \(\mathcal{D}_{i}\) to be slashed in case of defection. When the minions' total voting power exceeds \(t\), the bribery smart contract orders them to execute \(\mathcal{P}_{m}\). The smart contract can then monitor the attack through the oracle \(\mathcal{O}\) (Assumption 2) and upon success, returns the deposits with a share of \(\mathcal{D}_{m}\) to each minion. If the magnate fails to attract enough minions to commit the attack, the deposits are still returned to each minion after an expiration time, making the commitment of the attack by a node risk-free. The magnate can also require a large \(\mathcal{D}_{i}\) as each colluded node expects to get back \(\mathcal{D}_{i}\) eventually (we discuss how to choose \(\mathcal{D}_{i}\) in Section 4.2). However, if a minion does not follow the order from the bribery smart contract, their deposit is burned, thus incentivizing each minion to follow the order.
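Since the listing of Algorithm 1 is not reproduced above, the following Python sketch captures the contract logic just described. The class and method names, the even split of \(\mathcal{D}_{m}\) among minions, and the boolean bookkeeping of which minions followed orders are illustrative assumptions; an actual deployment would live on \(S^{\prime}\) as a smart contract and would obtain the success flag from the oracle \(\mathcal{O}\).

```python
class BriberyContract:
    """Minimal sketch of the out-of-band bribery contract (cf. Algorithm 1)."""

    def __init__(self, magnate_deposit, min_minion_deposit, threshold_t):
        self.pot = magnate_deposit        # D_m, shared among minions on success
        self.min_deposit = min_minion_deposit
        self.t = threshold_t              # power threshold of the target system S
        self.minions = {}                 # node id -> dict(power, deposit)

    def join(self, node_id, power, deposit):
        if deposit < self.min_deposit:
            raise ValueError("deposit too small")
        self.minions[node_id] = {"power": power, "deposit": deposit}

    def attack_ordered(self):
        """The contract orders P_m only once committed power exceeds t;
        otherwise it orders the honest protocol P_h."""
        return sum(m["power"] for m in self.minions.values()) > self.t

    def settle(self, oracle_success, followed):
        """Compute payouts at expiry.  `oracle_success` is the oracle O's view
        of whether the double spend succeeded; `followed[node]` records whether
        the node obeyed the contract's orders (established from proofs on S')."""
        ordered = self.attack_ordered()
        bribe = self.pot / len(self.minions) if (ordered and oracle_success) else 0.0
        payouts = {}
        for node, m in self.minions.items():
            if ordered and not followed.get(node, False):
                payouts[node] = 0.0       # defecting minion: deposit burned
            else:
                payouts[node] = m["deposit"] + bribe
        return payouts
```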
Given the bribery smart contract, a rational node is incentivized to commit and execute \(\mathcal{P}_{m}\), as, intuitively, every node can benefit. If a node does not participate in the attack, it can, at most, obtain \(\mathcal{R}_{h}\). However, if a node joins the attack, it will receive at least \(\mathcal{R}_{h}\) with the opportunity of increasing its reward to \(\mathcal{R}_{m}\). We offer a game theory analysis on node collusion in Section 4.2. We emphasize that, in this attack, the magnate does not even need to control any part of \(S\) or \(S^{\prime}\), making such an attack doable with a low barrier to launch.
## 4 Game Theoretic Analysis
In this section, we formalize the behavior of \(S\) nodes and examine the possibility of deviation first without any collusion and then with collusion through the bribery smart contract on \(S^{\prime}\).
In the absence of collusion, following the honest protocol \(\mathcal{P}_{h}\) is a strict Nash equilibrium, meaning that no player will deviate as deviation leads to a lower payoff. However, in the presence of the bribery smart contract, following the protocol \(\mathcal{P}_{h}\) is a weakly dominated strategy and thus is no longer a strict Nash equilibrium. In particular, we identify a sequence of deviations from \(\mathcal{P}_{h}\) where each deviant node obtains at least the same payoff as before. We show that this
sequence of deviations ends with following the bribery smart contract orders. Furthermore, we prove that following the bribery smart contract orders is a strict Nash equilibrium, yielding the maximum payoff of the game. As a result, no rational player would deviate from it.
Additionally, we provide a bound on the amount of money that minions should deposit in the bribery smart contract to ensure that they do not deviate from the bribery smart contract's commands.
### Game 0: Without Collusion
We model the behavior of the nodes in the absence of any external factors as a strategic-form game \(\Gamma_{0}=(\mathcal{N},\{\mathsf{S}_{h},\mathsf{S}_{m}\}^{n},\mathrm{Utility }_{i}^{0}(.)_{i\in\mathcal{N}})\). \(\mathcal{N}=\{1,2,\ldots,n\}\) is the set of nodes (players) of the game. Each node \(i\) has power \(v_{i}\) such that \(\sum_{i\in\mathcal{N}}v_{i}=1\). Each player can choose the honest strategy \(\mathsf{S}_{h}\) (corresponding with the protocol \(\mathcal{P}_{h}\)) or the malicious strategy \(\mathsf{S}_{m}\) (corresponding with the protocol \(\mathcal{P}_{m}\)). We denote the chosen strategy of node \(i\) by \(s_{i}\).
We define \(V_{h}\) as the total power of the nodes that choose strategy \(\mathsf{S}_{h}\) and \(V_{m}\) as the total power of the nodes which follow \(\mathsf{S}_{m}\), _i.e._,
\[V_{h}\coloneqq\sum_{i\in\mathcal{N}}v_{i}1_{\{s_{i}=\mathsf{S}_{h}\}}\]
\[V_{m}\coloneqq\sum_{i\in\mathcal{N}}v_{i}1_{\{s_{i}=\mathsf{S}_{m}\}}=1-V_{h}.\]
Finally, we define the utility function of node \(i\), \(\mathrm{Utility}_{i}^{0}(.)\), which is a function of \(i\)'s and other players' strategies as follows:
\[\mathrm{Utility}_{i}^{0}(s_{1},\ldots,s_{n})=\begin{cases}\mathcal{R}_{h,i}&\text{if }s_{i}=\mathsf{S}_{h}\text{ and }V_{h}>t\\ \mathcal{R}_{d^{\prime},i}&\text{if }s_{i}=\mathsf{S}_{h}\text{ and }V_{m}>t\\ \mathcal{R}_{d,i}&\text{if }s_{i}=\mathsf{S}_{m}\text{ and }V_{h}>t\\ \mathcal{R}_{m,i}&\text{if }s_{i}=\mathsf{S}_{m}\text{ and }V_{m}>t\\ 0&\text{if }V_{h},V_{m}\leq t\end{cases}\] with \[\mathcal{R}_{h,i}>\mathcal{R}_{d,i},\quad\mathcal{R}_{m,i}>\mathcal{R}_{h,i}>0,\quad\mathcal{R}_{m,i}>\mathcal{R}_{d^{\prime},i}.\]
Suppose \(V_{h}>t\), _i.e._, majority power is dedicated to the strategy \(\mathsf{S}_{h}\), player \(i\) obtains reward \(\mathcal{R}_{h,i}\) by following \(\mathsf{S}_{h}\) and obtains \(\mathcal{R}_{d,i}\) otherwise. Similarly, when the majority adopts \(\mathsf{S}_{m}\), player \(i\) obtains reward \(\mathcal{R}_{m,i}\) by following \(\mathsf{S}_{m}\) and gets \(\mathcal{R}_{d^{\prime},i}\) otherwise. We assume that \(\mathcal{R}_{m,i}>\mathcal{R}_{h,i}\) (Assumption 4). If both \(V_{h}\) and \(V_{m}\) are smaller than \(t\), all the nodes receive a payoff of \(0\) (Assumption 5).
Theorem 4.1: _In the strategic-form game \(\Gamma_{0}\) if \(\forall i\in\mathcal{N}\), \(\mathcal{R}_{d,i}<\mathcal{R}_{h,i}\) and \(\max_{i\in\mathcal{N}}v_{i}\leq t\), the strategy \(\mathsf{S}_{h}\) is a strict Nash Equilibrium._
Proof: We prove that when all nodes play strategy \(\mathsf{S}_{h}\) and an arbitrary node \(i\) deviates to \(\mathsf{S}_{m}\), \(i\) obtains less payoff. We use an overline to denote the value of a variable if \(i\) deviates.
When everybody plays \(\mathtt{S}_{h}\), \(V_{h}=1\), and if \(i\) deviates then \(\overline{V_{h}}=1-v_{i}\). One of the following two cases will occur:
* If \(v_{i}<1-t\), \(\overline{V_{h}}>t\); therefore, even if \(i\) deviates, \(\mathcal{P}_{h}\) executes, and \(i\) gets \(\mathcal{R}_{d,i}\) which is strictly less than \(\mathcal{R}_{h,i}\).
* If \(v_{i}\geq 1-t\), \(\overline{V_{h}}\leq t\) and \(\mathcal{P}_{h}\) does not execute with enough power in \(S\) if \(i\) deviates. As we assumed that \(v_{i}\leq t\) and \(i\) is the only player that plays \(\mathcal{P}_{m}\), we will have \(\overline{V_{m}}=v_{i}\leq t\); therefore, \(\mathcal{P}_{m}\) does not execute with enough power either, and every node, including \(i\), receives utility \(0\). As \(\mathcal{R}_{h,i}>0\), \(i\) gets less payoff if it deviates.
Theorem 4.1 implies that in the absence of any external factors, given an initial honest behavior in \(S\), deviating from \(\mathcal{P}_{h}\) has strictly less utility. Therefore, nodes do not deviate from the honest protocol.
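As a concrete illustration of Theorem 4.1, the sketch below evaluates the \(\Gamma_{0}\) utility for a toy configuration (five equal-power nodes, \(t=0.5\), and per-node rewards collapsed to common hypothetical values) and checks that every unilateral deviation from \(\mathsf{S}_{h}\) strictly lowers the deviator's payoff.

```python
def utility_game0(strategies, powers, t, R_h, R_d, R_m, R_dp):
    """Utility of every node in Gamma_0; strategies[i] is 'h' or 'm'."""
    V_h = sum(p for s, p in zip(strategies, powers) if s == 'h')
    V_m = 1.0 - V_h
    payoff = []
    for s in strategies:
        if V_h > t:
            payoff.append(R_h if s == 'h' else R_d)
        elif V_m > t:
            payoff.append(R_m if s == 'm' else R_dp)
        else:
            payoff.append(0.0)
    return payoff

# Toy configuration: five equal nodes, t = 0.5, hypothetical reward values.
powers = [0.2] * 5
t, R_h, R_d, R_m, R_dp = 0.5, 1.0, -0.5, 2.0, -1.0
honest = ['h'] * 5
base = utility_game0(honest, powers, t, R_h, R_d, R_m, R_dp)
for i in range(5):
    deviated = honest.copy()
    deviated[i] = 'm'
    u = utility_game0(deviated, powers, t, R_h, R_d, R_m, R_dp)
    assert u[i] < base[i]   # unilateral deviation is strictly worse: S_h is a strict NE
```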
### Game 1: Magnate-Coordinated Collusion
We define Game \(\Gamma_{1}=(\mathcal{N},\{\mathtt{S}_{h},\mathtt{S}^{\prime}_{m}\}^{n}, \mathrm{Utility}^{1}_{i}(.)_{i\in\mathcal{N}})\) to describe \(S\) in the presence of an external factor: the bribery smart contract (Algorithm 1). Each node has two strategies \(\mathtt{S}_{h}\), \(\mathtt{S}^{\prime}_{m}\). \(\mathtt{S}_{h}\) is the honest strategy as described before. \(\mathtt{S}^{\prime}_{m}\) denotes the strategy of committing to the bribery smart contract and following its commands. We can interpret \(\mathtt{S}^{\prime}_{m}\) as a colluding version of \(\mathtt{S}_{m}\) in which nodes only run \(\mathcal{P}_{m}\) if they are sure that enough voting power is dedicated to \(\mathcal{P}_{m}\).
Similarly, we denote the overall power of players who choose \(\mathtt{S}_{h}\) by \(V_{h}\); furthermore, we denote the overall power of minions (players who choose strategy \(\mathtt{S}^{\prime}_{m}\)) by \(V^{\prime}_{m}\) with relation \(V_{h}+V^{\prime}_{m}=1\). Note that \(V^{\prime}_{m}\) does not necessarily represent the real power dedicated to \(\mathcal{P}_{m}\) because if \(V^{\prime}_{m}\leq t\) then the bribery smart contract orders minions to follow \(\mathcal{P}_{h}\) and no one follows \(\mathcal{P}_{m}\); only when \(V^{\prime}_{m}>t\), the bribery smart contract orders minions to follow the protocol \(\mathcal{P}_{m}\).
To incentivize minions to follow the bribery smart contract's orders unconditionally, the bribery smart contract requires the minions to deposit some money at the time of commitment. The magnate should choose a deposit large enough to rule out any order violation. In Theorem 4.2, we find a deposit function that satisfies this requirement.
Theorem 4.2: _If the bribery smart contract sets the deposit for all the minions as described in equation 1, under no circumstances does any minion have an incentive to deviate from the bribery smart contract commands._
\[D>\max_{i\in\mathcal{N}}(\mathcal{R}_{m,i}+\max\left\{|\mathcal{R}_{d,i}|,| \mathcal{R}_{d^{\prime},i}|\right\}) \tag{1}\]
Proof: Consider node \(i\) that has committed to the bribery smart contract and has deposited value \(\mathcal{D}_{i}\). \(i\) receives a payoff \(x\) if it follows the bribery smart contract commands and gets a payoff \(y-\mathcal{D}_{i}\) if it deviates from the commands, where \(x,y\) are valid utility values, i.e., \(x,y\in\{\mathcal{R}_{m,i},\mathcal{R}_{h,i},\mathcal{R}_{d,i},\mathcal{R}_{d^{ \prime},i}\}\), and their values depend on the strategies of the other players. Our objective is to select \(\mathcal{D}_{i}\) in such a way that deviating from the commands of the bribery smart contract is
always more detrimental than any other strategy, regardless of what strategies other players are pursuing. Hence, the following should hold for any valid \(x,y\):
\[y-\mathcal{D}_{i}<x\rightarrow\mathcal{D}_{i}>y-x\]
We know that as \(\mathcal{R}_{h,i},\mathcal{R}_{m,i}>0\), \((\max\left\{\mathcal{R}_{h,i},\mathcal{R}_{m,i}\right\}+\max\left\{|\mathcal{ R}_{d,i}|,|\mathcal{R}_{d^{\prime},i}|\right\})=\mathcal{R}_{m,i}+\max \left\{|\mathcal{R}_{d,i}|,|\mathcal{R}_{d^{\prime},i}|\right\}\) is an upper bound on \(y-x\); therefore, it suffices to choose \(D>\max_{i\in\mathcal{N}}(\mathcal{R}_{m,i}+\max\left\{|\mathcal{R}_{d,i}|,| \mathcal{R}_{d^{\prime},i}|\right\})\)
The implication of Theorem 4.2 is that if a rational node commits to the bribery smart contract, it always follows the bribery smart contract commands. Therefore there are only two possible strategies for the nodes: either playing the honest strategy or committing all of their power to the bribery smart contract and following its orders. If we use a deposit function that does not satisfy equation 1, in some cases, some minions might benefit by deviating from the bribery smart contract orders and dedicating less power to the protocol specified by the bribery smart contract, even if they have committed to the bribery smart contract. Thus Theorem 4.2 is essential for defining \(\Gamma_{1}\). Now we can define the utility function of the game \(\Gamma_{1}\) as follows:
\[\mathrm{Utility}_{i}^{1}(s_{1},\ldots,s_{n})=\begin{cases}\mathcal{R}_{h,i}&\text{if }s_{i}=\mathsf{S}_{h}\text{ and }V_{h}>t\\ \mathcal{R}_{d^{\prime},i}&\text{if }s_{i}=\mathsf{S}_{h}\text{ and }V_{m}^{\prime}>t\\ \mathcal{R}_{h,i}&\text{if }s_{i}=\mathsf{S}_{m}^{\prime}\text{ and }V_{h}>t\\ \mathcal{R}_{m,i}&\text{if }s_{i}=\mathsf{S}_{m}^{\prime}\text{ and }V_{m}^{\prime}>t\\ \mathcal{R}_{h,i}&\text{if }V_{h},V_{m}^{\prime}\leq t\end{cases}\] with \[\mathcal{R}_{m,i}>\mathcal{R}_{h,i}>0,\quad\mathcal{R}_{m,i}>\mathcal{R}_{d^{\prime},i}.\]
The key difference between game \(\Gamma_{1}\) and \(\Gamma_{0}\) is that the minions are now colluding and as a result, they will not execute protocol \(\mathcal{P}_{m}\) when \(V_{h}>t\) to avoid the penalty \(\mathcal{R}_{d,i}\).
Theorem 4.3: _In the strategic-form game \(\Gamma_{1}\), the strategy \(\mathsf{S}_{h}\) is not a strict Nash equilibrium, and even further, if any subset of nodes deviates from \(\mathsf{S}_{h}\) to \(\mathsf{S}_{m}^{\prime}\), the deviating nodes always get at least the same payoff as if they were playing strategy \(\mathsf{S}_{h}\)._
Proof: Without the deviation \(V_{h}=1\), \(V_{m}^{\prime}=0\) and every node \(i\) obtains reward \(\mathcal{R}_{h,i}\). We denote the set of nodes that deviate from \(\mathsf{S}_{h}\) to \(\mathsf{S}_{m}^{\prime}\) as \(\mathcal{N}_{m}\), while the rest of the nodes \(\mathcal{N}-\mathcal{N}_{m}\) play strategy \(\mathsf{S}_{h}\). We use the overlined variable to show the value of that variable if deviation takes place.
* If the overall power of \(\mathcal{N}_{m}\) is equal or less than \(t\), i.e., \(\overline{V_{m}^{\prime}}\leq t\), the bribery smart contract will order running protocol \(\mathcal{P}_{h}\); therefore, the members of \(\mathcal{N}_{m}\) will run \(\mathcal{P}_{h}\). As other nodes also run \(\mathcal{P}_{h}\), all the nodes no matter if they are a member of \(\mathcal{N}_{m}\) or not will get the same reward as before, i.e., \(\mathcal{R}_{h,i}\).
* If the overall power of \(\mathcal{N}_{m}\) is greater than \(t\), i.e., \(\overline{V^{\prime}_{m}}>t\), the bribery smart contract will order running protocol \(\mathcal{P}_{m}\); therefore, the members of \(\mathcal{N}_{m}\) will run \(\mathcal{P}_{m}\) and will obtain reward \(\mathcal{R}_{m,i}\), and the rest of the nodes will get the utility \(\mathcal{R}_{d^{\prime},i}\). As \(\mathcal{R}_{d^{\prime},i}<\mathcal{R}_{m,i}\), the nodes that deviate will get a better payoff, and the nodes that do not deviate are better off by deviating.
Theorem 4.4: _In the strategic-form game \(\Gamma_{1}\), if \(\mathcal{R}_{d^{\prime},i}<\mathcal{R}_{m,i}\) and \(\mathcal{R}_{h,i}<\mathcal{R}_{m,i}\), the strategy \(\mathsf{S}^{\prime}_{m}\) is a strict Nash Equilibrium._
Proof: When all the nodes play \(\mathsf{S}^{\prime}_{m}\) we have \(V^{\prime}_{m}=1\), and every node \(i\) obtains reward \(\mathcal{R}_{m,i}\). If player \(i\) deviates to \(\mathsf{S}_{h}\), one of the following two cases will occur:
* If \(v_{i}<1-t\), \(\overline{V^{\prime}_{m}}=1-v_{i}>t\); thus, the bribery smart contract orders to run \(\mathcal{P}_{m}\) and \(i\) will receive \(\mathcal{R}_{d^{\prime},i}<\mathcal{R}_{m,i}\).
* If \(v_{i}\geq 1-t\), \(\overline{V^{\prime}_{m}}=1-v_{i}\leq t\); thus, the bribery smart contract orders to follow \(\mathcal{P}_{h}\) and every node, as well as \(i\), gets the honest reward \(\mathcal{R}_{h,i}<\mathcal{R}_{m,i}\).
**Implication:** In a functional system where nodes execute the honest protocol without any collusion, no node has the incentive to deviate. However, with collusion, strategy \(\mathsf{S}_{h}\) becomes a weakly dominated Nash equilibrium. Specifically, any colluding subset of nodes would receive at least the same payoff as before. Hence, it is rational for them to deviate in order to seek a higher payoff. Once the subset with power larger than \(t\) deviates, the nodes strictly benefit from deviation (as \(\mathcal{R}_{m,i}>\mathcal{R}_{h,i}\)); thus, we expect \(S\) to transition to a state where everybody plays \(\mathsf{S}^{\prime}_{m}\). From this point, as \(\mathsf{S}^{\prime}_{m}\) is a strict Nash equilibrium, no party will deviate from it. In summary, we have identified a sequence of deviations where each node receives at least the same payoff as before, and eventually, the system settles into a strict Nash equilibrium and remains there.
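The deviation sequence can also be illustrated numerically. Reusing the toy configuration from the \(\Gamma_{0}\) example (equal powers and hypothetical rewards), the sketch below lets nodes commit to the bribery smart contract one after another: no committing node ever loses relative to its previous payoff, and once the committed power exceeds \(t\) every minion earns \(\mathcal{R}_{m}\), matching Theorems 4.3 and 4.4.

```python
def utility_game1(joined, powers, t, R_h, R_m, R_dp):
    """Utility of every node in Gamma_1; joined[i] is True if node i committed
    to the bribery contract (strategy S'_m)."""
    V_m = sum(p for j, p in zip(joined, powers) if j)
    payoff = []
    for j in joined:
        if V_m > t:
            payoff.append(R_m if j else R_dp)
        else:
            payoff.append(R_h)        # contract orders P_h, so everyone earns R_h
    return payoff

powers = [0.2] * 5
t, R_h, R_m, R_dp = 0.5, 1.0, 2.0, -1.0
joined = [False] * 5
prev = utility_game1(joined, powers, t, R_h, R_m, R_dp)
for i in range(5):                    # nodes commit one after another
    joined[i] = True
    cur = utility_game1(joined, powers, t, R_h, R_m, R_dp)
    assert cur[i] >= prev[i]          # the committing node never loses
    prev = cur
assert all(u == R_m for u in prev)    # final state: everyone follows S'_m
```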
Coming back to the example of a double-spending attack organized by a magnate, Theorem 4.3 states that starting from a healthy system \(S\), if any subset of nodes commit their power to the bribery smart contract and run the double-spending attack when the bribery smart contract so orders, the minions will never get a lower payoff than by playing the honest strategy. Moreover, Theorem 4.4 suggests that starting from a situation where all the nodes commit to the bribery smart contract and execute the double-spending attack, if a node deviates and plays the honest strategy, the deviant node gets strictly less payoff after deviation.
## 5 Sketch to Break Real-World Blockchain Systems
We illustrate a malicious protocol that generally exists in real-world blockchain systems, and then we discuss how we can use it to attack Bitcoin and Ethereum.
### Double-Spending as Malicious Protocol
We show that there always exists a malicious protocol \(\mathcal{P}_{m}\) enabling double-spending attacks in \(S\), as illustrated in Figure 1. A colluded node executes \(\mathcal{P}_{m}\) when
the block that contains the target transactions receives enough block confirmations. The protocol aims to revert the block by working on a fork, which allows the magnate to double-spend the transactions confirmed previously. When the fork becomes the valid chain, \(\mathcal{P}_{m}\) finishes.
### Breaking the Longest-Chain Rule in Bitcoin
Bitcoin's protocol incentivizes the nodes to adopt the longest-chain rule when mining a new block. This behavioral assumption rests on the rationality principle: as long as more than 50% of the nodes follow the longest-chain rule, any rule-deviating node would reduce its expected chance of mining new accepted blocks and thus its expected reward. Therefore, the longest-chain rule is consistent with our Assumption 3.
We now sketch the attacking method based on double-spending. Once a magnate selects a transaction to double spend, they create a bribery smart contract with the malicious protocol \(\mathcal{P}_{m}\) in an attempt to reverse the transaction by creating a fork. The magnate is required to put up a deposit \(\mathcal{D}_{m}\) proportional to the expected reward for double spending this transaction. Similar to an auction contract, the magnate also specifies a time \(T_{e}\) when the contract expires.
Once the bribery smart contract is published, any rational node is incentivized to join the bribery smart contract and, when enough nodes have joined, follow \(\mathcal{P}_{m}\) due to the expected reward increase over following \(\mathcal{P}_{h}\). The bribery smart contract requires nodes to deposit \(\mathcal{D}_{i}\), to be slashed in case they defect. \(\mathcal{D}_{i}\) needs to be more than the block rewards and transaction fees that can be reverted by the fork. If the bribery smart contract successfully attracts more than 50% of the nodes, then the nodes launch the attack. While launching the attack, each node submits proofs to the bribery smart contract that it is following \(\mathcal{P}_{m}\). Since Bitcoin uses Proof-of-Work as the underlying consensus algorithm, proofs can be hash results that satisfy a difficulty requirement, similar to how miners prove their work to a mining pool [11].
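A minimal sketch of how such a proof ('share') might be checked is given below; the toy header layout (the fork-tip hash prefixed to the header and an 8-byte little-endian nonce appended) and the share target are illustrative assumptions and do not follow Bitcoin's actual block serialization.

```python
import hashlib

def valid_share(header_bytes, nonce, share_target, fork_tip_hash):
    """Check a 'share' a minion submits as proof that it is mining on the
    magnate's fork: the candidate header must build on the fork tip and its
    double-SHA256 must fall below an easier-than-network share target."""
    if not header_bytes.startswith(fork_tip_hash):     # toy previous-block check
        return False
    digest = hashlib.sha256(hashlib.sha256(
        header_bytes + nonce.to_bytes(8, "little")).digest()).digest()
    return int.from_bytes(digest, "big") < share_target
```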
Figure 1: In a real-world blockchain system, given an honest protocol \(\mathcal{P}_{h}\), the magnate can always construct a malicious protocol \(\mathcal{P}_{m}\) with a higher total reward by double-spending transactions through reverting a confirmed block.
### Breaking the Deposit-Slashing Protocol in Ethereum
In the recent upgrade of the Merge [4], Ethereum changed its consensus algorithm to Proof-of-Stake. To incentivize honest nodes and punish malicious ones, Ethereum adopts a deposit-slashing protocol, where each node must deposit some cryptocurrency. A node can withdraw its deposit in its entirety when exiting the consensus group if no other node can prove that it violated the protocol. Ethereum utilizes the deposit-slashing protocol to punish double-signing behavior, _i.e._, a node signing two blocks with the same height, thus mitigating double-spending issues.
The magnate can adopt a similar strategy to break the deposit-slashing protocol. The magnate still tries to double-spend transactions to create additional rewards for the colluded nodes. The colluded nodes need to work on the fork indicated by the magnate after the targeted transaction is confirmed. By doing so, each colluded node needs to sign two blocks with the same height, a behavior violating the deposit-slashing protocol. Thus, the colluded node is subject to being slashed if anyone submits the proof to the blockchain. However, as long as all the colluded nodes do not allow the proof to be included on the blockchain in the first place, the slashing will never happen.
To prove that a node has executed \(\mathcal{P}_{m}\), the bribery smart contract has to verify that it has voted for the fork indicated by the magnate and has not voted for any block with a proof potentially slashing other colluded nodes before exiting the consensus group. The second condition effectively delays the verification time; however, as long as the magnate attracts enough nodes, the magnate is in total control of the blockchain before the colluded nodes exit the consensus group.
## 6 Discussion
Our work reveals the weakness of blockchain systems that depend on rationality for security. Despite this weakness, to the best of our knowledge, no major cryptocurrency has suffered from rational attacks [16, 2], even with the usual concentration of voting power in the hands of a few [13].
The absence of such an attack may result from other factors. First, it may be because the attack is hard to communicate and coordinate, _i.e._, every node must be aware of such a bribery smart contract, rendering such attacks hard to realize in real-world blockchain systems. Second, cryptocurrency stakeholders may be unwilling to conduct such an attack due to the potential loss of faith in the cryptocurrency market, leading to significant price drops; thus, it is irrational to launch such an attack if we consider the monetary value of the cryptocurrency [2]. Finally, some actors may choose not to participate in such an attack out of altruism, even though the strategy does not maximize their profits.
Nevertheless, our theoretical conclusion is that rationality is insufficient for security; thus, its use results in a false sense of security, and such an attack could happen at any moment. Our work implies that to build a secure blockchain system, we have to rely on non-rational assumptions, such as threshold assumptions
(_i.e._, a certain percentage of the nodes are truly honest, even though this would lead to profit loss) and police enforcement (_e.g._, nodes would face legal prosecution if not following the honest protocol).
## 7 Related Work
The earliest work attacking blockchain rationality is selfish mining, demonstrating that the Bitcoin mining protocol is not incentive-compatible [6]. They prove that, in the current Bitcoin architecture, even if the adversary controls less than 50% of the hashing power, it can launch the attack successfully and earn more benefits than honest behavior.
Following the selfish mining attacks, several works attack blockchain incentive mechanisms, such as whale attacks [12], block withholding [5], stubborn mining [15], transaction withholding [1], empty block mining [8], and fork after withholding [10]. However, these previous works only discuss the attacks in a specific protocol.
Ford et al. first outline a general method to attack rationality, arguing that rationality is self-defeating when analyzing rationality in the context of a large ecosystem [7]. Although the attack generally applies to any blockchain system, it builds upon some non-existing financial instruments, indicating the attack is not practical any time soon. To our knowledge, our work is the first practical and general attack on rationality assumptions for various blockchain systems.
Finally, utilizing smart contracts to incentivize malicious behaviors is a well-known strategy in the blockchain space. McCorry et al. present various smart contracts that enable bribing of miners to achieve a strategy that benefits the briber [14]. Juels et al. propose criminal smart contracts that encourage the leakage of confidential information [9].
## 8 Conclusion
This paper proposes an attacking method that breaks the rationality assumptions in various blockchain systems. The attack utilizes an out-of-band smart contract to establish the collusion between nodes coordinated by a magnate. Unlike previous works which attack rationality for a specific protocol or rely on non-existent financial instruments, our method is more general and practical. Our result indicates that the rationality assumptions do not increase the system's security and might provide a false sense of security under the flawed analysis.
## Acknowledgments
This research was supported in part by U.S. Office of Naval Research grant N00014-19-1-2361, the AXA Research Fund, the PAIDIT project funded by ICRC, the IC3-Ethereum Fund, Algorand Centres of Excellence programme
managed by Algorand Foundation, and armasuisse Science and Technology. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding sources.
|
2306.17578 | Role of single particle motility statistics on efficiency of targeted
delivery of micro-robot swarms | The study of dynamics of single active particles plays an important role in
the development of artificial or hybrid micro-systems for bio-medical and other
applications at micro-scale. Here, we utilize the results of these studies to
better understand their implications for the specific application of drug
delivery. We analyze the variations in the capture efficiency for different
types of motion dynamics without inter-particle interactions and compare the
results. We also discuss the reasons for the same and describe the specific
parameters that affect the capture efficiency, which in turn helps in both
hardware and control design of a micro-bot swarm system for drug delivery. | Akshatha Jagadish, Manoj Varma | 2023-06-30T12:00:04Z | http://arxiv.org/abs/2306.17578v1 | Role of single particle motility statistics on efficiency of targeted delivery of micro-robot swarms
###### Abstract
The study of dynamics of single active particles plays an important role in the development of artificial or hybrid micro-systems for bio-medical and other applications at micro-scale. Here, we utilize the results of these studies to better understand their implications for the specific application of drug delivery. We analyze the variations in the capture efficiency for different types of motion dynamics without inter-particle interactions and compare the results. We also discuss the reasons for the same and describe the specific parameters that affect the capture efficiency, which in turn helps in both hardware and control design of a micro-bot swarm system for drug delivery.
**Keywords**: microbots, ABP, Chiral ABP, RTP, capture efficiency, motility statistics
## 1 Introduction
Recent advancements in micro- and nano-fabrication technology are enabling rapid advances in the interdisciplinary field of targeted drug delivery systems [1]. The vision of a targeted drug delivery system is captured well in the science fiction movie Fantastic Voyage (1966), in which a team of scientists travels to the site of infection in a shrunken submarine to treat a blood clot. While the current state of the art does not yet allow us to perform such a task, it is a future that many researchers are working towards [2, 3].
Currently, major Drug Delivery Systems (DDSs) are still oral or parenteral (intravenous, subcutaneous, and intramuscular) [4]. This has numerous problems such as the risk of adverse drug reactions, undesired toxic side effects, and low patient compliance, to name a few. These can be conceptually overcome with the help of targeted drug delivery systems. Paul Ehrlich put forward the
seed for this thinking through his magic bullet concept [5]: "Drugs that go straight to the targets" provide better pharmacokinetic properties, which result in enhanced bioavailability of drugs by avoiding destruction by the immune system. Targeted DDSs also provide better pharmacodynamic properties by localised targeting, not affecting the healthy tissues and thus minimizing side effects. This is achieved by active manoeuvring in-vivo and controlled drug release. These properties result in a reduction in the intake frequency of drugs to maintain drug efficacy.
Development of such targeted DDSs has attracted the attention of not just biological and chemical fields but also the engineering fields such as nano-technology due to its ability to manipulate objects at micro- and nano-scale. Thus, there have been several approaches for designing 'microbots' that can act as a magic bullet or a shrunken submarine [19]. A few of these examples are shown below in Table 1.
In Table 1, the type of propulsion is either external or internal, where external propulsion indicates that the entire swarm or group of microbots is globally powered and remotely actuated and guided, whereas internal propulsion indicates that each microbot in the swarm is autonomous (self-contained) and thus self-propelled. The external propulsion mechanism is simple in its implementation but has limited capabilities. The internal propulsion mechanism on the other hand is complex but once implemented will have emergent intelligence. A few patterns that emerge in the above table are as follows. Internally powered microbots are usually chemically propelled. Chemical propulsion occurs by the reaction of microbot's material with the surrounding fluid, thus converting chemical energy into kinetic energy. It can be observed that microbots with size below 3 microns (suitable for bio-applications) are either magnetically or chemically powered.
Whatever the implementation technique, an effective DDS is one that achieves efficient navigation towards the target region. Here, efficient navigation is an umbrella term that covers various aspects of optimal behaviour, such as being the quickest to reach the target, consuming the least power, or having the largest capture efficiency. We focus on the aspect of capture efficiency, which refers to the fraction of microbots reaching a specific "target" region, representing an organ to which drugs may need to be delivered. An ideal DDS would possess 100% or unit (normalized to 1) capture efficiency. However, this is not possible in general because, unlike their macro-scale counterparts, the microbots do not follow an exact path owing to the dominance of random Brownian motion at the micro-scale. This renders them hard to control. The stochastic dynamics of the swarm of these delivery agents (microbots) play an important role in determining the efficiency of the DDS.
There has been a lot of research aiming to describe the dynamics of such
microbots in the context of natural and artificial active matter systems [20, 21, 22, 23]. In addition to trying to understand the behaviour of active systems, the dynamical models reported in the literature also consider the effects of surroundings, collective phenomena, boundaries, and various other environmental and emergent effects. The most generic of these are the Active Brownian Particle (ABP), Run and Tumble Particle (RTP), Chiral ABP and Passive Brownian Particle (PBP). Experimentally observed dynamics of microbots can generally be well described by one of these generic models [20]. Differences in fabrication, mode of propulsion and so on lead to differences in dynamical behaviour at the single agent (microbot) level [24]. For instance, the behaviour of an individual microbot may be much better described by a chiral ABP model as opposed to an ABP model (Examples 7 and 8 in table 1). To the best of our knowledge, the role of individual motility dynamics (statistics) on the overall capture efficiency of a swarm of microbots has not been reported in the existing literature. Therefore, in this work, we characterize the efficiency of a DDS, an artificial, micro active matter system, by examining the four generic motility models mentioned above. We first describe the mathematical details of the four generic models. Then we compare them using the parameter 'capture efficiency', which denotes the fraction of microbots that are successful in reaching the target.
## 2 Description of Models
In this section, we introduce the mathematical details of the four models by providing the dynamical equations and qualitative behaviour. The density map or the position PDF is a good measure to compare the four models and has been studied extensively [21, 25, 22, 26, 27].
### Active Brownian Particle (ABP)
This is an extensively used model to describe a non-interacting self-propelled particle. The ABP model was developed to describe self-phoretic colloids and non-tumbling E-coli bacteria [20, 28, 21]. These particles move with a constant speed \(v\) but their orientation gradually changes due to rotational diffusion with coefficient \(\mathrm{D_{R}}\). Here, the particle dynamics in 2 dimensions is modelled as in equation 1.
\[\dot{x} =v\cos\phi+\sqrt{2D_{T}}\xi_{x} \tag{1a}\] \[\dot{y} =v\sin\phi+\sqrt{2D_{T}}\xi_{y}\] (1b) \[\dot{\phi} =\sqrt{2D_{R}}\xi_{\phi} \tag{1c}\]
(\(x\),\(y\)) denotes the position of the particle (\(\dot{x}\), \(\dot{y}\) are used to update \(x\),\(y\) over time), \(\phi\) denotes the orientation, \(D_{T}\) is the translational diffusion coefficient, \(D_{R}\) is the rotational diffusion coefficient, and \(\xi\) denotes Gaussian white noise in equation 1.
The mean square displacement (MSD) of ABP over time is shown in equation 2.
\[\text{MSD}=4D_{T}t+\frac{2v^{2}t}{D_{R}}-\frac{2v^{2}(1-e^{-D_{R}t})}{D_{R}^{2}} \tag{2}\]
### Run and Tumble Particle (RTP)
This is a well-known model developed to describe the dynamics of predominantly natural micro-organisms like bacteria [29]. The particle dynamics involves periods of runs and tumbles. In the run state, the particle moves with a constant velocity for some duration, and in the tumble state, it changes its orientation while being in the same location. Here, the particle dynamics in 2 dimensions is modelled as in equation 3.
\[\dot{x} =v\cos\phi+\sqrt{2D_{T}}\xi_{x} \tag{3a}\] \[\dot{y} =v\sin\phi+\sqrt{2D_{T}}\xi_{y}\] (3b) \[\dot{\phi}(t) =\sum_{i}\Delta\phi_{i}\delta(t-T_{i}) \tag{3c}\]
The above change in \(\phi\) occurs at a rate \(\alpha\) called the tumble rate. The MSD of RTP is shown in equation 4[30].
\[\text{MSD}=4D_{T}t+\frac{2v^{2}t}{\alpha}-\frac{2v^{2}(1-e^{-\alpha t})}{ \alpha^{2}} \tag{4}\]
### Chiral ABP
These particles are similar to ABPs except for an addition of angular velocity \(\omega\) in their dynamics as described in the equation 5.
\[\dot{x} =v\cos\phi+\sqrt{2D_{T}}\xi_{x} \tag{5a}\] \[\dot{y} =v\sin\phi+\sqrt{2D_{T}}\xi_{y}\] (5b) \[\dot{\phi} =\omega+\sqrt{2D_{R}}\xi_{\phi} \tag{5c}\]
The chirality arises due to a small deviation in the symmetry of the microbot or in the propulsion mechanism and is observed widely in nature [20]. This deviation is taken care of by the additional parameter \(\omega\), which could play an important role in the capture efficiency of a targeted DDS, as can be seen later.
The MSD of the Chiral ABP can be calculated as shown in equation 6, empirically derived from the MSD of vesicles filled with Chiral ABP particles [31].
\[\begin{split}\text{MSD}&=4D_{T}t+\frac{2v^{2}D_{R}t }{D_{R}^{2}+\omega^{2}}\\ &\qquad+\frac{2v^{2}(e^{-D_{R}t}\cos(\omega t+\phi_{0})-\cos( \phi_{0}))}{D_{R}^{2}+\omega^{2}}\\ \cos\phi_{0}&=\frac{D_{R}^{2}-\omega^{2}}{D_{R}^{2 }+\omega^{2}}\end{split} \tag{6}\]
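For reference, the closed-form MSD expressions in equations 2, 4 and 6 are easy to evaluate numerically. The short Python sketch below is our own illustration (function names and parameter values are ours); it assumes \(\omega,D_{R}>0\) when recovering \(\phi_{0}\) from \(\cos\phi_{0}\). With \(\alpha=D_{R}\) the ABP and RTP curves coincide, as is used later in the comparison.

```python
import numpy as np

def msd_abp(t, v, D_T, D_R):
    # equation 2
    return 4*D_T*t + 2*v**2*t/D_R - 2*v**2*(1 - np.exp(-D_R*t))/D_R**2

def msd_rtp(t, v, D_T, alpha):
    # equation 4: same form as the ABP MSD with D_R replaced by the tumble rate
    return 4*D_T*t + 2*v**2*t/alpha - 2*v**2*(1 - np.exp(-alpha*t))/alpha**2

def msd_chiral(t, v, D_T, D_R, omega):
    # equation 6; phi_0 is recovered from cos(phi_0) on the branch with
    # sin(phi_0) >= 0, which is the relevant one for omega, D_R > 0
    denom = D_R**2 + omega**2
    phi0 = np.arccos((D_R**2 - omega**2) / denom)
    return (4*D_T*t + 2*v**2*D_R*t/denom
            + 2*v**2*(np.exp(-D_R*t)*np.cos(omega*t + phi0) - np.cos(phi0))/denom)

t = np.linspace(0.0, 5.0, 500)
curves = {
    "ABP":        msd_abp(t, v=10.0, D_T=0.43, D_R=1.285),
    "RTP":        msd_rtp(t, v=10.0, D_T=0.43, alpha=1.285),
    "Chiral ABP": msd_chiral(t, v=10.0, D_T=0.43, D_R=1.285, omega=1.0),
}
```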
### Passive Brownian Particle (PBP)
These are simple non-propelled particles undergoing Brownian motion and their dynamics is described in equation 7.
\[\dot{x} =\sqrt{2D_{T}}\xi_{x} \tag{7a}\] \[\dot{y} =\sqrt{2D_{T}}\xi_{y}\] (7b) \[\dot{\phi} =\sqrt{2D_{R}}\xi_{\phi} \tag{7c}\]
The active or self-propelled particles (ABP, RTP, Chiral ABP) perform distinctly better than the PBP [1]. Hence, in our paper, we focus on the relative performance among the different active particles.
Examination of equations 1-7 clearly shows that at long time-scales, i.e., \(t\gg 1/D_{R}\), all four models can be mapped to each other. For instance, there have been studies showing the equivalence of ABPs and RTPs at long time scales [32, 33]. In addition, both types of particles accumulate at the boundaries in confined geometries [34, 27, 35]. Despite many such similarities,
there are differences observed between the two types of particles at short time scales [34].
## 3 Methods
We simulated the ABP, RTP and Chiral ABP in 2D by integrating finite difference versions of the dynamical equations for their positions, described in equations 1, 3 and 5 [36]. We performed the simulations using a custom Python script (Python version 3.8.5).
Active particles at the micro-scale have a noisy movement which is quantified by the diffusion constants, which depend on the shape and size of the particle. \(D_{R}\) and \(D_{T}\) are the rotational and translational diffusion constants, respectively and are determined as shown in equation 8 for spherical (circular in 2D) particles. In our simulations, the radius of the particle \(R=0.5\mu m\). \(\alpha\) for RTP is kept equal to \(D_{R}\) to check the difference in capture efficiency when ABP and RTP are equivalent in 2D [32]. \(\omega\) for Chiral ABP is fixed at \(1rads^{-1}\).
\[D_{T} =\frac{k_{B}T}{6\pi\eta R} \tag{8a}\] \[D_{R} =\frac{k_{B}T}{8\pi\eta R^{3}} \tag{8b}\]
Figure 1: Capture efficiency for 1000 particles
In equation 8, \(k_{B}\) denotes the Boltzmann constant, of value \(1.38064852\times 10^{-23}m^{2}kgs^{-2}K^{-1}\), \(T\) is the absolute temperature in Kelvin, and \(\eta\) is the fluid viscosity (we consider water as the fluid here, hence \(\eta=1.0016\times 10^{-3}\,Pa\,s\)). For \(R=0.5\mu m\), \(D_{T}\) and \(D_{R}\) are calculated to be approximately \(0.43\,\mu m^{2}s^{-1}\) and \(1.29\,rad^{2}s^{-1}\) respectively (consistent with \(\tau_{R}=1/D_{R}=0.778s\) used below).
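As a quick check of these numbers (our own illustration, not part of the original text), equation 8 can be evaluated directly. The temperature is not stated explicitly; room temperature \(T\approx 293\,K\) together with the quoted viscosity reproduces the value \(\tau_{R}=1/D_{R}=0.778s\) used in Section 4.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T   = 293.15         # K; assumed room temperature (not stated in the text)
eta = 1.0016e-3      # Pa s, water
R   = 0.5e-6         # m, particle radius

D_T = k_B * T / (6 * math.pi * eta * R)      # translational, m^2/s
D_R = k_B * T / (8 * math.pi * eta * R**3)   # rotational, rad^2/s

print(D_T * 1e12)     # ~0.43 um^2/s
print(D_R, 1 / D_R)   # ~1.29 rad^2/s, so tau_R ~ 0.78 s
```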
The target for our simulated DDS is designed to be circular and absorbing in nature, meaning a particle entering this region is stuck there and is said to be "delivered". The target is taken to be of radius \(a(=5\mu m)\) and is located at a distance \(l\) from the point of the initial location of the particles. The capture efficiency is measured at each time instant \(\delta t\) by noting the number of particles making it to the target.
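A minimal sketch of such a simulation is given below. This is our own illustration rather than the authors' released script referenced next; all function and variable names are ours, and the RTP tumble is implemented with the common convention of resampling a uniformly random orientation at rate \(\alpha\). The three motility models differ only in their orientational update, following equations 1, 3 and 5.

```python
import numpy as np

def capture_efficiency(model="abp", n_particles=1000, t_total=5.0, dt=1e-3,
                       v=10.0, D_T=0.43, D_R=1.285, omega=1.0, alpha=1.285,
                       l=7.778, a=5.0, rng=None):
    """Finite-difference integration of equations 1, 3 and 5 in 2D (units: um, s).

    Particles start at the origin with random orientations; the absorbing
    circular target of radius `a` is centred at (l, 0).  Returns the fraction
    of particles captured within `t_total`.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n_particles)
    y = np.zeros(n_particles)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_particles)
    captured = np.zeros(n_particles, dtype=bool)

    for _ in range(int(t_total / dt)):
        free = ~captured
        n_free = int(free.sum())
        if n_free == 0:
            break
        # translational update, common to all three models
        x[free] += v * np.cos(phi[free]) * dt + np.sqrt(2 * D_T * dt) * rng.standard_normal(n_free)
        y[free] += v * np.sin(phi[free]) * dt + np.sqrt(2 * D_T * dt) * rng.standard_normal(n_free)
        # orientational update distinguishes the motility models
        if model == "abp":
            phi[free] += np.sqrt(2 * D_R * dt) * rng.standard_normal(n_free)
        elif model == "chiral":
            phi[free] += omega * dt + np.sqrt(2 * D_R * dt) * rng.standard_normal(n_free)
        elif model == "rtp":
            new_phi = phi[free]
            tumble = rng.random(n_free) < alpha * dt   # Poissonian tumble events
            new_phi[tumble] = rng.uniform(0.0, 2.0 * np.pi, int(tumble.sum()))
            phi[free] = new_phi
        # absorbing target: once inside the disc a particle stays "delivered"
        captured |= (x - l) ** 2 + y ** 2 <= a ** 2
    return captured.mean()

# example: compare the three motility models at one target distance
for m in ("abp", "rtp", "chiral"):
    print(m, capture_efficiency(model=m, rng=np.random.default_rng(0)))
```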
The code for the 2D and 3D simulations of ABP, RTP and Chiral ABP is available at the link.
## 4 Results
In the literature, the persistence length \(P_{l}=v/D_{R}\) gives a measure of how far an active particle maintains its initial orientation on average, and the quantity \(\tau_{R}=1/D_{R}=0.778s\) gives a measure of how long an active particle maintains its initial orientation on average. We thus simulated the trajectories of 1000 particles and ran the simulation 100 times to obtain statistics of the capture efficiency for a range of target distances \(l\) both less and greater
Figure 2: Error bar of capture efficiency
than \(P_{l}=7.778\mu m\), for \(v=10\mu ms^{-1}\) and for a simulation time \(t=5s\gg\tau_{R}\).
To determine the effect of motility type (i.e., ABP, RTP and chiral ABP) on the capture efficiency, we plotted the simulated capture efficiencies for the different motility types for different target-distances, as shown in Figure 1. We see that the curves for the different motility types overlap for small or large target distances. However, the chiral ABP provides higher capture efficiency for intermediate target distance compared to the other two motility types. The same observation is illustrated in Figure 2 where we have plotted the mean and standard deviation of capture efficiency for the different motility types.
To understand the particles' behaviour, we compare the time evolution of the Mean Squared Displacement (MSD) for the different motility types, as shown in Figure 3. As pointed out before, at time scales longer than \(1/D_{R}\), the MSD for RTP and ABP particles can be mapped into each other.
In figure 1, we observe that the Chiral ABP has a higher capture efficiency when compared to RTP and ABP at target distances close to the persistence length. This is because the effective diffusion constant of a Chiral ABP is always smaller than that of either ABP or RTP (which are kept equal), which increases the approximate time spent "near" the target area (roughly \(a^{2}/D\)). This is also evidenced by the MSD plot in figure 3, where the chiral ABP has lower MSD and, therefore, a smaller effective \(D\).
We also note that at higher target distances, the capture efficiency of ABP gradually improves and matches with that of Chiral ABP as observed in figure 2. This is because, for targets that are far away, the ABP's better initial directed motion makes up for the advantage of Chiral ABP's behaviour.
To understand the effect of angular velocity, we plot capture efficiency of Chiral ABPs by repeating the simulations for different values of \(\omega\) as shown in
Figure 4: Effect of \(\omega\) on Chiral ABP capture efficiency
figure 4. Here, we observed a particular \(\omega\) for which the capture efficiency was maximum. This observation was most prominent at times \(\geqslant\tau_{R}\) (figure 5). This phenomenon can be explained as follows. For a fixed target distance, the capture efficiency reaches saturation at a time \(t_{s}=l^{2}/D\). So for a fixed time \(t\), the \(\omega\) whose effective diffusion constant satisfies \(D\approx l^{2}/t\) provides the maximum capture efficiency, as shown in the figure on \(\omega\) variation.
We also simulated the ABP, RTP and Chiral ABP in 3D and plotted the capture efficiency as shown in figure 6. Here, we observed that the behaviour in 2D is also reflected in 3D.
## 5 Conclusion
The following conclusions were drawn from the simulation of active particles with different motility characteristics.
* The particle type whose MSD is slowest to reach the target distance has the best capture efficiency.
* Effect of \(\omega\) for Chiral ABP: there is an optimum \(\omega\) for a particle with given specifications.
* When target distance is close to the persistence length, RTP and ABP have equal performances while Chiral ABP fares better than both. When
Figure 5: Effect of \(\omega\) on Chiral ABP capture efficiency at specific times
target distance is much greater than the persistence length, ABP and Chiral ABP have similar performances, both better than RTP.

This work presents the impact of a single-particle level property, namely the motility statistics with no inter-particle interactions, on a macroscopic swarm-level parameter, namely the delivery or capture efficiency. As such, the observations described here will be helpful in tuning single-particle-level characteristics to maximize swarm-level performance.
|
2309.05921 | Endotrivial modules for the quaternion group and iterated Jokers in
chromatic homotopy theory | The algebraic Joker module was originally described in the 1970s by Adams and
Priddy and is a $5$-dimensional module over the subHopf algebra
$\mathcal{A}(1)$ of the mod $2$ Steenrod algebra. It is a self-dual endotrivial
module, i.e., an invertible object in the stable module category of
$\mathcal{A}(1)$. Recently it has been shown that no analogues exist for
$\mathcal{A}(n)$ with $n>1$. Using iterated doubling this also gives an
iterated double which is an $\mathcal{A}(n)$-module but not stably invertible.
In previous work the author showed that for $n=1,2,3$ these iterated doubles
were realisable as cohomology of CW spectra, but no such realisation existed
for $n>3$.
The main point of the paper is to show that in the height $2$ chromatic
context, the Morava $K$-theory of double Jokers realise an exceptional
endotrivial module over the quaternion group of order $8$ that only exists over
a field of characteristic $2$ containing a primitive cube root of unity. This
has connections with certain Massey products in the cohomology of the
quaternion group. | Andrew Baker | 2023-09-12T02:28:01Z | http://arxiv.org/abs/2309.05921v4 | # Endotrivial modules for the quaternion group and iterated jokers in chromatic homotopy theory
###### Abstract.
The algebraic Joker module was originally described in the 1970s by Adams and Priddy and is a 5-dimensional module over the subHopf algebra \(\mathcal{A}(1)\) of the mod 2 Steenrod algebra. It is a self-dual _endotrivial module_, i.e., an invertible object in the stable module category of \(\mathcal{A}(1)\). Recently it has been shown that no analogues exist for \(\mathcal{A}(n)\) with \(n>1\). In previous work the author used doubling to produce an 'iterated double Joker' which is an \(\mathcal{A}(n)\)-module but not stably invertible. We also showed that for \(n=1,2,3\) these iterated doubles were realisable as cohomology of CW spectra, but no such realisation existed for \(n>3\).
The main point of this paper is to show that in the height 2 chromatic context, the Morava \(K\)-theory of double Jokers realise an exceptional endotrivial module over the quaternion group of order 8 that only exists over a field of characteristic 2 containing a primitive cube root of unity. This has connections with certain Massey products in the cohomology of the quaternion group.
Key words and phrases:Stable homotopy theory, Steenrod algebra, Lubin-Tate spectrum, Morava \(K\)-theory, endotrivial module 2020 Mathematics Subject Classification: Primary 55S25; Secondary 55N34, 20C20 I would like to thank the following for helpful comments: Dave Benson, Ken Brown, Bob Bruner, Hans-Werner Henn, Lennart Meier, Doug Ravenel, John Rognes, Danny Shi, and Vesna Stojanoska. I would like to acknowledge the support of LAGA, l'Universite Sorbonne, Paris Nord where this paper was completed.
## Introduction
Following Adams & Priddy [1], in [1, 2] we considered the Joker \(\mathcal{A}(1)\)-module and its iterated doubles over the finite subHopf algebras \(\mathcal{A}(n)\subseteq\mathcal{A}\). We showed that for small values of \(n\), there were spectra and spaces realising these. From an algebraic point of view, the original \(\mathcal{A}(1)\) Joker module was important because it gave a self inverse stably invertible module, i.e., an element of order 2 in the Picard group of the stable module category of \(\mathcal{A}(1)\). More recently, Bhattacharya & Ricka [1] and Pan & Yan have claimed that no such exotic elements can exist for \(\mathcal{A}(n)\) when \(n\geqslant 2\); their proof makes use of ideas found in the related study of endotrivial modules for group algebras, conveniently described in the recent book of Mazza [19].
The main aim of this paper is to show that at least some of our geometric Joker spectra have Lubin-Tate cohomology which realises a certain lifting of a 5-dimensional endotrivial module over the quaternion group \(Q_{8}\) and the field \(\mathbb{F}_{4}\). Here \(Q_{8}\) is realised as a subgroup of the second Morava stabilizer group. This example suggests that in the chromatic setting there may be other interesting endotrivial modules associated with finite subgroups of Morava stabilizer groups; Lennart Meier has pointed out that this fits well with results in [13, appendix B].
We collect some useful algebraic ideas and results on twisted group rings and twisted Hecke algebras in the Appendix.
**Conventions and notation:** We will work at the prime \(p=2\) and chromatic height \(2\) when considering stable homotopy theory.
## 1. Homotopy fixed points for finite subgroups of Morava stabilizer groups
We briefly recall the general set-up for homotopy fixed point spectra of Lubin-Tate spectra, where the group involved is finite, although work of Devinatz & Hopkins [1] allows for more general subgroups of Morava stabilizer groups to be used. We will adopt the notation of Henn [1]; in particular, \(\mathbb{G}_{n}\) is the _extended Morava stabilizer group_
\[\mathbb{G}_{n}=\mathbb{D}_{n}^{\times}/\langle S^{n}\rangle\cong\operatorname{ Gal}(\mathbb{F}_{p^{n}}/\mathbb{F}_{p})\ltimes\mathcal{O}_{n}^{\times},\]
where \(S\in\mathbb{D}_{n}\) is the uniformizer satisfying \(S^{n}=p\).
**Example 1.1**.: For any prime \(p\) and \(n\geqslant 1\), there is a unique central subgroup of order \(2\), namely \(C_{2}=\{\pm 1\}\lhd\mathbb{G}_{n}\). When \(n=1\) and \(p=2\), it is well known that \(E_{1}^{hC_{2}}\sim K\mathrm{O}_{2}\).
For \(p\) odd, there is a unique central cyclic subgroup \(C_{p-1}\lhd\mathbb{G}_{n}\) of order \(p-1\), and when \(n=1\)\(E_{1}^{hC_{p-1}}\) is the Adams summand of \(K\mathrm{U}_{p}\).
**Example 1.2**.: When \(p=2=n\), \(\mathbb{G}_{2}\) contains a subgroup \(G_{24}\) of order \(24\) whose unique \(2\)-Sylow subgroup is isomorphic to the quaternion group \(Q_{8}\); this is the _binary tetrahedral group_ and double covers \(A_{4}\leqslant\mathrm{SO}(3)\), the group of rotational symmetries of a regular tetrahedron. This group is the semidirect product \(C_{3}\ltimes Q_{8}\) and there is also a split extension
\[G_{48}=\operatorname{Gal}(\mathbb{F}_{4}/\mathbb{F}_{2})\ltimes G_{24} \leqslant\mathbb{G}_{2}\]
of order \(48\) in the extended Morava stabilizer group. The fixed point spectrum \(E_{2}^{hG_{48}}\) is an avatar of the spectrum of topological modular forms; see the article by Hopkins & Mahowald in [1, part III]. A subgroup \(H\leqslant G_{48}\) gives rise to extensions \(E_{2}^{hG_{48}}\to E_{2}^{hH}\to E_{2}\) where the latter is a faithful \(H\)-Galois extension in the sense of Rognes [10]; this depends on work of Devinatz & Hopkins [1].
## 2. A finite group of operations in Lubin-Tate theory of height \(2\)
Our work requires an explicit realisation of \(Q_{8}\) as a subgroup of the height \(2\) Morava stabilizer group. We follow the account and notation of Henn [1, section 2], especially lemma 2.1.
The ring of _Hurwitz quaternions_\(\mathcal{H}\) is the subdomain of \(\mathbb{H}\) additively generated by the elements
\[\frac{(\pm 1\pm i\pm j\pm k)}{2}.\]
It has a unique completely prime maximal ideal \(\mathcal{M}\) which contains \(2\) as well as \(i+1,j+1,k+1\). The quotient ring is a field with \(4\) elements,
\[\mathbb{F}_{4}=\mathcal{H}/\mathcal{M}=\mathbb{F}_{2}(\omega),\]
where \(\omega\) denotes (the residue class of) the primitive cube root of unity
\[\omega=-\frac{(1+i+j+k)}{2}.\]
Routine calculations show that
\[i\omega i^{-1}=\omega+j+k\equiv\omega\bmod\mathcal{M}\]
and also
\[j\omega j^{-1}\equiv\omega\equiv k\omega k^{-1}\bmod\mathcal{M},\]
therefore the quaternion subgroup \(Q_{8}=\langle i,j\rangle\leqslant\mathcal{H}^{\times}\) acts trivially on \(\mathbb{F}_{4}\) and we may form the (trivially twisted) group ring \(\mathbb{F}_{4}\langle Q_{8}\rangle=\mathbb{F}_{4}[Q_{8}]\).
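These routine calculations, and the fact that \(\omega\) is a primitive cube root of unity, can be checked by direct quaternion arithmetic. The following Python sketch is our own illustration (quaternions are represented as 4-tuples over the rationals; all names are ours); it verifies that \(\omega^{2}+\omega+1=0\) and that \(i\omega i^{-1}=\omega+j+k\).

```python
from fractions import Fraction

def qmul(p, q):
    # Hamilton product of quaternions written as (a, b, c, d) = a + b*i + c*j + d*k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qadd(p, q):
    return tuple(s + t for s, t in zip(p, q))

half = Fraction(1, 2)
one = (Fraction(1), Fraction(0), Fraction(0), Fraction(0))
i   = (Fraction(0), Fraction(1), Fraction(0), Fraction(0))
j   = (Fraction(0), Fraction(0), Fraction(1), Fraction(0))
k   = (Fraction(0), Fraction(0), Fraction(0), Fraction(1))
omega = (-half, -half, -half, -half)          # -(1 + i + j + k)/2

# omega is a primitive cube root of unity: omega^2 + omega + 1 = 0
assert qadd(qadd(qmul(omega, omega), omega), one) == (0, 0, 0, 0)

# i * omega * i^{-1} = omega + j + k, with i^{-1} = -i
i_inv = tuple(-t for t in i)
assert qmul(qmul(i, omega), i_inv) == qadd(qadd(omega, j), k)
```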
We can complete \(\mathcal{H}\) with respect to \(\mathcal{M}\) or equivalently \(2\), to obtain a model for the maximal order \(\mathcal{O}_{2}\) in the division algebra \(\mathbb{D}_{2}=\mathcal{H}_{\mathcal{M}}\). In fact
\[\mathbb{D}_{2}=\mathbb{Z}_{4}\langle S\rangle/(S^{2}-2),\]
where \(\mathbb{Z}_{4}=\mathrm{W}(\mathbb{F}_{4})=\mathbb{Z}_{2}(\omega)\) is the ring of Witt vectors for \(\mathbb{F}_{4}\) and the uniformizer \(S\) intertwines with \(\mathbb{Z}_{4}\) so that \(S(-)S^{-1}\) is the lift of Frobenius (and so \(S^{2}\) acts trivially). The quotient group
\[\mathbb{G}_{2}=\mathbb{D}_{2}^{\times}/\langle S^{2}\rangle\cong\mathrm{Gal} (\mathbb{F}_{4}/\mathbb{F}_{2})\ltimes\mathcal{O}_{2}^{\times}\]
is the _extended Morava stabilizer group_.
Here is an explicit description for elements of \(Q_{8}\) in terms of Teichmuller expansions as in [1, lemma 2.1]:
\[i=\frac{1}{3}(1+2\omega^{2})(1-aS),\quad j=\frac{1}{3}(1+2\omega^{2})(1-a \omega^{2}S),\quad k=\frac{1}{3}(1+2\omega^{2})(1-a\omega S), \tag{2.1}\]
where we choose \(\sqrt{-7}\in\mathbb{Z}_{2}\) to be the square root of \(-7\) satisfying \(\sqrt{-7}\equiv 5\bmod 8\) and set
\[a=\frac{1-2\omega}{\sqrt{-7}}\in\mathbb{Z}_{4}.\]
Notice that working modulo \(S^{3}=2S\) in \(\mathcal{O}_{2}\),
\[i\equiv 1+S+2\omega,\quad j\equiv 1+\omega^{2}S+2\omega,\quad k\equiv 1+\omega S +2\omega. \tag{2.2}\]
Of course there is a twisted group ring \((E_{2})_{0}\langle Q_{8}\rangle\) which has \(\mathbb{F}_{4}[Q_{8}]\) as a quotient ring.
## 3. Lubin-Tate theory for double Joker spectra
Let \(J=J(2)\) be one of the finite CW spectra constructed in [1]. Its mod \(2\) cohomology is the cyclic \(\mathcal{A}(2)\)-module \(H^{*}(J)\cong\mathcal{A}(2)/\mathcal{A}(2)\{\mathrm{Q}^{0},\mathrm{Q}^{1}, \mathrm{Q}^{2},\mathrm{Sq}^{6}\}\) (here the \(\mathrm{Q}^{i}\) are the Milnor primitives), and there are two possible extensions to an \(\mathcal{A}\)-module with trivial or non-trivial \(\mathrm{Sq}^{8}\)-action giving dual \(\mathcal{A}\)-modules.
The attaching maps in such a CW spectrum are essentially suspensions of \(\eta\) and \(\nu\). Up to homotopy equivalence there are two such spectra which are Spanier-Whitehead dual to each other and realise the two \(\mathcal{A}\)-module extensions.
There is a CW spectrum \(dA(1)\) known as 'the double of \(\mathcal{A}(1)\)' whose cohomology as an \(\mathcal{A}(2)\)-module is \(H^{*}(dA(1))\cong\mathcal{A}(2)/\!/\mathcal{E}(2)\); for a detailed discussion see Bhattacharya et al [1]. In [1, remark 5.1] we outlined how to construct such a spectrum starting with a double Joker and attaching cells. By construction, \(dA(1)\) contains \(J\) as a subcomplex with cofibre
a suspension of the 'upside-down double question mark' complex \(Q^{\dot{\iota}}\) whose cohomology is \(3\)-dimensional and has a non-trivial action of \(\operatorname{Sq}^{2}\operatorname{Sq}^{4}\).
This is stably Spanier-Whitehead dual to the 'double question mark' complex \(Q^{\gamma}\) whose cohomology has a non-trivial action of \(\operatorname{Sq}^{4}\operatorname{Sq}^{2}\).
Thus we have the cofibre sequence \(J\to dA(1)\to\Sigma^{6}Q^{\dot{\iota}}\) as shown.
We can apply a complex oriented homology theory to this cofibre sequence, thus obtaining a short exact sequence; in particular we will apply \(BP_{*}(-)\), \((E_{2})_{*}(-)\) or \((K_{2})_{*}(-)\). Our goal is to understand the Lubin-Tate cohomology \(E_{2}^{*}(J)\) as a left \(E_{2}^{*}(Q_{8})\)-module where \(Q_{8}\leqslant\mathbb{G}_{2}\) is a quaternion subgroup. Since \(E_{2}^{*}(J)\) is a finitely generated free module and \(J\) is dualizable, we can instead work with the right module \((E_{2})_{*}(J)\) in terms of the corresponding \((E_{2})_{*}(E_{2})\)-comodule structure. Actually we prefer to work directly with the smaller complex \(Q^{\dot{\iota}}\) and use the fact that \((E_{2})_{*}(dA(1))\) and \((E_{2})^{*}(dA(1))\) are free \(E_{2}^{*}(Q_{8})\)-modules of rank \(1\): this is well-known and appears in Hopkins & Mahowald [15, part III], but a detailed discussion also occurs in [16, lemma 1.42]. The key point is to use the equivalence \(E_{2}\wedge dA(1)\sim E_{2}\) together with results of Devinatz & Hopkins [14]. So our main calculational result identifies the right \(E_{2}^{*}(Q_{8})\)-module \((E_{2})_{*}(Q^{\dot{\iota}})\); we will do this by first describing the \(K_{2}^{*}[Q_{8}]\)-module \((K_{2})_{*}(Q^{\dot{\iota}})\).
Here is our main result.
**Theorem 3.1**.: _The \(E_{2}^{*}(Q_{8})\)-module \(E_{2}^{*}(J)\) is stably invertible and self dual, and its reduction to \(K_{2}^{*}(J)\) is a \(5\)-dimensional stably invertible \(K_{2}^{*}[Q_{8}]\)-module._
Of course we can reduce to studying \(K_{2}^{0}(J)\) as a \(K_{2}^{0}[Q_{8}]=\mathbb{F}_{4}[Q_{8}]\)-module. The \(5\)-dimensional stably invertible \(\mathbb{F}_{4}[Q_{8}]\)-module \(W_{5}\) is that of [17, theorem 3.8(1)] and this is \(\Omega W_{3}\) for a \(3\)-dimensional stably invertible \(\mathbb{F}_{4}[Q_{8}]\)-module \(W_{3}\), which we will show is isomorphic to \((K_{2})_{0}(Q^{\dot{\iota}})\).
For a suitable choice of basis \(w_{1},w_{2},w_{3}\), the action of \(Q_{8}\) on \(W_{3}\) is given by
(3.1) \[\begin{cases}&iw_{1}=w_{1}+w_{2}+\omega w_{3},\ \ \ \ \ jw_{1}=w_{1}+\omega w_{2}+\omega w_{3},\\ &iw_{2}=w_{2}+w_{3},\ \ \ \ \ \ \ \ \ \ \ \ \ jw_{2}=w_{2}+\omega^{2}w_{3},\\ &iw_{3}=w_{3},\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ jw_{3}=w_{3}.\end{cases}\]
**Comodules for some iterated mapping cones.** We begin by recalling that the mapping cones of the Hopf invariant \(1\) elements have the following \(BP\)-homology as \(BP_{*}(BP)\)-comodules, where \(x_{k}\) has degree \(k\) and \(x_{0}\) is coaction primitive. Here
\[BP_{*}(\mathrm{C}(\eta))=BP_{*}\{x_{0},x_{2}\},\quad BP_{*}(\mathrm{C}(\nu))= BP_{*}\{x_{0},x_{4}\},\quad BP_{*}(\mathrm{C}(\sigma))=BP_{*}\{x_{0},x_{8}\},\]
with
\[\psi(x_{2}) =t_{1}\otimes x_{0}+1\otimes x_{2}, \tag{4.3a}\] \[\psi(x_{4}) =(v_{1}t_{1}+t_{1}^{2})\otimes x_{0}+1\otimes x_{4}, \tag{4.3b}\] \[\psi(x_{8}) =(v_{2}t_{1}-3t_{1}^{4}-v_{1}^{3}t_{1}-4v_{1}^{2}t_{1}^{2}-5v_{1}t_{1}^{3}+v_{1}t_{2}+2t_{1}t_{2})\otimes x_{0}+1\otimes x_{8}\equiv(v_{2}t_{1}+t_{1}^{4})\otimes x_{0}+1\otimes x_{8}\mod(2,v_{1}). \tag{4.3c}\]
Such formulae are well-known and follow from the fact that these homotopy elements are detected by elements that originate in the chromatic spectra sequence on
\[v_{1}/2\in\mathrm{Coext}_{BP_{*}(BP)}^{0,2}(BP_{*},BP_{*}/2^{\infty}),\quad v_{1}^{2}/4\in\mathrm{Coext}_{BP_{*}(BP)}^{0,4}(BP_{*},BP_{*}/2^{\infty}),\] \[(v_{1}^{4}+8v_{1}v_{2})/16\in\mathrm{Coext}_{BP_{*}(BP)}^{0,8}(BP_{*},BP_{*}/2^{\infty});\]
see [10, 11] for details.
We require a computational result that ought to be standard but we do not know a convenient reference.
**Lemma 4.1**.: _For \(BP_{*}(S^{0}\cup_{\nu}e^{4}\cup_{\eta}e^{6})\) there is a \(BP_{*}\)-basis \(x_{0},x_{4},x_{6}\) with \(BP_{*}(BP)\)-coaction given by_
\[\psi(x_{0}) =1\otimes x_{0},\] \[\psi(x_{4}) =(v_{1}t_{1}+t_{1}^{2})\otimes x_{0}+1\otimes x_{4},\] \[\psi(x_{6}) =\left(t_{2}+(2/3)t_{1}^{3}+v_{1}t_{1}^{2}\right)\otimes x_{0}+t_ {1}\otimes x_{4}+1\otimes x_{6}.\]
Proof.: We only need to verify the last coaction and only the term involving \(x_{0}\) is unclear. Suppose that
\[\psi(x_{6})=\theta\otimes x_{0}+t_{1}\otimes x_{4}+1\otimes x_{6}.\]
Then by coassociativity we obtain
\[\psi(\theta)\otimes x_{0}+1\otimes t_{1}\otimes x_{4}+t_{1} \otimes 1\otimes x_{4}+1\otimes 1\otimes x_{6}\] \[=\theta\otimes 1\otimes x_{0}+t_{1}\otimes(v_{1}t_{1}+t_{1}^{2}) \otimes x_{4}+t_{1}\otimes 1\otimes x_{4}+1\otimes\theta\otimes x_{0}+1 \otimes t_{1}\otimes x_{4}+1\otimes 1\otimes x_{6}\]
and so
\[\psi(\theta) =1\otimes\theta+t_{1}\otimes(v_{1}t_{1}+t_{1}^{2})+\theta\otimes 1\] \[=1\otimes\theta+t_{1}(v_{1}+2t_{1})\otimes t_{1}+t_{1}\otimes t_{1}^{2}+\theta\otimes 1\] \[=1\otimes\theta+v_{1}t_{1}\otimes t_{1}+2t_{1}^{2}\otimes t_{1}+t_{1}\otimes t_{1}^{2}+\theta\otimes 1.\]
A calculation shows that
\[\psi(t_{2}+(2/3)t_{1}^{3}+v_{1}t_{1}^{2})=1\otimes(t_{2}+(2/3)t_{1}^{3}+v_{1}t_ {1}^{2})+t_{1}\otimes(v_{1}t_{1}+t_{1}^{2})+(t_{2}+(2/3)t_{1}^{3}+v_{1}t_{1}^{2} )\otimes 1.\]
So we obtain the formulae stated.
Now given a map of ring spectra \(BP\to E\), where \(E\) is Landweber exact, these \(BP_{*}(BP)\)-comodules map to \(E_{*}(E)\)-comodules \(E_{*}(\mathrm{C}(\eta))\) and \(E_{*}(\mathrm{C}(\nu))\). Our main interest will focus on the examples \(E=E_{2}\) (the height \(2\) Lubin-Tate spectrum) and \(E=K_{2}\) (the \(2\)-periodic Morava \(K\)-theory spectrum with coefficients in \(\mathbb{F}_{4}\)). In the latter case we have \((K_{2})_{*}=\mathbb{F}_{4}[u,u^{-1}]\) where \(u\in(K_{2})_{2}\) (so \(u^{3}=v_{2}\)) and we set \(\overline{x}_{2k}=u^{-k}x_{2k}\in(K_{2})_{0}(\mathrm{C}(\gamma))\) when \(\gamma=\eta,\nu\). The Hopf algebroid here is
\[(K_{2})_{*}(E_{2})=(K_{2})_{*}[\alpha_{r}:r\geqslant 0]/(\alpha_{0}^{3}-1, \,\alpha_{r}^{4}-\alpha_{r}:r\geqslant 1),\]
where the right unit on \(u\) is \(\eta_{\mathrm{r}}(u)=u\alpha_{0}\) and the image of \(t_{k}\in BP_{2^{k+1}-2}(BP)\) is
\[u^{2^{k}-1}\alpha_{k}\in(K_{2})_{2^{k+1}-2}(E_{2}).\]
It is standard that every element of \(\mathcal{O}_{2}^{\times}\) has a unique series expansion as \(\sum_{r\geqslant 0}a_{r}S^{r}\), where the Teichmuller representatives \(a_{r}\) satisfy
\[a_{0}^{3}=1,\qquad a_{r}^{4}=a_{r}\quad(r\geqslant 1).\]
Then we may identify \((K_{2})_{0}(E_{2})\) with the algebra of continuous maps \(\mathcal{O}_{2}^{\times}\to\mathbb{F}_{4}\) and then \(\alpha_{k}\) is identified with the locally constant function given by
\[\alpha_{k}\biggl{(}\sum_{r\geqslant 0}a_{r}S^{r}\biggr{)}=a_{k}.\]
The left \((K_{2})_{*}(E_{2})\)-coaction on a comodule \(M_{*}\) induces an adjoint right action of \(\mathcal{O}_{2}^{\times}\). For any finite subgroup \(G\leqslant\mathcal{O}_{2}^{\times}\) there is an induced action of the twisted group ring \((K_{2})^{*}\langle G\rangle\). This also gives a right action of \(\mathbb{F}_{4}\langle G\rangle\) on each \(M_{k}\). Of course we are using the \((K_{2})_{*}\)-linear pairing \(M_{*}\otimes_{(K_{2})_{*}}M^{*}\to(K_{2})_{*}\) to define this. Standard linear algebra says that when \(M_{*}\) is finite dimensional over \((K_{2})_{*}\), given a basis for \(M_{*}\) and the dual basis for \(M^{*}=\mathrm{Hom}_{(K_{2})_{*}}(M_{*},(K_{2})_{*})\), the matrices for expressing the action on \(M_{*}\) and its adjoint action on \(M^{*}\) are mutually transpose.
#### \((K_{2})_{0}(\mathrm{C}(\eta))\)
Here we have the coaction formulae
\[\overline{x}_{0}\mapsto 1\otimes\overline{x}_{0},\quad\overline{x}_{2}\mapsto \alpha_{1}\otimes\overline{x}_{0}+\alpha_{0}\otimes\overline{x}_{2},\]
and the right action \(Q_{8}\) has matrix representations with respect to the basis \(\overline{x}_{0},\overline{x}_{2}\) obtained from the above discussion together with (2.1) and (2.2).
#### \((K_{2})_{0}(\mathrm{C}(\nu))\)
The coaction is
\[\overline{y}_{0}\mapsto 1\otimes\overline{y}_{0},\quad\overline{y}_{4}\mapsto \alpha_{1}^{2}\otimes\overline{y}_{0}+\alpha_{0}^{2}\otimes\overline{y}_{4},\]
and the matrix representation is
\[i\colon\begin{bmatrix}1&1\\ 0&1\end{bmatrix},\quad j\colon\begin{bmatrix}1&\omega\\ 0&1\end{bmatrix}.\]
#### \((K_{2})_{0}(S^{0}\cup_{\boldsymbol{\nu}}e^{4}\cup_{\boldsymbol{\eta}}e^{6})\)

Using Lemma 4.1, we can find a basis \(\overline{z}_{0},\overline{z}_{4},\overline{z}_{6}\in(K_{2})_{0}(S^{0}\cup_{\nu}e^{4}\cup_{\eta}e^{6})\) and
\[\overline{z}_{0}\mapsto 1\otimes\overline{z}_{0},\quad\overline{z}_{4}\mapsto \alpha_{1}^{2}\otimes\overline{z}_{0}+\alpha_{0}^{2}\otimes\overline{z}_{4}, \quad\overline{z}_{6}\mapsto\alpha_{2}\otimes\overline{z}_{0}+\alpha_{0}^{2} \alpha_{1}\otimes\overline{z}_{4}+1\otimes\overline{z}_{6}, \tag{4.4}\]
\[i\colon\begin{bmatrix}1&1&\omega\\ 0&1&1\\ 0&0&1\end{bmatrix},\quad j\colon\begin{bmatrix}1&\omega&\omega\\ 0&1&\omega^{2}\\ 0&0&1\end{bmatrix}.\]
These are the matrices for the adjoint of the representation \(W_{3}\) in terms of the basis in (3.2), i.e., the transposes of the matrices in (3.3).
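As a consistency check (ours, not part of the original text), one can verify that the two matrices displayed above satisfy the defining relations of \(Q_{8}\) in \(\mathrm{GL}_{3}(\mathbb{F}_{4})\): their squares agree and give a non-identity involution, their fourth powers are the identity, and conjugation by the matrix of \(j\) inverts the matrix of \(i\). A short Python sketch, encoding \(\mathbb{F}_{4}=\{0,1,\omega,\omega^{2}\}\) as \(\{0,1,2,3\}\) with XOR as addition, is given below; the encoding and names are ours.

```python
# F_4 = {0, 1, w, w^2} encoded as {0, 1, 2, 3}; addition is XOR and
# multiplication uses w^2 = w + 1.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def mat_mul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            s = 0
            for t in range(n):
                s ^= MUL[A[r][t]][B[t][c]]
            C[r][c] = s
    return C

def mat_pow(A, k):
    R = [[int(r == c) for c in range(len(A))] for r in range(len(A))]
    for _ in range(k):
        R = mat_mul(R, A)
    return R

W, W2 = 2, 3
I3  = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M_i = [[1, 1, W], [0, 1, 1], [0, 0, 1]]     # displayed matrix for i
M_j = [[1, W, W], [0, 1, W2], [0, 0, 1]]    # displayed matrix for j

assert mat_pow(M_i, 4) == I3                               # i^4 = 1
assert mat_pow(M_i, 2) == mat_pow(M_j, 2) != I3            # i^2 = j^2 is the central involution
assert mat_mul(M_j, M_i) == mat_mul(mat_pow(M_i, 3), M_j)  # j i = i^{-1} j
```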
#### \(K_{2}^{0}(\mathrm{C}(\sigma))\)
Here the relation \(\alpha_{1}^{4}+\alpha_{1}=0\) gives
\[\overline{x}_{0}\mapsto 1\otimes\overline{x}_{0},\quad\overline{x}_{8}\mapsto( \alpha_{1}^{4}+\alpha_{1})\otimes\overline{x}_{0}+\alpha_{0}\otimes\overline{ x}_{8}=\alpha_{0}\otimes\overline{x}_{8},\]
so \(i,j\) act trivially.
#### \((K_{2})_{0}(S^{0}\cup_{\boldsymbol{\sigma}}e^{8}\cup_{\boldsymbol{\nu}}e^{12})\)
Here we have a basis \(\overline{z}_{0},\overline{z}_{8},\overline{z}_{12}\in(K_{2})_{0}(S^{0}\cup_{ \sigma}e^{8}\cup_{\nu}e^{12})\) and
\[\overline{z}_{0}\mapsto 1\otimes\overline{z}_{0},\quad\overline{z}_{8}\mapsto \alpha_{0}\otimes\overline{z}_{8},\quad\overline{z}_{12}\mapsto*\otimes \overline{z}_{0}+\alpha_{1}^{2}\otimes\overline{z}_{8}+\alpha_{0}^{2}\otimes \overline{z}_{12},\]
\[i\colon\begin{bmatrix}1&0&*\\ 0&1&1\\ 0&0&1\end{bmatrix},\quad j\colon\begin{bmatrix}1&0&*\\ 0&1&\omega\\ 0&0&1\end{bmatrix}.\]
Here \(Z(Q_{8})\) acts trivially so the representation factors through the abelianisation, hence this does not give a stably invertible \(Q_{8}\)-module. The precise form of the starred terms can be determined by a similar method to that used in the proof of Lemma 4.1.
## 5. More on modular representations of \(Q_{8}\)-modules
The reader may find it useful to relate the results in this section to Ravenel [10, proposition 3.5].
Recall that for any field \(\mathbf{k}\) of characteristic \(2\), the cohomology of \(\mathbf{k}[Q_{8}]\) has the form
\[\operatorname{Ext}^{*}_{\mathbf{k}[Q_{8}]}(\mathbf{k},\mathbf{k})=\mathbf{k}[ \mathrm{u},\mathrm{v},\mathrm{w}]/(\mathrm{u}^{2}+\mathrm{uv}+\mathrm{v}^{2}, \mathrm{u}^{2}\mathrm{v}+\mathrm{uv}^{2},\mathrm{u}^{3},\mathrm{v}^{3}), \tag{5.1}\]
where \(\mathrm{u},\mathrm{v}\) have degree \(1\) and \(\mathrm{w}\) has degree \(4\). This result can be found in [1, lemma IV.2.10] for example.
Of course \(\operatorname{Ext}^{1}_{\mathbf{k}[Q_{8}]}(\mathbf{k},\mathbf{k})\) can be identified with the group of all homomorphisms \(Q_{8}\to\mathbf{k}\) into the additive group of \(\mathbf{k}\). We will need to make explicit choices for the generators and we define them to be the homomorphisms \(\mathrm{u},\mathrm{v}\colon Q_{8}\to\mathbf{k}\) given by
\[\mathrm{u}(i)=1,\;\mathrm{u}(j)=0,\;\mathrm{v}(i)=0,\;\mathrm{v}(j)=1.\]
The functions \(\alpha_{1},\alpha_{1}^{2}\colon Q_{8}\to\mathbf{k}\) are also homomorphisms and can be expressed as
\[\alpha_{1}=\mathrm{u}+\omega^{2}\mathrm{v},\;\alpha_{1}^{2}=\mathrm{u}+\omega \mathrm{v}.\]
Notice that when \(\mathbf{k}\) does not contain a primitive cube root of unity, \(\mathrm{u}^{2}+\mathrm{uv}+\mathrm{v}^{2}\) does not factor, but if \(\omega\in\mathbf{k}\) is a primitive cube root of unity then
\[\mathrm{u}^{2}+\mathrm{uv}+\mathrm{v}^{2}=(\mathrm{u}+\omega\mathrm{v})( \mathrm{u}+\omega^{2}\mathrm{v}).\]
This means that the Massey product \(\langle{\rm u}+\omega^{2}{\rm v},{\rm u}+\omega{\rm v},{\rm u}+\omega^{2}{\rm v} \rangle\subseteq{\rm Ext}^{2}_{{\bf k}[Q_{8}]}({\bf k},{\bf k})\) is defined and this has indeterminacy \({\bf k}\{{\rm u}^{2}+\omega{\rm v}^{2}\}\).
The Massey product
\[\langle[t_{1}],[v_{1}t_{1}+t_{1}^{2}],[t_{1}]\rangle=\{[v_{1}t_{1}+t_{1}^{2}]^ {2}\}\subseteq{\rm Coext}^{2,8}_{BP_{*}(BP)}(BP_{*},BP_{*})\]
corresponds to the Toda bracket
\[\langle\eta,\nu,\eta\rangle=\{\nu^{2}\}\subseteq\pi_{6}(S).\]
We can exploit naturality in cohomology of Hopf algebroids together with (4.1) and (4.2) to obtain an algebra homomorphism
\[{\rm Coext}^{*}_{BP_{*}(BP)}(BP_{*},BP_{*})\to{\rm Ext}^{*}_{(K_{2})_{*}[Q_{8}]}((K_{2})_{*},(K_{2})_{*})\xrightarrow{\cong}(K_{2})_{*}\otimes_{\mathbb{F}_{4}}{\rm Ext}^{*}_{\mathbb{F}_{4}[Q_{8}]}(\mathbb{F}_{4},\mathbb{F}_{4}).\]
Our calculations show that under this
\[[t_{1}]\mapsto u({\rm u}+\omega^{2}{\rm v}),\quad[v_{1}t_{1}+t_{1}^{2}]\mapsto u ^{2}({\rm u}+\omega{\rm v}),\]
hence \(\langle{\rm u}+\omega^{2}{\rm v},{\rm u}+\omega{\rm v},{\rm u}+\omega^{2}{\rm v}\rangle\) must contain \(({\rm u}+\omega{\rm v})^{2}={\rm u}^{2}+\omega^{2}{\rm v}^{2}\). It follows that for any extension field \({\bf k}\) of \(\mathbb{F}_{4}\),
\[\langle{\rm u}+\omega^{2}{\rm v},{\rm u}+\omega{\rm v},{\rm u}+\omega^{2}{\rm v }\rangle={\bf k}\{{\rm u}+\omega{\rm v}\}+({\rm u}^{2}+\omega^{2}{\rm v}^{2}) \varsubsetneq{\rm Ext}^{2}_{{\bf k}[Q_{8}]}({\bf k},{\bf k}).\]
Of course this could also be verified directly using a good choice of resolution of \({\bf k}\) over \({\bf k}[Q_{8}]\).
We remark that the Massey product
\[\langle[v_{1}t_{1}+t_{1}^{2}],[t_{1}],[v_{1}t_{1}+t_{1}^{2}]\rangle\subseteq {\rm Coext}^{2,10}_{BP_{*}(BP)}(BP_{*},BP_{*})\]
corresponds to the Toda bracket
\[\langle\nu,\eta,\nu\rangle=\{\eta\sigma+\varepsilon\}\subseteq\pi_{8}(S),\]
and is related to the Massey product
\[\langle{\rm u}+\omega{\rm v},{\rm u}+\omega^{2}{\rm v},{\rm u}+\omega{\rm v} \rangle={\bf k}\{{\rm u}+\omega^{2}{\rm v}\}+({\rm u}^{2}+\omega{\rm v}^{2}) \varsubsetneq{\rm Ext}^{2}_{{\bf k}[Q_{8}]}({\bf k},{\bf k}).\]
### 5-dimensional endotrivial modules for \({\bf k}[Q_{8}]\)
There are in fact two distinct 5-dimensional endotrivial modules for \({\bf k}[Q_{8}]\) (this was pointed out to the author by Dave Benson) and we discuss some implications of this. We follow the notation of Dade [10, section 1] with minor changes.
In \({\bf k}[Q_{8}]\) we take the elements
\[X=\omega i+\omega^{2}j+k,\quad Y=\omega^{2}i+\omega j+k\]
which are in the augmentation ideal and satisfy the relations
\[X^{2}=YXY,\quad Y^{2}=XYX,\quad XYXY=YXYX=\sum_{g\in Q_{8}}g, \tag{5.2}\]
where the last element is a generator of the socle and so is an integral of the Hopf algebra \({\bf k}[Q_{8}]\). This gives a \({\bf k}\)-basis
\[1,X,Y,YX,XY,XYX,YXY,XYXY=YXYX.\]
The module \(W_{3}\) has a basis \(w_{1},w_{2},w_{3}\) for which the action of \(Q_{8}\) is given by (3.1), so the actions of \(X\) and \(Y\) are given by
\[\begin{cases}&Xw_{1}=0,\qquad\quad Yw_{1}=\omega^{2}w_{2},\\ &Xw_{2}=\omega w_{3},\qquad Yw_{2}=0,\\ &Xw_{3}=0,\qquad\quad Yw_{3}=0.\end{cases}\]
This module is isomorphic to the cyclic quotient module
\[\mathbf{k}[Q_{8}]/\mathbf{k}\{X,YX,XYX,YXY,XYXY\}.\]
There is also the cyclic quotient module
\[\mathbf{k}[Q_{8}]/\mathbf{k}\{Y,XY,XYX,YXY,XYXY\}.\]
These have the module structures shown where solid lines indicate multiplication by \(X\), dotted lines indicate multiplication by \(Y\) and the symbols indicate representatives of residue classes.
In each case the central subalgebra \(\mathbf{k}[Z(Q_{8})]\) acts so that multiplication by \(i^{2}-1\) is given by the dashed line. These are both endotrivial \(\mathbf{k}[Q_{8}]\)-modules by Chouinard's Theorem [10, theorem 2.1]. The 5-dimensional modules \(\Omega M^{\prime}\) and \(\Omega M^{\prime\prime}\) (where \(M^{\prime}\) and \(M^{\prime\prime}\) denote these two cyclic quotient modules) are also endotrivial.
There are two 3-dimensional left ideals of \(\mathbf{k}[Q_{8}]\),
\[L^{\prime}=\mathbf{k}[Q_{8}]\{XY\}=\mathbf{k}\{XY,YXY,XYXY\},\quad L^{\prime \prime}=\mathbf{k}[Q_{8}]\{YX\}=\mathbf{k}\{YX,XYX,YXYX\},\]
with endotrivial quotient modules \(J^{\prime}=\mathbf{k}[Q_{8}]/L^{\prime}\) and \(J^{\prime\prime}=\mathbf{k}[Q_{8}]/L^{\prime\prime}\). Notice that \(L^{\prime}\cong M^{\prime}\) and \(L^{\prime\prime}\cong M^{\prime\prime}\), while \(J^{\prime}\) and \(J^{\prime\prime}\) are both stably self-inverse.
Clearly \(J^{\prime}\) and \(J^{\prime\prime}\) are not isomorphic, and from the known structure of the Picard group of the stable module category \(\operatorname{Pic}(\mathbf{k}[Q_{8}])\cong C_{4}\times C_{2}\) we must have \(J^{\prime\prime}\cong\Omega^{2}J^{\prime}\).
The module \(J^{\prime}\) corresponds to our double Joker complex, but \(J^{\prime\prime}\) seems not to be realisable as \(K_{2}^{*}(Z)\) for a CW spectrum. The corresponding \(\mathcal{A}(2)\)-module is \(\Omega H^{*}(Q^{\dot{\iota}})\) and there is no tmf-module spectrum \(M\) for which \(H^{*}_{\text{tmf}}(M)\cong\Omega H^{*}(Q^{\dot{\iota}})\), in particular there is no CW spectrum \(Z\) for which
\[H^{*}_{\text{tmf}}(\text{tmf}\wedge Z)\cong H^{*}(Z)\cong\Omega H^{*}(Q^{\dot {\iota}})\]
as \(\mathcal{A}(2)\)-modules.
## 6. The action of \(G_{24}\)
In this section we briefly discuss the action of the group \(G_{24}\) of order \(24\) discussed in Example 1.2. This is a split extension containing \(Q_{8}\) as a normal subgroup, \(G_{24}\cong C_{3}\ltimes Q_{8}\). As a subgroup of the stabilizer group \(\mathbb{G}_{2}\) it is generated by \(i,j,\omega\), and by (2.1),
\[\omega i\omega^{-1}=j,\quad\omega j\omega^{-1}=k,\quad\omega k\omega^{-1}=i.\]
Here we identify \(C_{3}\) with the subgroup generated by \(\omega\).
The right action of \(G_{24}\) on \((K_{2})_{*}(S^{0}\cup_{\nu}e^{4}\cup_{\eta}e^{6})\) in terms of the generators \(z_{k}=u^{k}\overline{z}_{k}\in(K_{2})_{2k}(S^{0}\cup_{\nu}e^{4}\cup_{\eta}e^{6})\) inherited from \(BP_{*}(S^{0}\cup_{\nu}e^{4}\cup_{\eta}e^{6})\) can be deduced using (4.4):
\[\begin{cases}z_{0}\cdot i=z_{0},&z_{0}\cdot j=z_{0},&z_{0}\cdot\omega=z_{0}, \\ z_{4}\cdot i=z_{4}+u^{2}z_{0},&z_{4}\cdot j=z_{4}+\omega u^{2}z_{0},&z_{4}\cdot \omega=z_{4},\\ z_{6}\cdot i=z_{6}+uz_{4}+\omega u^{3}z_{0},&z_{6}\cdot j=z_{6}+\omega^{2}uz_{ 4}+\omega u^{3}z_{0},&z_{6}\cdot\omega=z_{6}.\end{cases} \tag{6.1}\]
Using Brauer characters it is routine to verify that \(\mathbb{F}_{4}[G_{24}]\) has \(3\) simple modules, each of which is \(1\)-dimensional with an \(8\)-dimensional projective cover. The summands in the corresponding decomposition of the module \((K_{2})_{*}(S^{0}\cup_{\nu}e^{4}\cup_{\eta}e^{6})\) are obtained from the subspace of \(C_{3}\)-invariants by multiplying it by \(1,u,u^{2}\).
The \(C_{3}\)-invariants in \((K_{2})_{*}(S^{0}\cup_{\nu}e^{4}\cup_{\eta}e^{6})\) form a module isomorphic to \((K(2)\mathbb{F}_{4})_{*}(S^{0}\cup_{\nu}e^{4}\cup_{\eta}e^{6})\), i.e., the original Morava \(K\)-theory with coefficients in \(\mathbb{F}_{4}\):
\[(K(2)\mathbb{F}_{4})_{*}(-)=\mathbb{F}_{4}\otimes_{\mathbb{F}_{2}}K(2)_{*}(-).\]
This is of course a module over the graded field \((K(2)\mathbb{F}_{4})_{*}=\mathbb{F}_{4}[v_{2},v_{2}^{-1}]=\mathbb{F}_{4}[u^{3},u^{-3}]\). Furthermore it has an action of the twisted Hecke algebra \((K_{2})_{*}^{C_{3}}\{C_{3}\backslash G_{24}/C_{3}\}\cong(K(2)\mathbb{F}_{4})_ {*}\{Q_{8}\}\) discussed in Appendix A.
Remembering that the right action of \(\omega\) on \(u^{k}\) satisfies \(u^{k}\mapsto\omega^{-k}u^{k}=\omega^{2k}u^{k}\), we find that the following \(8\) elements form a \((K(2)\mathbb{F}_{4})_{*}\)-basis for \((K_{2})_{*}^{C_{3}}\{C_{3}\backslash G_{24}/C_{3}\}\cong(K(2)\mathbb{F}_{4})_ {*}\{Q_{8}\}\):
\[1H,\,i^{2}H=j^{2}H=k^{2}H,\]
\[iH+jH+kH,\,i^{3}H+j^{3}H+k^{3}H,\]
\[u(iH+\omega^{2}jH+\omega kH),\,u(i^{3}H+\omega^{2}j^{3}H+\omega k^{3}H),\]
\[u^{-1}(iH+\omega jH+\omega^{2}kH),\,u^{-1}(i^{3}H+\omega j^{3}H+\omega^{2}k^{3} H).\]
Their actions on \((K(2)\mathbb{F}_{4})_{*}(S^{0}\cup_{\nu}e^{4}\cup_{\eta}e^{6})\) have the following matrices with respect to the basis \(z_{0},z_{4},z_{6}\):
\[\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix},\begin{bmatrix}1&0&u^{3}\\ 0&1&0\\ 0&0&1\end{bmatrix},\] \[\begin{bmatrix}1&0&\omega u^{3}\\ 0&1&0\\ 0&0&1\end{bmatrix},\begin{bmatrix}1&0&\omega^{2}u^{3}\\ 0&1&0\\ 0&0&1\end{bmatrix},\] \[\begin{bmatrix}0&u^{3}&0\\ 0&0&0\\ 0&0&0\end{bmatrix},\begin{bmatrix}0&u^{3}&0\\ 0&0&0\\ 0&0&0\end{bmatrix},\] \[\begin{bmatrix}0&0&0\\ 0&0&1\\ 0&0&0\end{bmatrix},\begin{bmatrix}0&0&0\\ 0&0&1\\ 0&0&0\end{bmatrix}.\]
## Concluding remarks
The main import of this paper is the appearance of unexpected relationships between seemingly disparate topics. It has long been noted that there appear to be connections between the cohomology of some of the \(\mathcal{A}(n)\) and that of finite groups. The case of \(Q_{8}\) is one where such connections have been observed and we provide further evidence of this. However, it is unclear whether there are other examples, perhaps in higher chromatic heights.
## Appendix A Twisted group rings and their modules
The results in this appendix are aimed at the specific circumstances that occur in chromatic homotopy theory. More general statements on twisted (or skew) group rings can be found in Passman [Pas, section 4]; Lam [14, chapter 7] is a good source on local and semilocal rings. We also discuss twisted Hecke algebras which do not seem to be extensively documented.
### Twisted group rings
We begin with a result which includes both [14, theorem 20.6] and [Pas, theorem 4.2] as special cases.
Recall from Lam [14, SS20] that a ring \(A\) is _semilocal_ if \(A/\operatorname{rad}A\) is semisimple, where \(\operatorname{rad}A\) is the Jacobson radical of \(A\).
**Proposition A.1**.: _Suppose that \(A\subseteq B\) is a semilocal subring where \(B\) is finitely generated as a left \(A\)-module. Let \(\mathfrak{a}\lhd A\) be a radical ideal, \(\mathfrak{b}=\operatorname{rad}B\lhd B\) and \(B\mathfrak{a}\subseteq\mathfrak{a}B\). Then_
(a)_\(\mathfrak{a}\subseteq\mathfrak{b}\);_
(b)_\(B\) is semilocal;_
(c) _There is a \(k\geqslant 1\) such that \(\mathfrak{b}^{k}\subseteq B\mathfrak{a}\)._
Proof.: (a) Let \(M\) be a simple left \(B\)-module. Then \(M\) is cyclic and so is finitely generated over \(B\) and therefore over \(A\). Also,
\[B(\mathfrak{a}M)=(B\mathfrak{a})M\subseteq(\mathfrak{a}B)M=\mathfrak{a}(BM)=\mathfrak{a}M,\]
so \(\mathfrak{a}M\subseteq M\) is a \(B\)-submodule. If \(\mathfrak{a}M\neq 0\) then the \(A\)-module \(M\) satisfies \(\mathfrak{a}M=M\), so by Nakayama's Lemma, \(M=0\), a contradiction. So we must have \(\mathfrak{a}M=0\).
Since \(\mathfrak{a}\) annihilates every simple \(B\)-module, \(\mathfrak{a}\subseteq\mathfrak{b}\).
(b) The finitely generated left \(A/\mathfrak{a}\)-module \(B/B\mathfrak{a}B\) is also a ring which is left Artinian with radical \(\mathfrak{b}/B\mathfrak{a}B\). This implies that the quotient ring \(B/\mathfrak{b}\) is left semisimple.
(c) The finitely generated left \(A/\mathfrak{a}\)-module \(B/B\mathfrak{a}\) is left Artinian. The \(B\)-submodules \(\mathfrak{b}^{k}/B\mathfrak{a}\) form a decreasing chain which must stabilize, so for some \(k\geqslant 1\),
\[\mathfrak{b}^{k}/B\mathfrak{a}=\mathfrak{b}^{k+1}/B\mathfrak{a}=\mathfrak{b}(\mathfrak{b}^{k}/B\mathfrak{a}).\]
By Nakayama's Lemma \(\mathfrak{b}^{k}/B\mathfrak{a}=0\), hence \(\mathfrak{b}^{k}\subseteq B\mathfrak{a}\).
The special case \(\mathfrak{a}=\operatorname{rad}A\) is particularly important. In practise we will consider the case where \(B\mathfrak{a}=\mathfrak{a}B\) so \(B\mathfrak{a}\lhd B\). This is true when \(R\) is a ring with a group acting on it by automorphisms; then the radical \(\mathfrak{r}\lhd R\) is necessarily invariant so we can apply our results with \(A=R\) and \(B=R\langle G\rangle\), the twisted/skew group ring. This recovers [Pas, theorem 4.2]. We will discuss this special case in detail, making additional assumptions relevant in chromatic stable homotopy theory.
Let \((R,\mathfrak{m})\) be a complete and Hausdorff (i.e., \(\bigcap_{r\geqslant 1}\mathfrak{m}^{r}=0\)) Noetherian commutative local ring with residue field \(\kappa=R/\mathfrak{m}\) of positive characteristic \(p\). Let \(G\) be a finite group which acts on \(R\) by (necessarily local) automorphisms, so that \(G\) also acts on \(\kappa\) by field automorphisms.
We can form the _twisted group rings_\(R\langle G\rangle\) and \(\kappa\langle G\rangle\); if the action of \(G\) on \(R\) or \(\kappa\) is trivial then we have the ordinary group ring \(R[G]\) or \(\kappa[G]\). The subset
\[\mathfrak{M}=R\langle G\rangle\mathfrak{m}=\mathfrak{m}R\langle G\rangle=\{ \sum_{g\in G}x_{g}g:x_{g}\in\mathfrak{m}\}\subseteq R\langle G\rangle\]
is a two-sided ideal with quotient ring \(R\langle G\rangle/\mathfrak{M}\cong\kappa\langle G\rangle\). There is a maximal ideal
\[\mathfrak{n}=\{\sum_{g\in G}y_{g}(g-1):y_{g}\in\kappa\}\lhd\kappa\langle G\rangle\]
with quotient ring \(\kappa\langle G\rangle/\mathfrak{n}\cong\kappa\) defining the trivial \(\kappa\langle G\rangle\)-module, as well as the trivial \(R\langle G\rangle\)-module \(R\langle G\rangle/\mathfrak{N}\), where
\[\mathfrak{N}=\mathfrak{M}+\{\sum_{g\in G}z_{g}(g-1):z_{g}\in R\}\lhd R \langle G\rangle.\]
Our next two results follow from our Proposition A.1 as well as being special cases of [Pas, theorem 4.2].
**Lemma A.2**.:
(a)_\(\kappa\langle G\rangle\) is semilocal;_
(b) _The ideal_ \(\mathfrak{M}\lhd R\langle G\rangle\) _is a radical ideal and_ \(R\langle G\rangle\) _is semilocal._
(c) _The simple_ \(R\langle G\rangle\)_-modules are obtained by pulling back the simple modules of_ \(\kappa\langle G\rangle\) _along the quotient homomorphism_ \(R\langle G\rangle\to\kappa\langle G\rangle\)_._
Proof.: (a) This follows from Artin-Wedderburn theory since \(\kappa\langle G\rangle\) is a finite dimensional \(\kappa\)-vector space and hence Artinian.
(b) Use Proposition A.1.
(c) This follows from (b).
A detailed discussion of lifting of idempotents and results on Krull-Schmidt decompositions for complete local Noetherian rings can be found in Lam [1, section 21].
Now we can deduce an important special case.
**Lemma A.3**.: _Suppose that \(G\) is a \(p\)-group. Then_
(a)_\(\kappa\langle G\rangle\) is local with unique maximal left/right ideal \(\mathfrak{n}\) equal to the radical \(\operatorname{rad}\kappa\langle G\rangle\);_
(b)_\(R\langle G\rangle\) is local with unique maximal left/right ideal \(\mathfrak{N}\)._
_Hence \(R\langle G\rangle\) and \(\kappa\langle G\rangle\) each have the unique simple module \(\kappa\)._
Proof.: (a) Suppose that \(S\) is a (non-trivial) simple left \(\kappa\langle G\rangle\)-module. For \(0\neq s\in S\), consider the finite dimensional \(\mathbb{F}_{p}\)-subspace \(\mathbb{F}_{p}[G]s\subseteq S\) whose cardinality is a power of \(p\). It is also a non-trivial finite \(\mathbb{F}_{p}[G]\)-module, so the \(p\)-group \(G\) acts linearly with \(0\) as a fixed point. Since every orbit has cardinality equal to a power of \(p\) there must be at least one other fixed point \(v\neq 0\) and this spans a \(\kappa\langle G\rangle\)-submodule \(\kappa v\subseteq S\). It follows that \(S=\kappa v\cong\kappa\). Of course if the \(G\)-action on \(\kappa\) is trivial, \(\kappa\langle G\rangle=\kappa[G]\) and this argument is well-known.
(b) This is immediate from (a) together with parts (b) and (c) of Lemma A.2.
**Corollary A.4**.: _If \(G\) is a \(p\)-group, then \(\mathfrak{N}\lhd R\langle G\rangle\) is the unique maximal ideal and \(R\langle G\rangle\) is \(\mathfrak{N}\)-adically complete and Hausdorff._
Proof.: This follows from Proposition A.1(c): some power of \(\mathfrak{N}\) is contained in \(\mathfrak{M}=R\langle G\rangle\mathfrak{m}\), and for \(k\geqslant 1\), \(\mathfrak{M}^{k}=R\langle G\rangle\mathfrak{m}^{k}\subseteq\mathfrak{N}^{k}\). Therefore the \(\mathfrak{N}\)-adic, \(\mathfrak{M}\)-adic and \(\mathfrak{m}\)-adic topologies agree.
We recall that for a local ring, every projective module is free by a theorem of Kaplansky [1, theorem 2], so in statements involving local rings, projective modules can be taken to be free.
**Lemma A.5**.:
(a) _Let \(P\) be a projective \(R\langle G\rangle\)-module. Then \(P\) is a projective \(R\)-module._
(b) _Let \(Q\) be a finitely generated projective \(\kappa\langle G\rangle\)-module. Then there is a projective \(R\langle G\rangle\)-module \(\widetilde{Q}\) for which \(\kappa\langle G\rangle\otimes_{R\langle G\rangle}\widetilde{Q}\cong Q\)._
Proof.: (a) Every projective module is a retract of a free module and \(R\langle G\rangle\)-module is a free \(R\)-module.
(b) By the Krull-Schmidt theorem, we may express \(Q\) as a coproduct of projective indecomposable \(\kappa\langle G\rangle\)-modules, so it suffices to assume \(Q\) is a projective indecomposable, hence cyclic. Viewing \(Q\) as an \(R\langle G\rangle\)-module we can choose a cyclic projective module \(\widetilde{Q}\) with an epimorphism \(\pi\colon\widetilde{Q}\to Q\).
We will make use of the following result.
**Lemma A.6**.: _Suppose that \(M\) is an \(R\langle G\rangle\)-module which is finitely generated free as an \(R\)-module. If \(\kappa\otimes_{R}M\) is an endotrivial \(\kappa\langle G\rangle\)-module, then \(M\) is an endotrivial \(R\langle G\rangle\)-module._
Proof.: Let \(\operatorname{End}_{R}(M)=\operatorname{Hom}_{R}(M,M)\) with its usual left \(R\langle G\rangle\)-module structure. If \(\kappa\otimes_{R}M\) is endotrivial then as \(\kappa\langle G\rangle\)-modules,
\[\kappa\otimes_{R}\operatorname{End}_{R}(M)\cong\operatorname{End}_{\kappa}( \kappa\otimes_{R}M,\kappa\otimes_{R}M)\cong\kappa\oplus P\]
where \(P\) is a projective \(\kappa\langle G\rangle\)-module. Recall that the units give monomorphisms \(R\to\operatorname{End}_{R}(M)\) and \(\kappa\to\kappa\otimes_{R}\operatorname{End}_{R}(M)\), where the latter is split.
Now choose a projective \(R\langle G\rangle\)-module \(\widetilde{P}\) with an epimorphism \(\pi\colon\widetilde{P}\to P\) and \(\kappa\otimes_{R}\widetilde{P}\cong P\). There is a commutative diagram of solid arrows with exact rows
and the composition \(\sigma\circ\pi\) lifts to \(\pi^{\prime\prime}\colon\widetilde{P}\to\operatorname{End}_{R}(M)\). On applying \(\kappa\otimes_{R}(-)\) to the composition
\[\widetilde{P}\xrightarrow{\pi^{\prime\prime}}\operatorname{End}_{R}(M) \xrightarrow{\pi}P\]
we obtain the composition
\[P\xrightarrow{\sigma}\kappa\otimes_{R}\operatorname{End}_{R}(M)\to P\]
which is an epimorphism. Using Nakayama's Lemma we now see that \(\operatorname{End}_{R}(M)\to\widetilde{P}\) is an epimorphism, hence \(\operatorname{End}_{R}(M)\cong R\oplus\widetilde{P}\) and so \(M\) is endotrivial.
Although we don't really make use of this, we note that an appropriate dual of a twisted group ring over a commutative ring admits the structure of a Hopf algebroid; a discussion of this appears in the appendix of [1].
### Twisted Hecke algebras
Hecke algebras are commonly encountered in the study of modular forms and representation theory and they also appear as stable operations in elliptic cohomology and topological modular forms. A general algebraic introduction can be found in Krieg [10]. Here we describe a twisted/skew version.
To simplify things we will assume that \(G\) is a _finite_ group acting on a commutative \(\Bbbk\)-algebra \(A\) by algebra automorphisms. If \(H\leqslant G\) we may form the twisted group algebra \(A\langle G\rangle\). We will indicate the action of \(g\in G\) on \(a\in A\) by writing \({}^{g}a\).
The free left \(A\)-module \(A\{G/H\}\) is also a left \(A\langle G\rangle\)-module and we may define a _twisted Hecke algebra_ by
\[A^{H}\{H\backslash G/H\}=\operatorname{End}_{A\langle G\rangle}(A\{G/H\})^{ \operatorname{o}}=\operatorname{Hom}_{A\langle G\rangle}(A\{G/H\},A\{G/H\})^{ \operatorname{o}},\]
the opposite of the endomorphism algebra of the \(A\langle G\rangle\)-module \(A\{G/H\}\).
By standard adjunction results, there are isomorphisms of \(\Bbbk\)-modules (in fact of \({}^{H}A\)-modules),
\[A^{H}\{H\backslash G/H\} \cong\operatorname{Hom}_{A\langle G\rangle}(A\{G/H\},A\{G/H\})^{ \operatorname{o}}\] \[\cong\operatorname{Hom}_{\Bbbk[G]}(\Bbbk\{G/H\},A\{G/H\})\] \[\cong\operatorname{Hom}_{\Bbbk[H]}(\Bbbk,A\{G/H\}).\]
The last term can be identified with the \(H\)-fixed point set
\[{}^{H}(A\{G/H\})=\bigg{\{}\sum_{x\colon G/H}r_{x}\,xH:\forall x,\forall h\in H,\;^{h}r_{x}=r_{hx}\bigg{\}}.\]
Here we adopt notation from [10]: the summation
\[\sum_{x\colon\,G/H}\]
is taken over a complete set of coset representatives \(x\) for \(G/H\). If the \(G\)-action on \(A\) is trivial, \({}^{H}(A\{G/H\})\) is the free \(A\)-module on the set of double cosets \(H\backslash G/H\) which agrees with the classical notion of Hecke algebra. Of course we can view \(A^{H}\{H\backslash G/H\}\) as an \({}^{H}A\)-algebra where the unity comes from the double coset \(H1H\) and is the element \(1H\in A\{G/H\}\).
To make the multiplication \(*\) on \(A^{H}\{H\backslash G/H\}\) explicit, we identify \(\alpha\in\operatorname{End}_{A\langle G\rangle}(A\{G/H\})^{\circ}\) with the corresponding element of \({}^{H}(A\{G/H\})\),
\[\alpha(1H)=\sum_{x\colon\,G/H}a_{x}\,xH\]
where \(a_{x}\in A\). Then for \(\beta\in\operatorname{End}_{A\langle G\rangle}(A\{G/H\})^{\circ}\) with
\[\beta(1H)=\sum_{x\colon\,G/H}b_{x}\,xH\]
we obtain
(A.1) \[\alpha*\beta=\sum_{x,y\colon\,G/H}a_{x}{}^{x}b_{y}\,(xy)H=\sum_{x,y\colon\,G/H }a_{x}{}^{x}b_{x^{-1}y}\,yH.\]
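For orientation (our own specialization, not taken from the text): when the \(G\)-action on \(A\) is trivial, \({}^{x}b_{y}=b_{y}\), and (A.1) reduces to the classical Hecke algebra convolution product
\[\alpha*\beta=\sum_{x,y\colon\,G/H}a_{x}\,b_{x^{-1}y}\,yH,\]
consistent with the earlier remark that the trivial action recovers the classical notion of Hecke algebra.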
Now for a left \(A\langle G\rangle\)-module \(M\), its \(H\)-fixed point set
\[{}^{H}M\cong\operatorname{Hom}_{\Bbbk[H]}(\Bbbk,M)\cong\operatorname{Hom}_{ A\langle G\rangle}(A\{G/H\},M)\]
is naturally a _right_\(\operatorname{End}_{A\langle G\rangle}(A\{G/H\})\)-module and therefore a _left_\(A^{H}\{H\backslash G/H\}\)-module. For \(m\in{}^{H}M\) and \(\alpha\in A^{H}\{H\backslash G/H\}\) the action is given by
(A.2) \[\alpha*m=\sum_{x\colon\,G/H}a_{x}{}^{x}m.\]
When \(H\lhd G\), as sets \(H\backslash G/H=G/H\) and
\[A^{H}\{H\backslash G/H\}\cong({}^{H}A)\langle G/H\rangle.\]
A more interesting situation that we encounter in Section 6 involves a semidirect product \(G=HN\cong H\ltimes N\). Each double coset in \(H\backslash G/H\) has the form \(HnH\) where \(n\in N\) is uniquely determined up to \(H\)-conjugacy. So as left \({}^{H}A\)-modules,
\[A^{H}\{H\backslash G/H\}\cong A^{H}\{N\}.\]
For a left \(A\langle G\rangle\)-module \(M\), the action of the element corresponding to \(n\in N\) on \({}^{H}M\) is given by
(A.3) \[nH*m=\sum_{h\colon\,H/C_{H}(n)}hnh^{-1}m,\]
where the sum is really taken over the set of \(H\)-conjugates of \(n\). |
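As a quick sanity check on (A.3) (our own remark, not in the source): if \(n\) is centralized by all of \(H\), then \(C_{H}(n)=H\) and the sum collapses to a single term,
\[nH*m=\sum_{h\colon\,H/H}hnh^{-1}\,m=n\,m,\]
recovering ordinary multiplication by \(n\).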
2303.18095 | Quantum computing quantum Monte Carlo with hybrid tensor network for
electronic structure calculations | Quantum computers have a potential for solving quantum chemistry problems
with higher accuracy than classical computers. Quantum computing quantum Monte
Carlo (QC-QMC) is a QMC with a trial state prepared on a quantum circuit, which
is employed to obtain the ground state with higher accuracy than QMC alone. We
propose an algorithm combining QC-QMC with a hybrid tensor network to extend
the applicability of QC-QMC beyond a single quantum device size. In a two-layer
quantum-quantum tree tensor, our algorithm can be executed with a larger trial
wave function than the one preparable on a single device. Our algorithm is
evaluated on the Heisenberg chain model, graphite-based Hubbard model, hydrogen
plane model, and MonoArylBiImidazole using full configuration interaction QMC.
Our algorithm can achieve energy accuracy (specifically, variance) several
orders of magnitude higher than QMC, and the hybrid tensor version of QMC gives
the same energy accuracy as QC-QMC when the system is appropriately decomposed.
Moreover, we develop a pseudo-Hadamard test technique that enables efficient
overlap calculations between a trial wave function and an orthonormal basis
state. In a real device experiment by using the technique, we obtained almost
the same accuracy as the statevector simulator, indicating the noise robustness
of our algorithm. These results suggest that the present approach will pave
the way to electronic structure calculation for large systems with high
accuracy on current quantum devices. | Shu Kanno, Hajime Nakamura, Takao Kobayashi, Shigeki Gocho, Miho Hatanaka, Naoki Yamamoto, Qi Gao | 2023-03-31T14:38:40Z | http://arxiv.org/abs/2303.18095v3 | Quantum computing quantum Monte Carlo with hybrid tensor network toward electronic structure calculations of large-scale molecular and solid systems
###### Abstract
Quantum computers are expected to solve problems in quantum chemistry and materials science with higher accuracy than classical computers. Quantum computing quantum Monte Carlo (QC-QMC) is a method that can be combined with quantum algorithms such as the variational quantum eigensolver (VQE) to obtain the ground state with fewer quantum resources and higher accuracy than either VQE or QMC alone. In this study, we propose an algorithm combining QC-QMC with a hybrid tensor network (HTN) to extend the applicability of QC-QMC to systems beyond the size of a single quantum device; we name the algorithm HTN+QMC. For an HTN with the structure of a two-layer quantum-quantum tree tensor, the proposed algorithm with an \(\mathcal{O}\big{(}n^{2}\big{)}\)-qubit reference wave function (trial wave function) in QMC can be performed by using only an \(n\)-qubit device, excluding ancilla qubits. Full configuration interaction QMC is adopted as an example of QMC, and the proposed algorithm is applied to the Heisenberg chain model, the graphite-based Hubbard model, the hydrogen plane model, and MonoArylBiImidazole (MABI). The results show that the algorithm can achieve energy accuracy several orders of magnitude higher than either VQE or QMC alone. In addition, the energy accuracy of HTN+QMC is the same as that of QC-QMC when the system is appropriately decomposed. These results pave the way to electronic structure calculations for large systems with high accuracy on current quantum devices.
## I Introduction
Computationally accurate prediction of physical properties enables us to speed up the development of functional materials such as batteries [1; 2], catalysts [3], and photochemical materials [4; 5]. The physical properties are mainly governed by the electrons in materials, and the computational cost of calculating electronic structures generally increases exponentially with system size, which often prevents classical computers from achieving the accuracy required to predict the properties.
Quantum computers are expected to solve classically intractable large electronic structure problems in quantum chemistry and materials science [6]. Current quantum computers are called noisy intermediate-scale quantum (NISQ) devices [7]; they are limited in both the number of qubits and the number of quantum gates due to physical noise, and various approaches tailored to NISQ devices have been proposed [8; 9; 10; 11; 12; 13]. One of the most popular algorithms is the variational quantum eigensolver (VQE) [14], which obtains the ground state energy by minimizing an energy cost function evaluated with a variational quantum circuit whose parameters are updated by a classical computer. In contrast to traditional quantum algorithms such as quantum phase estimation [15; 16; 17], VQE requires much smaller hardware resources. However, VQE still suffers from issues such as limited accuracy [18] and vanishing parameter gradients, known as barren plateaus [9; 19; 20].
While there are various studies aimed at avoiding these issues [21; 22; 23; 24; 25], quantum algorithms based on quantum Monte Carlo (QMC) have been proposed to further relax the hardware requirements [26; 27; 28; 29; 30; 31; 32; 33]. QMC is a computational method that uses stochastic sampling techniques to solve large quantum many-body problems such as molecular systems containing hundreds of electrons [34; 35]. To date, various types of QMC have been proposed, including variational Monte Carlo (VMC) [36; 37], diffusion Monte Carlo (DMC) [38], Green's function Monte Carlo [39], auxiliary field quantum Monte Carlo (AFQMC) [40; 41], and full configuration interaction quantum Monte Carlo (FCIQMC) [42]; we adopt FCIQMC in this study. FCIQMC is well suited to quantum chemistry: a stochastic imaginary-time evolution is executed in the space of all the standard basis states (such as Slater determinants) that can be constructed from a given spatial orbital basis. Quantum computing QMC (QC-QMC), as in Refs. [26; 29; 30; 33], introduces quantum computation such as VQE to improve the accuracy of the energy evaluation in ground-state calculations by mitigating the sign problem, which otherwise causes an exponential statistical error in the energy evaluation [43]. Specifically, the wave function distribution, i.e., the walker distribution, is classically generated, and the energy is evaluated by using
a reference wave function (commonly called a trial wave function in the QMC context) prepared by VQE. In contrast to VQE, where the accuracy depends on parameter optimization, QC-QMC involves no such optimization, and its accuracy instead relies on the quality of the reference wave function used in the energy evaluation. Ground-state computation without optimization is expected to lower the hardware requirements, and in fact a 16-qubit calculation of diamond, the largest such hardware experiment, was run to nearly chemical accuracy [26].
While QC-QMC is expected to provide highly accurate ground-state calculations, applying the method to large-scale systems remains a challenge. Although the electronic correlations outside the active space can be recovered by classical post-processing [26], the size of the available active space is limited by that of the reference wave function. Proposals for constructing wave functions larger than the size of a quantum device include the divide-and-conquer algorithm [44; 45], embedding theory [46; 47; 48], circuit cutting [49; 50; 51; 52], perturbation theory [53], and tensor networks [54; 55]. The hybrid tensor network (HTN) [55] is a general tensor-network framework that can be implemented with a reduced number of qubits and gates by decomposing the wave function of the original system into smaller tensors, which are processed by either quantum or classical computation, i.e., quantum/classical hybrid computation. The decomposition reduces both the effective circuit width and depth of the execution, which is more robust to noise than execution on the original circuit. Several quantum algorithms exist depending on the tensor network structure, including matrix product states (MPS) [56; 57], projected entangled pair states (PEPS) [58; 59], and tree tensor networks (TN) [54; 60]. In a quantum algorithm for a TN with a two-layer structure, which is adopted in this study, the quantum states of the subsystems in the first layer are integrated in the second layer either by quantum or by classical computation. We refer to the former and latter types as quantum-quantum TN (QQTN) and quantum-classical TN (QCTN), respectively. Deep VQE [61; 45; 62] and entanglement forging [63; 64] can be broadly classified as QQTN and QCTN, respectively, and QQTN has been theoretically and numerically verified in the HTN works [55]. On the one hand, QCTN is more robust to noise than QQTN because it avoids the accumulation of errors in the integration; on the other hand, QQTN is expected to efficiently capture correlations that cannot be captured by QCTN. In terms of obtaining a reference wave function that takes the greatest advantage of quantum computation, QQTN is considered the best choice.
In this study, we propose an algorithm of QC-QMC in combination with HTN, specifically a two-layered QQTN, whose conceptual diagram is illustrated in Fig. 1. In the two-layer QQTN shown in Fig. 1(a), the subsystems decomposed from the original system (dashed line) are depicted below the lower tensors (orange); the lower tensors and the legs at the bottom of each tensor correspond to the subsystems and the physical sites, respectively. The tensor variables \(i_{m}\in\{0,1\}\) (\(m=1,2,\ldots,k\)) are indices shared by \(\varphi\) and \(\psi\); the calculations for the tensors and the communication between the two tensors are treated as quantum and classical computations, respectively. These calculations and communications are where the quantum and classical computations are hybridized in the tensor network of this paper. The tensors are represented by unitary matrices \(U_{Lm}\) and \(U_{U}\) in the quantum circuit, as shown in the right panel of Fig. 1(a). The first step of the algorithm is to prepare the reference wave function by VQE with HTN by optimizing the parameters in \(U_{U}\) and \(U_{Lm}\), which we call HTN+VQE. This reference wave function, denoted \(|\xi\rangle\), is passed to QMC, where the ground state energy is estimated through the mixed energy \(E_{mix}\), the energy estimator. We call this second step HTN+QMC, shown in Fig. 1(b). Specifically, by using the reference state \(|\xi\rangle\) constructed in HTN+VQE, the evaluation of \(E_{mix}\) is performed by a quantum computation for each wave function generated by QMC (\(|\psi_{QMC}\rangle\)), where \(|\psi_{QMC}\rangle\) can be represented as a superposition of basis states such as Slater determinants. In this formalism, we can construct an \(\mathcal{O}\big{(}n^{2}\big{)}\)-qubit reference wave function in QMC by using only an \(n\)-qubit device, excluding ancilla qubits. We mention that the proposed algorithm can be applied to other types of HTN and QMC.
The performance of HTN+QMC for the ground-state calculation is benchmarked on four models, selected to span physical and chemical (solid and molecular) as well as basic and applied systems, shown in Fig. 2: the Heisenberg chain model, the hydrogen (H\({}_{4}\)) plane model, the graphite-based Hubbard model, and MonoArylBiImidazole (MABI). The Heisenberg chain model and the hydrogen plane model are commonly used as benchmark models in quantum algorithms for quantum chemistry (for example, Refs. [26; 65; 33]) due to the flexibility of controlling electron correlation. Graphite is used as an anode material for lithium-ion batteries and is a two-dimensional layered material consisting of carbon sheets. Various electron correlation phenomena such as ferromagnetism and antiferromagnetism are induced depending on the edge shape of the sheet structure [66; 67], and thus highly accurate electronic structure calculations are expected to lead to the development of anode materials for lithium-ion batteries. MABI is a model system of the photochromic radical dimer PentaArylBiImidazole (PABI) [68]. The open (colored) form of PABI has a resonance structure, or thermally equilibrated structures, between biradical (open-shell singlet) and quinoid (closed-shell) forms in its ground-state character and is thermally isomerized on the microsecond timescale to its closed (colorless) form, which in turn is photochemically isomerized to the open form on the picosecond timescale. The ground-state electronic structure calculation of MABI using our proposed quantum algorithms is a first step toward a full understanding of the photochromic reaction mechanism of PABI.
The rest of this paper is organized as follows. The proposed algorithm is described in Sec. II: the components of the proposed method, VQE, QC-QMC, and HTN, are explained in Sec. II.1, Sec. II.2, and Sec. II.3, respectively, and HTN+QMC is then introduced in Sec. II.4. The models used for benchmarking the HTN+QMC performance and the details of the calculation conditions are described in Sec. III. The calculation results are shown in Sec. IV: an example that shows the convergence behaviors in VQE and QMC is given in Sec. IV.1, benchmark results are presented in Sec. IV.2, and decomposition dependencies of the system for HTN are discussed in Sec. IV.3. Section V gives a conclusion and future prospects.
## II Methods
### Variational quantum eigensolver
VQE aims to obtain the ground state of a given Hamiltonian. VQE uses quantum circuits with variational parameters \(\vec{\theta}\) to generate the wave function \(\left|\psi(\vec{\theta})\right\rangle\) and to evaluate the expectation value of the Hamiltonian \(\left\langle\psi(\vec{\theta})\right|H\left|\psi(\vec{\theta})\right\rangle\). We update the parameters \(\vec{\theta}\) using the classical computer to obtain a lower expectation value, and the process is repeated until a termination condition is satisfied. The expressibility of \(\left|\psi(\vec{\theta})\right\rangle\) depends on the quantum circuit design, i.e., the ansatz. Problem-inspired ansatze such as the unitary coupled cluster ansatz [71] and the Hamiltonian variational ansatz [72] tend to perform well for obtaining the ground state but require deeper quantum circuits; thus the hardware-efficient ansatz [73] is often used in real device experiments.
The Hamiltonian \(H\) in the electronic structure problems can be represented as
\[H=\sum_{a}c_{a}\bigotimes_{b}P_{ab}, \tag{1}\]
where \(c_{a}\) is the \(a\)-th coefficient of \(H\), \(P_{ab}\in\{X,Y,Z,I\}\) is the \(a\)-th Pauli or identity operator on the \(b\)-th site. We consider the Hamiltonian in Eq. (1) in this study, and \(c_{a}\in\mathbb{R}\) is assumed for all the models in this study (although \(c_{a}\in\mathbb{C}\) in general). We can obtain the above representation for a fermionic Hamiltonian by applying a fermion-qubit mapping, such as the Jordan-Wigner mapping [74] and Bravyi-Kitaev mapping [75], to the Hamiltonian, and the number of terms becomes \(\mathcal{O}\big{(}N_{so}^{4}\big{)}\) where \(N_{so}\) is the number of spin-orbitals. Hereinafter, we assume the Jordan-Wigner mapping as the fermion-qubit mapping.
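To make Eq. (1) and the VQE loop described above concrete, here is a minimal statevector sketch (our own illustration, not the authors' code): the two-qubit ansatz, the toy coefficients, and the use of `scipy.optimize.minimize` are assumptions made purely for demonstration.

```python
import numpy as np
from functools import reduce
from scipy.optimize import minimize

I = np.eye(2, dtype=complex); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0 + 0j, -1.0])
PAULI = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_term(label):
    """Kronecker product for a Pauli string, e.g. 'XZ' -> X (x) Z."""
    return reduce(np.kron, (PAULI[c] for c in label))

def build_hamiltonian(terms):
    """Dense H = sum_a c_a * P_a from (coefficient, label) pairs, as in Eq. (1)."""
    return sum(c * pauli_term(lbl) for c, lbl in terms)

def ansatz_state(theta):
    """Toy two-qubit hardware-efficient ansatz: one RY layer followed by a CNOT."""
    def ry(t):
        return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)
    cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
    psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0
    return cnot @ np.kron(ry(theta[0]), ry(theta[1])) @ psi0

# Toy Hamiltonian; the coefficients are illustrative only.
H = build_hamiltonian([(0.5, "XX"), (0.5, "YY"), (0.5, "ZZ"), (0.2, "ZI")])
energy = lambda th: np.real(np.vdot(ansatz_state(th), H @ ansatz_state(th)))
res = minimize(energy, x0=np.array([0.1, 0.2]), method="COBYLA")
# The shallow toy ansatz need not reach the exact value; compare with exact diagonalization.
print(res.fun, np.linalg.eigvalsh(H)[0])
```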
### Quantum computing quantum Monte Carlo
QC-QMC as in Ref. [26] obtains the ground state by iteratively performing stochastic operations such as imaginary-time evolution on the discretized coefficients of the wave function. Two approaches exist for using the quantum computation in the previous studies of QC-QMC: one is using it only for the energy evaluation [29], and the other is using it for the energy evaluation and walker control [26; 29; 30]. Although the latter approach is expected to be more effective in mitigating the sign problem, it can be computationally expensive [29], and thus the former approach is adopted in this study. The following mixed energy \(E_{mix}\) is used as a common energy estimator in QMC:
\[E_{mix}=\frac{\left\langle\xi\right|H\left|\psi_{QMC}\right\rangle}{\left\langle \xi\right|\psi_{QMC}\right\rangle}, \tag{2}\]
Figure 1: Conceptual diagram for the calculation procedure. (a) HTN+VQE. (b) HTN+QMC. WF in the figure is an abbreviation of Wave Function.
where \(\left|\xi\right\rangle\) is a reference wave function and \(\left|\psi_{QMC}\right\rangle\) is a wave function generated by QMC. \(\left|\psi_{QMC}\right\rangle\) can be represented as
\[\left|\psi_{QMC}\right\rangle=\sum_{h}w_{h}\left|\phi_{h}\right\rangle, \tag{3}\]
where \(w_{h}\) is the \(h\)-th coefficient and \(\left|\phi_{h}\right\rangle\) is the \(h\)-th standard basis state. In QMC, \(w_{h}\) is expressed using discretized units called walkers. The basis state \(\left|\phi_{h}\right\rangle\) is prepared by a classical computer; the Slater determinant is often used. The procedure for generating the wave function \(\left|\psi_{QMC}\right\rangle\) depends on the QMC method. In this work we take FCIQMC, which in each iteration updates walkers carrying a positive or negative sign based on a stochastic imaginary-time evolution with imaginary-time increment \(\Delta\tau\), and which works with wave functions whose coefficients are real, \(w_{h}\in\mathbb{R}\). See Appendix A for the details of the procedure. \(E_{mix}\) is not the expectation value of the Hamiltonian \(\left\langle\psi_{QMC}|H|\psi_{QMC}\right\rangle/\left\langle\psi_{QMC}|\psi_ {QMC}\right\rangle\) (that is, it is not the pure estimator), but we obtain the exact ground state energy \(E_{g}\) when \(\left|\psi_{QMC}\right\rangle\) approaches the ground state \(\left|\psi_{g}\right\rangle\), i.e., \(\left|\psi_{QMC}\right\rangle\sim\left|\psi_{g}\right\rangle\), as
\[E_{mix}\sim\frac{\left\langle\xi\left|H\left|\psi_{g}\right\rangle\right\rangle }{\left\langle\xi\right|\psi_{g}\right\rangle}=E_{g}, \tag{4}\]
where \(H\left|\psi_{g}\right\rangle=E_{g}\left|\psi_{g}\right\rangle\) is used. When the sign problem occurs, the statistical error in evaluating \(E_{mix}\) becomes exponentially large [43]. For example, in FCIQMC, the sign of \(w_{h}\) fluctuates on the bases with small \(w_{h}\) (i.e., competing positive and negative walker counts) [76; 77]. However, by elaborating a reference wave function \(\left|\xi\right\rangle\) that efficiently incorporates the electron correlations, we can mitigate the sign problem. A trivial case is that, if the reference wave function coincides with the ground state, i.e., \(\left|\xi\right\rangle=\left|\psi_{g}\right\rangle\), then we obtain \(E_{g}\) for any \(\left|\psi_{QMC}\right\rangle\) with zero variance [78]; see Appendix B for a detailed derivation. In contrast, reference wave functions that are conveniently available in classical computations, such as the Hartree-Fock state, linear combinations of mean-field states, and Jastrow-type states [34], may easily bring about the sign problem. Therefore, if we can effectively search for a reference wave function in the exponentially large Hilbert space, we can expect to reduce the errors in the energy estimation in QMC.
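For intuition about how \(\left|\psi_{QMC}\right\rangle\) approaches \(\left|\psi_{g}\right\rangle\), the following deterministic sketch (our own simplification; it is not the stochastic walker algorithm of Appendix A and assumes a small, real-symmetric Hamiltonian matrix) repeatedly applies \(1-\Delta\tau(H-S)\) to a coefficient vector, which is the noiseless limit of the imaginary-time walker dynamics.

```python
import numpy as np

def toy_imaginary_time(H, n_steps=2000, dtau=0.01):
    """Deterministic stand-in for the walker dynamics: w <- w - dtau*(H - S)w.
    For sufficiently small dtau this converges to the ground state of a real
    symmetric H; the normalization crudely plays the role of population control."""
    dim = H.shape[0]
    w = np.ones(dim) / np.sqrt(dim)        # uniform initial "walker" weights
    for _ in range(n_steps):
        shift = w @ H @ w / (w @ w)        # energy shift S tracks the current energy
        w = w - dtau * (H @ w - shift * w)
        w /= np.linalg.norm(w)
    return w
```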
In QC-QMC, the reference wave function \(\left|\xi\right\rangle\) is prepared by using a quantum computer. Specifically, \(E_{mix}\) can be described using Eq. (2) as
\[E_{mix}=\frac{\sum_{h^{\prime}h}w_{h^{\prime}}\left\langle\xi|\phi_{h} \right\rangle\left\langle\phi_{h}\right|H\left|\phi_{h^{\prime}}\right\rangle }{\sum_{h}w_{h}\left\langle\xi|\phi_{h}\right\rangle}. \tag{5}\]
Figure 2: Models for benchmarking HTN+QMC. The structures of graphite and MABI are drawn by VESTA [69] and Jmol [70], respectively. The qubit indices are labeled in (a) and (b), see Sec. III for details. (a) The Heisenberg chain model. (b) The graphite-based Hubbard model. (c) The hydrogen plane model. (d) MonoArylBilmidazole (MABI).
The matrix element \(\left\langle\phi_{h}\right|H\left|\phi_{h^{\prime}}\right\rangle\) can be obtained by a classical calculation, as in the case of Slater determinants. In contrast, a quantum computer is used to prepare \(\left|\xi\right\rangle\), which enables us to apply the Hadamard test or the classical shadow [79; 80; 81] to efficiently calculate the overlap \(\left\langle\xi|\phi_{h}\right\rangle\). We discuss in Appendix B that, in a simple case, the variance of the mixed energy decreases when the fidelity of the reference wave function prepared by the quantum algorithm is higher than that prepared by the classical algorithm. In this study, fidelity is defined using the target and ground states; specifically, the fidelity is the square of the overlap of those states in the absence of noise.
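Equation (5) is easy to prototype once the overlaps and matrix elements are in hand. The sketch below is our own toy version: it assumes the basis states \(\left|\phi_{h}\right\rangle\) are computational basis states and that a dense Hamiltonian matrix is available, which holds only for small illustrative problems.

```python
import numpy as np

def mixed_energy(w, basis, xi, H):
    """
    E_mix of Eq. (5) for a toy dense model.
    w     : (M,) real walker weights of the occupied basis states
    basis : (M,) integer indices of the occupied computational basis states
    xi    : (D,) reference (trial) statevector |xi>
    H     : (D, D) dense Hamiltonian matrix
    """
    overlaps = np.conj(xi[basis])          # <xi|phi_h> for computational basis states
    h_elems = H[np.ix_(basis, basis)]      # <phi_h|H|phi_h'>
    num = overlaps @ h_elems @ w           # sum_{h,h'} w_h' <xi|phi_h><phi_h|H|phi_h'>
    den = overlaps @ w                     # sum_h  w_h  <xi|phi_h>
    return np.real(num / den)
```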
### Hybrid tensor network
We adopt QQTN as the structure of HTN, as shown in Fig. 3(a), where the original \(nk\)-qubit system is decomposed into \(k\) subsystems of \(n\) qubits each. More precisely, using an \(n\)-qubit state \(\left|\varphi^{i_{m}}\right\rangle\) and a \(k\)-qubit state \(\left|\psi\right\rangle\), we define \(\varphi^{i_{m}}_{\vec{j}_{m}}=\left\langle\vec{j}_{m}\right|\!\left|\varphi^{i_{m}}\right\rangle\) and \(\psi_{\vec{i}}=\left\langle\vec{i}\right|\!\left|\psi\right\rangle\), where the vector indices \(\vec{j}_{m}=j_{m1}j_{m2}\ldots j_{mn}\) and \(\vec{i}=i_{1}i_{2}\ldots i_{k}\) are \(n\)-qubit and \(k\)-qubit binary strings, respectively. Then the wave function \(\left|\psi_{HTN}\right\rangle\) is defined using the tensor products of \(\left|\varphi^{i_{m}}\right\rangle\) with coefficients \(\psi_{\vec{i}}\) as
\[\left|\psi_{HTN}\right\rangle=\sum_{\vec{i}}\psi_{\vec{i}}\bigotimes_{m=1}^{k }\left|\varphi^{i_{m}}\right\rangle, \tag{6}\]
where \(\left|\varphi^{i_{m}}\right\rangle\) shares the index \(i_{m}\) with the \(k\)-qubit tensor \(\psi_{\vec{i}}\), which serves as the set of coefficients. The number of coefficients is \(2^{Lk}\), where \(L\) is the number of legs connecting the lower part and the upper part of the tree tensor (hereafter the lower and upper tensors, respectively); \(L=1\) is adopted in this study. If \(L=n\), \(\left|\psi_{HTN}\right\rangle\) coincides with a general \(nk\)-qubit wave function. Therefore, when \(L\ll n\), \(\left|\psi_{HTN}\right\rangle\) lives in a subspace much smaller than the entire \(2^{nk}\)-dimensional Hilbert space, although this subspace can be larger than the one spanned by a purely classical tensor owing to the exponentially large rank of \(\psi_{\vec{i}}\) and \(\varphi_{\vec{j}_{m}}^{i_{m}}\); the performance of the QQTN depends on the decomposition setting of the system.
There is freedom in how the tensors are implemented in a quantum circuit. For example, \(\left|\varphi^{i_{m}}\right\rangle=U_{Lm}\left|i_{m}\right\rangle\left|0 \right\rangle^{\otimes n-1}\), \(U_{Lm}\left|i_{m}\right\rangle^{\otimes n}\), or \(U_{Lm}^{i_{m}}\left|0\right\rangle^{\otimes n}\), where the first two choices index the initial state and the third indexes the unitary matrix, and \(U_{Lm}\) and \(U_{Lm}^{i_{m}}\) represent unitary matrices. In the present study, we choose \(\left|\varphi^{i_{m}}\right\rangle=U_{Lm}\left|i_{m}\right\rangle\left|0 \right\rangle^{\otimes n-1}\), whose quantum circuit is shown in Fig. 3(b). We also assume \(\left|\psi\right\rangle=U_{U}\left|0\right\rangle^{\otimes k}\) (Fig. 3(c)).
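For small sizes, Eq. (6) can be assembled explicitly as a dense statevector. The sketch below is our own illustration; the data layout (subsystem \(m=1\) occupying the most significant block of qubits) is an assumption made for concreteness.

```python
import numpy as np
from functools import reduce
from itertools import product

def htn_state(psi_upper, phi_lower):
    """
    |psi_HTN> of Eq. (6) for a two-layer QQTN with one leg per subsystem (L = 1).
    psi_upper : (2**k,) upper-tensor amplitudes psi_i, indexed by the bitstring i_1...i_k
    phi_lower : list of k pairs (|phi^0_m>, |phi^1_m>), each a 2**n statevector
    Returns a dense 2**(n*k) statevector (toy sizes only).
    """
    k = len(phi_lower)
    dim = phi_lower[0][0].size ** k
    state = np.zeros(dim, dtype=complex)
    for bits in product((0, 1), repeat=k):           # loop over the index vector i
        idx = int("".join(map(str, bits)), 2)
        state += psi_upper[idx] * reduce(
            np.kron, (phi_lower[m][bits[m]] for m in range(k)))
    return state
```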
In our QQTN formulation, an observable is defined as a tensor product \(O=\bigotimes_{m,r}O_{mr}\), where \(O_{mr}\) is the observable on the \(r\)-th qubit of the \(m\)-th subsystem (lower tensor). We consider the transition amplitude defined as
\[T=\left\langle\psi_{HTN}^{(1)}\Big{|}O\Big{|}\psi_{HTN}^{(2)}\right\rangle, \tag{7}\]
where \(\left|\psi_{HTN}^{(l)}\right\rangle=\sum_{\vec{i}}\psi_{\vec{i}}^{(l)} \bigotimes_{m}\left|\varphi^{i_{m}(l)}\right\rangle(l=1,2)\) are two states of the form \(\left|\psi_{HTN}\right\rangle\). As explained in the next section, \(T\) covers not only the expectation values of observables used in VQE but also the overlaps used in QMC. We first calculate, on the lower tensors, \(N^{i^{\prime}_{m}(1)i_{m}(2)}=\left\langle\varphi^{i^{\prime}_{m}(1)}\Big{|} \bigotimes_{r}O_{mr}\Big{|}\varphi^{i_{m}(2)}\right\rangle\) by quantum computation, classically construct \(N_{m}=\begin{pmatrix}N^{00}&N^{01}\\ N^{10}&N^{11}\end{pmatrix}\), and then integrate the results as \(T=\left\langle\psi^{(1)}\bigotimes_{m}N_{m}|\psi^{(2)}\right\rangle\) on the upper tensor by quantum computation. Appendix C shows the details of the procedure for calculating \(T\), which is based on the Hadamard test as in Ref. [82]. \(T\) can be calculated using a quantum circuit of only \(\max(n,k)\) qubits, excluding ancilla qubits. In this study, only \(8k+2\) terms are measured for the calculation of \(T\), i.e., the overhead for calculating the expectation value scales linearly with the system size \(nk\). Note that the number of measurements is \(2\times 4^{L}k+2\) for general \(L\).
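The two-step contraction just described can be emulated classically for checking purposes. This sketch (our own statevector emulation of the quantum subroutines, reusing the layout of the previous snippet) first forms the \(2\times 2\) matrices \(N_{m}\) on the lower tensors and then contracts them with the upper-tensor amplitudes.

```python
import numpy as np
from functools import reduce

def transition_amplitude(psi1, phi1, psi2, phi2, O_sub):
    """
    T of Eq. (7) via the HTN contraction: (i) N_m from the lower tensors,
    (ii) contraction of kron(N_1, ..., N_k) with the upper-tensor amplitudes.
    psi1, psi2 : (2**k,) upper amplitudes of the two HTN states
    phi1, phi2 : lists of k pairs of 2**n lower-tensor statevectors
    O_sub      : list of k (2**n x 2**n) observables, one per subsystem
    """
    k = len(phi1)
    N = []
    for m in range(k):
        Nm = np.array([[np.vdot(phi1[m][a], O_sub[m] @ phi2[m][b])
                        for b in (0, 1)] for a in (0, 1)])
        N.append(Nm)
    big_N = reduce(np.kron, N)          # acts on the k "virtual" indices i_1...i_k
    return np.vdot(psi1, big_N @ psi2)
```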
### Proposed algorithm: HTN+QMC
The procedure of HTN+QMC consists of two steps.
1. HTN+VQE: Perform VQE by minimizing
Figure 3: Two-layer QQTN and the schematic diagrams of the quantum circuit for each tensor. (a) The two-layer QQTN. (b) The circuit diagram for the lower tensor. (c) The circuit diagram for the upper tensor.
\(\left\langle\psi_{HTN}\right|H\left|\psi_{HTN}\right\rangle\) to obtain the reference wave function \(\left|\xi\right\rangle=\left|\psi_{HTN}\right\rangle\).
2. HTN+QMC: Perform QC-QMC by using the obtained reference wave function \(\left|\psi_{HTN}\right\rangle\); that is, the quantum computer is used to compute \(E_{mix}\) to accurately estimate the ground state energy.
HTN+QMC can be performed for an \(nk\)-qubit system by using only \(\mathcal{O}(\max(n,k))\) qubits, excluding ancilla qubits. Specifically, if \(k=n\), an \(\mathcal{O}\big{(}n^{2}\big{)}\)-qubit reference wave function is prepared by only an \(\mathcal{O}(n)\)-qubit circuit. In both steps, the Hamiltonian expectation value and the mixed energy can be evaluated through the calculation of \(T\) in Eq. (7).
In the first step, by expressing the index \(b\) in Eq. (1) by the two indices \(m\) and \(r\), we can rewrite \(H\) and its expectation value as
\[H=\sum_{a}c_{a}\bigotimes_{m,r}P_{amr}, \tag{8}\]
and
\[\left\langle\psi_{HTN}\right|H\left|\psi_{HTN}\right\rangle=\sum_{a}c_{a} \left\langle\psi_{HTN}\right|\bigotimes_{m,r}P_{amr}\left|\psi_{HTN}\right\rangle, \tag{9}\]
respectively. The expectation value is evaluated through Eq. (7) by setting \(\left|\psi_{HTN}^{(1)}\right\rangle=\left|\psi_{HTN}^{(2)}\right\rangle= \left|\psi_{HTN}\right\rangle\) and replacing \(O_{mr}\) with \(P_{amr}\), where the coefficient index \(a\) is regarded as implicitly included in the expression of \(O_{mr}\). The wave function \(\left|\psi_{HTN}\right\rangle\) is prepared by parameterized quantum circuits equivalent to the unitary matrices \(U_{Lm}\) (Fig. 3(b)) and \(U_{U}\) (Fig. 3(c)).
In the second step, the overlap \(\left\langle\xi|\phi_{h}\right\rangle=\left\langle\psi_{HTN}|\phi_{h}\right\rangle\) in Eq. (5) is calculated by substituting \(\left|\psi_{HTN}^{(1)}\right\rangle=\left|\psi_{HTN}\right\rangle\), \(\left|\psi_{HTN}^{(2)}\right\rangle=\left|\phi_{h}\right\rangle\), and \(O=I^{\otimes nk}\) into \(T\) of Eq. (7), where the circuit parameters are fixed to the values obtained in the first step. More specifically, we can prepare an arbitrary basis state \(\left|\phi_{h}\right\rangle=\bigotimes_{m,r}\left|j_{mr}(h)\right\rangle= \left(\bigotimes_{m,r}X^{j_{mr}(h)}\right)\left|0\right\rangle^{\otimes nk}\) by setting \(U_{U}^{(2)}=I^{\otimes k}\) and \(U_{Lm}^{(2)}=\bigotimes_{r}X^{j_{mr}(h)}\), where \(j_{mr}(h)\in\{0,1\}\) is determined by \(h\) and \(X\) is the Pauli \(X\) operator. Having at hand the overlaps \(\left\langle\psi_{HTN}|\phi_{h}\right\rangle\) for all the \(\left|\phi_{h}\right\rangle\) corresponding to the walkers appearing during the QMC execution, we can perform QMC through iterative evaluations of the mixed energy (Eqs. (2) and (3)).
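In the classical emulation above, the same `transition_amplitude` helper yields these overlaps if the second HTN state encodes a computational basis state and \(O\) is the identity. The encoding below is our own convention; `basis_as_htn` is a hypothetical helper, not something defined in the paper.

```python
import numpy as np

def basis_as_htn(bitstring, n, k):
    """Encode |phi_h> = |j_11 ... j_kn> as a trivial HTN state: the upper tensor is
    fixed to |0...0>, and lower tensor m holds the basis state of its n-qubit slice."""
    psi_upper = np.zeros(2 ** k, dtype=complex); psi_upper[0] = 1.0
    phi_lower = []
    for m in range(k):
        vec = np.zeros(2 ** n, dtype=complex)
        vec[int(bitstring[m * n:(m + 1) * n], 2)] = 1.0
        phi_lower.append((vec, vec))     # the i_m = 1 branch is never selected here
    return psi_upper, phi_lower

# Overlap <psi_HTN|phi_h> with identity observables on every subsystem, e.g.:
# psi_b, phi_b = basis_as_htn("01100101", n=4, k=2)
# ov = transition_amplitude(psi_ref, phi_ref, psi_b, phi_b, [np.eye(2 ** 4)] * 2)
```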
## III Models and calculation conditions
We first explain the models for the benchmark and then the calculation conditions for the quantum algorithms. The performance of HTN+QMC is benchmarked with the Heisenberg chain model, the graphite-based Hubbard model, the hydrogen plane model, and MABI. The Heisenberg chain model in Fig. 2(a) is a chain of \(k\) clusters with four sites per cluster, which is defined as
\[H=\sum_{p=1}^{k}H_{p}+J_{inter}\sum_{p^{\prime}=1}^{k-1}H_{p^{\prime}}, \tag{10}\]
\[\begin{split} H_{p}&=\sum_{f=1}^{3}X_{4(p-1)+f}X_ {4(p-1)+f+1}\\ &+Y_{4(p-1)+f}Y_{4(p-1)+f+1}\\ &+Z_{4(p-1)+f}Z_{4(p-1)+f+1},\end{split} \tag{11}\]
and
\[H_{p^{\prime}}=X_{4p^{\prime}}X_{4p^{\prime}+1}+Y_{4p^{\prime}}Y_{4p^{\prime} +1}+Z_{4p^{\prime}}Z_{4p^{\prime}+1}, \tag{12}\]
where \(J_{inter}\) is the interaction parameter between the neighboring clusters. We consider \(k=2\) and \(3\), i.e., \(8\)- and \(12\)-qubit models and \(J_{inter}=0.2,0.4,\ldots,2.0\) Hartree in the benchmark.
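As a cross-check of Eqs. (10)-(12), the Pauli terms of this Hamiltonian can be generated programmatically. The sketch below is our own; it uses 0-based site indices (the equations above use 1-based indices), and the `(coefficient, label)` term format is the one assumed by the toy Hamiltonian builder shown earlier.

```python
def heisenberg_chain_terms(k, J_inter):
    """Pauli terms of Eqs. (10)-(12): k four-site clusters plus inter-cluster bonds."""
    n_sites = 4 * k
    terms = []
    def bond(i, j, coeff):
        for P in "XYZ":
            label = ["I"] * n_sites
            label[i] = P; label[j] = P
            terms.append((coeff, "".join(label)))
    for p in range(k):                   # intra-cluster bonds, Eq. (11)
        for f in range(3):
            bond(4 * p + f, 4 * p + f + 1, 1.0)
    for p in range(k - 1):               # inter-cluster bonds, Eq. (12)
        bond(4 * p + 3, 4 * p + 4, J_inter)
    return terms
```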
The graphite-based Hubbard model (the graphite model hereafter) is shown in Fig. 2(b). Graphite is a layered material in which four carbon atoms (two layers of two carbons per layer) exist in a unit cell. We assume an 8-qubit model built from the four carbon atoms in the unit cell, which is defined as
\[\begin{split} H&=3t_{1}\sum_{q=1,2,5,6}(a_{q+2}^{ \dagger}a_{q}+a_{q}^{\dagger}a_{q+2})\\ &+2t_{2}\sum_{q=1,2}(a_{q+4}^{\dagger}a_{q}+a_{q}^{\dagger}a_{q+4 })\\ &+U\sum_{q=1,3,5,7}n_{q+1}n_{q},\end{split} \tag{13}\]
where \(q\) is the spin-orbital index for the \(p_{z}\) orbital of carbon; \(q=1,2,3,4\) and \(q=5,6,7,8\) correspond to the first and second layer, respectively; \(a_{q}^{\dagger}\) (\(a_{q}\)) is the creation (annihilation) operator on the \(q\)-th site; and \(n_{q}\) is the number operator defined as \(n_{q}=a_{q}^{\dagger}a_{q}\). \(t_{1}\) and \(t_{2}\) are the hopping energies between the first and second nearest-neighbor sites, corresponding to the intra- and interlayer interaction energies, respectively, and \(U\) is the on-site Coulomb energy. The prefactors for \(t_{1}\) and \(t_{2}\) arise from the periodic boundary conditions; e.g., the prefactor for \(t_{2}\) is \(2\) because two interlayer interactions exist per carbon (one inside the unit cell and the other outside the unit cell). The reason \(t_{2}\) appears only on the two indices \(q=1\) and \(2\) is that graphite is AB stacked. We determine the values of \(t_{1}\), \(t_{2}\), and \(U\) from electronic structure calculations of graphite. We first calculated the band structure using density functional theory in the Quantum ESPRESSO package [83; 84; 85]. We adopted the generalized gradient approximation by Perdew-Burke-Ernzerhof (PBE) [86] as the exchange-correlation functional and the optimized norm-conserving Vanderbilt (ONCV) pseudopotential [87; 88]. The wave function cutoff, k-point grids, and the number of bands were \(64\) Rydberg, \(8\times 8\times 3\),
and 30, respectively. Then we calculated \(t_{1}\), \(t_{2}\), and \(U\) for target orbitals using the maximally localized Wannier function [89; 90] and constrained random phase approximation [91] in the RESPACK package [92; 93; 94; 95; 96; 97]. The target orbitals were the four \(p_{z}\) orbitals in each carbon atom in the unit cell. The polarization-function cutoff was 6.4 Rydberg. We obtained the value of \(t_{1}=-1.05\times 10^{-1}\), \(t_{2}=1.03\times 10^{-2}\), and \(U=3.00\times 10^{-1}\) Hartree.
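For readers who want to reproduce the 8-qubit model numerically, Eq. (13) can be built as a dense matrix with a hand-rolled Jordan-Wigner mapping. This is our own sketch: the qubit ordering, the shift of the paper's 1-based index \(q\) to 0-based, and the final diagonalization check are assumptions made only for illustration.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2); Zp = np.diag([1.0, -1.0])
SM = np.array([[0.0, 1.0], [0.0, 0.0]])          # sigma^- = |0><1|

def annihilation(q, n_qubits):
    """Jordan-Wigner annihilation operator a_q (0-based spin-orbital index)."""
    ops = [Zp] * q + [SM] + [I2] * (n_qubits - q - 1)
    return reduce(np.kron, ops)

def graphite_hubbard(t1, t2, U, n_qubits=8):
    """Dense matrix of Eq. (13); the paper's index q = 1..8 is shifted to 0..7."""
    a = [annihilation(q, n_qubits) for q in range(n_qubits)]
    num = [aq.conj().T @ aq for aq in a]
    def hop(i, j):
        return a[j].conj().T @ a[i] + a[i].conj().T @ a[j]
    H = 3 * t1 * sum(hop(q, q + 2) for q in (0, 1, 4, 5))
    H += 2 * t2 * sum(hop(q, q + 4) for q in (0, 1))
    H += U * sum(num[q + 1] @ num[q] for q in (0, 2, 4, 6))
    return H

H = graphite_hubbard(t1=-1.05e-1, t2=1.03e-2, U=3.00e-1)
print(np.linalg.eigvalsh(H)[0])   # consistency check: exact ground energy of the toy model
```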
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Models & \(nk\) & \(k\) & Decomposition & \(d_{H}(d_{N})\) & Type & \#seeds & Description \\ \hline \hline \multirow{6}{*}{Heisenberg chain} & \multirow{2}{*}{8} & \multirow{2}{*}{2} & \multirow{2}{*}{Cluster} & \(1,2,\ldots,6\) & \multirow{2}{*}{\(J_{inter}=0.2,0.4,\ldots,2.0\)} & \multirow{2}{*}{1} & Sec. IV.2 (\(J_{inter}=0.2,1.0,2.0\)), & \multirow{2}{*}{2} \\ & & & & & & & Appendix E \\ & & & & & & & Sec. IV.3 \\ & & & & & & & Sec. IV.3 \\ & & & & & & & Sec. IV.4 \\ & & & & & & & Sec. IV.3 \\ & & & & & & & Sec. IV.3 \\ & & & & & & & \\ \hline \multirow{6}{*}{Graphite} & \multirow{2}{*}{8} & \multirow{2}{*}{2} & Horizontal & \(1,2,\ldots,6\) & \multirow{2}{*}{1} & \multirow{2}{*}{Sec. IV.2 (\(d_{H}=4\)), Appendix E} \\ & & & & & & 10 & Appendix F \\ & & & Vertical & 4 & & 10 & Appendix F \\ & & - & No-decomposition & \(2,4,\ldots,10\) & & 10 & Appendix F \\ \hline \multirow{6}{*}{Hydrogen plane} & \multirow{2}{*}{8} & \multirow{2}{*}{2} & HOMO-LUMO & \(1,2,\ldots,6\) & \multirow{2}{*}{\(\text{ID}=0,1,2,3\)} & \multirow{2}{*}{1} & Sec. IV.2 (\(d_{H}=4\)), Appendix E} \\ & & & & & & 10 & Sec. IV.3 \\ & & & & & & 10 & Appendix F \\ & & & & & & 10 & Appendix F \\ \hline \multirow{6}{*}{MABI} & \multirow{2}{*}{12} & \multirow{2}{*}{2} & \multirow{2}{*}{Alpha-Beta} & \multirow{2}{*}{4} & \multirow{2}{*}{-} & \multirow{2}{*}{10} & \multirow{2}{*}{Appendix F} \\ & & & & & & 10 & Appendix F \\ \cline{1-1} & & & & & & 10 & Appendix F \\ \cline{1-1} & & & & & & 10 & Appendix F \\ \hline \end{tabular}
\end{table}
Table 1: Calculation condition of the execution example in Sec. IV.1, the benchmark in Sec. IV.2, and the analysis in Sec. IV.3. The unit of \(J_{inter}\) in the “Type” column is Hartree.
Figure 4: Decomposition settings and orbitals for the models. (a) The decomposition settings for the Heisenberg chain model. (b) The decomposition settings for the graphite-based Hubbard model. (c) The orbitals for the hydrogen plane model for ID 2. (d) The orbitals for MABI.
## IV Results

In this section, we first present an execution example of HTN+VQE and HTN+QMC; next we show the benchmark results, and finally we discuss the dependency of the HTN+QMC performance on the decomposition settings.
### Execution example
Figure 6(a) shows the result of executing HTN+VQE for the Heisenberg chain model with \(d_{H}=4\), the cluster setting, \(k=2\), and \(J_{inter}=1.0\) Hartree. As shown in the inset of Fig. 6(a), an energy difference of \(4.4\times 10^{-1}\) Hartree from the exact ground state energy (black dashed line) remains after the optimization. Figure 6(b) shows the result for HTN+QMC using the reference wave function obtained in HTN+VQE. In the HTN+QMC result (orange line), the energy difference from the exact value (black dashed line) is \(5.3\times 10^{-3}\) Hartree, an improvement of two orders of magnitude over the HTN+VQE result. In addition, compared to the QMC result (blue line), the HTN+QMC result shows higher accuracy and less energy fluctuation. Specifically, the energy differences (standard deviations) in QMC and HTN+QMC are \(5.5\times 10^{-1}\) and \(5.3\times 10^{-3}\) Hartree (\(7.7\times 10^{-1}\) and \(4.8\times 10^{-2}\) Hartree), respectively. Therefore, the energy accuracy is improved in HTN+QMC compared to either HTN+VQE or QMC alone; we discuss the details of the improvement in the following subsection.
### Benchmark results
Figure 7(a), (b), and (c) show the energy difference, standard deviation, and fidelity versus circuit depth \(d_{H}\), respectively, for the Heisenberg chain model with \(k=2\) in the cluster setting. The plotted colors correspond to the value of \(J_{inter}\), and we show the results for \(J_{inter}=0.2,1.0\), and \(2.0\) Hartree; see Appendix E for the results at other values of \(J_{inter}\). For HTN+VQE (circles) in Fig. 7(a), the energy difference exceeds \(10^{-1}\) Hartree when \(J_{inter}\) is large; e.g., for \(J_{inter}=1.0\) Hartree, the difference is \(4.3\times 10^{-1}\) Hartree even with \(d_{H}=6\). In contrast, all the values of the difference for HTN+QMC (crosses) are less than \(10^{-1}\) Hartree for \(d_{H}\geq 2\). In addition, the difference for HTN+QMC with \(d_{H}\geq 3\) is more than one order of magnitude smaller than that of QMC (dashed line). In Fig. 7(b), the decrease in the standard deviation relative to QMC is also confirmed for HTN+QMC. From these results, we find that HTN+QMC can evaluate the energy more accurately than the previous classical and quantum algo
Figure 5: Real amplitude ansatz in this study. \(d_{H}\) (\(d_{N}\)) is the depth in HTN (the no-decomposition setting), i.e., the block surrounded by the dotted line is repeated \(d_{H}\) (\(d_{N}\)) times.
Figure 6: Results of the energy in the Heisenberg chain model with \(k=2\), \(J_{inter}=1.0\), and \(d_{H}=4\). (a) The result for HTN+VQE. The inset is an enlarged view of the y-axis. (b) The result for HTN+QMC.
rithms.
Figure 7(c) shows the fidelity of HTN+VQE, together with the fidelity of the single reference state used as the QMC reference wave function. The energy accuracy worsens as the fidelity decreases; for example, at \(d_{H}=4\), the fidelity with \(J_{inter}=0.2,1.0\), and \(2.0\) Hartree is \(1.00\), \(0.92\), and \(0.49\), respectively, and the energy difference with \(J_{inter}=2.0\) for HTN+QMC in Fig. 7(a) is one order of magnitude larger than that for \(J_{inter}=0.2\) and \(1.0\). Nevertheless, the energy difference with \(d_{H}=4\) and \(J_{inter}=2.0\) for HTN+QMC is one order of magnitude smaller than that for QMC, i.e., the accuracy of HTN+QMC is still higher than that of QMC even when the fidelity is low. Therefore, even when a highly accurate reference wave function is not prepared in HTN+VQE, HTN+QMC can be expected to provide better energy accuracy than the classical calculation. Note that we also calculated the bipartite entropy for the models and confirmed that the entropy increases with \(J_{inter}\); see Appendix E for details.
Here we briefly comment on the other models; their results are described in detail in Appendix E. Table 2 shows the results for HTN with \(d_{H}=4\) for the Heisenberg chain models with \(k=2\) and \(3\) and \(J_{inter}=1.0\) in the cluster setting, the graphite model in the horizontal setting, the hydrogen plane model for ID 3 in the HOMO-LUMO setting, and MABI in the HOMO-LUMO setting. \(\Delta\) in the table represents the energy difference. In all the models, the difference for HTN+QMC is one to several orders of magnitude smaller than that for either HTN+VQE or QMC alone. Especially for the hydrogen plane model, the difference for HTN+QMC is one order of magnitude smaller than that for QMC, even though the fidelity for HTN+VQE is \(0.46\), almost the same as that of the single reference state, \(0.45\). This result indicates that the performance of QMC may be improved even when the fidelity of the reference wave function is not high, and that it is important to also check the energy when evaluating the performance of the reference wave function.
### Performance analysis on the decomposition settings
We consider the eight-qubit Heisenberg chain model with \(J_{inter}=1.0\), each in the cluster, even-odd, and no-decomposition settings. Henceforth, we assume that the HTN is constructed with two four-qubit subsystems (\(k=2\) and \(n=4\)) and the real amplitude ansatz with \(d_{H}=4\). For the no-decomposition setting, \(d_{N}\) takes any of \(2,4,\ldots,10\). Prior to the results, we introduce an average interaction strength between the subsystems \(G_{mr}\), which is defined as
\[G_{mr}=\frac{1}{k}\sum_{a}|c_{a}|\delta_{amr}, \tag{14}\]
where \(\delta_{amr}=1\) if a Pauli \(X\), \(Y\), or \(Z\) operator of \(\bigotimes_{m,r}P_{amr}\) in Eq. (8) acts on more than one subsystem, and \(\delta_{amr}=0\) otherwise. \(G_{mr}\) in the cluster setting and the even-odd setting is \(1.5\) and \(10.5\) Hartree, respectively, i.e., the cluster setting appears to be better suited for the reference wave function preparation. In Appendix F, we also show the values of \(G_{mr}\) for the graphite model, the hydrogen plane model (ID 3), and MABI, and we have confirmed that the results, especially for the physical models, are almost consistent with intuition. For example, in the graphite model, \(G_{mr}\) of the horizontal setting is an order of magnitude smaller than that of the vertical setting. However, the chemical models, especially the hydrogen plane model, give results that differ from intuition, and thus it may be necessary to use more sophisticated measures such as the reduced density matrix and mutual information [106, 107, 108, 109].
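Equation (14) can be evaluated directly from a Pauli-term list of the Hamiltonian. The sketch below is our own; it reuses the `(coefficient, label)` term format of the earlier snippets and assumes that subsystem \(m\) owns the contiguous qubit block \([mn,(m+1)n)\).

```python
def interaction_strength(terms, n, k):
    """Average inter-subsystem interaction strength of Eq. (14)."""
    total = 0.0
    for c, label in terms:
        touched = {q // n for q, P in enumerate(label) if P != "I"}
        if len(touched) >= 2:            # the term acts non-trivially on >1 subsystem
            total += abs(c)
    return total / k

# For the 8-qubit Heisenberg chain with J_inter = 1.0 this gives 1.5 with the
# cluster ordering; reordering the qubits into the even-odd layout gives 10.5,
# matching the values quoted above.
# print(interaction_strength(heisenberg_chain_terms(2, 1.0), n=4, k=2))
```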
Figure 8 shows the decomposition dependencies of the fidelity, the energy difference, and the standard deviation for the Heisenberg chain model. The averages and error bars are obtained over \(10\) different random seeds used for the initial parameters in VQE or HTN+VQE; the fidelity of the single reference state is plotted as the black bar in Fig. 8(a), and its energy difference and standard deviation as the black cross markers in Fig. 8(b) and (c), respectively; the green circle and cross markers denote the results for the decomposition settings in HTN+VQE and HTN+QMC, respectively; the blue circle and cross markers denote those for the no-decomposition setting in VQE and QC-QMC, respectively. Figure 8(a) shows the decomposition dependencies of the fidelity. Among the decomposition settings, the fidelity in the cluster setting, \(0.92\), is four times higher than in the even-odd setting, \(0.22\), as expected from the values of \(G_{mr}\). We also calculated the bipartite entanglement entropy of the exact ground state for the cluster and even-odd decompositions, which is \(0.66\) and \(3.46\), respectively; see Appendix F for the entropy of the wave function for HTN. It is evident that an appropriate choice of the decomposition setting is crucial
\begin{table}
\begin{tabular}{c c c c c c} \hline Models & \(\Delta\)HTN+VQE & \(\Delta\)QMC & \(\Delta\)HTN+QMC & Fidelity (HTN+VQE) & Fidelity (single reference state) \\ \hline Heisenberg chain (\(k=2\)) & \(4.4\times 10^{-1}\) & \(5.5\times 10^{-1}\) & \(5.3\times 10^{-3}\) & \(0.92\) & \(0.13\) \\ Heisenberg chain (\(k=3\)) & \(1.0\times 10^{0}\) & \(3.5\times 10^{-1}\) & \(4.0\times 10^{-2}\) & \(0.80\) & \(0.06\) \\ Graphite & \(6.4\times 10^{-4}\) & \(4.6\times 10^{-4}\) & \(8.1\times 10^{-6}\) & \(1.00\) & \(0.09\) \\ Hydrogen plane & \(3.3\times 10^{-2}\) & \(1.2\times 10^{-3}\) & \(5.4\times 10^{-5}\) & \(0.46\) & \(0.45\) \\ MABI & \(2.4\times 10^{-2}\) & \(6.6\times 10^{-5}\) & \(7.9\times 10^{-6}\) & \(0.94\) & \(0.86\) \\ \hline \end{tabular}
\end{table}
Table 2: Energy differences in VQE, QMC, and HTN+QMC, and fidelity for HTN+VQE and the single reference state for the five models in \(d_{H}=4\). The energy unit is Hartree.
to the performance of the reference wave function.
In addition, the fidelity for the cluster setting is equal to or slightly higher than that for the no-decomposition setting with \(d_{N}=10\) whereas the fidelity for the even
Figure 8: Results of the analysis for the Heisenberg chain model with \(k=2\) and \(J_{inter}=1.0\). (a) The fidelity. (b) The energy difference. (c) The standard deviation.
Figure 7: Results of Heisenberg chain model with \(k=2\) and \(d_{H}=1,2,\ldots,6\). (a) The energy difference. (b) The standard deviation. (c) The fidelity.
odd setting is lower than for the no-decomposition setting with \(d_{N}=2\). Here, the numbers of the parameters for the cluster setting with \(d_{H}=4\) and the no-decomposition setting with \(d_{N}=10\) are \(nk(d_{H}+1)+k(d_{H}+1)=50\) and \(nk(d_{N}+1)=88\), respectively. That is, the cluster setting with fewer parameters shows comparable performance to the no-decomposition setting. The increase in the fidelity is obviously related to the decrease in the energy difference in HTN+VQE and HTN+QMC as in Fig. 8(b), and that in the standard deviation in HTN+QMC as in Fig. 8(c). A similar tendency for the fidelity, energy difference, and standard deviation can be found in the graphite model, the hydrogen plane model, and MABI, see Appendix F for the details. From the results, we can expect that if the system is appropriately decomposed, i.e., if the interaction between the subsystems is small, the HTN can prepare a reference wave function that performs as well as or better than the wave function generated by the quantum circuit of the original system size. Note that the fidelities for the hydrogen plane model are almost the same or lower than that of the single reference state as shown in Appendix F. One of the ways for increasing the fidelity is to improve the initial wave function used in the VQE or HTN+VQE. As shown in the appendix, the fidelity can increase in some cases by using the initial state which is close to the Hartree-Fock state. In addition, while the parameters of all the tensors are sequentially optimized in this study, separate optimization of one tensor by another could be an alternative [111].
The high performance of the cluster setting seems to arise from the fact that the wave function represented by the QQTN in Fig. 3(a) has a problem-inspired structure, while the ansatz actually used for each tensor is a hardware-efficient one. In order to analyze the wave function in detail, we first compare the distributions of the absolute coefficients of the standard basis states, calculated by using one of the 10 random seeds in Fig. 8. As shown in Fig. 9(a), the cluster setting (red) exhibits a distribution very close to that of the exact ground state (black), in comparison to the even-odd setting (green). Note that for the even-odd setting, the distribution was calculated with the basis state encoding in which the qubit index was reordered from \(8,7,6,5/4,3,2,1\) to \(8,6,4,2/7,5,3,1\), as shown in Fig. 4(a) Even-Odd. The qubit index was then changed back to that of Cluster in the plotting, allowing for a direct comparison with the cluster setting.
We then examine the wave function, which is defined as a linear combination of tensor products of the subsystems. In the case of the cluster setting for the Heisenberg chain model, we can approximately describe the exact ground state \(|\psi_{g-C}\rangle\) by four dominant terms with respective coefficients of \(c_{1}=-0.37,c_{2}=0.24,c_{3}=-0.23\), and \(c_{4}=0.18\), as
\[\begin{split}|\psi_{g-C}\rangle&\simeq c_{1}(|0101 \rangle\otimes|0101\rangle+|1010\rangle\otimes|1010\rangle)\\ &+c_{2}(|0101\rangle\otimes|0110\rangle+|1001\rangle\otimes|0101 \rangle\\ &\quad+|1010\rangle\otimes|1001\rangle+|0110\rangle\otimes|101 0\rangle)\\ &+c_{3}(|1010\rangle\otimes|0101\rangle+|0101\rangle\otimes|101 0\rangle)\\ &+c_{4}(|0101\rangle\otimes|1001\rangle+|0110\rangle\otimes|01 01\rangle\\ &\quad+|1001\rangle\otimes|1010\rangle+|1010\rangle\otimes|01 10\rangle),\end{split} \tag{15}\]
where each term consists of eigenstates of the spin and spatial inversion operations and is energetically favored because of the (sub)antiferromagnetic spin configurations, in which there is at most one spin pair with the same parity (e.g., 00 and 11). Remember that all terms in the Hamiltonian of Eqs. (10), (11), and (12) are positive. The HTN+VQE results in Fig. 9(a) (red) show that the coefficients for the above 12 basis states are of the same magnitude as \(c_{1},c_{2},\ldots,c_{4}\). In contrast, the even-odd setting gives a much less accurate wave function, as seen in Fig. 9(a) (green); half of the basis states in Eq. (15) were negligibly small in magnitude, and the coefficient of \(|0101\rangle\otimes|0101\rangle\) is 0.78, which is so large that the distribution differs considerably from the exact one (black).
Next, Fig. 9(b) shows the distribution of the no-decomposition setting (blue), where the distributions of the ground state and cluster settings are reproduced from the figure (a) for comparison. The depth is set to \(d_{N}=6\) such that the numbers of parameters in the no-decomposition and cluster settings become comparable, that is, 50 and 48, respectively. The distribution for the no-decomposition setting deviates from that for the exact wave function, and half of the basis states in Eq. (15) were negligibly small in magnitude. In general, the no-decomposition setting would achieve higher performance than HTN. However, in the cases where the number of ansatz parameters is restricted, HTN may perform better if we find an ansatz and decomposition suited to the structure of the system. In fact, the coefficient of the basis state \(|00101101\rangle\), which has the 10-th largest magnitude in the no-decomposition setting, is 0.14, whereas it is only -0.057 in the ground state and is -0.00018 in the cluster setting. Thus, the cluster setting can efficiently prepare a reference wave function that incorporates the correlations of the system with fewer parameters by eliminating the basis states that have a small contribution to the ground state of the system. Note that these observations are verified by the calculation of the 10 random seeds.
## V Conclusion
We proposed an algorithm HTN+QMC that combines QC-QMC with HTN for accurately calculating problems in quantum chemistry beyond the size of a quantum device. QC-QMC can perform electronic structure calculation with higher accuracy and lower hardware require
ments than either VQE or QMC alone by using a reference wave function that can be prepared by quantum computation. HTN enables us to construct a large wave function beyond the size of a quantum device by decomposing the wave function into smaller-size tensors and performing hybrid quantum/classical calculations for the tensors. By combining them, QC-QMC requiring the \(\mathcal{O}\big{(}n^{2}\big{)}\)-qubit reference wave function can be performed by using only \(\mathcal{O}(n)\) qubit, where the two-layer QQTN is assumed as HTN structure in this study.
On the execution of the algorithm, after the reference wave function is prepared by VQE with HTN (HTN+VQE), we execute HTN+QMC by using the obtained reference wave function, where FCIQMC is adopted as an example of QMC. HTN+QMC was applied to the Heisenberg chain model, the graphite-based Hubbard model, the hydrogen plane model, and MABI. The Hamiltonians of the graphite-based Hubbard model, the hydrogen plane model, and MABI are prepared by using the electronic structure calculations of the classical computation. We found that HTN+QMC exhibits energy accuracy that is several orders of magnitude higher than either HTN+VQE or QMC alone. In addition, when compared to the results of QC-QMC (i.e., without HTN), the results of HTN+QMC were as accurate as or more accurate than QC-QMC when the interaction between the decomposed subsystems is small. Therefore, we found that with appropriate decomposition, calculations on a scale beyond that of a quantum device can be performed with high accuracy.
While this study assumed that the size of the target system is larger than that of the quantum device, there may be cases in which the proposed algorithm should be used even if the size of the target system is the same as that of the quantum device. For example, a quantum computer with thousands of qubits will appear in the near future [112; 113], but due to noise in the quantum device, it is possible that an accurate solution may not be obtained in the calculation of a chemical model of that size. In such a case, we can use the proposed algorithm by decomposing the system into subsystems of about hundreds of qubits in order to obtain a more accurate result than that obtained by directly executing a calculation with a thousand qubits.
Research on QC-QMC has not yet been extensive, and the application to QC-QMC of techniques developed for NISQ devices, as with HTN in this study, is a possible direction for future research. Another interesting direction is the application of QC-QMC to fields outside electronic structure calculation, such as machine learning for designing advanced materials.
Figure 9: Distribution of the wave function for the Heisenberg chain model with \(J_{inter}=1.0\). The basis index is the value when the standard basis state is expressed in the decimal number, e.g., \(|10000010\rangle\) corresponds to 130. (a) The comparison of the exact ground state, cluster setting, and even-odd setting. (b) The comparison of the exact ground state, cluster setting, and no-decomposition setting (\(d_{N}=6\)).
Acknowledgments
This work is supported by MEXT Quantum Leap Flagship Program Grants No. JPMXS0118067285 and No. JP-MXS0120319794, JSPS KAKENHI Grant No. JP20K05438, and COI-NEXT JST Grant No. JPMJPF2221. The part of calculations was performed on the Mitsubishi Chemical Corporation (MCC) high-performance computer (HPC) system "NAYUTA", where "NAYUTA" is a nickname for MCC HPC and is not a product or service name of MCC.
|
2305.19763 | Numerical investigation of viscous fingering in a three-dimensional
cubical domain | We perform three-dimensional numerical simulations to understand the role of
viscous fingering in sweeping a high-viscous fluid (HVF). These fingers form
due to the injection of a low-viscous fluid (LVF) into a porous media
containing the high-viscous fluid. We find that the sweeping of HVF depends on
different parameters such as the Reynolds number ($Re$) based on the inflow
rate of the LVF, the P\'eclet number ($Pe$), and the logarithmic viscosity
ratio of HVF and LVF, $\mathfrak{R}$. At high values of $Re$, $Pe$, and
$\mathfrak{R}$, the fingers grow non-linearly, resulting in earlier tip
splitting of the fingers and breakthrough, further leading to poor sweeping of
the HVF. In contrast, the fingers evolve uniformly at low values of $Re$, $Pe$,
and $\mathfrak{R}$, resulting in an efficient sweeping of the HVF. We also
estimate the sweep efficiency and conclude that the parameters $Re$, $Pe$ and
$\mathfrak{R}$ should be chosen optimally to minimize the non-linear growth of the
fingers to achieve an efficient sweeping of the HVF. | Garima Varshney, Anikesh Pal | 2023-05-31T11:47:35Z | http://arxiv.org/abs/2305.19763v1 | # Numerical investigation of viscous fingering in a three-dimensional cubical domain
###### Abstract
We perform three-dimensional numerical simulations to understand the role of viscous fingering in sweeping a high-viscous fluid (HVF). These fingers form due to the injection of a low-viscous fluid (LVF) into a porous medium containing the high-viscous fluid. We find that the sweeping of HVF depends on different parameters such as the Reynolds number (\(Re\)) based on the inflow rate of the LVF, the Peclet number (\(Pe\)), and the logarithmic viscosity ratio of HVF and LVF, \(\Re\). At high values of \(Re\), \(Pe\), and \(\Re\), the fingers grow non-linearly, resulting in earlier tip splitting of the fingers and breakthrough, further leading to poor sweeping of the HVF. In contrast, the fingers evolve uniformly at low values of \(Re\), \(Pe\), and \(\Re\), resulting in an efficient sweeping of the HVF. We also estimate the sweep efficiency and conclude that the parameters \(Re\), \(Pe\), and \(\Re\) should be chosen optimally to minimize the non-linear growth of the fingers to achieve an efficient sweeping of the HVF.
## I Introduction
Finger-like protrusions [1; 2] form when a low viscous fluid (LVF) displaces a high viscous fluid (HVF) in a porous medium owing to hydrodynamic instability along the interface of these two fluids. This type of hydrodynamic instability of multi-phase flow is often referred to as viscous fingering and appears in many engineering and scientific processes such as oil recovery from underground reservoirs [3; 4], chromatography [5], \(CO_{2}\) sequestration [6; 7], fluid mixing in microfluidics [8], and oceanography [9; 10]. The relevance of the viscous fingering phenomenon to multiple areas has motivated researchers to investigate its dynamics theoretically [11; 12; 13; 14; 15; 1], experimentally [16; 17; 18] and numerically [19; 20; 21; 22; 23; 24].
The pioneering work on viscous fingering encountered during sugar refining operations, was carried out by [1]. They referred to these instabilities as "channelling" when water displaces sugar liquors from columns of granular bone charcoal. The next significant development, that occurred in the late 1950s, established [25; 2] that adverse mobility (when an LVF displaces an HVF) generates an unstable interface. A review of the experiments and the numerical simulations performed to study the mechanisms of viscous fingering in homogeneous porous materials using different physical models and geometries (rectilinear displacement, radial source flow, and the five-spot pattern) is provided in [26]. A description of the development of the Saffman-Taylor instability in a two-dimensional (2D) porous media (also represented by a Hele-Shaw cell due to the opaque nature of porous media) owing to the convection-diffusion phenomenon for miscible fluid interaction is also provided in this review. Moreover, it was deduced that the Peclet number was one of the primary parameters that govern the fingering scale for miscible fluids. Viscous fingering in the miscible and immiscible fluids was also experimentally investigated in a Hele-Shaw cell with smooth and etched plates to study the influence of plate roughness on the fingering mechanism by [27]. They reported that owing to interfacial tension, the immiscible finger patterns are less ramified than their miscible counterparts, are more sensitive to the flow rate, and become compact as the flow rate decreases. Experiments in a real three-dimensional (3D) porous medium were performed by [28] to study the essential features of viscous fingering and its dependence on the viscosity ratio and the flow rate. It was observed that the single-finger configuration was strongly affected by adding a small perturbation in the gap of the Hele-Shaw cell [29]. [30] performed experiments on the miscible displacements in porous media to study the effects of mobility gradients in viscous instability and corroborated their findings with an analytical analysis. A rectilinear Hele-Shaw cell was also used to examine the miscible flow displacements of a reference Newtonian fluid (glycerol solution) or shear-thinning solutions of Alcofofhood polymers of different molecular weights by water through experimental measurements [31]. Similarly, a radial Hele-Shaw cell was used [17] to explore the variety of patterns characterized by the viscosity ratio of the two interacting fluids. Recently, an experimental study [32] was carried out using the Hele-Shaw cell to explore the techniques to suppress the viscous fingering problem encountered.
Several numerical studies using the Hele-Shaw cell model, either in radial or rectilinear shape, have been conducted in addition to the experimental work. Steam-assisted gravity drainage, a thermal oil recovery method,
is explored [19] by integrating mass balance and energy balance using the commercial solver COMSOL for a two-dimensional domain and comparing the outcomes with those from another reservoir simulator, STARS. Subsequently, COMSOL's two-phase Darcy's law physics is employed in a 2D Eulerian frame to study the instability in chromatographic columns and aquifers for various injection speeds and mobility [20]. A similar setup was also used to simulate the suppression, decrease, or increase of miscible viscous fingering in radial displacements in a 2D homogeneous porous medium by varying the mobility ratio, injection speed, and diffusion coefficient [22]. The commercial solver ANSYS has been used to perform 2D simulations [21] to study the viscous fingering phenomenon associated with oil production in a homogeneous heavy oil reservoir. An open-source solver UTCHEM was used by [23] to perform 2D simulations of miscible and immiscible viscous fingering encountered during polymer flooding chemical enhanced oil recovery processes. To capture the dynamic evolution of these viscous fingers, they performed a Fourier analysis of the saturation or the concentration contours and the rate of change of root-mean square (RMS) of the saturation/concentration contours. Recently, a 2D simulation was carried out by [24] to investigate the effect of the period and amplitude of the initial boundary perturbation on the growth rate of viscous fingers. A comprehensive review [33] is provided for the experimental and the computational investigations of miscible and immiscible fluid interaction.
As evident from the previous discussion, majority of the numerical studies performed to understand the dynamics of viscous fingering are two-dimensional. The first 3D simulation of miscible viscous fingering is reported by [34] at a high Peclet number. It was concluded that the mechanism of nonlinear interactions of viscous fingers in 3D is in accord with the observations made from 2D simulations. Similar to [34], [35] also reported that the essential features of fingering in a homogeneous porous medium or a porous medium with modest, randomly distributed heterogeneities obtained from a 3D simulation could also be represented by a 2D calculation. However, permeability variations and connectivity in the third dimension may influence the fluid distribution in flow through heterogeneous domains with significantly correlated scales. Therefore, 3D modeling will be imperative for reproducing experimental results. Moreover, 3D modeling will also be inevitable if there is a density difference between the displacing and resident fluid in a homogeneous porous medium and the mean flow direction is not vertical. The impact and importance of 3D effects in viscous fingering for increasing density difference between the displacing and resident fluid in water alternating gas (WAG) injection in oil recovery are also reported [36]. They concluded that for a range of WAG ratios the 3D computation exhibits a lower recovery and an earlier oil breakthrough than 2D simulations. Another numerical simulation was performed by [37] to analyze the three-dimensional miscible displacements with gravity override in a homogeneous porous medium in the quarter five-spot geometry. They demonstrated that the enhanced interaction of the disturbances in 3D alters the character of the flow in a manner that could not be captured by 2D simulations. The present investigation examines the evolution of viscous fingering, owing to the interaction of low-viscosity and high-viscosity miscible fluids in a three-dimensional homogeneous porous medium, under the influence of different parameters such as the flow rate of displacing fluid, different encapsulated fluid with variation in the viscosity, and ease of diffusion through the porous domain. The problem formulation, governing equations, numerical methodology, and case set-up are presented in section II. The results obtained from the numerical simulations for the various parameters are discussed in section III, and conclusions are drawn in section IV.
## II Problem formulation
We define the fundamental parameters to understand the characteristics of 3D fingering instability and their non-linear interaction [12; 33]. The logarithmic mobility ratio (\(\mathfrak{R}\)) is the logarithmic ratio between the viscosity of the displaced fluid and the displacing fluid, \(\mathfrak{R}=\ln\left(\frac{\mu_{2}}{\mu_{1}}\right)\); porosity (\(\epsilon_{p}\)) is the storing capacity of fluid in the porous material, which represents the ratio of the volume occupied by the fluid to the total volume of the porous material; breakthrough defines the moment when the LVF reaches the downstream end of the domain; the sweep efficiency (\(\eta_{sw}\)) is defined as the ratio of the volume of the LVF injected at the breakthrough time to the volume of the domain; shielding is the tendency for one finger to dominate the displacement due to the finite amount of the injected fluid; spreading is the widening and flattening of the tip and body of the fingers as they grow; tip splitting is the creation of two small fingers at the tip due to the instability of the tip of a larger finger [12]; coalescence is the merging of the tip of one finger into the body of an adjacent finger [31]; the Peclet number (\(Pe\)) is defined as the ratio of the rate of advection to the rate of diffusion of a fluid, \(Pe=\frac{UL_{c}}{\mathfrak{D}}\), where \(U\), \(L_{c}\), and \(\mathfrak{D}\) are the average flow velocity, the characteristic length (the diameter of the hole through which the LVF is injected into the domain is taken as the characteristic length in this study), and the diffusion coefficient; the Reynolds number (\(Re\)) is the ratio of inertial forces to viscous forces, defined as \(Re=\frac{\rho UL_{c}}{\mu}\); and the volume flow rate (\(\dot{Q}\)) and permeability (\(\kappa\)) regulate the capacity of the fluids to flow through porous media.
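As a quick numerical check of these definitions (not part of the original study), the baseline values of \(Re\) and \(Pe\) quoted later in table 1 can be reproduced from the dimensional parameters listed there. The short Python sketch below simply evaluates the expressions above with those nominal values.

```python
import numpy as np

# Nominal values from Table 1 (SI units); this is only a consistency check.
rho  = 1.225      # density of both fluids, kg/m^3
mu_1 = 1.0e-3     # viscosity of the displacing fluid (LVF), Pa.s
U_0  = 1.0        # injection speed, m/s
L_c  = 0.25e-3    # characteristic length = inlet hole diameter, m
D    = 2.0e-8     # diffusion coefficient, m^2/s
R    = 2.0        # logarithmic mobility ratio, ln(mu_2/mu_1)

Re   = rho * U_0 * L_c / mu_1     # -> 0.30625, the baseline case C1
Pe   = U_0 * L_c / D              # -> 1.25e4, the baseline case C1
mu_2 = mu_1 * np.exp(R)           # HVF viscosity implied by R = 2

print(f"Re = {Re:.5f}, Pe = {Pe:.3e}, mu_2 = {mu_2:.3e} Pa.s")
```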
### Governing Equations and Numerical Method
COMSOL Multiphysics 6.0 [38] is used to model the viscous fingering phenomenon. We explore the characteristics of these fingers by varying the mass flow rate of the displacing fluid, the diffusion coefficient, and the mobility of the fluid encapsulated in the domain. The associated non-dimensional parameters are the Reynolds number (\(Re\)), the Peclet number (\(Pe\)), and the log mobility ratio (\(\mathfrak{R}\)). The two-phase Darcy's law model of the porous media and subsurface flow module is used, which couples the steady Darcy flow equation with the time-dependent convection-diffusion equation for concentration. The governing equations associated with this module are:
\[\frac{\partial\varepsilon_{p}\rho}{\partial t}+\nabla\cdot\rho\vec{u}=0. \tag{1}\]
\[\vec{u}=-\frac{\kappa}{\mu}\vec{\nabla}p. \tag{2}\]
\[\rho=s_{1}\rho_{1}+s_{2}\rho_{2}. \tag{3}\]
\[\frac{1}{\mu}=s_{1}\frac{\kappa_{r1}}{\mu_{1}}+s_{2}\frac{\kappa_{r2}}{\mu_{2}}. \tag{4}\]
\[s_{1}+s_{2}=1. \tag{5}\]
\[\frac{\partial\varepsilon_{p}c_{1}}{\partial t}+\vec{\nabla}\cdot(c_{1}\vec{u })=\vec{\nabla}\cdot(\mathfrak{D}_{c}\vec{\nabla}c_{1}). \tag{6}\]
\[c_{1}=s_{1}\cdot\rho_{1}. \tag{7}\]
\[\mu(s)=\mu_{2}\cdot e^{-\mathfrak{R}\cdot s_{1}}. \tag{8}\]
\[s_{1}=0.5(1+\zeta(f(x,y,z))). \tag{9}\]
Here the suffixes 1 and 2 denote the characteristics of fluid 1 (LVF) and fluid 2 (HVF). We consider two fluids with saturation values \(s_{1}\) and \(s_{2}\) related by equation (5). Additionally, the saturation value \(s_{1}\) and the concentration \(c_{1}\) are related via equation (7). This module solves Darcy's law for the total pressure and the convection-diffusion equation for fluid transport. We discretize the governing equations for both the velocity and pressure variables using the finite element method with second-order quadratic Lagrange elements [19; 20; 39]. We use the implicit backward differentiation formula (BDF), with the maximum and minimum degrees of the interpolating polynomial being 5 and 1, respectively. The time step is controlled nonlinearly owing to its effectiveness over free step sizing. We use the fully coupled, constant Newton technique to solve the nonlinear systems, and an iterative solver for the linear systems. We use successive over-relaxation (SOR) as a pre- and post-smoother, and the parallel direct solver (PARDISO) as the coarse-level solver when the generalized minimal residual method (GMRES) is employed in the multigrid solver.
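The solver settings above are specific to COMSOL, but the structure of the coupled problem can be illustrated with a deliberately simplified sketch. The Python snippet below is a toy two-dimensional model with assumed Dirichlet inlet/outlet pressures, first-order upwinding, and a crude Jacobi pressure solve; it is not the FEM/BDF scheme used in this work and is included only to show how the saturation field feeds back on the Darcy pressure field through the viscosity law of Eq. (8).

```python
import numpy as np

nx = ny = 64
L = 2.5e-3                        # domain edge, m (Table 1)
h = L / nx                        # grid spacing, m
kappa, eps_p = 1e-6, 0.5          # permeability (m^2) and porosity
mu2, R = 7.39e-3, 2.0             # HVF viscosity (= mu1*e^R with mu1 = 1e-3 Pa.s)
Dc = 0.5 * 2e-8                   # capillary diffusion coefficient, eps_p * D
p_in, p_out = 10.0, 0.0           # assumed Dirichlet pressures (Pa), toy choice

s = np.zeros((ny, nx))            # LVF saturation s1
s[0, nx//2 - 2: nx//2 + 2] = 1.0  # LVF injected over a small inlet patch

def viscosity(s):
    """Exponential mixing rule of Eq. (8): mu(s) = mu2 * exp(-R * s1)."""
    return mu2 * np.exp(-R * s)

def solve_pressure(s, n_iter=400):
    """Crude Jacobi solve of div((kappa/mu) grad p) = 0 (Eqs. 1-2)."""
    mob = kappa / viscosity(s)
    p = np.linspace(p_in, p_out, ny)[:, None] * np.ones((ny, nx))
    for _ in range(n_iter):
        mN = 0.5 * (mob[1:-1, 1:-1] + mob[:-2, 1:-1])
        mS = 0.5 * (mob[1:-1, 1:-1] + mob[2:, 1:-1])
        mW = 0.5 * (mob[1:-1, 1:-1] + mob[1:-1, :-2])
        mE = 0.5 * (mob[1:-1, 1:-1] + mob[1:-1, 2:])
        p[1:-1, 1:-1] = (mN * p[:-2, 1:-1] + mS * p[2:, 1:-1] +
                         mW * p[1:-1, :-2] + mE * p[1:-1, 2:]) / (mN + mS + mW + mE)
        p[0, :], p[-1, :] = p_in, p_out          # inlet / outlet rows
        p[:, 0], p[:, -1] = p[:, 1], p[:, -2]    # no-flux side walls
    return p

def step(s, dt):
    """One explicit update of Eq. (6): upwind advection + diffusion.
    The advective term is written as u.grad(s), assuming div(u) ~ 0."""
    p = solve_pressure(s)
    mob = kappa / viscosity(s)
    uy = -mob * np.gradient(p, h, axis=0)        # Darcy velocity, Eq. (2)
    ux = -mob * np.gradient(p, h, axis=1)
    ds_dy = np.where(uy > 0, s - np.roll(s, 1, axis=0),
                     np.roll(s, -1, axis=0) - s) / h   # first-order upwind
    ds_dx = np.where(ux > 0, s - np.roll(s, 1, axis=1),
                     np.roll(s, -1, axis=1) - s) / h
    lap = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
           np.roll(s, 1, 1) + np.roll(s, -1, 1) - 4 * s) / h**2
    s_new = s + dt / eps_p * (-(ux * ds_dx + uy * ds_dy) + Dc * lap)
    s_new[0, nx//2 - 2: nx//2 + 2] = 1.0         # keep injecting LVF
    return np.clip(s_new, 0.0, 1.0)              # np.roll wraps edges; toy only

for _ in range(200):
    s = step(s, dt=5e-6)
print("swept fraction (s1 > 0.15):", float(np.mean(s > 0.15)))
```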
Figure 1: Schematic of a cubical porous computational domain having an encapsulated fluid 2 (HVF) and a cylindrical hole to inject fluid 1 (LVF).
Figure 2: Definition of different length scales \(R_{I},R_{O}\) for quantifying finger growth.
The computational domain used for this investigation is a 3D cubical homogeneous porous region of porosity \(\epsilon_{p}\) and permeability '\(\kappa\)' as shown in figure 1. We inject LVF of viscosity \(\mu_{1}\) with a vertical velocity of \(u=U_{0}\). This LVF sweeps an HVF of viscosity \(\mu_{2}\) encapsulated in the porous domain. The cylindrical inlet through which the LVF is injected has a diameter and height of 0.25 mm and 0.06 mm, respectively. The cubical domain has an edge length of 2.5 mm. To avoid the fingering caused by density variations, we assume that both fluids are of the same density, Newtonian in nature, incompressible, and do not react with each other. Therefore, the dynamics of these miscible fluids will primarily be affected by the concentration and the pressure gradients, while mobility differences will govern the dynamics of the viscous fingering. We use a tetrahedral mesh with a minimum element size of 0.5 \(\mu\)m and approximately three million total elements across the domain. The inflow velocity and concentration are \(U_{0}\) and \(s_{1}=1\). For the pressure at the inlet, we use a homogeneous Neumann boundary condition. At the outlet, homogeneous Neumann boundary conditions are used for the velocity, whereas, for the pressure, we use a homogeneous Dirichlet boundary condition (\(p=0\)). Also, to ensure no flux at the other transverse boundaries, the Neumann boundary condition \(-n\cdot\rho u=0\) is employed. The initial value of \(s_{1}\) is 0 throughout the cubical domain. A perturbation profile in the form of an arbitrary three-dimensional function \(f(x,y,z)\), amplified using \(\zeta\) (equation 9), is imposed at the junction of the cubical domain and cylindrical inlet. This setup is relevant to the oil industry, where a fluid (LVF) is injected to sweep another fluid (HVF) trapped in a porous domain. The viscosity mismatch between LVF and HVF results in viscous fingering, and the flow will be unstable. We define the length scales as shown in figure 2[17] to quantify the fingering patterns. The largest circle that encloses the region where the injected fluid has completely swept out the encapsulated fluid is known as the inner radius, \(R_{I}\). The outer radius, \(R_{O}\), is the smallest circle that encompasses the most distantly displaced fluid, and \(R_{F}=R_{O}-R_{I}\) represents the maximum expansion of the fingers at that time. The list of parameters used in the study is given in table 1.
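In the paper, the length scales \(R_{I}\), \(R_{O}\), and \(R_{F}\) are read off saturation contours; a possible grid-based post-processing routine that approximates the same quantities from a saturation snapshot is sketched below. The 0.15 interface threshold anticipates the value used later for the sweep-efficiency estimate, and the function name and radial binning are illustrative choices rather than details taken from the study.

```python
import numpy as np

def finger_radii(s1, center, threshold=0.15, dr=1.0):
    """
    Estimate R_I, R_O and R_F = R_O - R_I from a saturation field, following
    the definitions sketched in figure 2.

    s1        : 2D (or 3D) array of LVF saturation on a uniform grid
    center    : grid coordinates of the injection point
    threshold : saturation above which a cell counts as 'swept'
    dr        : radial step, in grid units
    """
    coords = np.indices(s1.shape)
    r = np.sqrt(sum((c - c0) ** 2 for c, c0 in zip(coords, center)))
    swept = s1 > threshold

    # R_O: smallest radius enclosing the most distantly displaced fluid.
    R_O = r[swept].max() if swept.any() else 0.0

    # R_I: largest radius within which every cell is swept.
    R_I = 0.0
    for radius in np.arange(dr, R_O + dr, dr):
        if np.all(swept[r <= radius]):
            R_I = radius
        else:
            break
    return R_I, R_O, R_O - R_I
```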
## III Results and Discussion
Hydrodynamic instability in energy resource production reduces sweep efficiency. This results in decreased oil production. Permeability differences between the fluids, viscous forces, gravity forces, capillary forces, and diffusion caused by concentration differences are the key factors that control such complicated phenomena. To study the role of such regulating factors for the instabilities in more diversified ways, we simulate cases for miscible fluids with different non-dimensional numbers as indicated in table 2. A comparison of the saturation iso-surfaces at different LVF injection velocities while keeping all other factors the same is shown in figures 3(a) and (b) at 40 ms and 60 ms, respectively. For case C2, the fingers are approximately uniform in shape and size. However, with increasing \(Re\) (cases C1 and C3), the fingers become wider and grow non-uniformly. Early breakthroughs will occur for higher injection velocity, along with early tip splitting. The fingers eventually converge due to the coalescing phenomenon when the instabilities approach the domain walls. The structure of the fingers has a substantial variation for different flow rates at the same time instances, as observed in figures 3(a) and (b).
The evolution of the flow occurs due to pressure difference and diffusion, governed by the Darcy and the convection-diffusion equation, respectively. The relationship between temperature, pressure, and the properties of the diffusing substance, such as the size of its molecules, affect the diffusion coefficient. To investigate the impact of changing the diffusion coefficient and keeping all the other parameters the same, we simulate cases by varying the values of \(Pe\) (see table 1). Figures 4(a) and (b) illustrate the iso-surface for LVF's saturation for the cases with variation in \(Pe\) (Case C1, C4, and C5) at 40 and 60 ms, respectively. When \(Pe\) is low (case C4), the diffusion of LVF dominates the viscous effects, and the fingering instabilities cease to evolve. The absence of finger-like patterns at the cross-sections of the domain signifies that the effect of the mobility difference is negligible for C4. The central finger displaces the encapsulated HVF because of shielding and spreading at low \(Pe\) (Case 4). Eventually, this central finger reaches downstream. For this case, our results are qualitatively comparable to the results of [40] for the same order of Peclet number but a higher viscosity ratio.
Figure 5 compares the evolution of the instability for case C6 for the logarithmic viscosity ratio \(\mathfrak{R}=3\) at times 20, 40, and 60 ms while figure 6 compares the patterns of instability for case C7 (\(\mathfrak{R}=4\)) at times 5, 10 and 20 ms. With an increase in \(\mathfrak{R}\), tip splitting of the fingers occur, which results in thinner fingers that travel downstream. For C1 (where \(\mathfrak{R}=2\)), the fluid expands more uniformly and sweeps the majority of the encapsulated fluid, whereas in C6 and C7 (\(\mathfrak{R}=3,4\) respectively), owing to the disordered finger growth lesser amount of fluid is swept out, before the breakthrough. For larger values of \(\mathfrak{R}\), the chaotic and nonlinear growth of the fingers occurs with earlier tip splitting, making the sweeping of the HVF by the LVF more challenging.
For the quantitative analysis of the instabilities, we measure the extremities of the fingers by marking the circles \((R_{I},R_{O})\) at the farthest and nearest point of fluid interaction (see figure 2). Using these dimensions, we calculated the finger length (\(R_{F}\)=\(R_{O}-R_{I}\)) and
the extent of the maximum fluid displaced from the domain (\(R_{I}\)). For the cases mentioned in table 2, we examine the degree of instability, sweep efficiency, and breakthrough characteristics. The fingering growth (\(R_{F}\)) over time is shown in different plots as a function of the non-dimensional parameters such as the Reynolds number (figure 7), the Peclet number (figure 8), and the mobility ratio (figure 9).
As shown in figure 7, an increase in the influx generates early and pronounced instability patterns that intensify with time(see case C3(\(2Re,Pe\),2)). Fingers repeatedly split owing to local instability resulting in narrow fingers that move downstream with coalescence. The slow-growing fingers for lower influx (case C2) eventually sweep out the encapsulated fluid more efficiently. This finding concurs with the qualitative findings of [27] that a low flow rate displaces encapsulated fluid more efficiently. At a later time, the rate of growth of the fingers for cases C1, C2, and C3 becomes approximately similar.
The saturation iso-contour plots (figure 4(a) and (b)) for Case C1, C4, and C5 at different times demonstrate that the instabilities have a noticeable dependency on the Peclet number. For diffusion coefficients of the order \(10^{-8}\) or higher (where \(Pe\geq O(4)\)), the length of the finger remains the same (see figure 8). Also, the finger growth increases monotonically for C4(\(Re,0.1Pe\),2), while for cases C1(\(Re,Pe\),2) and C5(\(Re,10Pe\),2), it rapidly increases initially but becomes uniform eventually. We find an early onset of finger patterns for flow with higher \(Pe\) (case C1 and C5). The evolution of these cases over time shows extensive tip splitting and coalescence before the breakthrough. However, after coalescence, instabilities' expansion is delayed by shielding and spreading. As a result, the flow becomes less chaotic. In contrast, C4 has fewer fingers owing to higher diffusion. It is also reported [41] that the instability is more prominent with larger \(Pe\) due to slow fluid transport for rectilinear Hele-Shaw cells. Additionally, linear stability analysis is used by [11] to explain the stability criterion based on \(Pe_{c}\) (critical Peclet number) for radial source flow and concluded that the flow becomes unstable when \(Pe>Pe_{c}\). Since we found that the flow is stable for C4 (lower Pe than other cases), whereas C1 and C5 exhibit instabilities, it indicates that for the range of our considered \(Pe\) values, there will be some critical value of \(Pe\) at which the stability behavior will change. Despite having an order of magnitude difference in \(Pe\), cases C1 and C5 demonstrate similarity in finger growth. This observation signifies that the variation in the fingering instability is negligible beyond a certain value of the diffusion coefficient provided the \(Re\) and
\begin{table}
\begin{tabular}{c c c c c}
**Cases** & **Re** & **Pe** & \(\mathfrak{R}\) & **Nomenclature** \\ \hline
Case 1 & _Re_=0.30625 & _Pe_=1.25\(\times 10^{4}\) & 2 & C1(\(Re,Pe\),2) \\
Case 2 & 0.5_Re_=0.153125 & _Pe_=1.25\(\times 10^{4}\) & 2 & C2(\(0.5Re,Pe\),2) \\
Case 3 & 2_Re_=0.6125 & _Pe_=1.25\(\times 10^{4}\) & 2 & C3(\(2Re,Pe\),2) \\
Case 4 & _Re_=0.30625 & 0.1_Pe_=1.25\(\times 10^{3}\) & 2 & C4(\(Re,0.1Pe\),2) \\
Case 5 & _Re_=0.30625 & 10_Pe_=1.25\(\times 10^{5}\) & 2 & C5(\(Re,10Pe\),2) \\
Case 6 & _Re_=0.30625 & _Pe_=1.25\(\times 10^{4}\) & 3 & C6(\(Re,Pe\),3) \\
Case 6* (double-sized domain) & _Re_=0.30625 & _Pe_=1.25\(\times 10^{4}\) & 3 & C6*(\(Re,Pe\),3) \\
Case 7 & _Re_=0.30625 & _Pe_=1.25\(\times 10^{4}\) & 4 & C7(\(Re,Pe\),4) \\
\end{tabular}
\end{table}
Table 2: Cases for the study
\begin{table}
\begin{tabular}{c c c}
**Parameters** & **Detail of Parameters** & **Values** \\ \hline
\(L_{x}=L_{y}=L_{z}\) & Length, width, and height of cubical domain & 2.5 mm, 5 mm \\
r and h & Radius and height of injection hole & 0.125 mm, 0.06 mm \\
\(\mathfrak{R}\) & Logarithmic mobility ratio & 2, 3, 4 \\
\(U_{o}\) & Injection speed & 0.5, 1, 2 m/s \\
\(\mu_{1}\) & Viscosity of displacing fluid & 0.001 Pa s \\
\(\rho\) & Density of both the fluids & \(1.225\ kg/m^{3}\) \\
\(\epsilon_{p}\) & Porosity & 0.5 \\
\(\kappa\) & Permeability & \(10^{-6}\ m^{2}\) \\
\(\mathfrak{D}\) & Diffusion coefficient & \(2\times 10^{-8},2\times 10^{-9},2\times 10^{-7}\ m^{2}/s\) \\
\(\mathfrak{D}_{c}\) & Capillary diffusion coefficient & \(\epsilon_{p}\times\mathfrak{D}\) \\
\(\zeta\) & Amplitude of the disturbance & 0.01 \\
\(\kappa_{r1}=\kappa_{r2}\) & Relative permeability of fluids & 1 \\
\(Re,2Re,0.5Re\) & Reynolds number & 0.30625, 0.6125, 0.153125 \\
\(Pe,10Pe,0.1Pe\) & Péclet number & \(1.25\times 10^{4},1.25\times 10^{5},1.25\times 10^{3}\) \\
\end{tabular}
\end{table}
Table 1: List of parameters
the logarithmic viscosity ratio remain the same. [40] also reported similar unchanged fingertip extensions for a fixed viscosity ratio and an increasing Peclet number.
To maximize the sweep efficiency, researchers have explored the effect of different LVF to HVF ratios [17; 40]. By adjusting the non-dimensional number \(\mathfrak{R}\) in figure 9, we assess the impact of this variation on finger length. The initial growth rates for cases C1 and C6 are similar, but for case C7, a chaotic growth of the fingers is initially observed. Subsequently, due to the erratic expansion of the fingers for high viscosity difference in LVF and HVF (case C6 and C7), there is a drastic increment in the finger length. However, for case C6 the growth of the fingers ceases later, probably owing to the smaller size of the domain. To ensure that the growth of the fingers is indeed constrained by the size of the domain for C6, another case (C6\({}^{*}\)) is simulated with the same flow parameters and properties as in C6, except that the size of the cubical domain is twice that of C6. The rate of growth of the fingers initially increases and becomes asymptotic at later times for C6\({}^{*}\). Surprisingly, we found that the size of the domain barely affects the sweep efficiency (\(\eta_{sw}\)) that remains similar for both C6 and C6\({}^{*}\) (see figure 11). For the rest of the cases, the finger length keeps increasing (figures 7, 8, 9), signifying that the domain effects are negligible on the growth of the fingers. The growth of fingers for C7 is highly non-linear and requires a significant amount of computational time. Therefore, we show the finger growth for C7 in figure 9 till the first breakthrough.
These fingering patterns grow with time and move downstream by making channels without extracting the complete encapsulated HVF. \(R_{I}\) (figure 2) signifies the complete extraction of fluid. Enhanced oil recovery (EOR) techniques aim to maximize the sweep effectiveness (increasing \(R_{I}\))[32],[37],[40],[42]. Figure 10 compares the extent of finger growth to the maximum total sweep-out of HVF (\(R_{F}/R_{I}\)) to most distantly displaced fluid (\(R_{O}\)). It helps to explore the ideal combination of all the parameters that may result in optimal HVF extraction. For a given value of \(R_{O}\), C7\((Re,Pe,4)\) demonstrates the highest value of \(R_{F}/R_{I}\), indicating the least sweeping of HVF. The lower value of \(R_{F}/R_{I}\) signifies a lesser number of instabilities and a higher value of \(\eta_{sw}\) (see figure 11). The flow is less stable if the LVF injection rate is high or the diffusion
Figure 3: Iso-contours of LVF saturation (\(s_{1}\)) at times (_a_) 40 ms, and (_b_) 60 ms for the cases with different \(Re\).
coefficient is low (see cases C3 and C5). In all the cases, the slope of \(R_{F}/R_{I}\) is initially high, indicating that at those instants the finger length is larger than the completely swept radius (\(R_{I}\)). Coalescence of the fingers makes the fingers shorter and enhances the spreading of LVF in the domain.
Researchers have made several technological advances to smoothly and quickly sweep HVF to achieve breakthrough [24, 43, 44, 45]. The instability patterns depend on tip splitting, coalescence, displacing fluid injection rate, and the dominant force (viscous or diffusive
Figure 4: Iso-contours of LVF saturation (\(s_{1}\)) at times (_a_) 40 ms, and (_b_) 60 ms for cases C1, C4, and C5 having variation in \(Pe\) as \(Pe,0.1Pe,10Pe\) respectively.
Figure 5: Iso-contours of LVF saturation (\(s_{1}\)) for case C6(\(Re,Pe\),3) at times 20, 40, and 60 ms respectively.
force). These factors also influence the downstream flow and hence the breakthrough characteristics. The first breakthrough time for all the cases is given in table 3. Early breakthrough can occur due to highly chaotic patterns, slow diffusion of fluid, or high LVF injection rate. Case C2 has a low-pressure gradient and a slow rate of
Figure 8: Finger growth versus time for different \(Pe\).
Figure 7: Finger growth versus time for different \(Re\).
Figure 6: Iso-contours of LVF saturation (\(s_{1}\)) for case C7(\(Re,Pe\),4) at times 5, 10, and 20 ms respectively.
Figure 9: Finger growth versus time for different values of \(\Re\).
injection of LVF, while Case C4 has better mixing and a higher diffusion rate, which means that the flow will take longer to reach downstream. For C3, breakthrough occurs comparatively early due to the rapid movement of the high-velocity LVF, whereas for C6 and C7, chaotic finger growth reaches downstream relatively quickly. A balance between breakthrough and sweep characteristics is required to achieve optimal HVF sweeping. Therefore, we have calculated the sweep efficiency (\(\eta_{sw}\)) for all the cases at the time of their first breakthrough. We define the sweep efficiency (\(\eta_{sw}\)) as the ratio of the volume of the LVF injected at the time of breakthrough to the volume of the cubical domain. To evaluate the volume of the injected LVF, we take the interface saturation value to be 0.15 [40]. Figure 11 demonstrates the sweep efficiency (\(\eta_{sw}\)) for all the cases at their first breakthrough time. The cases with a higher diffusion coefficient (Case C4) or slower injection (Case C2) take more time to sweep out the encapsulated HVF. The flow is stable for these cases, which therefore have comparatively higher \(\eta_{sw}\). The \(\eta_{sw}\) of C4 with \(Pe=1250\) and \(\mathfrak{R}=2\) is comparable with that reported by [37] for \(Pe\)=800 and \(\mathfrak{R}=2\). Additionally, we find a reduction in \(\eta_{sw}\) with increasing \(Pe\) similar to that demonstrated by [40; 41]. Moreover, C1 and C5 have similar values of \(\eta_{sw}\), indicating that the effect of the variation in \(Pe\) after a certain value is negligible [11]. The logarithmic mobility ratio, \(\mathfrak{R}\), is an important parameter for \(\eta_{sw}\), as with a larger difference in viscosity, it drastically decreases (Case C1 to C6 and C7). Therefore, for optimum oil extraction or similar applications, tuning all these parameters is imperative.
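One plausible numerical reading of this sweep-efficiency definition, given here only as an illustration, is to count the domain volume occupied by cells whose saturation exceeds the 0.15 interface value in the snapshot at the first breakthrough; the helper below implements that reading (the function name and arguments are illustrative, not taken from the study).

```python
import numpy as np

def sweep_efficiency(s1, cell_volume, domain_volume, threshold=0.15):
    """
    Sweep efficiency eta_sw at breakthrough: ratio of the LVF volume inside
    the cubical domain to the domain volume.  Cells are counted as LVF-filled
    where the saturation exceeds the interface value of 0.15; summing s1
    itself over those cells is an alternative reading of the definition.
    """
    lvf_volume = np.count_nonzero(s1 > threshold) * cell_volume
    return lvf_volume / domain_volume

# Illustrative use with a saturation snapshot at the first breakthrough time:
# eta = sweep_efficiency(s1_at_breakthrough, cell_volume=h**3,
#                        domain_volume=(2.5e-3)**3)
```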
## IV Conclusions
We perform numerical simulations to study the influence of different parameters such as \(Re\), \(Pe\), and the logarithmic mobility ratio, \(\mathfrak{R}\), on the dynamics of viscous fingering in a three-dimensional cubical domain. Additionally, we assess the role of the fingers in sweeping the encapsulated high-viscosity fluid out of the domain. This investigation has applications in the oil industry, where oil is extracted by injecting a low-viscosity fluid. We inject a low-viscosity fluid from a cylindrical hole into a cubical domain containing a porous medium. The \(Re\) associated with this problem is defined based on the diameter of this hole, the injection velocity, and the viscosity of the LVF. Similarly, we define \(Pe\) as the ratio of the rate of advection to the rate of diffusion of the LVF into the HVF. We present a qualitative and quantitative assessment of the growth of the fingering instability in terms of the iso-surface contours and the extremities of the fingers. At high \(Re\), the fingers evolve non-uniformly, resulting in earlier tip splitting and breakthrough compared to low \(Re\). Owing to the uniform growth of the fingers at low \(Re\), the LVF displaces the encapsulated HVF more efficiently. Similarly, for lower values of \(Pe\), the diffusion of LVF is high, causing the fingering pattern to cease and yielding an efficient sweeping of HVF compared to the cases with larger \(Pe\). We also observe that for lower values of \(\mathfrak{R}\), the fluid expands uniformly and sweeps HVF more efficiently, as compared to higher values of \(\mathfrak{R}\), which cause chaotic and non-linear growth of fingers. We further evaluate and report the sweep efficiency of all the cases. We observe that the sweep efficiency is high for the cases with a low injection velocity or high diffusion coefficient. This finding is attributed to a stable flow with fewer fingers developed within the domain. In contrast, the sweep efficiency drastically reduces with an increase in the ratio of the viscosity of the displaced to the displacing fluid owing to the development of finer fingers that interact and grow
Figure 11: Comparison of Sweep Efficiency (\(\eta_{sw}\)) for different cases
Figure 10: Comparison of the extent of finger growth to the maximum total sweep-out fluid (\(R_{F}/R_{I}\)) versus most distantly displaced fluid \(R_{O}\).
non-linearly. Therefore, to achieve an efficient sweeping of the high-viscosity fluid out of the domain, all the associated parameters should be combined such that the non-linear growth of the fingers is prohibited.
|
2309.16785 | Optical and spin coherence of Er$^{3+}$ in epitaxial CeO$_2$ on silicon | Solid-state atomic defects with optical transitions in the telecommunication
bands, potentially in a nuclear spin free environment, are important for
applications in fiber-based quantum networks. Erbium ions doped in CeO$_2$
offer such a desired combination. Here we report on the optical homogeneous
linewidth and electron spin coherence of Er$^{3+}$ ions doped in CeO$_2$
epitaxial film grown on a Si(111) substrate. The long-lived optical transition
near 1530 nm in the environmentally-protected 4f shell of Er$^{3+}$ shows a
narrow homogeneous linewidth of 440 kHz with an optical coherence time of 0.72
$\mu$s at 3.6 K. The reduced nuclear spin noise in the host allows for
Er$^{3+}$ electron spin polarization at 3.6 K, yielding an electron spin
coherence of 0.66 $\mu$s (in the isolated ion limit) and a spin relaxation of
2.5 ms. These findings indicate the potential of Er$^{3+}$:CeO$_2$ film as a
valuable platform for quantum networks and communication applications. | Jiefei Zhang, Gregory D. Grant, Ignas Masiulionis, Michael T. Solomon, Jasleen K. Bindra, Jens Niklas, Alan M. Dibos, Oleg G. Poluektov, F. Joseph Heremans, Supratik Guha, David D. Awschalom | 2023-09-28T18:22:56Z | http://arxiv.org/abs/2309.16785v1 | # Optical and spin coherence of Er\({}^{3+}\) in epitaxial CeO\({}_{2}\) on silicon
###### Abstract
Solid-state atomic defects with optical transitions in the telecommunication bands, potentially in a nuclear spin free environment, are important for applications in fiber-based quantum networks. Erbium ions doped in CeO\({}_{2}\) offer such a desired combination. Here we report on the optical homogeneous linewidth and electron spin coherence of Er\({}^{3+}\) ions doped in CeO\({}_{2}\) epitaxial film grown on a Si(111) substrate. The long-lived optical transition near 1530 nm in the environmentally-protected 4f shell of Er\({}^{3+}\) shows a narrow homogeneous linewidth of 440 kHz with an optical coherence time of 0.72 \(\mu s\) at 3.6 K. The reduced nuclear spin noise in the host allows for Er\({}^{3+}\) electron spin polarization at 3.6 K, yielding an electron spin coherence of 0.66 \(\mu s\) (in the isolated ion limit) and a spin relaxation of 2.5 ms. These findings indicate the potential of Er\({}^{3+}\):CeO\({}_{2}\) film as a valuable platform for quantum networks and communication applications.
## 1 Introduction
Rare-earth (RE) ions in dielectric solid-state hosts provide a promising platform for developing quantum memories [1, 2, 3] in quantum repeaters [1, 4] for use in quantum communication networks because of their combination of stationary matter qubits and flying photon qubits, provided by transitions of their environmentally protected 4f shell electrons. This spin-photon interface [1, 5] is characterized by long-lived spin states with a long coherence time [5] and a narrow optical homogeneous linewidth [6, 7]. These unique properties of RE ions have been explored to demonstrate quantum memories in atomic vapor using the DLCZ (Duan, Lukin, Cirac and Zoller) protocol [8] and in solids using electromagnetically induced transparency [2, 3], photon echo [2, 3] and atomic frequency comb [9, 10] techniques. Progress has also been made in realizing entanglement distribution [11] and quantum transduction [12, 13]. All these demonstrations are enabled by a combination of RE ion and crystalline host properties.
The implementation of quantum networks for quantum communication applications demands the realization of entanglement distribution of quantum information over long distances. Thus, it is desirable for the matter qubits/nodes to be able to interface with telecommunication C-band photons to leverage the existing optical fiber networks providing minimum optical loss. Trivalent erbium ions (Er\({}^{3+}\)) embedded in rare-earth oxides have \({}^{4}I_{13/2}\) to \({}^{4}I_{15/2}\) optical transitions in the telecom C-band and, thus, have gained attention as a candidate system aimed at developing telecom-compatible quantum memories needed in quantum repeaters. Approaches have been explored to store information in Er\({}^{3+}\) long-lived optical transitions using photon-echo techniques [2, 4, 14] with retrieval efficiencies up to 40%, [15] but with fidelities of recalled states well below the classical limit of 1/2 and no-cloning limit of 2/3 [16]. These figures-of-merit are critical to prevent eavesdropping in quantum communication networks and enable acceptable levels of quantum error correction in distributed fault-tolerant quantum computing. Therefore, long storage times and efficient retrieval of states with high fidelity are essential for quantum memories. The optical storage times are limited by the optical coherence time, typically less than 1 \(\mu\)s [2, 4, 14], below the proposed requirement for quantum repeater based long-distance quantum networks [17]. The use of Er\({}^{3+}\) spin states has emerged as a promising alternative to store photon information with much longer storage times, [18, 19] where a collective spin relaxation in the atomic ensemble is used as a local memory and reconverted to a photon through a collective interference effect [4]. Therefore, it is critical to realize long-lived spin states with coherence times orders of magnitude longer than the optical excited state lifetime for efficient optical control of the spin state and for subsequently suitable long-term quantum state storage [5, 18].
As a Kramers ion, Er\({}^{3+}\) has a non-zero electronic magnetic moment that poses an intrinsic limit on the electronic spin relaxation and, therefore, the spin coherence. In addition, the presence of fluctuating magnetic field noise induced by the intrinsic electronic and nuclear spins in host materials further reduces the spin coherence [20]. In some spin qubit-host systems, approaches have been taken to engineer the nuclear spin bath density through isotopic purification of the host material [5, 21, 22, 23], reduction of unintended spin defects during synthesis, and optimization of defect creation [5] and even isotopic doping processes using \({}^{167}\)Er [24] to improve upon coherence properties of the targeted spin qubits. Alternatively, finding host materials with a low natural abundance of isotopes carrying non-zero nuclear spin is another viable pathway towards improving spin properties by minimizing spin noise in the host for network applications [5, 25]. An electron spin coherence time of 23 ms has been reported for Er\({}^{3+}\) at 10 mK in CaWO\({}_{4}\), which has a low nuclear spin environment with only the 14%-abundant \({}^{183}\)W isotope of natural tungsten, with nuclear spin \(I=\frac{1}{2}\), contributing to the spin noise in the host [26].
To this end, cerium dioxide (CeO\({}_{2}\)) with cerium contributing zero nuclear spin and oxygen carrying only 0.04% (\({}^{17}\)O), is a promising potential host for quantum spin \(S=\frac{1}{2}\) systems with a theoretically predicted coherence time up to 47 ms [5, 25]. Recently, we demonstrated the molecular beam epitaxy of single-crystal Er-doped CeO\({}_{2}\) films on Si(111) substrates and the doping dependence on Er\({}^{3+}\) optical and spin linewidths [27]. In this work, we make use of these films with low doping levels (3 parts-per-million (ppm)) to explore the intrinsic optical homogeneous linewidths and electron spin coherence of Er\({}^{3+}\) in CeO\({}_{2}\). Using two pulse photon-echo measurements, we demonstrate that the Er\({}^{3+}\) ions have long-lived optical states with a narrow homogeneous linewidth of \(\sim\) 440 kHz and optical coherence of \(\sim\) 0.72 \(\mu\)s at 3.6 K. Temperature dependent data suggests that the homogeneous linewidth could be \(<\) 200 kHz at millikelvin temperatures with optical coherence \(>\) 1.6 \(\mu\)s, indicating the promising potential of Er\({}^{3+}\) in CeO\({}_{2}\) providing a usefully long optical quantum memory. Moreover, the reduced magnetic field noise from a low nuclear spin environment in the CeO\({}_{2}\) film enables electron spin polarization with a slow spin-lattice relaxation, thereby enabling access to the electron spin dynamics even at 3.6 K, which is not observable in other, well studied, host materials including Y\({}_{2}\)SiO\({}_{5}\), YVO\({}_{4}\), CaWO\({}_{4}\)[28, 29, 30]. As demonstrated here, the Er\({}^{3+}\) ions in CeO\({}_{2}\) show a spin coherence time \(T_{2}\sim 0.66\)\(\mu s\) at the isolated ion limit with a spin relaxation time \(T_{1}\sim\) 2.5 ms, indicating the potential for millisecond scale spin coherence.
The narrow optical homogeneous linewidth could enable a path towards integration with nanophotonic cavities to drive Er\({}^{3+}\) optical transitions coherently at even individual ion level to explore time-dependent spectral diffusion [7], critical for entanglement distribution for quantum repeaters. Therefore, the combined narrow optical homogeneous linewidth and long spin relaxation time indicates the potential of such an Er-doped CeO\({}_{2}\) platform in providing attributes necessary for efficient optical control of long-lived and coherent spin states for the development of quantum memories.
## 2 Results
### Er\({}^{3+}\) energy structure: crystal field split levels
The Er\({}^{3+}\) doped CeO\({}_{2}\) sample was epitaxially grown on Si (111)\(\pm\)0.5\({}^{\circ}\) substrate using molecular beam epitaxy (MBE, details in Materials and Methods). A total thickness of 936 nm of single crystal CeO\({}_{2}\) with a fluorite structure unit cell was grown and doped with a natural abundance of Er\({}^{3+}\) isotopes, comprising 77% nuclear-spin-zero even isotopes \({}^{166}\)Er\({}^{3+}\) and 23% of the odd isotope \({}^{167}\)Er\({}^{3+}\) with nuclear spin \(I=\frac{7}{2}\). The total Er\({}^{3+}\) concentration is estimated to be 3 ppm based on Er beam flux, and detailed information on growth and structural characterization can be found in prior work [27].
Er\({}^{3+}\) ions have 11 electrons in the 4f shell that lead to the first two spin-orbit split multiplets as \({}^{4}I_{15/2}\) and \({}^{4}I_{13/2}\). These multiplets are further split into multiple levels due to the presence of a crystal field. Given the cubic symmetry of the crystal field in CeO\({}_{2}\)[31, 27], the \({}^{4}I_{15/2}\) and \({}^{4}I_{13/2}\) multiplets split into 5 levels, labeled respectively as \(Z_{1}\) to \(Z_{5}\) and \(Y_{1}\) to \(Y_{5}\), in the order from the lowest to highest energy, as shown in Fig. 1(a) [31]. \(Z_{1}\) and \(Z_{2}\) are two-fold degenerate states with effective spin \(S=\frac{1}{2}\) that transform into irreducible representations \(\Gamma_{6}\) and \(\Gamma_{7}\). The higher three Z levels (\(Z_{3}\) to \(Z_{5}\)) are four-fold degenerate with effective spin \(S=\frac{3}{2}\) that transform into irreducible representation \(\Gamma_{8}\).
The crystal field split levels of Er\({}^{3+}\) are probed through temperature and power dependent photoluminescence (PL) measurements, with the Er\({}^{3+}\) ions excited by a 1473 nm laser whose photon energy is higher than the \({}^{4}I_{13/2}\rightarrow^{4}I_{15/2}\) transition, and a spectrometer resolution of 20 GHz (0.16 nm, 84 \(\mu\)eV). This resolution is sufficient to resolve crystal field split transitions that are typically in the hundreds of GHz to THz range [32]. At 3.6 K, PL occurs primarily from \(Y_{1}\) to all the Z levels due to the rapid non-radiative relaxation of electrons from higher Y levels to the \(Y_{1}\) level. Four emission peaks are observed in Fig.1(b). These are identified to be \(Y_{1}\) to \(Z_{1}-Z_{4}\) transitions as marked with black arrows. The lack of emission to the \(Z_{5}\) level may be due to its small transition dipole moment. The higher Y levels are probed by altering the Boltzmann distribution of electrons through increasing the sample temperature from 3.6 K to 150 K, as shown in Fig. 1(b). Higher Y level transitions are thus identified based on their temperature dependent behavior and the energy separations between them. With a continued increase of temperature, higher Y levels become populated, and we clearly observe the \(Y_{1}\) to \(Y_{4}\) levels. The \(Y_{5}\) level transitions may lie at wavelengths shorter than 1500 nm and are thus not collected in the measurement setup; see the Supplementary Information Section S1 (SI.S1). The intensity of emission from these identified \(Y_{1}\) to \(Y_{4}\) levels also matches the expected behavior from a Boltzmann distribution of electrons at these temperatures (see SI.S2). The table shown in Fig. 1(c) summarizes the energy structure of the \(Z_{1}\) to \(Z_{5}\) and \(Y_{1}\) to \(Y_{5}\) levels.
The \(Y_{1}\to Z_{1}\) transition is found to be at 1530.74 nm (195.84 THz). The \(Y_{1}\) level is separated from the \(Y_{2}\) level by 1.13 meV (281.9 GHz). Similarly, the \(Z_{1}\) level is separated from the \(Z_{2}\) level by 1.51 meV (357.5 GHz). In this study, we focus on the \(Y_{1}-Z_{1}\) transition at 1530.74 nm because this transition allows for optical control of the electronic \(S=\frac{1}{2}\) spin ground state (\(Z_{1}\) level). As shown later, this transition also has a narrow optical homogeneous linewidth. All studies of the optical homogeneous linewidth and of the electron spin coherence and relaxation are carried out at 3.6 K. Given the energy separation between the \(Z_{1}\) and \(Z_{2}\) levels, there is \(\approx 0.8\%\) electron population of the \(Z_{2}\) level due to Boltzmann statistics. Thus, in all optical measurements resonantly addressing the \(Y_{1}-Z_{1}\) transition, one can treat the system as an effective two-level system involving only the \(Z_{1}\) level and ignore the population of electrons in higher Z levels. However, for spin coherence
Figure 1: Crystal field split energy levels of the \({}^{4}I_{15/2}\) and \({}^{4}I_{13/2}\) multiplets of Er\({}^{3+}\) ions in CeO\({}_{2}\). (a) Schematic of the crystal field splitting of the \({}^{4}I_{15/2}\) and \({}^{4}I_{13/2}\) multiplets, with 5 levels each, labeled as \(Z_{1}\) to \(Z_{5}\) and \(Y_{1}\) to \(Y_{5}\). (b) Temperature dependent PL spectra of Er\({}^{3+}\) ion emission with Er\({}^{3+}\) ion excited by a 1473 nm laser with excitation power of 1000 \(\mu\)W on the sample surface, 5 times the power needed to saturate \(Y_{1}-Z_{j}\) level transition (details in SI.S2). (c) Table summarizing the crystal field split energy levels.
measurements, the population of the \(Z_{2}\) level becomes more significant and magnifies in the study of spin relaxation dynamics of \(Z_{1}\) level, as will be discussed later.
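As an illustrative check of the quoted \(\approx 0.8\%\) thermal population of the \(Z_{2}\) level at 3.6 K, the short snippet below evaluates the Boltzmann factor for the 1.51 meV \(Z_{1}-Z_{2}\) splitting; since both levels are doublets, the degeneracy factors cancel. This is a back-of-the-envelope check, not part of the original analysis.

```python
import numpy as np

k_B = 8.617333e-5     # Boltzmann constant, eV/K
dE  = 1.51e-3         # Z1 - Z2 splitting from Fig. 1(c), eV
T   = 3.6             # measurement temperature, K

# Two-level Boltzmann estimate, neglecting the higher-lying Z levels whose
# Boltzmann factors are much smaller at 3.6 K.
boltz = np.exp(-dE / (k_B * T))
p_Z2  = boltz / (1.0 + boltz)
print(f"Z2 population at 3.6 K: {p_Z2:.3%}")   # ~0.8%, as quoted in the text
```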
### Optical coherence of Er\({}^{3+}\) emission
With the identification of the crystal field split levels of the \({}^{4}I_{13/2}\rightarrow^{4}I_{15/2}\) transition, we focus on the \(Y_{1}\to Z_{1}\) transition to probe the inhomogeneous and homogeneous linewidths. The inhomogeneous linewidth is probed using photoluminescence excitation spectroscopy (PLE); the measured spectrum is shown in Fig.2(a). It is obtained using the optical pulse sequence schematically shown in the inset of Fig.2(a) (see Materials and Methods for details) while scanning the laser frequency across the \(Y_{1}-Z_{1}\) transition with a step size of 0.625 GHz. The Lorentzian fit to the data indicates an inhomogeneous linewidth of \(\Gamma_{\rm{inh}}=9.0\pm 0.2\) GHz (0.07 nm or 37 \(\mu\)eV), comparable to other MBE-grown Er\({}^{3+}\)-doped oxide films such as Y\({}_{2}\)O\({}_{3}\) and TiO\({}_{2}\)[33, 34]. Compared with Er\({}^{3+}\) ions in other bulk low nuclear spin bath host materials, such as YSO (Y\({}_{2}\)SiO\({}_{5}\)), Y\({}_{2}\)O\({}_{3}\), and CaWO\({}_{4}\), the observed linewidth is around a factor of ten higher [35, 36]. This is likely due to the relatively high density of threading dislocations and unintended defects in the epitaxial CeO\({}_{2}\) film on Si originating from the 0.5% lattice mismatch strain [27]. The observed signal is predominantly from \({}^{166}\)Er ions. The emission from the 23% of \({}^{167}\)Er is buried under the observed broad inhomogeneous peak. Therefore, we are unable to resolve the hyperfine splitting from \({}^{167}\)Er.
Besides inhomogeneous linewidth, the homogeneous linewidth is another important figure-of-merit for an optical transition. One can extract the homogeneous linewidth through the measurement of optical coherence (\(T_{2}\)) where \(\Gamma_{\rm{hom}}=\frac{1}{\pi T_{2}}\)[37], and we do so via two-pulse photon-echo (PE) measurement to probe the homogeneous linewidth. Fig.2(b) shows the measured integrated echo intensity as a function of \(\tau\) at 3.6 K using the pulse sequence schematically shown in the inset (details in Materials and Methods and SI.S3). The data show a single exponential decay envelope of the photon echo amplitude modulated with an oscillating beat pattern. The beating pattern indicates that we are coherently addressing of a superposition of two transitions in a three-level system with the energy separation of two of the levels being within the bandwidth of the optical pulse. The red line is a fit to the data considering a single exponential decay with an added frequency of oscillation \(f=1/T_{\rm{osc}}\). The data indicate an optical coherence \(T_{2}=720.0\pm 33.1\) ns with homogeneous linewidth \(\Gamma_{\rm{hom}}=\frac{1}{\pi T_{2}}=442.1\pm 20.3\) kHz and a beating period of \(T_{\rm{osc}}=300.2\pm 11.8\) ns (\(f_{\rm{osc}}=3.33\pm 0.23\) MHz). The observed beating frequency is within the bandwidth of the optical pulse and also found to be consistent with the Zeeman splitting of the \(Z_{1}\) level due to earth's magnetic field at around 0.35 G. This suggests that the beating might be from the effect of earth's magnetic field lifting the degeneracy of \(Z_{1}\) level.
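The two short numerical checks below are not part of the original analysis; they only verify that the quoted homogeneous linewidth follows from the measured \(T_{2}\) via \(\Gamma_{\rm hom}=1/(\pi T_{2})\), and that the observed \(\sim\)3.3 MHz beat is consistent with the Zeeman splitting of the \(Z_{1}\) doublet in an Earth-like field of 0.35 G when the ground-state g-factor of \(\approx 6.8\) reported in the EPR analysis below is assumed.

```python
import numpy as np

# Homogeneous linewidth implied by the measured optical coherence time.
T2_opt    = 720.0e-9                      # s
Gamma_hom = 1.0 / (np.pi * T2_opt)        # Hz
print(f"Gamma_hom = {Gamma_hom/1e3:.0f} kHz")    # ~442 kHz

# Zeeman splitting of the Z1 doublet in the Earth's field (~0.35 G), using
# the ground-state g-factor reported in the EPR section below (assumption).
mu_B = 9.2740100783e-24                   # Bohr magneton, J/T
h    = 6.62607015e-34                     # Planck constant, J.s
g, B = 6.828, 0.35e-4                     # g-factor and field in tesla
f_Zeeman = g * mu_B * B / h
print(f"f_Zeeman = {f_Zeeman/1e6:.2f} MHz")      # ~3.3 MHz, close to f_osc
```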
The observed homogeneous linewidth, \(\Gamma_{\rm hom}=442.1\pm 20.3\) kHz, at 3.6 K without any externally applied magnetic field is orders of magnitude higher than the lifetime-limited \(\Gamma_{\rm hom}\sim 94\) Hz given the \(\sim\)3.4 ms radiative lifetime [27] of the \(Y_{1}-Z_{1}\) transition. To probe the dephasing processes, temperature dependent measurements of \(\Gamma_{\rm hom}\) are carried out using two-pulse PE to gain insight into the dephasing mechanisms occurring in the material, with temperatures ranging from 3.6 K to 5.5 K. Fig.2(c) shows the extracted \(\Gamma_{\rm hom}\) as a function of temperature. The beat frequency extracted at all temperatures is shown in Fig.2(d). The beat frequency is independent of temperature, which is consistent with its origin being the Zeeman splitting of the \(Z_{1}\) level induced by the earth's magnetic field. At these measured temperatures, two phonon-related processes contribute to dephasing: (a) coupling to two-level systems (TLS) [38, 39, 40] and (b) an Orbach process [41, 42], with a homogeneous linewidth \(\Gamma_{\rm hom}(T)\) of the following form:
\[\Gamma_{hom}(T)=\Gamma_{0}+\alpha_{\rm TLS}\cdot T+\alpha_{\rm phonon}\cdot exp (\frac{-\Delta E}{K_{B}T}) \tag{1}\]
where \(\Gamma_{0}\) is the linewidth at 0 K, \(\alpha_{TLS}\) is the coefficient for coupling to TLS, \(K_{B}\) is the Boltzmann constant.
Figure 2: Optical homogeneous linewidth and optical coherence of \(Y_{1}-Z_{1}\) transition. (a)PLE fine scan of \(Y_{1}-Z_{1}\) transition with a step size of 0.625 GHz (2.6 \(\mu\)eV) at 3.6 K with data shown as open circles. The Lorentzian fit to the data is shown as the solid curve, indicating an inhomogeneous linewidth of \(\Gamma_{\rm inh}=9.0\pm 0.2\) GHz. The inset shows the schematic of the employed pulse sequence. (b) Two-pulse photon echo (PE) decay for the \(Y_{1}-Z_{1}\) transition with the pulse sequence shown in the inset with data shown as open circles. The solid line is the fit to the data, indicating optical coherence \(T_{2}=720.0\pm 33.1\) ns and homogeneous linewidth \(\Gamma_{\rm hom}=442.1\pm 20.3\) kHz. The observed beating pattern is of beating period \(T_{\rm osc}=300.2\pm 11.8\) ns (\(f_{\rm osc}=3.33\pm 0.23\) MHz). (c) Temperature dependence of \(\Gamma_{\rm hom}\) measured by two-pulse PE with data shown as open circles). The solid line is the fit to the data using Eq.1. (d) Plot of the extracted \(f_{osc}\) (open circles) from two-pulse PE measurement as a function of temperature.
In the probed temperature range, the increase in \(\Gamma_{\rm hom}\) is dominated by Orbach relaxation. The solid line is the fit to the data using Eq.1 with \(\Delta E=2.05\) meV. The extracted \(\Delta E\) is consistent with the energy separation between the \(Z_{1}\) and \(Z_{2}\) levels obtained from PL measurements. Of the total linewidth broadening, \(\approx 150\) kHz is due to an Orbach process at 3.6 K, with the remaining 300 kHz of broadening coming from the combined contribution of \(\Gamma_{0}\) and the linear-in-temperature term \(\alpha_{\rm TLS}\cdot T\). The coefficient of this linear term is typically in the range of a few to tens of kHz/K [43, 44, 41] for rare-earth ions in oxides. One can thus deduce that \(\Gamma_{0}\) is most likely \(\leq 200\) kHz. This suggests that the dominant dephasing process might be spectral diffusion due to ion-ion dipolar interactions, given the short ensemble-average Er-Er separation in the sample (\(\sim\)14 nm, estimated from the Er concentration), or a fluctuating field induced by background charge and defects as well as strain in the film. It is worth noting that the sample studied here is grown without any optimization. One can further improve on the homogeneous linewidth by optimizing growth to reduce strain and minimize defects. There is also the path of reducing the concentration of Er\({}^{3+}\) to minimize ion-ion dipolar interaction induced spectral diffusion. Besides this, one can also improve the homogeneous linewidth by applying a moderate magnetic field to reduce the coupling of TLS to the dipole moment of Er\({}^{3+}\)[35, 41]. This can lead to a lower tunneling rate, thus reducing the magnetic noise caused by TLS. The field can also freeze Er spin flip-flop processes to reduce fluctuating-magnetic-field induced spectral diffusion and thus extend optical coherence.
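For readers who wish to reproduce the temperature-dependence fit, a minimal sketch of the Eq. (1) model is given below. The data arrays are placeholders rather than the published points of Fig. 2(c), and scipy's curve_fit is one possible fitting routine, not necessarily the one used here.

```python
import numpy as np
from scipy.optimize import curve_fit

k_B = 8.617333e-5   # Boltzmann constant, eV/K

def gamma_hom(T, gamma0, alpha_tls, alpha_orbach, dE=2.05e-3):
    """Eq. (1): constant + linear (TLS) term + Orbach term; dE in eV.
    dE is fixed here to the fitted 2.05 meV, but it can also be left free."""
    return gamma0 + alpha_tls * T + alpha_orbach * np.exp(-dE / (k_B * T))

# Hypothetical usage with the (T, linewidth) pairs of Fig. 2(c); the arrays
# below are placeholders, not the published data points.
# T_data = np.array([...])               # K
# G_data = np.array([...])               # kHz
# popt, pcov = curve_fit(gamma_hom, T_data, G_data, p0=[300.0, 10.0, 1e5])
```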
The obtained homogeneous linewidth \(\Gamma_{\rm hom}=442.1\pm 20.3\) kHz at 3.6 K suggests that a viable path exists for further engineering the light-matter interaction by integrating the Er ions in a cavity with a suitable quality factor to enhance the radiative transition rate through the Purcell effect close to its optical coherence limit [45, 46]. Further improvement of the homogeneous linewidth through growth optimization and reduction of spectral diffusion and phonon-mediated dephasing at millikelvin temperatures with an external magnetic field can aid in reaching sub-kHz homogeneous linewidths towards radiative-lifetime-limited coherent photon generation, which is needed for coherent optical control of the Er spin states.
### Er\({}^{3+}\) electron spin coherence
The optical coherence study discussed earlier indicates that Er\({}^{3+}\) ions in CeO\({}_{2}\) show a narrow optical homogeneous linewidth with promising potential of providing up to several microseconds of optical coherence at millikelvin temperatures. For the application and use of Er\({}^{3+}\) as the spin-photon interface needed for quantum memory, the Er\({}^{3+}\) electron spin coherence is the other important figure-of-merit. Next, we move on to study the Er\({}^{3+}\) electron spin behavior. We use a X-band (9.7 GHz) pulsed electron paramagnetic resonance (EPR) spectrometer to study spin coherence and relaxation. Fig. 3 (a) shows the measured spin echo response as a function of the static magnetic field. The data are taken with the \(\tau\) delay between the \(\pi/2\) (12 ns) and \(\pi\) (24 ns) pulses at 100 ns. The resultant spectrum shows the expected resonance from nuclear-spin-zero even isotopes \({}^{166}\)Er\({}^{3+}\) (primary peak) along with the hyperfine levels of 23% of the \({}^{167}\)Er\({}^{3+}\) with nuclear spin \(I=\frac{7}{2}\) (smaller secondary peaks), consistent with results in prior work [27]. Seven of the eight hyperfine peaks are spectrally clearly resolved with one of the hyperfine resonance hiding under the primary resonance peak at \(B_{0}=0.102\) T. The measured spectrum is fit using Zeeman and hyperfine terms while also accounting for the second-order perturbation effects from the large nuclear spin of \({}^{167}\)Er\({}^{3+}\). The extracted g value is \(g=6.828\pm 0.005\), consistent with the CW EPR work reported in prior work [27]. The g value matches theoretical estimates and reported values for Er\({}^{3+}\) in CeO\({}_{2}\) nanocrystals ([31, 47]). The obtained linewidth of the resonance of Er\({}^{3+}\) spin is \(2.57\pm 0.03\) mT(\(244.9\pm 2.9\) MHz), consistent with CW results[27]. This result confirms the expected EPR resonance of Er\({}^{3+}\) ion in a crystal with cubic crystal field symmetry.
The Er\({}^{3+}\) spin coherence time, \(T_{2}\), is probed via Hahn-echo measurements. The magnetic field is tuned to the resonance at \(B=0.102\) T, the resonance of the primary peak shown in Fig. 3 (a) under the applied 9.7 GHz microwave frequency. The echo signal is predominantly from the \({}^{166}\)Er electrons, with only 3.6% of the signal from the \({}^{167}\)Er electrons. Fig. 3 (b) shows the measured spin echo signal collected as a function of the delay (\(\tau\)) between the \(\pi/2\)- and the \(\pi\)-pulse at 3.6 K. From the fit of \(I\propto\exp(-2\tau/T_{2})\) (black curve in Fig. 3 (b)), one obtains \(T_{2}=0.249\pm 0.035\)\(\mu s\). The spin \(T_{2}\) is typically limited by phonon-induced dephasing at such elevated temperatures and by Er-Er spin dipolar interactions. The Er-Er dipolar spin interaction shifts the spin resonance. These shifts fluctuate, and spectral diffusion occurs because of the random spin reorientation resulting from spin-lattice interactions or spin diffusion. The dipolar interaction between spins manifests itself in the spin echo decay through so-called instantaneous diffusion [48, 49]. The spin coherence can be written
Figure 3: Electron spin coherence at 3.6 K probed by pulsed EPR with microwave drive at 9.7 GHz. (a) Resultant EPR spectrum from the two-pulse echo as the magnetic field (B) is swept. Data are taken using the Hahn-echo sequence schematically shown in panel (b) with a fixed delay time (\(\tau=100\) ns) between the \(\pi/2\) pulse and \(\pi\) pulse. (b) Spin echo measurement using the two-pulse Hahn-echo sequence schematically shown in the inset. Data (open circles) are taken as a function of the time delay \(\tau\) between the 12 ns \(\pi/2\) pulse and 24 ns \(\pi\) pulse. The solid black line is a single exponential fit revealing the spin coherence time \(T_{2}=0.249\pm 0.035\)\(\mu s\). (c) Generalized Hahn echo measurements on Er\({}^{3+}\) electron spins taken with three different flip angles \(\theta\) of the second rotation pulse. Solid lines are single exponential fits to the data. (d) Plot of the inverse of the spin coherence time \(T_{2}\), extracted from the fits to the data in panel (c), as a function of the averaged inversion pulse fidelity \(\langle\sin^{2}(\theta/2)\rangle\). A linear fit to the data (solid line) yields the spin coherence in the single-isolated-ion limit, \(T_{2}=0.660\)\(\mu s\), and a spin concentration of \(5.66\pm 0.25\) ppm in the sample.
as \(1/T_{2}=1/T_{2,\rm INST}+1/T_{2,\rm bath}\), where \(T_{2,\rm INST}\) represents the contribution from instantaneous diffusion. To understand the dominant dephasing mechanisms, we carry out instantaneous diffusion measurements [48] to probe and decouple the Er-Er spin dipolar interactions. A generalized Hahn echo sequence (\(\pi/2-\tau-\theta\)) is performed on the Er spins while the angle, and hence the fidelity, of the second inversion pulse is varied [48, 49]. The second pulse inhibits the decoupling of the probed spins' mutual dipolar interactions, resulting in decoherence through instantaneous diffusion. The echo signal (SE) thus decays exponentially with the averaged inversion pulse fidelity \(\langle\sin^{2}(\theta/2)\rangle\) and is given by [48]:
\[SE(\tau)\propto\exp\!\left(-\frac{8\pi^{2}}{9\sqrt{3}}\frac{g^{2}\beta^{2}}{\hbar}N\left\langle\sin^{2}(\theta/2)\right\rangle\,\tau\right) \tag{2}\]
Thus, \(1/T_{2,\rm INST}\) is proportional to \(\langle\sin^{2}(\theta/2)\rangle\), and one has the following equation for \(T_{2}\), where \(N\) is the total number of spins per \(\mathrm{m}^{3}\) and \(\beta\) is the Bohr magneton:
\[1/T_{2}=1/T_{2,\rm INST}+1/T_{2,\rm bath}=\frac{8\pi^{2}}{9\sqrt{3}}\frac{g^{2}\beta^{2}}{\hbar}N\left\langle\sin^{2}(\theta/2)\right\rangle+1/T_{2,\rm bath} \tag{3}\]
In the instantaneous diffusion measurements, the angle of the second rotation pulse \(\theta\) is varied by tuning the power of the microwave pulse (Supplementary Section S4), while keeping the pulse length unchanged so that the same ensemble of spins is rotated by the second pulse. Fig. 3 (c) shows the measured echo intensity as a function of \(\tau\) for three different rotation angles \(\theta\). A reduction of the rotation angle \(\theta\) reduces the spin flips induced by the microwave pulse and hence reduces instantaneous diffusion; the Er spin \(T_{2}\) increases from \(0.25\,\mu s\) to \(0.58\,\mu s\). The inverse of the extracted \(T_{2}\) obtained through the single exponential fit is shown in Fig. 3 (d). Following Eq. 3, the slope of the linear fit to the data in Fig. 3 (d) yields a density of probed Er spins of \((1.66\pm 0.08)\times 10^{22}\,\mathrm{m}^{-3}\), i.e., \(0.68\pm 0.03\) ppm. To estimate the overall concentration of Er, we need to take into account the fraction of probed Er out of the entire ensemble. The linewidth of the \(\mathrm{Er}^{3+}\) spin resonance is \(2.57\pm 0.03\) mT (\(244.9\pm 2.9\) MHz), around 8.3 times larger than the bandwidth of the \(\theta\) rotation pulse. This indicates that only 12% of the spins within the inhomogeneous distribution are probed. Therefore, the estimated total concentration of Er spins is \(5.66\pm 0.25\) ppm, within a factor of 2 of the Er concentration estimated from the Er flux used during MBE growth. The intercept of the linear fit provides an estimate of the spin coherence in the single-isolated-ion limit, \(T_{2}=T_{2,\rm bath}=0.660\pm 0.004\)\(\mu s\). Thus, the measured \(T_{2}\) in Fig. 3 (b) is largely limited by Er-Er spin-dipolar-interaction-induced instantaneous diffusion and could thus be improved by a reduction of the Er concentration. With the generalized echo sequence reducing instantaneous diffusion, the spin homogeneous linewidth \(\Gamma_{\rm h}=\frac{1}{\pi T_{2}}\) contributed by the bath is \(484.8\pm 20.6\) kHz. The deduced spin coherence \(T_{2,\rm bath}\) in the single-isolated-ion limit is probably limited by phonon-induced dephasing and by spectral diffusion induced by interaction with other defects in the film. Further work on studying the spin \(T_{2}\) at lower temperature to further probe the nature of the dephasing dynamics is underway.
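The chain of numbers above can be cross-checked with a few lines of arithmetic. The sketch below is a minimal Python consistency check; the Ce site density used for the ppm conversion assumes a CeO\({}_{2}\) fluorite lattice constant of \(a\approx 5.41\) Å (four Ce sites per unit cell), which is an assumed literature value rather than one quoted in the text.

```python
import math

# Assumed Ce site density of CeO2 (fluorite, a ~ 5.41 Angstrom, 4 Ce per cell)
a_lat = 5.41e-10
n_Ce = 4 / a_lat**3                              # ~2.5e28 Ce sites per m^3

N_probed = 1.66e22                               # probed Er spin density from the Eq. 3 fit, m^-3
ppm_probed = N_probed / n_Ce * 1e6               # ~0.66 ppm (text: 0.68 +/- 0.03 ppm)

probed_fraction = 0.12                           # ~12% of the inhomogeneous line is addressed
ppm_total = ppm_probed / probed_fraction         # ~5.5 ppm (text: 5.66 +/- 0.25 ppm)

T2_bath = 0.660e-6                               # s, single-isolated-ion limit from the intercept
gamma_h = 1 / (math.pi * T2_bath) / 1e3          # kHz; ~482 kHz (text: 484.8 +/- 20.6 kHz)

print(f"probed Er: {ppm_probed:.2f} ppm, total Er: {ppm_total:.2f} ppm, "
      f"Gamma_h(bath): {gamma_h:.0f} kHz")
```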
### Er\({}^{3+}\) electron spin relaxation
The limit on spin coherence is set by the spin relaxation. One can probe the spin relaxation mechanism to obtain an upper limit on the spin \(T_{2}\). The spin relaxation time is probed by first applying a \(\pi\)-pulse to invert the populations of the spin-up and spin-down electron states, and then reading out the relaxation of spin-up to spin-down using the two-pulse Hahn-echo sequence (schematically shown in Fig. 4(a); for details see Materials and Methods). By varying the delay time, \(\tau\), between the inversion pulse and the \(\pi/2\)-pulse, one can then map out the spin relaxation dynamics. Fig. 4 (a) shows the measured spin echo signal as a function of \(\tau\) measured at 3.6 K. The data indicate the presence of two spin relaxation channels, with a short spin relaxation \(T_{1}=0.11\pm 0.01\) ms and a long relaxation \(T_{1}=0.83\pm 0.04\) ms. The two observed decay processes might come from the electron depopulation of the \(Z_{1}\) spin level to its nearby \(Z_{2}\) level, resulting in a sampling of the electron population among three active states. EPR is an inductive detection method that is sensitive to the population of the ground states. At 3.6 K, there is thermal population of both the \(Z_{1}\) and \(Z_{2}\) levels given the small energy separation of 1.51 meV (357.5 GHz). The added Zeeman splitting further aids in reducing the energy barrier between the \(Z_{1}\) spin-up and \(Z_{2}\) spin-down levels. Possible depopulation of the \(Z_{1}\) spin-up level to the \(Z_{2}\) spin level, mediated by phonon processes, can thus be detected by pulsed EPR.
To further probe the origin of the observed double exponential decay dynamics, we carry out optical measurements of the spin relaxation \(T_{1}\) of the \(Z_{1}\) level at 3.6 K. We apply a 100 mT magnetic field parallel to the \(\langle 1-10\rangle\) direction, with the orientation chosen to be the same as that used in the pulsed EPR measurements. A Zeeman splitting of 9.46 GHz between the spin-up and spin-down states of the \(Z_{1}\) level is induced, estimated based on the effective g value extracted from the data shown in Fig. 3(a). Given that the spin \(T_{1}\) is shorter than the optical lifetime of the \(Y_{1}-Z_{1}\) transition, the optical measurement of the spin \(T_{1}\) cannot be done using the typical spectral hole-burning method [50, 51], in which one fully polarizes the spins through cumulative optical excitation. Here we use a two-pulse, optically based 'pump-probe' scheme to
Figure 4: Electron spin relaxation dynamics at 3.6 K. (a) Pulsed-EPR-based spin echo measurement with a three-pulse population inversion sequence (see Materials and Methods) shown in the inset. The measured data are shown as open circles, and the solid line is a double exponential fit revealing two spin relaxation paths with a short spin relaxation \(T_{1}=0.11\pm 0.01\) ms and a long relaxation \(T_{1}=0.83\pm 0.04\) ms. (b) Optical measurement of the \(Z_{1}\)-level spin relaxation using a two-pulse 'pump and probe' scheme. The panel shows the measured PL signal difference (\(\Delta\)PL, open circles) with and without the second probe optical pulse as a function of the time delay \(\tau\) between pulses. Data are taken with the laser resonant with the spin-down \(Z_{1}\) to spin-up \(Y_{1}\) level transition (inset). A 100 \(\mu s\) pulse is applied first, followed by a second 100 \(\mu s\) pulse separated by time \(\tau\). The PL signal is collected within a 4 ms collection window with 15,000 iterations of measurements. Details on the pulse sequence are in Supplementary Information Section S5. (c) Optically measured spin relaxation time \(T_{1}\) as a function of the applied magnetic field. The plotted \(T_{1}\) is the fitted value with 95% confidence extracted from each measured \(\Delta\)PL data set in SI.S5. The data show an increase from 0.437 ms to 1.575 ms as the field strength is reduced from 250 mT to 50 mT.
probe the spin relaxation from the spin-up to the spin-down state, which manifests as a recovery of the spin-down population. We first apply a short 100 \(\mu s\) optical pulse resonant with the transition between the \(Z_{1}\) spin-down state and the \(Y_{1}\) spin-up state (inset of Fig. 4(b)) to drive the electrons occupying spin-down states to the excited state, creating an initial state occupation where the population of the \(Z_{1}\) spin-up state is higher than that of the spin-down state. A second pulse of 100 \(\mu s\) is applied after a delay, \(\tau\), to probe the recovery of the spin-down state occupation due to spin relaxation. A reference measurement without the second excitation pulse is taken to sample the photon emission into the collection window from the optical decay of the \(Y_{1}\) level to both spin states after the first optical pulse, serving as the background signal for subtraction (details in SI.S5).
Figure 4(b) shows the measured difference of the PL signal, \(\Delta\)PL, collected during the collection window with and without the second optical pulse, as a function of the delay \(\tau\) between the two optical pulses. The spin recovery from spin-up to spin-down through spin relaxation is evidenced by the increasing \(\Delta\)PL with increasing \(\tau\). The data show single exponential behavior (fit, solid line), indicating a spin relaxation time for the Zeeman-split \(Z_{1}\) spin-up to spin-down state of \(T_{1}=1.106\pm 0.256\) ms. The measured \(T_{1}\) value is consistent with the long \(T_{1}\) resolved in the pulsed EPR measurement shown in Fig. 4(a). The observed single exponential behavior in the optical measurement of the \(Z_{1}\)-level spin relaxation also suggests that the short \(T_{1}\) of 0.11 ms observed in the pulsed EPR measurement most likely comes from the phonon-mediated depopulation of electrons from the \(Z_{1}\) spin-up level to the \(Z_{2}\) level.
At this temperature, the spin relaxation time \(T_{1}\) of the ground state is limited by phonon-mediated processes, including direct, Raman and Orbach processes [52]. One can further extend the relaxation time by tuning the magnetic field to control the direct coupling process, which can be suppressed with lower magnetic field by reducing the number of phonon modes that can couple to the Zeeman split states as [53]:
\[T_{1}^{-1}=A_{\rm o}(g^{4}){\rm sech}^{2}(\frac{g\mu_{B}B}{2k_{B}T})+A_{\rm d }(\frac{g\mu_{B}B}{h})^{5}{\rm coth}(\frac{g\mu_{B}B}{2k_{B}T})+R_{\rm o} \tag{4}\]
where \(g\mu_{B}B\) is the energy difference between the two \(Z_{1}\) spin sub-levels under magnetic field \(B\), \(k_{B}\) is Boltzmann's constant, and \(h\) is Planck's constant. Figure 4(c) shows the measured spin \(T_{1}\) time as a function of the magnitude of the applied magnetic field. We measure \(T_{1}\) with the B-field at 50 mT, 100 mT, 150 mT, and 250 mT. The \(T_{1}\) values are extracted from the measured \(\Delta\)PL data (see SI.S5) using the same two-optical-pulse sequence. We observe an extension of \(T_{1}\) from \(T_{1}=1.106\pm 0.256\) ms to \(T_{1}=1.575\pm 0.256\) ms when reducing the field strength from 100 mT to 50 mT, and similarly a reduction of \(T_{1}\) to \(0.4345\pm 0.087\) ms at an elevated field of 250 mT. The solid line shown is a fit to the \(T_{1}\) data using Eq. 4 with the Raman and Orbach processes treated as a constant in the fitting. The fitting suggests that one can further extend the spin \(T_{1}\) to 2.5 ms at 3.6 K. The observed spin \(T_{2}\) in the single-ion limit is 0.66 \(\mu s\), much less than the observed spin \(T_{1}\), possibly due to magnetic and electric noise from Er-spin-flip-induced spectral diffusion and defects present in the film. One could further extend both the spin \(T_{1}\) and \(T_{2}\) by freezing out the spin-flip-induced dephasing at moderate fields and freezing out higher-order phonon effects at millikelvin temperatures.
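To make the field dependence in Eq. 4 concrete, the short sketch below evaluates the direct-process term (up to its unknown prefactor \(A_{\rm d}\)) at the three fields quoted above, using the EPR-extracted \(g=6.828\) and \(T=3.6\) K together with standard values for the physical constants. It is an illustrative calculation, not a reproduction of the fit shown in Fig. 4(c).

```python
import math

# Direct-process factor from Eq. 4, up to the unknown prefactor A_d:
#   (g*mu_B*B/h)^5 * coth(g*mu_B*B / (2*k_B*T))
mu_B = 9.274e-24      # J/T
h    = 6.626e-34      # J s
k_B  = 1.381e-23      # J/K
g, T = 6.828, 3.6     # from the EPR fit and the measurement temperature

def direct_term(B):
    x = g * mu_B * B / (2 * k_B * T)
    return (g * mu_B * B / h) ** 5 / math.tanh(x)   # coth(x) = 1/tanh(x)

ref = direct_term(0.05)
for B in (0.05, 0.10, 0.25):
    print(f"B = {B*1e3:.0f} mT: direct-process rate ~ {direct_term(B)/ref:.0f}x the 50 mT value")
# In this low-field regime coth(x) ~ 1/x, so the direct rate scales roughly as B^4,
# which is why reducing the field from 250 mT to 50 mT lengthens T1.
```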
## 3 Conclusion
Our work on Er-doped CeO\({}_{2}\) highlights the potential of this material system as a robust optical quantum memory platform owing to its narrow linewidth and long-lived optically addressable electron spin, enabled by the low nuclear spin host environment. The observed homogeneous linewidth of 440 kHz for the \(Y_{1}-Z_{1}\) transition and an electron spin relaxation time of 2.5 ms at 3.6 K indicate the feasibility of using collective electron spin relaxation as a local quantum memory for quantum repeaters. The narrow homogeneous linewidth of 440 kHz also demonstrates the potential for integrating Er\({}^{3+}\) with nanophotonic cavities to achieve Purcell enhancement and near Fourier-transform-limited single-photon emission. This would allow for coherently driven optical transitions at a desired rate to address individual ions [45, 54] and examine the time-dependent spectral diffusion of individual Er\({}^{3+}\) ions in the host [7], a critical step towards entanglement distribution needed for quantum repeaters. The significant reduction in the concentration of nuclear magnetic moments in CeO\({}_{2}\) compared to that of other hosts, such as Y\({}_{2}\)SiO\({}_{5}\) and YAG, could open a path towards not only long-lived coherent Er\({}^{3+}\) electron spin states, but also long-lived nuclear spins in isotopically enriched \({}^{167}\)Er to enable long storage times on the scale of seconds, using both collective relaxation modes of nuclear spin ensembles [18] and individual nuclear spin states [55, 56].
The Er\({}^{3+}\) spin ensemble coherence value reported here is largely limited by the ion-ion dipolar interaction. As indicated by the instantaneous diffusion measurements, in the single-ion limit the Er\({}^{3+}\) spin coherence \(T_{2}\) is around 0.66 \(\mu s\). One can further improve the spin coherence by lowering the temperature from the 3.6 K explored in this work to millikelvin temperatures. One can also further improve coherence by using higher magnetic fields to freeze out spin-flip-induced dephasing. The MBE growth of Er-doped oxides also enables control of the Er doping level and optimization of the material quality, minimizing defects and dislocations in the film to reach high-quality single-crystal CeO\({}_{2}\), thereby reducing spectral diffusion and improving both the optical and spin properties. It can also enable the growth of CeO\({}_{2}\) thin films with controlled delta doping of Er to create structures compatible with integration with nanophotonic cavities, either within the oxide or as hybrid structures integrated with other dielectric materials. The sample studied here is grown without growth optimization yet already demonstrates appreciable spin relaxation (\(\sim\)2.5 ms) and a narrow optical homogeneous linewidth (440 kHz). Continued growth optimization employing slower growth rates, with a lower oxygen pressure [57] to suppress the formation of dislocations and further reduce unintended defect concentrations in the film, has the potential to reach significantly narrower homogeneous linewidths and longer spin coherence and relaxation times. Thus, Er\({}^{3+}\) in CeO\({}_{2}\), an oxide host with a very low nuclear spin environment, could emerge as a versatile platform for highly coherent light-matter quantum interfaces for developing quantum communication applications.
## 4 Materials and Methods
### Sample and growth
The Er-doped CeO\({}_{2}\) film studied here is grown on silicon (111)\(\pm\)0.5\({}^{\circ}\) using molecular beam epitaxy. Single-crystal CeO\({}_{2}\) is grown at a sample temperature of 670\({}^{\circ}\)C with a growth rate of 312.1 nm/hr under an oxygen pressure of \(4.9\times 10^{-6}\) torr and a Ce/\(O_{2}\) flux ratio of \(\sim\) 20. The grown CeO\({}_{2}\) layer is 936.3 nm thick, with Er doped through the entire grown layer at an estimated Er concentration of 3 ppm, based on the Er flux delivered during growth [27]. Details on the growth conditions and the structural characterization of as-grown films can be found in Ref. [27]. The as-grown sample is used directly for all the measurements shown here without any post-growth processing.
### Optical pulse sequences for optical measurements
All optical data shown in the main text are collected at 3.6 K, with the Er emission collected using a time-gating method. For PLE measurements (Fig. 2(a)), a 1.5 ms long optical pulse with its wavelength tuned across the \(Y_{1}-Z_{1}\) transition is used to excite the Er ions. A collection window of 7 ms after the excitation pulse is used to collect the emission from the Er ions. The collection window is chosen based on the optical lifetime of the \(Y_{1}-Z_{1}\) transition (3.4 ms, Ref. [27]) to achieve the needed signal-to-noise ratio. For photon echo measurements (Fig. 2(b) and (c)), a 10 ns \(\pi\)/2-pulse followed by a 20 ns \(\pi\)-pulse after a delay \(\tau\) is used. The laser is set to be resonant with the \(Y_{1}-Z_{1}\) transition, with the laser power tuned to reach the needed \(\pi\)/2 and \(\pi\) pulse areas (see SI.S3). The shortest possible pulse enabled by our instrumentation is used to minimize dephasing during the excitation process. Data are collected with \(\tau\) ranging from 180 ns to 700 ns. Details on the instrumentation for all optical measurements are captured in SI.S1 and SI.S3.
### Pulsed EPR measurements
X-band (9.7 GHz) EPR experiments are performed using an ELEXSYS E580 spectrometer (Bruker Biospin, Ettlingen, Germany) that is equipped with a dielectric ring resonator (Bruker ER 4118X-MD5). The Er-doped CeO\({}_{2}\) film on Si samples are diced to a size of 4 mm x 2.5 cm and mounted into a quartz tube suspended in the center of the dielectric ring resonator contained in a flow cryostat (Oxford Instruments CF935) with pumped liquid helium. The data shown in the manuscript are obtained at 3.6 K, with the temperature controlled by an ITX temperature controller (Oxford Instruments). For the spin echo field sweep (Fig. 3(a)), a two-pulse Hahn echo sequence is applied with a 12 ns \(\pi\)/2-pulse followed by a 24 ns \(\pi\)-pulse with a fixed delay \(\tau\) of 100 ns. The pulse length is chosen as the shortest achievable in order to address a large portion of the Er spin ensemble. For the spin coherence measurement (Fig. 3(b)), the same Hahn echo sequence is used with a varying delay \(\tau\) ranging from 100 ns to 1300 ns. For the spin relaxation measurement (Fig. 4(a)), a three-pulse population inversion sequence is used where a 24 ns \(\pi\)-pulse is followed by a two-pulse Hahn echo sequence with a varying time delay \(\tau\). The two-pulse Hahn echo sequence used here is composed of a 12 ns \(\pi/2\)-pulse followed by a 24 ns \(\pi\)-pulse with a fixed delay \(\tau\) of 100 ns.
## 5 Acknowledgements
The authors would like to thank Dr. Jonathan Marcks and Dr. Yeghishe Tsaturyan for helpful discussions. This work was primarily funded (J.Z., M.T.S., F. J. H., D. D. A.) by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, including support for optical and spin characterization studies. The sample growth (I.M., G.D.G., S.G.) along with additional support for cryo-optical measurements (G.D.G., A.M.D.) was funded by Q-NEXT, a U.S. Department of Energy Office of Science National Quantum Information Science Research Centers under Award Number DE-FOA-0002253. The EPR work in the Chemical Sciences and Engineering Division was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences, through Argonne National Laboratory under Contract No. DE-AC02-06CH11357.
### Funding:
U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences
### Author contribution
J.Z. conceived the experiments and performed the data analysis. J.Z. and G.D.G. carried out the optical measurements. I. M. carried out the growth of the sample with assistance from G.D.G. M.T.S. and A.M.D. helped carry out the fridge and optical echo measurements. J.K.B., J.N., and O.G.P. helped carry out the pulsed EPR measurements. All authors contributed to the manuscript.
### Competing interests:
All authors declare they have no competing interests.
### Data and materials availability
All data are available in the main text or the supplementary materials. |
2309.11001 | GME: GPU-based Microarchitectural Extensions to Accelerate Homomorphic
Encryption | Fully Homomorphic Encryption (FHE) enables the processing of encrypted data
without decrypting it. FHE has garnered significant attention over the past
decade as it supports secure outsourcing of data processing to remote cloud
services. Despite its promise of strong data privacy and security guarantees,
FHE introduces a slowdown of up to five orders of magnitude as compared to the
same computation using plaintext data. This overhead is presently a major
barrier to the commercial adoption of FHE.
In this work, we leverage GPUs to accelerate FHE, capitalizing on a
well-established GPU ecosystem available in the cloud. We propose GME, which
combines three key microarchitectural extensions along with a compile-time
optimization to the current AMD CDNA GPU architecture. First, GME integrates a
lightweight on-chip compute unit (CU)-side hierarchical interconnect to retain
ciphertext in cache across FHE kernels, thus eliminating redundant memory
transactions. Second, to tackle compute bottlenecks, GME introduces special
MOD-units that provide native custom hardware support for modular reduction
operations, one of the most commonly executed sets of operations in FHE. Third,
by integrating the MOD-unit with our novel pipelined $64$-bit integer
arithmetic cores (WMAC-units), GME further accelerates FHE workloads by $19\%$.
Finally, we propose a Locality-Aware Block Scheduler (LABS) that exploits the
temporal locality available in FHE primitive blocks. Incorporating these
microarchitectural features and compiler optimizations, we create a synergistic
approach achieving average speedups of $796\times$, $14.2\times$, and
$2.3\times$ over Intel Xeon CPU, NVIDIA V100 GPU, and Xilinx FPGA
implementations, respectively. | Kaustubh Shivdikar, Yuhui Bao, Rashmi Agrawal, Michael Shen, Gilbert Jonatan, Evelio Mora, Alexander Ingare, Neal Livesay, José L. Abellán, John Kim, Ajay Joshi, David Kaeli | 2023-09-20T01:50:43Z | http://arxiv.org/abs/2309.11001v1 | # GME: GPU-based Microarchitectural Extensions to Accelerate Homomorphic Encryption
###### Abstract
Fully Homomorphic Encryption (FHE) enables the processing of encrypted data without decrypting it. FHE has garnered significant attention over the past decade as it supports secure outsourcing of data processing to remote cloud services. Despite its promise of strong data privacy and security guarantees, FHE introduces a slowdown of up to five orders of magnitude as compared to the same computation using plaintext data. This overhead is presently a major barrier to the commercial adoption of FHE.
In this work, we leverage GPUs to accelerate FHE, capitalizing on a well-established GPU ecosystem available in the cloud. We propose GME, which combines three key microarchitectural extensions along with a compile-time optimization to the current AMD CDNA GPU architecture. First, GME integrates a lightweight on-chip compute unit (CU)-side hierarchical interconnect to retain ciphertext in cache across FHE kernels, thus eliminating redundant memory transactions. Second, to tackle compute bottlenecks, GME introduces special MOD-units that provide native custom hardware support for modular reduction operations, one of the most commonly executed sets of operations in FHE. Third, by integrating the MOD-unit with our novel pipelined 64-bit integer arithmetic cores (WMAC-units), GME further accelerates FHE workloads by \(19\%\). Finally, we propose a Locality-Aware Block Scheduler (LABS) that exploits the temporal locality available in FHE primitive blocks. Incorporating these microarchitectural features and compiler optimizations, we create a synergistic approach achieving average speedups of \(796\times\), \(14.2\times\), and \(2.3\times\) over Intel Xeon CPU, NVIDIA V100 GPU, and Xilinx FPGA implementations, respectively.
## 1 Introduction
Large-scale machine learning (ML) models, such as OpenAI's GPT series and DALL-E, Google AI's BERT and T5, and Facebook's RoBERTA, have made significant advances in recent years. Unfortunately, providing public access for inference on these large-scale models leaves them susceptible to zero-day exploits [71, 38]. These exploits expose the user data as well as the ML models to hackers for potential reverse engineering [38], a concerning prospect as these models are highly valued assets for their respective companies. For example, a recent security vulnerability in the Redis client library resulted in a data breach on ChatGPT [60], which is currently regarded as one of the leading machine learning research platforms.
In the past decade, Fully Homomorphic Encryption (FHE) has emerged as the "holy grail" of data privacy. Using FHE, one can perform operations on encrypted data without decrypting it first (see Figure 1). FHE adopters can offload their encrypted private data to third-party cloud service providers while preserving end-to-end privacy. Specifically, the _secret key_ used for encryption by users is never disclosed to the cloud providers, thus facilitating privacy-preserving ML training and inference in an untrusted cloud setting (whether self-hosted or utilizing public cloud services) [87, 77, 83].
During its early stages, homomorphic encryption was limited in the number and types of computations it could support, rendering it viable solely for _shallow circuits_ [30]. In these circuits, the error would propagate and increase with each addition or multiplication operation, ultimately leading to decryption errors. Following Gentry's groundbreaking work [30], this important limitation was resolved by using _bootstrapping_ [19], resulting in FHE computations that permit an unlimited number of operations. Although FHE offers significant benefits in terms of privacy preservation, it faces the challenge of being extremely slow (especially the bootstrapping operation), with performance up to five orders of magnitude slower than plaintext computing [42].
Prior studies have tried to accelerate FHE kernels by developing CPU extensions [15, 31, 42, 55], GPU libraries [54, 61, 6, 56], FPGA implementations [1, 88, 66], and custom accelerators [33, 45, 67]. CPU-based solutions inherently face limitations due to their limited compute throughput [17], while FPGA-based solutions are constrained by their limited oper
Figure 1: FHE offers a safeguard against online eavesdroppers as well as untrusted cloud services by allowing direct computation on encrypted data.
ating frequency and resources available on the FPGA board. ASIC-based solutions provide the most acceleration [29], but they cannot be easily adapted to future algorithmic changes and can be fairly expensive to use in practice. Additionally, as the number of diverse domain-specific custom accelerators grows rapidly, it becomes increasingly difficult to create high-quality software libraries, compilers, drivers, and simulation tools for each accelerator in a timely manner, posing a challenge in terms of time-to-market. Therefore, while previous work has accelerated FHE workloads, they often fall short in terms of cost-effectiveness or lack the necessary infrastructure to support large-scale deployment.
Rather than developing domain-specific custom accelerators, our work focuses on enhancing the microarchitecture of GPUs that are currently deployed in the cloud and can be easily upgraded. This leads to a practical solution as we can readily exploit the cloud ecosystem that is built around GPUs. On the upside, GPUs offer a large number of vector processing units, so they are a good match to capitalize on the inherent parallelism associated with FHE workloads. However, FHE ciphertexts are large (dozens of MB), require a massive number of integer arithmetic operations, and exhibit varying stride memory access patterns. This imposes a true challenge for existing GPU architectures since GPUs have been historically designed to excel at executing thousands of threads in parallel (e.g., batched machine-learning workloads) featuring uniform memory access patterns and rich floating-point computations.
To bridge the wide performance gap between operating on encrypted data using FHE and operating on plaintext data in GPUs, we propose several microarchitectural features to extend the latest AMD CDNA GPU architecture. Specifically, our efforts are focused on improving the performance of the Residue Number System (RNS) version of the CKKS FHE scheme, as it naturally supports numerous privacy-preserving applications. Similar to results found in earlier studies [24], our benchmarking of CKKS FHE kernels indicates they are significantly bottlenecked by the limited main memory bandwidth. This is because current GPUs suffer from excessive redundant memory accesses when executing FHE-based workloads. Present GPUs are ill-equipped to deal with varying stride FHE memory access patterns. According to our experiments, this can lead to a very high degree of compute unit stalls and is a primary cause of the huge performance slowdown in FHE computations on GPU-based systems.
To address these challenges, we propose GME, a hardware-software co-design specifically tailored to provide efficient FHE execution on the AMD CDNA GPU architecture (illustrated in Figure 2). First, we present _CU-side interconnects_ that allow ciphertext to be retained within the on-chip caches, thus eliminating redundant memory transactions in the FHE kernels. Next, we optimize the most commonly executed operations present in FHE workloads (i.e., the modular reduction operations) and propose novel _MOD-units_. To complement our _MOD-units_, we introduce _WMAC-units_ that natively perform 64-bit integer operations, preventing the throttling of the existing 32-bit arithmetic GPU pipelines. Finally, in order to fully benefit from the optimizations applied to FHE kernels, we develop a Locality-Aware Block Scheduler (LABS) that enhances the temporal locality of data. LABS is able to retain on-chip cache data across FHE blocks, utilizing block computation graphs for assistance.
To faithfully implement and evaluate GME, we employ NaviSim [11], a cycle-accurate GPU architecture simulator that accurately models the CDNA ISA [6]. To further extend our research to capture inter-kernel optimizations, we extend the implementation of NaviSim with a block-level directed acyclic compute graph simulator called BlockSim. In addition, we conduct ablation studies on our microarchitectural feature implementations, enabling us to isolate each microarchitectural component and evaluate its distinct influence on the entire FHE workload.
Our contributions include:
1. _Simulator Infrastructure:_ We introduce BlockSim, which, to the best of our knowledge, is among the first efforts to develop a simulator extension for investigating FHE microarchitecture on GPUs.
2. _CU-side interconnect (**cNoC**):_ We propose an on-chip network that interconnects on-chip memory, enabling the exploitation of the large on-chip memory capacity and support for the all-to-all communication pattern commonly found in FHE workloads.
3. _GPU Microarchitecture:_ We propose microarchitectural enhancements for GPUs, including ISA extensions, modular reduction operation microarchitecture, and a wide arithmetic pipeline to deliver high through
Figure 2: The four key contributions of our work (indicated in green) evaluated within the context of an AMD CDNA GPU architecture.
put for FHE workloads.
4. _Locality-Aware Block Scheduler_: Utilizing the CU-side interconnect (**cNoC**), we propose a graph-based block scheduler designed to improve the temporal locality of data shared across FHE primitives.
Our proposed improvements result in an average speedup of 14.6\(\times\) over the prior state-of-the-art GPU implementation [41] for HE-LR and ResNet-20 FHE workloads. Our optimizations collectively reduce redundant computation by 38%, decreasing the memory pressure on DRAM. Although the proposed optimizations can be adapted for other architectures (with minor modifications), our work primarily concentrates on AMD's CDNA microarchitecture MI100 GPU.
## 2 Background
In this section, we briefly describe the AMD CDNA architecture and background of the CKKS FHE scheme.
### AMD CDNA Architecture
To meet the growing computation requirements of high-performance computing (HPC) and machine learning (ML) workloads, AMD introduced a new family of CDNA GPU architectures [8] that are used in AMD's Instinct line of accelerators. The CDNA architecture (see Figure 3) adopts a highly modular design that incorporates a Command Processor (CP), Shader Engines (including Compute Units and L1 caches), an interconnect connecting the core-side L1 caches to the memory-side L2 caches and DRAM. The CP receives requests from the driver on the CPU, including memory copying and kernel launch requests. The CP sends memory copying requests to the Direct Memory Access (DMA), which handles the transfer of data between the GPU and system memory. The CP is also responsible for breaking kernels down into work-groups and wavefronts, sending these compute tasks to Asynchronous Compute Engines (ACE), which manage the dispatch of work-groups and wavefronts on the Compute Units (CUs).
The CDNA architecture employs the CU design from the earlier GCN architecture but enhances it with new Matrix Core Engines. A CU (see Figure 3) is responsible for instruction execution and data processing. Each CU is composed of a scheduler that can fetch and issue instructions for up to 40 wavefronts. Different types of instructions are issued to different execution units, including a branch unit, scalar processing units, and vector processing units. The scalar processing units are responsible for executing instructions that manipulate data shared by work-items in a wavefront. The vector processing units include a vector memory unit, four Single-Instruction Multiple-Data (SIMD) units, and a matrix core engine. Each SIMD unit is equipped with 16 single-precision Arithmetic Logic Units (ALUs), which are optimized for FP32 operations. The matrix core engine handles multiply-accumulate operations, supporting various data types (like 8-bit integers (INT8), 16-bit half-precision FP (FP16), 16-bit Brain FP (bf16), and 32-bit single-precision FP32). We cannot leverage these engines for FHE, as they work with INT8 operands that are not well-suited for FHE computations [78] (FHE workloads benefit from INT64 arithmetic pipelines). Each CU has a 64 KB memory space called the Local Data Share (LDS), which enables low-latency communication between work-items within a work-group. LDS is analogous to shared memory in CUDA. This memory is configured with 32 banks to achieve low latency and high bandwidth access. LDS facilitates effective data sharing among work-items and acts as a software cache to minimize global memory accesses. However, a significant limitation of the LDS is that a CU can only access its local LDS, and directly accessing a remote LDS is not possible.
The CDNA architecture has a two-level cache hierarchy. Each CU has a dedicated L1 vector cache. CUs in a Shader Engine (typically 15 CUs) share an L1 scalar cache and an L1 instruction cache. The second level of cache is composed of memory-side L2 caches. Each L2 cache interfaces to a DRAM controller (typically implemented in HBM or GDDR technology). The L2 caches and the DRAM controllers are banked, allowing them to service a part of the address space.
### CKKS FHE Scheme
In this paper, we focus on the CKKS FHE scheme, as it can support a wide range of privacy-preserving applications by allowing operations on floating-point data. We list the parameters that define the CKKS FHE scheme in Table 1 and the corresponding values of key parameters in Table 3. The main parameters --i.e., \(N\) and \(Q\)-- define the size of the ciphertext and also govern the size of the working data set that is required to be present in the on-chip memory. The ciphertext consists of a pair of elements in the polynomial ring \(R_{Q}=\mathbb{Z}_{Q}[x]/(x^{N}+1)\). Each element of this ring is a polyno
Figure 3: Architecture diagram showing the limitations of AMD GPU memory hierarchy. Each compute unit has a dedicated L1V cache and an LDS unit that cannot be shared with neighboring compute units.
mial \(\sum_{i=0}^{N-1}a_{i}x^{i}\) with "degree-bound" \(N-1\) and coefficients \(a_{i}\) in \(\mathbb{Z}_{Q}\). For a message \(\mathbf{m}\in\mathbb{C}^{n}\), we denote its encryption as \(\llbracket\mathbf{m}\rrbracket=(\mathbf{A_{m}},\mathbf{B_{m}})\) where \(\mathbf{A_{m}}\) and \(\mathbf{B_{m}}\) are the two polynomials that comprise the ciphertext.
For 128-bit security, typical values of \(N\) range from \(2^{16}\) to \(2^{17}\) and \(\log Q\) values range from 1700 to 2200 bits for practical purposes. These large sizes of \(N\) and \(\log Q\) are required to maintain the security of the underlying Ring-Learning with Errors assumption [57]. However, there are no commercially available compute systems that have hundred-bit wide or thousand-bit wide ALUs, which are necessary to process these large coefficients. A common approach for implementing the CKKS scheme on hardware with a much smaller word length is to choose \(Q\) to be a product of distinct word-sized primes \(q_{1},\ldots,q_{\ell}\). Then \(\mathbb{Z}_{Q}\) can be identified with the "product ring" \(\prod_{i=1}^{\ell}\mathbb{Z}_{q_{i}}\) via the Chinese Remainder Theorem [79]. In practice, this means that the elements of \(\mathbb{Z}_{Q}\) can be represented as an \(\ell\)-tuple \((x_{1},\ldots,x_{\ell})\) where \(x_{i}\in\mathbb{Z}_{q_{i}}\) for each \(i\). This representation of elements in \(\mathbb{Z}_{Q}\) is referred to as the _Residue Number System_ (RNS) and is commonly referred to as the limbs of the ciphertext.
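As a concrete illustration of the limb representation just described, the following Python sketch decomposes an element of \(\mathbb{Z}_{Q}\) into residues and reconstructs it with the Chinese Remainder Theorem; the tiny moduli below are placeholders for the 54-bit primes actually used in this work.

```python
from math import prod

primes = [97, 193, 257]                   # stand-ins for word-sized (54-bit) RNS primes
Q = prod(primes)

def to_rns(x):
    """Decompose x in Z_Q into one residue ('limb') per prime."""
    return [x % q for q in primes]

def from_rns(limbs):
    """CRT reconstruction of x in Z_Q from its limbs."""
    x = 0
    for r, q in zip(limbs, primes):
        Qi = Q // q
        x += r * Qi * pow(Qi, -1, q)      # pow(..., -1, q): modular inverse (Python 3.8+)
    return x % Q

x, y = 1234567 % Q, 7654321 % Q
assert from_rns(to_rns(x)) == x
# Arithmetic acts limb-wise: (x*y) mod Q corresponds to [(x_i * y_i) mod q_i]
assert from_rns([(a * b) % q for a, b, q in zip(to_rns(x), to_rns(y), primes)]) == (x * y) % Q
print("RNS round-trip and limb-wise multiply OK")
```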
In this work, as shown in Table 3, we choose \(N=2^{16}\) and \(\log Q=1728\), meaning that our ciphertext size will be 28.3 MB, where each polynomial in the ciphertext is \(\sim\)14 MB. After RNS decomposition on these polynomials using a word length of 54 bits, we get 32 limbs in each polynomial, where each limb is \(\sim\) 0.44 MB large. The last level cache and the LDS in the AMD MI100 are 8 MB and 7.5 MB, respectively. Thus we cannot accommodate even a single ciphertext in the on-chip memory. At most, we can fit \(\sim\)18 limbs of a ciphertext polynomial, and as a result, we will have to perform frequent accesses to the main memory to operate on a single ciphertext. In addition, the large value of \(N\) implies that we need to operate on \(2^{16}\) coefficients for any given homomorphic operation. The AMD MI100 GPU includes 120 CUs with 4 SIMD units each. Each SIMD unit can execute 16 threads in parallel. Therefore, a total of 7680 operations (scalar additions/multiplications) can be performed in parallel. However, we need to schedule the operations on \(2^{16}\) coefficients in over eight batches (\(2^{16}\) / 7680), adding to the complexity of scheduling operations.
We list all the building blocks in the CKKS scheme in Table 2. All of the operations that form the building blocks of the CKKS scheme reduce to 64 bit-wide scalar modular additions and scalar modular multiplications. The commercially available GPU architectures do not implement these wide modular arithmetic operations directly, but can emulate them via multiple arithmetic instructions, which significantly increases the amount of compute required for these operations. Therefore, providing native modular arithmetic units is critical to accelerating FHE computation. To perform modular addition over operands that are already reduced, we use the standard approach of conditional subtraction if the addition overflows the modulus. For generic modular multiplications, we use the modified Barrett reduction technique [76].
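The two scalar primitives just mentioned can be sketched in a few lines of Python. The Barrett routine below is the textbook variant, and the 54-bit modulus is an arbitrary illustrative value rather than a parameter from this work; the modified Barrett reduction of [76] further trims the number of comparisons and is the approach the MOD-units described later build on.

```python
def mod_add(a, b, q):
    """Modular addition of already-reduced operands via conditional subtraction."""
    s = a + b
    return s - q if s >= q else s

def barrett_setup(q):
    k = 2 * q.bit_length()
    return k, (1 << k) // q              # precomputed mu = floor(2^k / q)

def barrett_reduce(x, q, k, mu):
    """Reduce 0 <= x < q^2 modulo q without a hardware divide."""
    t = (x * mu) >> k                    # estimate of floor(x / q)
    r = x - t * q
    while r >= q:                        # at most two corrections for this textbook variant
        r -= q
    return r

q = (1 << 54) - 33                       # illustrative 54-bit modulus (not from the paper)
k, mu = barrett_setup(q)
a, b = 0x2F0E1D2C3B4A59 % q, 0x1A2B3C4D5E6F01 % q
assert mod_add(a, b, q) == (a + b) % q
assert barrett_reduce(a * b, q, k, mu) == (a * b) % q
```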
The ScalarAdd and ScalarMult are the two most basic building blocks, adding and multiplying a scalar constant to a ciphertext. PolyAdd and PolyMult add and multiply a plaintext polynomial to a ciphertext. We define separate ScalarAdd and ScalarMult operations (in addition to PolyAdd and PolyMult) because the scalar constant values can be fetched directly from the register file, which helps save expensive main memory accesses. Note that the PolyMult is followed by an HERescale operation to restore the scale of a ciphertext to \(\Delta\) from scale \(\Delta^{2}\). The CKKS scheme supports floating-point messages, so all encoded messages must include a scaling factor \(\Delta\). This scaling factor is typically the size of one of the limbs of the ciphertext. When multiplying messages together, this scaling factor grows as well. The scaling factor must be shrunk down in order to avoid overflowing the ciphertext coefficient modulus.
In order to enable fast polynomial multiplication, by default, we represent polynomials as a series of \(N\) evaluations at fixed roots of unity. This allows polynomial multiplication to occur in \(O(N)\) time instead of \(O(N^{2})\) time. We refer to this polynomial representation as the _evaluation representation_. There are certain sub-operations within the building blocks, defined in Table 2, that operate over the polynomial's _coefficient representation_, which is simply a vector of its coefficients. Moving between the two polynomial representations requires a number-theoretic transform (NTT) or inverse NTT, which is the finite field version of the fast Fourier transform (FFT). We incorporate a merged-NTT algorithmic optimization [65], improving spatial locality for twiddle factors as they are read sequentially.
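To make the evaluation representation concrete, the toy Python example below multiplies two polynomials in \(\mathbb{Z}_{q}[x]/(x^{N}+1)\) by evaluating them at the \(N\) primitive \(2N\)-th roots of unity, multiplying pointwise, and interpolating back. The \(O(N^{2})\) transforms are for clarity only; the implementation in this work uses the \(O(N\log N)\) merged NTT with \(N=2^{16}\) and 54-bit primes, and the tiny \(N\) and \(q\) here are illustrative choices.

```python
N, q = 8, 17                  # toy sizes; q = 17 satisfies q = 1 (mod 2N)
psi = 3                       # a primitive 2N-th (16th) root of unity mod 17

def to_eval(a):               # coefficient form -> evaluations A(psi^(2j+1)), j = 0..N-1
    return [sum(a[i] * pow(psi, (2 * j + 1) * i, q) for i in range(N)) % q
            for j in range(N)]

def to_coeff(A):              # inverse transform (interpolation back to coefficients)
    Ninv, psinv = pow(N, -1, q), pow(psi, -1, q)
    return [Ninv * pow(psinv, i, q) *
            sum(A[j] * pow(psinv, 2 * i * j, q) for j in range(N)) % q
            for i in range(N)]

def negacyclic_mul(a, b):     # schoolbook multiplication mod (x^N + 1), for reference
    c = [0] * N
    for i in range(N):
        for j in range(N):
            k, s = (i + j) % N, (-1) ** ((i + j) // N)
            c[k] = (c[k] + s * a[i] * b[j]) % q
    return c

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
pointwise = [(x * y) % q for x, y in zip(to_eval(a), to_eval(b))]
assert to_coeff(pointwise) == negacyclic_mul(a, b)
print("evaluation-form pointwise multiply matches multiplication mod (x^N + 1)")
```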
The HEAdd operation is straightforward and adds the corresponding polynomials within the two ciphertexts. However, the HEMult and HERotate operations are computationally expensive as they perform a KeySwitch operation after the multiplication and automorphism operations, respectively. In both the HEMult and HERotate implementations, there is an intermediate ciphertext with a decryption key that differs from the decryption key of the input ciphertexts. In order to change
\begin{table}
\begin{tabular}{c l} \hline
**Param** & **Description** \\ \hline \hline \(N\) & Polynomial degree-bound \\ \(n\) & Length of the message. \(n\leq\frac{N}{2}\) \\ \(Q\) & Polynomial modulus \\ \(L\) & Maximum number of limbs in a ciphertext \\ \(\mathcal{C}\) & The set \(\{q_{0},q_{1},\ldots,q_{L}\}\) of prime factors of \(Q\) \\ \(\ell\) & Number of limbs, number of factors in \(Q\); \\ dnum & Number of digits in the switching key \\ \(\alpha\) & Number of limbs that comprise a single digit \\ & in the key-switching decomposition \(\alpha=\lceil\frac{L+1}{\text{dnum}}\rceil\) \\ \(P\) & Product of extension limbs added for \\ & raised modulus. Total extension limbs \(=\alpha+1\) \\ fftlter & Multiplicative depth of bootstrapping \\ & linear transform \\ \(\Delta\) & Scale multiplied during encryption \\ \(\mathbf{m}\) & A message vector of \(n\) slots \\ \(\llbracket\mathbf{m}\rrbracket\) & Ciphertext encrypting a message \\ \(\mathbf{A_{m}}\) & A randomly sampled polynomial from message \(\mathbf{m}\) \\ \(P\) & Encrypted message as a polynomial \\ \(P_{m}\) & Polynomial encrypting message \(m\) \\ \([P]_{q_{i}}\) & \(q_{i}\)-limb of \(P\) \\
**evk** & Evaluation key \\ \(\textbf{evk}^{(r)}_{\text{rot}}\) & Evaluation key for _HE-Rotate_ block with \\ & \((r)\) rotations \\ \hline \end{tabular}
\end{table}
Table 1: CKKS Parameters and descriptions
this new decryption key back to the original decryption key, we perform a key switch operation. This operation takes in a switching key (either \(\mathbf{evk}_{\text{mult}}\) or \(\mathbf{evk}_{\text{rot}}^{(\mathbf{r})}\)) and a ciphertext \(\llbracket\mathbf{m}\rrbracket_{s}\) that is decryptable under a secret key \(s\). The output of the key switch operation is a ciphertext \(\llbracket\mathbf{m}\rrbracket_{s^{\prime}}\) that encrypts the same message but is decryptable under a different key \(s^{\prime}\).
To incur minimal noise growth during the key switch operation, the key switch operation requires that we split the polynomial into dnum digits, then raise the modulus before multiplying with the switching key followed by a modulus down operation. The modulus raise and down operations operate on the coefficient representation of the polynomial, requiring us to perform expensive NTT and iNTT conversions. Moreover, the switching keys are the same size as the ciphertext itself, requiring us to fetch \(\sim\)112 MB of data to multiply the switching keys with the ciphertext. Thus, the key switching operation not only adds to the bulk of the compute through hundreds of NTT and iNTT operations, but also leads to memory bandwidth bottlenecks. Finally, there exists an operation known as bootstrapping [30] that needs to be performed frequently to de-noise the ciphertext. This bootstrapping operation is a sequence of the basic building blocks in the CKKS scheme, meaning that it suffers from the same compute and memory bottlenecks that exist in these building blocks, making it one of the most expensive operations.
## 3 GME Architecture
The current issue with GPUs while implementing FHE workloads is the significant disproportion in the usage of various hardware resources present on the GPUs. As a result, specific resources such as CUs experience underutilization, while others, like HBM and on-chip caches, pose as significant bottlenecks. In this paper, we propose to re-architect the current GPU microarchitecture and also introduce novel microarchitectural extensions that enable optimal utilization of GPU resources so as to maximize the performance of the FHE workloads running on the GPU. We propose **GME**, a robust set of microarchitectural features targeting AMD's CDNA architecture, unlocking the full potential of the GPU to accelerate FHE workloads over 14.2\(\times\) as compared to the previous comparable accelerators [41].
In our work, we pinpoint critical bottlenecks encountered during FHE workload execution and address them progressively using four microarchitectural feature extensions. Our on-chip CU-side hierarchical network (**cNoC**) and the Locality Aware Block Scheduler (**LABS**) contribute to minimizing the DRAM bandwidth bottleneck. Simultaneously, our implementation of native modular reduction (**MOD**) and wider multiply-accumulate units (**WMAC**) features improve the math pipeline throughput, ensuring a streamlined data flow with evenly distributed resource utilization. The list and impact of our contributions can be visualized in Figure 2.
### cNoC: CU-side interconnect
Modern GPUs have a network-on-chip that interconnects the cores (in the case of AMD GPUs, compute units) together with the memory partitions or memory banks. On-chip communication occurs between the cores and the memory banks, not necessarily between the cores. In this work, we propose a new type of on-chip interconnect that we refer to as a _CU-side network-on-chip_ (**cNoC**) that interconnects the CUs together - in particular, all the CUs' LDS are interconnected via the **cNoC** to enable a "global" LDS that can be shared between the CUs. By exploiting the **cNoC**, the dedicated on-chip memory can be shared between cores, thus minimizing memory accesses. We also provide synchronization barriers of varying granularity to mitigate race conditions. Since the LDS is user controlled, our approach does not incur the overhead associated with cache coherence and avoids redundant cache invalidations, but comes with some extra programmer
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Block** & **Computation** & **Description** \\ \hline \hline ScalarAdd\((\llbracket\mathbf{m}\rrbracket,c)\) & \(\llbracket\mathbf{m}+\mathbf{c}\rrbracket=(\mathbf{B_{m}}+\mathbf{c}, \mathbf{A_{m}})\) & Add a scalar \(c\) to a ciphertext where, \\ & & & **c** is a length-\(N\) vector with every element \(c\) \\ ScalarMult\((\llbracket\mathbf{m}\rrbracket,c)\) & \(\llbracket\mathbf{m}\cdot\mathbf{c}\rrbracket=(\mathbf{B_{m}}\cdot\mathbf{c}, \mathbf{A_{m}}\cdot\mathbf{c})\) & Multiply a scalar by a ciphertext \\ \hline PolyAdd\((\llbracket\mathbf{m}\rrbracket,\mathbf{P_{m^{\prime}}})\) & \(\llbracket\mathbf{m}+\mathbf{m^{\prime}}\rrbracket=(\mathbf{B_{m}}+\mathbf{P_{ m^{\prime}}},\mathbf{A_{m}})\) & Add an unencrypted polynomial \\ & & to a ciphertext \\ PolyMult\((\llbracket\mathbf{m}\rrbracket,\mathbf{P_{m^{\prime}}})\) & \(\llbracket\mathbf{m}\cdot\mathbf{m^{\prime}}\rrbracket=(\mathbf{B_{m}}* \mathbf{P_{m^{\prime}}},\mathbf{A_{m}}*\mathbf{P_{m^{\prime}}})\) & Multiplying an unencrypted polynomial \\ & & with a ciphertext \\ \hline HEAdd\((\llbracket\mathbf{m}\rrbracket,\llbracket\mathbf{m^{\prime}}\rrbracket)\) & \(\llbracket\mathbf{m}+\mathbf{m^{\prime}}\rrbracket=(\mathbf{B_{m}}+\mathbf{B _{m^{\prime}}},\mathbf{A_{m}}+\mathbf{A_{m^{\prime}}})\) & Add two ciphertexts \\ & \(\llbracket\mathbf{m}\cdot\mathbf{m^{\prime}}\rrbracket=\text{KeySwitch}( \mathbf{A_{m}}*\mathbf{A_{m^{\prime}}};\mathbf{evk}_{\text{mult}})+\) & Multiply two ciphertexts \\ & \((\mathbf{B_{m}}*\mathbf{B_{m^{\prime}}},\mathbf{A_{m}}*\mathbf{B_{m^{\prime}}}+ \mathbf{A_{m^{\prime}}}*\mathbf{B_{m}})\) & \\ HERotate\((\llbracket\mathbf{m}\rrbracket,r,\mathbf{evk}_{\text{rot}}^{(\mathbf{r})})\) & \(\llbracket\mathbf{m}\ll r\rrbracket=\text{KeySwitch}(\psi_{r}(\mathbf{A_{m}}), \mathbf{evk}_{\text{rot}}^{(\mathbf{r})})+\) & Circular rotate elements left by \(r\) slots \\ & \((\psi_{r}(\mathbf{B_{m}}),\mathbf{0})\) & \(\psi_{r}\) is an automorphism performed \\ HERescale\((\llbracket\mathbf{m}\rrbracket)\) & \(\llbracket\Delta^{-1}\cdot\mathbf{m}\rrbracket=(\Delta^{-1}\mathbf{B_{m}}, \Delta^{-1}\mathbf{A_{m}})\) & Restore the scale of a ciphertext \\ & & & from scale \(\Delta^{2}\) back to \(\Delta\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: HE building blocks using CKKS
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \(\log(q)\) & \(N\) & \(\log Q\) & \(L\) & \(L_{boot}\) & dnum & fftIter & \(\lambda\) \\ \hline
54 & 2\({}^{16}\) & 1728 & 23 & 17 & 3 & 4 & 128 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Practical parameters for our FHE operations.
effort. By implementing a global address space (GAS) in our GPU, we establish data sharing and form a unified GAS by combining all LDSs. The virtual address space is then mapped onto this unified GAS, with translation using a hash of the lower address bits.
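The text above specifies only that virtual addresses are translated into the unified GAS by hashing the lower address bits. The sketch below is one plausible, purely illustrative mapping in Python; the specific hash, interleaving granularity, and CU numbering are assumptions made for the sketch, not details of the GME design.

```python
# Illustrative mapping of a unified-LDS ("global address space") address to a
# home LDS slice.  MI100-like organization: 15 shader-engine routers, 8 CUs each.
NUM_SE, CUS_PER_SE, LINE_BYTES = 15, 8, 64

def lds_home(gas_addr: int):
    line = gas_addr // LINE_BYTES              # interleave at cache-line granularity
    h = (line ^ (line >> 7)) & 0x7F            # toy hash of the lower line bits (assumed)
    cu = h % (NUM_SE * CUS_PER_SE)             # one of the 120 CUs owns this line
    se, local_cu = divmod(cu, CUS_PER_SE)
    return se, local_cu, gas_addr % LINE_BYTES # (router, CU within router, byte offset)

print(lds_home(0x1A2B40))                      # which CU's LDS slice holds this address
```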
Current GPUs are designed hierarchically - e.g., MI100 GPU comprises numerous compute units, with 8 of them combined to form a _Shader Engine_ (seen in Figure 5). The proposed (**cNoC**) takes advantage of this hierarchy, utilizing a hierarchical on-chip network (illustrated in Figure 5) that features a single router for each _Shader Engine_, connecting the eight compute units that make up a _Shader Engine_. The MI100 GPU houses 15 _Shader Engines_, resulting in a total of 120 compute units. The routers are arranged in a \(3\times 5\) 2D grid and interconnected through a torus topology. While this _concentrated-torus_ topology [10, 39] can increase network complexity, it reduces the number of required routers (from 120 to 15), thereby minimizing the chip area needed for the network. In a concentrated-torus topology, all routers have the same degree (number of ports), creating an edge-symmetric topology that is well-suited for the all-to-all communication patterns of FHE workloads.
Figure 4(a) illustrates the conventional approach of data sharing, where memory transactions must traverse through the full memory hierarchy to share data between neighboring LDS. In contrast, our proposed CU-side interconnect, presented in Figure 4(b), incorporates on-chip routers that circumvent off-chip interconnects, improving data reuse. This results in a decrease of redundant memory operations by 38%, effectively supporting the all-to-all communication pattern commonly seen in FHE workloads.
### Enhancing the Vector ALU
**Native modular reduction extension: (MOD)** The existing GPU arithmetic pipeline is highly optimized for data manipulation operations like _multiply_, _add_, _bit-shift_, and _compare_. A wavefront executing any of these instructions takes 4 clock cycles in a lock-step manner in the SIMD units. In a single wavefront consisting of 64 threads, 16 threads are executed concurrently on the SIMD units during each clock cycle. Conversely, operations like _divide_ and _modulus_ are emulated using a series of native instructions, resulting in considerably slower performance compared to their native counterparts.
As stated in Section 2.2, the modular reduction operation, used for determining the remainder of a division, is performed after each addition and multiplication. As a result, optimizing modular reduction is crucial for speeding up FHE workloads. At present, the MI100 GPU executes a modular operation through a sequence of addition, multiplication, bit shift, and conditional operations, drawing on the conventional Barrett's reduction algorithm [48]. This operation currently takes a considerable amount of time, with the mod-red operation requiring an average of 46 cycles for execution on the MI100 GPU. In our study, we suggest enhancing the Vector ALU pipeline within the CDNA architecture to natively support modular reduction, which brings it down to an average of 17 cycles for each mod-red instruction. We augment the CDNA instruction set architecture (ISA) with a collection of vector instructions designed to perform modular reduction operations natively after addition or multiplication operations. The new native modular instructions proposed include:
* Native modular reduction: \(\texttt{mod-red}\)\(\texttt{<v0,s0>}\)\(|\)\(\mathbf{V}_{0}=\mathbf{V}_{0}\) mod \(s_{0}\)
* Native modular addition: \(\texttt{mod-add}\)\(\texttt{<v0,v1,s0>}\)\(|\)\(\mathbf{V}_{0}=(\mathbf{V}_{0}+\mathbf{V}_{1})\) mod \(s_{0}\)
* Native modular multiplication: \(\texttt{mod-mult}\)\(\texttt{<v0,v1,s0>}\)\(|\)\(\mathbf{V}_{0}=(\mathbf{V}_{0}\times\mathbf{V}_{1})\) mod \(s_{0}\)
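The per-lane semantics of these three instructions can be summarized with the short reference model below (Python). It only models *what* the instructions compute, one result per wavefront lane; the hardware produces the same results through the single-comparison Barrett datapath described next, not through a division, and the assumption that mod-add/mod-mult operands are already reduced follows the usual RNS limb convention rather than anything stated explicitly in the text.

```python
# Reference semantics, one entry per lane of a wavefront.  V0/V1 model 64-bit
# integer vector registers and s0 the scalar modulus operand.
def v_mod_red(V0, s0):            # mod-red  <v0, s0>    : V0 <- V0 mod s0
    return [x % s0 for x in V0]   # input lanes may be arbitrary 64-bit values

def v_mod_add(V0, V1, s0):        # mod-add  <v0, v1, s0>: V0 <- (V0 + V1) mod s0
    return [(a + b) % s0 for a, b in zip(V0, V1)]   # operands assumed < s0

def v_mod_mult(V0, V1, s0):       # mod-mult <v0, v1, s0>: V0 <- (V0 * V1) mod s0
    return [(a * b) % s0 for a, b in zip(V0, V1)]   # operands assumed < s0
```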
Modular reduction involves several comparison operations, resulting in branch divergence in GPUs. Our implementation is derived from an improved Barrett's reduction algo
Figure 4: Inter-CU communication: Traditional vs proposed communication with on-chip network
Figure 5: Proposed hierarchical on-chip network featuring a concentrated 2D torus topology
rithm [76]. This approach minimizes the number of comparison operations to one per modular reduction operation, significantly reducing the number of branch instructions and enhancing compute utilization.
**Wider multiply-accumulate units (WMAC):** In the CKKS FHE scheme, we can choose to perform operations on 32, 64, or 128-bit wide RNS limbs for a ciphertext. This limb bit width governs the operand size for the vector ALUs, impacting the number of modular addition and multiplication operations required. Moreover, there is an algorithmic-level performance versus precision trade-off to consider when deciding on the bit width. If we opt for 32-bit wide RNS limbs, we will have numerous limbs to work with, increasing the available levels [2] while simultaneously reducing the achievable precision for an application. Conversely, if we select 128-bit RNS limbs, we will have fewer limbs to work with, resulting in fewer available levels but higher precision for an application. With our chosen parameters, using 128-bit wide RNS limbs would leave us with an insufficient number of limbs to perform a single bootstrapping operation. To strike a balance between performance and precision, we choose to use 64-bit wide RNS limbs in this work.
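The trade-off can be seen with a rough back-of-the-envelope sketch; the 1728-bit total modulus budget and the one-limb-per-level rule below are simplifying assumptions for illustration, not the parameters used in this work:

```python
TOTAL_MODULUS_BITS = 1728   # hypothetical budget fixed by the security target

for limb_bits in (32, 64, 128):
    limbs = TOTAL_MODULUS_BITS // limb_bits
    levels = limbs - 1      # crude model: roughly one RNS limb is consumed per level
    print(f"{limb_bits:>3}-bit limbs -> {limbs:>2} limbs, ~{levels} multiplicative levels")
# Wider limbs leave fewer levels but a larger scaling factor, i.e. more precision per level.
```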
Most GPUs in the market natively support 16-, 32-, and 64-bit floating point computations as well as 4-, 8-, 32-bit integer computations. Unfortunately, they lack dedicated hardware support for 64-bit integer operations, the most common operation in FHE workloads. Instructions for processing 64-bit integer operands are emulated using multiple 32-bit integer instructions, making them comparatively slower. To complement our native modular reduction, which relies on 64-bit integer operations, we add support for hardware-backed 64-bit integer multiplier and accumulator, as well as widen the register-file size to accommodate the large ciphertexts. Table 4 demonstrates the decrease in total cycles for each of our proposed native modular instructions in comparison to the MI100 GPU-emulated instructions in the baseline (vanilla) configuration.
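To illustrate the cost of emulation (a schoolbook sketch, not the CDNA microcode), a full 64x64-bit product can be assembled from four 32x32-bit partial products plus carry handling, whereas a native 64-bit multiplier handles it in a single instruction:

```python
MASK32 = (1 << 32) - 1

def mul64_via_32bit(a: int, b: int) -> int:
    """Assemble a 64x64 -> 128-bit product from 32x32 -> 64-bit pieces."""
    a_lo, a_hi = a & MASK32, a >> 32
    b_lo, b_hi = b & MASK32, b >> 32
    lo_lo = a_lo * b_lo                      # four partial products ...
    lo_hi = a_lo * b_hi
    hi_lo = a_hi * b_lo
    hi_hi = a_hi * b_hi
    mid = (lo_lo >> 32) + (lo_hi & MASK32) + (hi_lo & MASK32)   # ... plus carry propagation
    low = (lo_lo & MASK32) | ((mid & MASK32) << 32)
    high = hi_hi + (lo_hi >> 32) + (hi_lo >> 32) + (mid >> 32)
    return (high << 64) | low

a, b = 0xDEADBEEFDEADBEEF, 0x0123456789ABCDEF
assert mul64_via_32bit(a, b) == a * b
```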
Prior studies [84, 28] argued that dedicating resources to specialized 64-bit integer cores was not justifiable in terms of opportunity cost, as workloads at the time did not necessitate INT64 support, and emulation with 32-bit cores was sufficient. However, in the context of FHE, we maintain that the performance improvements attained through using an upgraded vector ALU justify the additional chip resources allocated.
### LABS: Locality-Aware Block Scheduler
So far, our microarchitectural extensions primarily focused on optimizing individual FHE blocks. To better leverage these new features, we focus next on inter-block optimization opportunities, targeting the workgroup dispatcher within the CDNA architecture. GPU scheduling is typically managed using streams of blocks that are scheduled on compute units in a greedy manner [9]. The presence of large GPU register files allows the scheduler to oversubscribe blocks to each compute unit. However, the existing scheduler within the CDNA architecture is not cognizant of inter-block data dependencies, forcing cache flushes when transitioning from one block to the next.
We propose a Locality-Aware Block Scheduler (LABS) designed to schedule blocks with shared data together, thus avoiding redundant on-chip cache flushes, specifically in the LDS. LABS further benefits from our set of microarchitectural enhancements, which relax the operational constraints during block scheduling and create new opportunities for optimization (for instance, the **cNoC** feature enables LDS data to be globally accessible across all CUs, thereby allowing the scheduler to assign blocks to any available CU). To develop LABS, we employ a well-known graph-based mapping solution and frame the problem of block mapping to CUs as a compile-time Graph Partitioning Problem (GPP) [80, 85].
**Graph Partitioning Problem:** To develop our locality-aware block scheduler, we use two graphs. Let \(G=G(V,E)\) represent a directed acyclic compute graph with vertices \(V\) (corresponding to FHE blocks) and edges \(E\) (indicating the data dependencies of the blocks). Similarly, let \(G_{a}=G_{a}(V_{a},E_{a})\) denote an undirected graph with vertices \(V_{a}\) (representing GPU compute units) and edges \(E_{a}\) (illustrating the communication links between compute units). Both edge sets, \(E\) and \(E_{a}\), are assumed to be weighted, with edge weights of \(E\) signifying the size of data transferred between related blocks, and \(E_{a}\) representing the bandwidth of communication between corresponding compute units. We can then define \(\pi:V\to V_{a}\) as a mapping of \(V\) into \(V_{a}\) disjoint subsets. Our objective is to find a mapping \(\pi\) that minimizes communication overhead between compute units.
We formulate our Graph Partitioning Problem (GPP) by introducing a cost function \(\Phi\). For a graph \(G\), if it is partitioned such that \(E_{c}\) denotes the set of edge cuts, then \(\Phi\) can be expressed as the sum of the individual cut-edge weights (with \((v,w)\) representing the edge-weight of the edge connecting node \(v\) to node \(w\)). The cost function \(\Phi\) reflects the communication overhead associated with assigning FHE blocks to separate compute units. The goal of the graph partitioning problem is to discover a partition that evenly distributes the load across each compute unit while minimizing the communication cost \(\Phi\).
\[\Phi=|E_{c}|=\sum_{(v,w)\in E_{c}}|(v,w)|\]
In this equation, \(|(v,w)|\) signifies the data transferred between FHE blocks. To partition the compute graph and prepare it for mapping onto the architecture graph, we utilize a multilevel mesh partitioning technique. For readers interested in further insights into our implementation of the multi-level mesh partitioning algorithm, we recommend referring to the work of Walshaw and Cross [85].

\begin{table}
\begin{tabular}{l|c c c} \hline \(\mu\)-**arch. Feature** & mod-red (cycles) & mod-add (cycles) & mod-mul (cycles) \\ \hline
**Vanilla MI100** & 46 & 62 & 63 \\
**MOD** & 26 & 18 & 38 \\
**MOD+WMAC** & 17 & 7 & 23 \\ \hline \end{tabular}
\end{table}
Table 4: Cycle counts for 64-bit modulus instructions comparing MOD and WMAC features
**Architecture-aware mapping:** In this work, we focus on mapping our partitioned subgraphs onto the set of compute units \(V_{a}\), where communication costs (both latency and bandwidth) are not uniformly distributed across the network [75]. To uniformly distribute the communication overheads across the network, we introduce a network cost function \(\Gamma\). Here, \(\Gamma\) is defined as the product of individual cut-weights and their corresponding edge-weights in the architecture graph when mapped using a mapping function \(\pi\). Formally, \(\Gamma\) is described as:
\[\Gamma=\sum_{(v,w)\in E_{c}}|(v,w)|.|(\pi(v),\pi(w))|\]
In this equation, \(\pi(v)\) represents the mapping of block \(v\) to a compute unit from the set \(V_{a}\), after applying the mapping function \(\pi\). Additionally, \(|(\pi(v),\pi(w))|\) represents the communication bandwidth between compute units \(\pi(v)\) and \(\pi(w)\). Similar to our analysis with \(\Phi\), our goal is to minimize \(\Gamma\). To accomplish this, we use a compile-time optimization by applying _simulated annealing_, alongside mesh partitioning, to map FHE blocks onto compute units efficiently. The evaluation of performance improvements by incorporating the **LABS** is discussed further in Section 4.
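A toy sketch of the two objectives follows; the block names, edge weights, and compute-unit graph are made-up placeholders, and the actual LABS pass uses multilevel mesh partitioning with simulated annealing rather than this brute-force evaluation:

```python
# Compute-graph edges: data (e.g., MB) moved between FHE blocks.
compute_edges = {("ntt0", "mult0"): 4.0, ("ntt1", "mult0"): 4.0, ("mult0", "intt0"): 4.0}
# Architecture-graph edge weights |(pi(v), pi(w))| between compute units.
arch_edge_weight = {("cu0", "cu1"): 1.0, ("cu1", "cu2"): 1.0, ("cu0", "cu2"): 0.5}

def phi(mapping):
    """Edge-cut cost: total data crossing a partition boundary."""
    return sum(w for (v, u), w in compute_edges.items() if mapping[v] != mapping[u])

def gamma(mapping):
    """Architecture-aware cost: cut data weighted by the architecture-graph edge weight."""
    total = 0.0
    for (v, u), w in compute_edges.items():
        if mapping[v] != mapping[u]:
            link = tuple(sorted((mapping[v], mapping[u])))
            total += w * arch_edge_weight[link]
    return total

mapping = {"ntt0": "cu0", "ntt1": "cu0", "mult0": "cu1", "intt0": "cu1"}
print(phi(mapping), gamma(mapping))   # 8.0 8.0 for this toy mapping
```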
## 4 Evaluation
In this section, we first give a concise overview of the GPU simulator employed to model our microarchitectural extensions. Next, we outline the evaluation methodology assumed to assess the performance of our bootstrapping and other workload implementations. Finally, we present evaluation results.
### The NaviSim and BlockSim Simulators
In our work, we leverage NaviSim [11], a cycle-level execution-driven GPU architecture simulator. NaviSim faithfully models the CDNA architecture by implementing a CDNA ISA emulator and a detailed timing simulator of all the computational components and memory hierarchy. NaviSim utilizes the Akita simulation engine [81] to enable modularity and high-performance parallel simulation. NaviSim is highly configurable and accurate and has been extensively validated against an AMD MI100 GPU. As an execution-driven simulator, NaviSim recreates the execution results of GPU instructions during simulation with the help of an instruction emulator for CDNA ISA [7, 12]. Currently, NaviSim supports kernels written in both OpenCL [43] and the HIP programming language [9]. For our experiments, we implement our kernels using OpenCL. NaviSim can generate a wide range of output data to facilitate performance analysis. For performance metrics related to individual components, NaviSim reports instruction counts, average latency spent accessing each level of cache, transaction counts for each cache, TLB transaction counts, DRAM transaction counts, and read/write data sizes. For low-level details, NaviSim can generate instruction traces and memory traces. Finally, NaviSim can produce traces using the Daisen format so that users can use Daisen, a web-based visualization tool [82], to inspect the detailed behavior of each component.
We enhance NaviSim's capabilities by incorporating our new custom kernel-level simulator, BlockSim. BlockSim is designed to enable us to identify inter-kernel optimization opportunities. With an adjustable sampling rate for performance metrics, BlockSim accelerates simulations, facilitating more efficient design space exploration. BlockSim generates analytical models of the FHE Blocks to provide estimates for run times of various GPU configurations. When the best design parameters are identified, NaviSim is then employed to generate cycle-accurate performance metrics. Besides supporting FHE workloads, BlockSim serves as an essential component of NaviSim by abstracting low-level implementation details from the user, allowing them to focus on entire workloads rather than individual kernels. BlockSim enables restructuring of the wavefront scheduler and integrates compile-time optimizations obtained from **LABS**. We utilize AMD's CDNA architecture-based MI100 GPU to create a baseline for FHE application evaluations. We further validate our BlockSim findings with the MI100 GPU.
### Experimental Setup
In our experiments, we determine our baseline performance using an AMD MI100 CDNA GPU (see table 5). We then iteratively introduce microarchitectural extensions and evaluate the performance benefits of each enhancement. We first evaluate our three microarchitectural extensions (**cNoC**, **MOD**, **WMAC**), then evaluate our compile-time optimization **LABS**, and conclude with a memory size exploration to determine the impact of on-chip memory size on FHE workloads. We evaluate these microarchitectural enhancements and compiler optimization using NaviSim and BlockSim. To determine the power and area overhead of our proposed microarchitectural components, we implement them in RTL. Utilizing Cadence Genus Synthesis Solutions, we synthesize these RTL components targeting an ASAP7 technology library [22] and determine the area and power consumption for each proposed microarchitectural element.
\begin{table}
\begin{tabular}{l c} \hline
**Parameter** & **Value** \\ \hline GPU Core Freq & 1502 MHz \\ Process Size & 7 nm \\ TFLOPS & 23.07 \\ \hline Register File & 15 MB \\ CU count & 120 \\ L1 Vector Cache & 16 KB per CU \\ L1 Scalar Cache & 16 KB \\ L1 Inst Cache & 32 KB \\ Shared L2 & 8 MB \\ LDS & 7.5 MB \\ GPU Memory & 32 GB HBM2 \\ Mem Bandwidth & 1229 GB/s \\ \hline Host CPU & AMD EPYC 7002 \\ Host OS & Ubuntu 18.04 \\ GPU Driver & AMD ROCm 5.2.5 \\ \hline \end{tabular}
\end{table}
Table 5: MI100 GPU Parameters

We first evaluate our bootstrapping implementation performance, utilizing the _amortized mult time per slot_ metric [41]. This metric has been used frequently in the past to perform a comparison between different bootstrapping implementations. We can compute this metric as follows:
\[\mathbf{T}_{A,S}=\frac{\mathbf{T}_{\text{boot}}+\sum_{\ell=1}^{L-L_{\text{boot}}}\mathbf{T}_{\text{mult}}(\ell)}{L-L_{\text{boot}}}\cdot\frac{1}{n} \tag{1}\]
Here, \(\mathbf{T}_{\text{boot}}\) stands for total bootstrapping runtime, and \(L_{\text{boot}}\) stands for the number of levels that the bootstrapping operation consumes. The rest of the parameters are defined in Table 1. The parameters that we use in our implementation have \(L_{\text{boot}}=17\) and \(n=2^{15}\). In addition, we analyze the performance of two workloads: HE-based logistic regression (HELR) [35] and encrypted ResNet-20 [50] utilizing the CIFAR-10 dataset. For all three workloads, we evaluate the contributions of each individual FHE building block (see Table 2) that makes up the respective workload. In addition, for these workloads, we report the performance benefits achieved by employing each of the proposed microarchitectural enhancements.
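Equation (1) is straightforward to evaluate; in the sketch below the per-level multiplication times and the total level count are placeholders, not measured values (only \(\mathbf{T}_{\text{boot}}\), \(L_{\text{boot}}\), and \(n\) match numbers stated in this paper):

```python
def amortized_mult_time_per_slot(t_boot_ms, t_mult_ms, total_levels, boot_levels, slots):
    """Equation (1): amortize T_boot plus the per-level HEMult times over the usable
    levels and the number of slots; returns nanoseconds per slot."""
    usable = total_levels - boot_levels
    total_ms = t_boot_ms + sum(t_mult_ms(level) for level in range(1, usable + 1))
    return total_ms / usable / slots * 1e6   # ms -> ns

print(amortized_mult_time_per_slot(
    t_boot_ms=33.63,                  # GME bootstrapping time (Table 8)
    t_mult_ms=lambda level: 0.9,      # assumed flat per-level HEMult cost (placeholder)
    total_levels=35, boot_levels=17,  # L_boot = 17 from the text; L = 35 is an assumption
    slots=2**15))                     # n = 2^15
```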
Footnote 1: In this section, we refer to the CPU implementation as Lattigo, the GPU implementation as 100x, and the CraterLake ASIC design as CL. For the other accelerators, we use the full names from the respective papers.
We also compare our implementations with other state-of-the-art CKKS accelerators, incorporating a diverse selection of CPU [16, 62], GPU [62, 41, 27], FPGA [1], and ASIC [44, 45, 69, 70] platforms.1 Table 6 presents a detailed comparison of the key architectural parameters across all the related works. Table 6 also showcases the distribution of chip area and power requirements for each microarchitectural enhancement of GME. Since the maximum operating frequency \(F_{max}\) of our microarchitectural enhancements (1.63 GHz) is greater than the typical operating frequency of the MI100 GPU (1.5 GHz), we do not expect our extensions to change the critical path timings of the MI100 design. It is essential to emphasize that operating frequencies differ across various designs, a crucial factor to consider when comparing execution times in absolute terms. Moreover, the ASIC designs make use of large on-chip memory, resulting in an expensive solution, and they are also not as flexible as CPU, GPU, and FPGA.
Footnote 2: The values displayed here exclude contributions from the LABS optimization, as LABS is an _inter-block_ optimization, and the metrics provided are intended for individual blocks.
### Results
**Performance of FHE Building Blocks:** We begin by comparing the performance of individual FHE blocks with the previous state-of-the-art GPU implementation [41]. Since these are individual FHE blocks, the reported metrics do not account for our inter-block **LABS** compiler optimization. We find that HEMult and HERotate are the most expensive operations, as they require key switching operations that involve the most data transfers from the main memory. The next most expensive operation is HERescale, where the runtime is dominated by the compute-intensive NTT operations.

Across the five FHE blocks mentioned in Table 7, we achieve an average speedup of \(6.4\times\) compared to the 100x implementation. In particular, we see a substantial performance improvement in the most expensive operations, namely HEMult and HERotate, as our proposed microarchitectural enhancements reduce the data transfer time by \(12\times\) for both blocks. For HERescale, we manage to decrease the average memory transaction latency by \(13\times\) using our microarchitectural enhancements to the on-chip network, **cNoC**, making HERescale the block with the largest speedup over the 100x GPU implementation.
Table 6: Comparison of key architectural parameters (technology node, word size, on-chip memory, operating frequency, chip area, and power) across related CPU, GPU, FPGA, and ASIC accelerators and GME, together with the per-extension area and power breakdown of the proposed GME microarchitectural enhancements.
**Impact of Microarchitectural Extensions:** Figures 6 and 7 highlight the impact of each of our proposed microarchitectural extensions as well as our compile-time optimizations across three different workloads, i.e., bootstrapping, HE-LR, and ResNet-20.
First, our proposed concentrated 2D torus network enables ciphertexts to be preserved in on-chip memory across kernels, leading to a significant increase in compute unit utilization across workloads, thereby reducing the average cycles consumed per memory transaction (see Avg. CPT in Figure 6). In fact, when comparing the average number of cycles spent per memory transaction (average CPT), we observe that the ResNet-20 workload consistently displays a lower average CPT value compared to the HE-LR workload. This indicates a higher degree of data reuse within the ResNet-20 workload across FHE blocks as opposed to the HE-LR workload. With **cNoC** enhancement, as the data required from previous kernels is retained in the on-chip memory, CUs are no longer starved for data and this also results in a substantial decrease in DRAM bandwidth utilization and DRAM traffic (the total amount of data transferred from DRAM). The L1 cache utilization decreases notably across all three workloads for the **cNoC** microarchitectural enhancement. This is due to the fact that the LDS bypasses the L1 cache, and memory accesses to the LDS are not included in the performance metrics of the L1 cache.
The proposed **MOD** extension enhances the CDNA ISA by adding new instructions. These new instructions are complex instructions that implement commonly used operations in FHE, like mod-red, mod-add, and mod-mult. As these instructions are complex (composed of multiple sub-instructions), they consume a higher number of cycles than comparatively simpler instructions such as mult or add. This is the reason for the increase in the average cycles per instruction (CPI) metric shown in Figure 6.
The compile-time **LABS** optimization in our approach further removes redundant memory transactions by scheduling blocks that share data together, thus reducing total DRAM traffic and enhancing CU utilization. **LABS** takes advantage of the on-chip ciphertext preservation enabled by our **cNoC** microarchitectural enhancement. Across bootstrapping, HE-LR, and ResNet-20 workloads, **LABS** consistently delivers an additional speedup of over 1.5\(\times\) on top of **cNoC** and **MOD** (See Figure 7).
**Performance Comparison:** We compare the performance of GME with 100x implementation of FHE workloads in Table 8. GME surpasses the previous best GPU-based implementation for bootstrapping and HE-LR by factors of 15.7\(\times\) and 14.2\(\times\), respectively. Note that we do not compare the performance of ResNet-20 workload with 100x, as they do not implement this workload. With close to double the on-chip memory (LDS), and similar peak memory bandwidth, our microarchitectural extensions paired with our compiler optimization delivered significant performance improvement across all three FHE workloads. GME significantly outperforms the CPU implementation Lattigo by 514\(\times\), 1165\(\times\), and 427\(\times\) for bootstrapping, HE-LR, and ResNet-20 workloads, respectively. We assessed Lattigo's performance by executing workloads on an Intel 8th-generation Xeon Platinum CPU with 128 GB of DDR4 memory.
In addition, GME outperforms the FPGA design implementation of FHE workloads, called FAB [1], by 2.7\(\times\) and 1.9\(\times\) for bootstrapping and HE-LR workloads, respectively. A primary factor contributing to this acceleration is the low operating frequency of FPGAs (the Alveo U280 used in FAB operates at 300MHz, while GME cores can achieve peak frequencies of 1.5GHz [21]). In their work, FAB scales their implementation to 8 FPGAs for the HE-LR workload (referred to as FAB-2). GME surpasses FAB-2 by 1.4\(\times\). This occurs because, when the intended application cannot be accommodated on a single FPGA device, considerable communication overheads negate the advantages of scaling out.

Figure 6: Influence of individual proposed microarchitectural extension on architectural performance metrics. Metrics illustrate a cumulative profile where each enhancement builds upon the preceding set of improvements

Figure 7: Speedup achieved from each microarchitectural extension. The baseline refers to a vanilla MI100 GPU. The reported speedup is cumulative, with each microarchitectural enhancement building upon the previous ones
However, GME does not outperform all ASIC implementations shown in Table 8. While it achieves an average speedup of 18.7\(\times\) over F1 for the HE-LR workload, it falls short in comparison to BTS, CL, and ARK due to their large on-chip memory and higher HBM bandwidths. ASIC implementations are tailored for a single workload. Their customized designs lack flexibility, so they cannot easily accommodate multiple workloads across domains. Cutting-edge implementations such as ARK [44] integrate the latest HBM3 technology, enabling them to utilize nearly twice the memory bandwidth available in HBM3, as compared to HBM2 used on MI100 GPUs. CraterLake (CL) [70] incorporates extra physical layers (PHY) to facilitate communication between DRAM and on-chip memory, thereby enhancing the available bandwidth for FHE workloads. In this paper, we limit our focus to an existing HBM model compatible with the CDNA architecture without modifications to the physical communication layers.
**On-chip Memory Size Exploration:** Finally, we look for the ideal on-chip memory (LDS) size for the FHE workload, as shown in Figure 8. By increasing the total LDS size from 7.5MB (the current LDS size on the MI100 GPU) to 15.5MB, we achieve speedups of 1.74\(\times\), 1.53\(\times\), and 1.51\(\times\) for Bootstrapping, HE-LR, and ResNet-20 workloads, respectively. However, increasing the LDS size beyond 15.5 MB does not result in substantial speedup, as DRAM bandwidth becomes a bottleneck.
## 5 Discussion
In the field of accelerator design, developing general-purpose hardware is of vital importance. Rather than creating a custom accelerator specifically for FHE, we focus on extending the capabilities of existing GPUs to take advantage of the established ecosystems for GPUs. General-purpose hardware, such as GPUs, reaps the benefits of versatile use of all microarchitectural elements present on the GPU. In this section, we demonstrate the potential advantages of the proposed microarchitectural enhancements across various domains, confirming the importance of these microarchitectural features. Our observations are based on prior works, which highlight the potential benefits of similar optimizations across diverse workloads. We evaluate the influence of each optimization by examining whether a workload exhibits communication overheads, high data reuse, modular reduction, or integer arithmetic. Table 9 presents an overview of our findings, highlighting the potential advantages of the proposed microarchitectural extensions across an array of other workloads.
Figure 8: Exploring the impact of on-chip memory size on FHE workload performance

\begin{table}
\begin{tabular}{l|l|l l l l} \hline
**Accelerator** & **Arch.** & \(\mathbf{T}_{A,S}\) (_ns_) & **Boot** (_ms_) & **HE-LR** (_ms_) & **ResNet-20** (_ms_) \\ \hline \hline
Lattigo [59] & CPU & 8.8\(e\)4 & 3.9\(e\)4 & 23293 & - \\
HyPHEN [62] & CPU & 2110 & 2.1\(e\)4 & - & 3.7\(e\)4 \\ \hline
F1 [69] & ASIC & 2.6\(e\)5 & Yes\({}^{\dagger}\) & 1024 & - \\
BTS [45] & ASIC & 45 & 58.9 & 28.4 & 1910 \\
CL [70] & ASIC & 17 & 4.5 & 15.2 & 321 \\
ARK [44] & ASIC & 14 & 3.7 & 7.42 & 125 \\ \hline
FAB [1] & FPGA & 470 & 92.4 & 103 & - \\ \hline
100x [41] & V100 & 740 & 528 & 775 & - \\
HyPHEN [62] & V100 & - & 830 & - & 1400 \\
T-FHE [27] & A100 & 404 & 157 & 178 & 3793 \\
Baseline & MI100 & 863 & 413 & 658 & 9989 \\
**GME** & **MI100+** & **74.5** & **33.63** & **54.5** & **982** \\ \hline \end{tabular}

\({}^{\dagger}\)F1 is limited to a single-slot bootstrapping, while others support packed bootstrapping.
\end{table}
Table 8: HE workloads execution time comparison of proposed GME extensions with other architectures

The recent Hopper architecture by NVIDIA for the H100 GPU introduced a feature termed DSMEM (Distributed Shared Memory). This allows the virtual address space of shared memory to be logically spread out across various SMs (streaming multiprocessors) [26]. Such a configuration promotes data sharing between SMs, similar to the **cNoC** feature we introduced. However, the details of the SM-to-SM network for DSMEM are not publicly available and, to the best of our knowledge, the SM-to-SM connectivity is not global but limited to a Thread Block Cluster comprised of 8 SMs. In contrast, the **cNoC** proposed by us enables global connectivity among all 120 CUs in our MI100 GPU, enabling efficient all-to-all communication. For enhancing FHE performance, it is crucial to substantially reduce the latency of SM-to-SM communication. We aim to conduct a detailed analysis comparing the inter-SM communication overheads of the H100 GPU to those of GME in future work.
## 6 Related Work
**CPU/GPU implementations:** Several algorithmic implementations, such as Lattigo [58], SEAL [73], HEXL [15], HEAAN [20], HELib [13, 34], and PALISADE [64], have recently been proposed for FHE using the CKKS scheme. Despite the efforts put forth by these libraries, a CPU-based implementation of FHE remains infeasible due to the relatively limited computational power of CPUs.
PRIFT [3] and the work by Badawi et al. [5] aim to accelerate FHE using NVIDIA GPUs. Although they support most HE blocks, they do not accelerate bootstrapping. 100x [41] speeds up all HE blocks, including bootstrapping. While 100x optimizes off-chip memory transactions through _kernel-fusions_, their implementation still results in redundant memory transactions due to the partitioned on-chip memory of the V100. Locality-aware block scheduling [51] has been proposed in GPUs to maximize locality within each core; however, LABS maximizes locality by exploiting the globally shared LDS through the proposed **cNoC**.
**FPGA accelerators:** Multiple prior efforts [46, 47, 66, 68] have developed designs for FHE workloads. However, most of them either do not cover all HE primitives or only support smaller parameter sets that allow computation up to a multiplicative depth of 10. HEAX [66] is an FPGA-based accelerator that only speeds up CKKS encrypted multiplication, with the remainder offloaded to the host processor.
FAB demonstrates performance comparable to the previous GPU implementation, 100x [41], and ASIC designs BTS [45] and F1 [69] for certain FHE workloads. Although FPGAs show great potential for accelerating FHE workloads, they are limited by low operating frequencies and compute resources. Furthermore, the substantial communication overhead and the time required to program the FPGA discourages their wide-scale deployment [63].
**ASIC accelerators:** There exist several recent ASIC designs including F1 [69], CraterLake [70], BTS [45], and ARK [44] that accelerate the CKKS FHE scheme. The F1 implementation makes use of small \(N\) and \(Q\) values, implementing only a single-slot bootstrapping. BTS is the first ASIC proposal demonstrating the performance of a fully-packed CKKS bootstrapping. The CraterLake and ARK designs further enhance the packed CKKS bootstrapping performance and demonstrate several orders of magnitude of performance improvement across various workloads.
## 7 Conclusion
In this work, we present an ambitious plan for extending existing GPUs to support FHE. We propose three novel microarchitectural extensions followed by compiler optimization. We suggest a 2D torus on-chip network that caters to the all-to-all communication patterns of FHE workloads. Our native modular reduction ISA extension reduces the latency of modulus reduction operation by 43%. We enable native support for 64-bit integer arithmetic to mitigate math pipeline throttling. Our proposed BlockSim simulator enhances the capabilities of the open-source GPU simulator, NaviSim, allowing for coarse-grained simulation for faster design space exploration. Overall, comparing against previous state-of-the-art GPU implementations [41], we obtain an average speedup of 14.6\(\times\) across workloads as well as outperform the CPU, the FPGA, and some ASIC implementations.
## Acknowledgments
This research was supported in part by the Institute for Experiential AI and the NSF IUCRC Center for Hardware and Embedded Systems Security and Trust (CHEST), NSF CNS 2312275, NSF CNS 2312276, and by Samsung Advanced Institute of Technology, Samsung Electronics Co., Ltd. Additionally, we acknowledge the financial assistance from grant RYC2021-031966-I funded by MCIN/AEI/10.13039/501100011033, and the "European Union NextGenerationEU/PRTR."
|
2308.16447 | Non-simple systoles on random hyperbolic surfaces for large genus | In this paper, we investigate the asymptotic behavior of the non-simple
systole, which is the length of a shortest non-simple closed geodesic, on a
random closed hyperbolic surface on the moduli space $\mathcal{M}_g$ of Riemann
surfaces of genus $g$ endowed with the Weil-Petersson measure. We show that as
the genus $g$ goes to infinity, the non-simple systole of a generic hyperbolic
surface in $\mathcal{M}_g$ behaves exactly like $\log g$. | Yuxin He, Yang Shen, Yunhui Wu, Yuhao Xue | 2023-08-31T04:26:43Z | http://arxiv.org/abs/2308.16447v1 | # Non-simple systoles on random hyperbolic surfaces for large genus
###### Abstract.
In this paper, we investigate the asymptotic behavior of the non-simple systole, which is the length of a shortest non-simple closed geodesic, on a random closed hyperbolic surface on the moduli space \(\mathcal{M}_{g}\) of Riemann surfaces of genus \(g\) endowed with the Weil-Petersson measure. We show that as the genus \(g\) goes to infinity, the non-simple systole of a generic hyperbolic surface in \(\mathcal{M}_{g}\) behaves exactly like \(\log g\).
## 1. Introduction
The study of closed geodesics on hyperbolic surfaces has deep connection to their spectral theory, dynamics and hyperbolic geometry. Let \(X=X_{g}\) be a closed hyperbolic surface of genus \(g\geq 2\). The systole of \(X\), the length of a shortest closed geodesic on \(X\), is always realized by a simple closed geodesic, i.e. a closed geodesic without self-intersections. The _non-simple systole_\(\ell^{ns}_{sys}(X)\) of \(X\) is defined as
\[\ell^{ns}_{sys}(X)=\min\{\ell_{\alpha}(X);\ \alpha\subset X\text{ is a non-simple closed geodesic}\}\]
where \(\ell_{\alpha}(X)\) is the length of \(\alpha\) in \(X\). It is known that \(\ell^{ns}_{sys}(X)\) is always realized by a figure-eight closed geodesic in \(X\) (see e.g. [1, Theorem 4.2.4]). In this work, we view the non-simple systole as a random variable on the moduli space \(\mathcal{M}_{g}\) of Riemann surfaces of genus \(g\) endowed with the Weil-Petersson probability measure \(\operatorname{Prob}^{g}_{\operatorname{WP}}\). This subject was initiated by Mirzakhani in [10, 11], based on her celebrated thesis works [10, 11]. Firstly it is known that for all \(g\geq 2\), \(\inf_{X\in\mathcal{M}_{g}}\ell^{ns}_{sys}(X)=2\operatorname{arccosh}(3)\approx 3.52...\) (see e.g. [1, 12]) and \(\sup_{X\in\mathcal{M}_{g}}\ell^{ns}_{sys}(X)\asymp\log g\) (see e.g. [1, 13]). In this paper, we show that as \(g\) goes to infinity, a generic hyperbolic surface in \(\mathcal{M}_{g}\) has non-simple systole behaving like \(\log g\). More precisely, let \(\omega:\{2,3,\cdots\}\to\mathbb{R}^{>0}\) be any function satisfying
\[\lim_{g\to\infty}\omega(g)=+\infty\text{ and }\lim_{g\to\infty}\frac{\omega(g)}{ \log\log g}=0. \tag{1}\]
**Theorem 1**.: _For any \(\omega(g)\) satisfying (1), the following limit holds:_
\[\lim_{g\to\infty}\operatorname{Prob}^{g}_{\operatorname{WP}}\big{(}X\in \mathcal{M}_{g};\ |\ell^{ns}_{sys}(X)-(\log g-\log\log g)|<\omega(g)\big{)}=1.\]
_Remark_.: It was shown in [10, Theorem 4] that for any \(\epsilon>0\),
\[\lim_{g\to\infty}\operatorname{Prob}^{g}_{\operatorname{WP}}\big{(}X\in \mathcal{M}_{g};\ (1-\epsilon)\log g<\ell^{ns}_{sys}(X)<2\log g\big{)}=1.\]
As a direct consequence of Theorem 1, as \(g\) goes to infinity, the asymptotic behavior of the expected value of \(\ell^{ns}_{sys}(\cdot)\) over \(\mathcal{M}_{g}\) can also be determined.
**Theorem 2**.: _The following limit holds:_
\[\lim_{g\to\infty}\frac{\int_{\mathcal{M}_{g}}\ell_{sys}^{ns}(X)dX}{\mathrm{Vol_{ WP}}(\mathcal{M}_{g})\log g}=1.\]
Proof.: Take \(\omega(g)=\log\log\log g\) and set \(V_{g}=\mathrm{Vol_{WP}}(\mathcal{M}_{g})\). Define
\[A_{\omega}(g):=\{X\in\mathcal{M}_{g};\ |\ell_{sys}^{ns}(X)-(\log g-\log\log g)|< \omega(g)\}.\]
By Theorem 1 we know that
\[\lim_{g\to\infty}\mathrm{Prob}_{\mathrm{WP}}^{g}\left(X\in\mathcal{M}_{g};\ X\in A_{ \omega}(g)\right)=1.\]
Then firstly it is clear that
\[\liminf_{g\to\infty}\frac{\int_{\mathcal{M}_{g}}\ell_{sys}^{ns}(X)dX}{V_{g} \cdot\log g}\geq\lim_{g\to\infty}\mathrm{Prob}_{\mathrm{WP}}^{g}\left(X\in \mathcal{M}_{g};\ X\in A_{\omega}(g)\right)=1.\]
For the other direction, since \(\sup_{X\in\mathcal{M}_{g}}\ell_{sys}^{ns}(X)\leq C\cdot\log g\) for some universal constant \(C>0\) (see e.g. Lemma 9),
\[\limsup_{g\to\infty}\frac{\int_{\mathcal{M}_{g}}\ell_{sys}^{ns}(X )dX}{V_{g}\cdot\log g}=\limsup_{g\to\infty}\left(\frac{\int_{A_{\omega}(g)} \ell_{sys}^{ns}(X)}{V_{g}\cdot\log g}+\frac{\int_{A_{\omega}^{c}(g)}\ell_{sys }^{ns}(X)}{V_{g}\cdot\log g}\right)\] \[\leq 1+C\cdot\limsup_{g\to\infty}\mathrm{Prob}_{\mathrm{WP}}^{g} \left(X\in\mathcal{M}_{g};\ X\notin A_{\omega}(g)\right)=1.\]
The proof is complete.
_Remark_.:
1. Mirzakhani-Petri in [19] showed that \[\lim_{g\to\infty}\frac{\int_{\mathcal{M}_{g}}\ell_{\mathrm{sys}}(X)dX}{ \mathrm{Vol_{WP}}(\mathcal{M}_{g})}=1.61498...\] where \(\ell_{\mathrm{sys}}(X)\) is the systole of \(X\).
2. Based on [20], joint with Parlier, the third and forth named authors in [21] showed that \[\lim_{g\to\infty}\frac{\int_{\mathcal{M}_{g}}\ell_{\mathrm{sys}}^{\mathrm{ sep}}(X)dX}{\mathrm{Vol_{WP}}(\mathcal{M}_{g})\log g}=2\] where \(\ell_{\mathrm{sys}}^{\mathrm{sep}}(X)\) is the length of a shortest separating simple closed geodesic in \(X\), an unbounded function over \(\mathcal{M}_{g}\).
The geometry and spectra of random hyperbolic surfaces under this Weil-Petersson measure have been widely studied in recent years. For examples, one may see [1] for Bers' constant, [13, 20] for diameter, [19] for systole, [13, 20, 21] for separating systole, [13, 20, 21] for first eigenvalue, [1] for eigenfunction, [22] for Weyl law, [23, 24] for GOE, [20] for prime geodesic theorem, [21] for determinant of Laplacian. One may also see [2, 25, 26, 27, 28, 29, 30, 31, 32] and the references therein for more related topics.
### Strategy on the proof of Theorem 1
The proof of Theorem 1 mainly consists of two parts.
A relative easier part is to prove the lower bound, that is to show that
\[\lim_{g\to\infty}\operatorname{Prob}^{g}_{\operatorname{WP}}\big{(}X\in\mathcal{ M}_{g};\ \ell^{ns}_{sys}(X)>\log g-\log\log g-\omega(g)\big{)}=1. \tag{2}\]
We know that \(\ell^{ns}_{sys}(X)\) is realized by a figure-eight closed geodesic that is always filling in a unique pair of pants. And the length of such a figure-eight closed geodesic can be determined by the lengths of the three boundary geodesics of the pair of pants (see e.g. formula (17)). For \(L=L_{g}=\log g-\log\log g-\omega(g)\) and \(X\in\mathcal{M}_{g}\), denote by \(N_{\mathrm{f}-8}(X,L)\) the number of figure-eight closed geodesics of length \(\leq L\) in \(X\). We view it as a random variable on \(\mathcal{M}_{g}\). Then using Mirzakhani's integration formula and change of variables, a direct computation shows that its expected value \(\mathbb{E}^{g}_{\operatorname{WP}}[N_{\mathrm{f}-8}(X,L)]\) satisfies that as \(g\to\infty\),
\[\mathbb{E}^{g}_{\operatorname{WP}}[N_{\mathrm{f}-8}(X,L)]\sim\frac{Le^{L}}{8 \pi^{2}g}\to 0. \tag{3}\]
Here we say \(f(g)\sim h(g)\) if \(\lim_{g\to\infty}\frac{f(g)}{h(g)}=1\). Thus, we have
\[\operatorname{Prob}^{g}_{\operatorname{WP}}\left(X\in\mathcal{M}_{g};\ N_{ \mathrm{f}-8}(X,L_{g})\geq 1\right)\leq\mathbb{E}^{g}_{\operatorname{WP}}[N_{ \mathrm{f}-8}(X,L_{g})]\to 0\]
as \(g\to\infty\). This in particular implies (2).
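To spell out why the right hand side of (3) vanishes for this choice of \(L=\log g-\log\log g-\omega(g)\): since \(e^{L}=\frac{g}{\log g}e^{-\omega(g)}\) and \(L\leq\log g\) for all large \(g\), one has
\[\frac{Le^{L}}{8\pi^{2}g}=\frac{L}{\log g}\cdot\frac{e^{-\omega(g)}}{8\pi^{2}}\leq\frac{e^{-\omega(g)}}{8\pi^{2}}\longrightarrow 0\]
because \(\omega(g)\to\infty\).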
The hard part of Theorem 1 is the upper bound, that is to show that
\[\lim_{g\to\infty}\operatorname{Prob}^{g}_{\operatorname{WP}}\big{(}X\in \mathcal{M}_{g};\ \ell^{ns}_{sys}(X)<\log g-\log\log g+\omega(g)\big{)}=1. \tag{4}\]
Set \(L=L_{g}=\log g-\log\log g+\omega(g)\), and for any \(X\in\mathcal{M}_{g}\) we define the following particular set and quantity:
\[\mathcal{N}^{(g-2,3)}_{(0,3),\star}(X,L)=\left\{(\gamma_{1},\gamma_{2},\gamma_{3});\ \begin{aligned} &(\gamma_{1},\gamma_{2},\gamma_{3})\text{ is an ordered triple of simple closed}\\ &\text{curves such that }X\setminus\cup_{i=1}^{3}\gamma_{i}\simeq S_{0,3}\bigcup S_{g-2,3},\\ &\ell_{\gamma_{1}}(X)\leq L,\ \ell_{\gamma_{2}}(X)+\ell_{\gamma_{3}}(X)\leq L\\ &\text{ and }\ell_{\gamma_{1}}(X),\ell_{\gamma_{2}}(X),\ell_{\gamma_{3}}(X)\geq 10\log L\end{aligned}\right\}\]
and
\[N^{(g-2,3)}_{(0,3),\star}(X,L)=\#\mathcal{N}^{(g-2,3)}_{(0,3),\star}(X,L).\]
It is not hard to see that there exists some universal constant \(c>0\) such that
\[\begin{split}&\operatorname{Prob}^{g}_{\operatorname{WP}}\left(N_{ \mathrm{f}-8}(X,L+c)=0\right)\\ \leq&\operatorname{Prob}^{g}_{\operatorname{WP}} \left(X\in\mathcal{M}_{g};\ N^{(g-2,3)}_{(0,3),\star}(X,L)=0\right)\\ \leq&\frac{\mathbb{E}^{g}_{\operatorname{WP}}\left[ \left(N^{(g-2,3)}_{(0,3),\star}(X,L)\right)^{2}\right]-\mathbb{E}^{g}_{ \operatorname{WP}}\left[N^{(g-2,3)}_{(0,3),\star}(X,L)\right]^{2}}{\mathbb{E }^{g}_{\operatorname{WP}}\left[N^{(g-2,3)}_{(0,3),\star}(X,L)\right]^{2}}. \end{split} \tag{5}\]
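Here the last inequality is the second moment (Chebyshev-type) bound applied to the nonnegative random variable \(N=N^{(g-2,3)}_{(0,3),\star}(X,L)\):
\[\operatorname{Prob}(N=0)\leq\operatorname{Prob}\big{(}|N-\mathbb{E}[N]|\geq\mathbb{E}[N]\big{)}\leq\frac{\operatorname{Var}(N)}{\mathbb{E}[N]^{2}}=\frac{\mathbb{E}[N^{2}]-\mathbb{E}[N]^{2}}{\mathbb{E}[N]^{2}},\]
valid whenever \(\mathbb{E}[N]>0\).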
To prove (4), it suffices to show
\[\lim_{g\to\infty}\ \text{RHS of (5)}=0. \tag{6}\]
\[C_{1,2}(X,L) =\#\left\{(\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L);\ S(\Gamma_{1}, \Gamma_{2})\simeq S_{1,2}\right\},\] \[C_{\geq 3}(X,L) =\#\{(\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L);\ |\chi(S(\Gamma_{1}, \Gamma_{2}))|\geq 3\}.\]
Through using the method in [20] and the counting result on filling multi-geodesics in [20, 21], we can show that (see Proposition 23)
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{\geq 3}(X,L)\right]\prec\left(L^{67}e^{(2 +\epsilon)\cdot L}\frac{1}{g^{3}}+\frac{L^{3}e^{8L}}{g^{11}}\right)=o(1). \tag{13}\]
For \(C_{0,4}(X,L)\) and \(C_{1,2}(X,L)\), through classifying all the accurate relative positions of \((\Gamma_{1},\Gamma_{2})\) in both \(S_{1,2}\) and \(S_{0,4}\), applying the McShane-Mirzakhani identity in [19] as for counting closed geodesics (we warn here that both the general counting result and the counting result in [20, 21] on closed geodesics are inefficient to deal with these two cases), and then using Mirzakhani's integration formula and known bounds on Weil-Petersson volumes, we can show that (see Proposition 24 and 28)
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{1,2}(X,L)\right]\prec\frac{e^{2L}}{g^{2} }=o(1) \tag{14}\]
and
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}(X,L)\right]\prec\frac{Le^{2L}}{g^{2} }=o(1). \tag{15}\]
Then combining all these equations (7)--(15), one may finish the proof of (6), thus get (4) which is the upper bound in Theorem 1.
### Notations
For any two nonnegative functions \(f\) and \(h\) (may be of multi-variables), we say \(f\prec h\) if there exists a uniform constant \(C>0\) such that \(f\leq Ch\). And we also say \(f\asymp h\) if \(f\prec h\) and \(h\prec f\).
### Plan of the paper
Section 2 will provide a review of relevant and necessary background materials. In Section 3 we compute the expectation of the number of figure-eight closed geodesics of length \(\leq L\) over \(\mathcal{M}_{g}\) which will imply (2), i.e. the lower bound in Theorem 1. In Section 4 we prove (4), i.e. the upper bound in Theorem 1. In which we apply the counting result on closed geodesics in [20], and also apply the McShane-Mirzakhani identity in [19] to count closed geodesics for \(C_{1,2}(X,L)\) and \(C_{0,4}(X,L)\).
### Acknowledgement
We would like to thank all the participants in our seminar on Teichmüller theory for helpful discussions on this project. The third named author is partially supported by the NSFC grant No. 12171263.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 Moduli space and Weil-Petersson metric
* 2.2 Mirzakhani's integration formula
* 2.3 Weil-Petersson volumes
* 2.4 Figure-eight closed geodesics
* 2.5 Three countings on closed geodesics
on \(\mathcal{M}_{g}\), and we also denote by \(\mathbb{E}_{\mathrm{WP}}^{g}[f]\) the expected value of \(f\) over \(\mathcal{M}_{g}\). Namely,
\[\mathrm{Prob}_{\mathrm{WP}}^{g}(\mathcal{A}):=\frac{1}{V_{g}}\int_{\mathcal{M}_ {g}}\mathbf{1}_{\mathcal{A}}dX,\quad\mathbb{E}_{\mathrm{WP}}^{g}[f]:=\frac{1} {V_{g}}\int_{\mathcal{M}_{g}}f(X)dX\]
where \(\mathcal{A}\subset\mathcal{M}_{g}\) is a Borel subset, \(\mathbf{1}_{\mathcal{A}}:\mathcal{M}_{g}\to\{0,1\}\) is its characteristic function, and \(dX\) is short for \(d\operatorname{Vol}_{\mathrm{WP}}(X)\).
### Mirzakhani's integration formula
In this subsection, we recall Mirzakhani's integration formula in [14].
Let \(\gamma\) be a non-trivial and non-peripheral closed curve on topological surface \(S_{g,n}\) and \(X\in\mathcal{T}_{g,n}\). Denote \(\ell(\gamma)=\ell_{\gamma}(X)\) to be the hyperbolic length of the unique closed geodesic in the homotopy class of \(\gamma\) on \(X\). Let \(\Gamma=(\gamma_{1},\cdots,\gamma_{k})\) be an ordered k-tuple where the \(\gamma_{i}\)'s are distinct disjoint homotopy classes of nontrivial, non-peripheral, unoriented simple closed curves on \(S_{g,n}\). Let \(\mathcal{O}_{\Gamma}\) be the orbit containing \(\Gamma\) under the \(\operatorname{Mod}_{g,n}\)-action:
\[\mathcal{O}_{\Gamma}=\{(h\cdot\gamma_{1},\cdots,h\cdot\gamma_{k});\ h\in \operatorname{Mod}_{g,n}\}.\]
Given a function \(F:\mathbb{R}_{\geq 0}^{k}\to\mathbb{R}\), one may define a function on \(\mathcal{M}_{g,n}\):
\[F^{\Gamma}:\mathcal{M}_{g,n} \to \mathbb{R}\] \[X \mapsto \sum_{(\alpha_{1},\cdots,\alpha_{k})\in\mathcal{O}_{\Gamma}}F( \ell_{\alpha_{1}}(X),\cdots,\ell_{\alpha_{k}}(X)).\]
Note that although \(\ell_{\alpha_{i}}(X)\) can be only defined on \(\mathcal{T}_{g,n}\), after taking sum \(\sum_{(\alpha_{1},\cdots,\alpha_{k})\in\mathcal{O}_{\Gamma}}\), the function \(F^{\Gamma}\) is well-defined on the moduli space \(\mathcal{M}_{g,n}\).
For any \(x=(x_{1},\cdots,x_{k})\in\mathbb{R}_{\geq 0}^{k}\), we set \(\mathcal{M}(S_{g,n}(\Gamma);\ell_{\Gamma}=x)\) to be the moduli space of the hyperbolic surfaces (possibly disconnected) homeomorphic to \(S_{g,n}\setminus\cup_{j=1}^{k}\gamma_{j}\) with \(\ell(\gamma_{i}^{1})=\ell(\gamma_{i}^{2})=x_{i}\) for every \(i=1,\cdots,k\), where \(\gamma_{i}^{1}\) and \(\gamma_{i}^{2}\) are the two boundary components of \(S_{g,n}\setminus\cup_{j=1}^{k}\gamma_{j}\) given by cutting along \(\gamma_{i}\). Assume \(S_{g,n}\setminus\cup_{j=1}^{k}\gamma_{j}\cong\cup_{i=1}^{s}S_{g_{i},n_{i}}\). Consider the Weil-Petersson volume
\[V_{g,n}(\Gamma,x)=\operatorname{Vol}_{\mathrm{WP}}\left(\mathcal{M}(S_{g,n}( \Gamma);\ell_{\Gamma}=x)\right)=\prod_{i=1}^{s}V_{g_{i},n_{i}}(x^{(i)})\]
where \(x^{(i)}\) is the list of those coordinates \(x_{j}\) of \(x\) such that \(\gamma_{j}\) is a boundary component of \(S_{g_{i},n_{i}}\). The following integration formula is due to Mirzakhani. One may see [14, Theorem 7.1], [14, Theorem 2.2], [15, Theorem 2.2], [16, Theorem 2.2] for different versions.
**Theorem 4** (Mirzakhani's integration formula).: _For any \(\Gamma=(\gamma_{1},\cdots,\gamma_{k})\), the integral of \(F^{\Gamma}\) over \(M_{g,n}\) with respect to the Weil-Petersson metric is given by_
\[\int_{\mathcal{M}_{g,n}}F^{\Gamma}dX=C_{\Gamma}\int_{\mathbb{R}_{\geq 0}^{k}}F(x _{1},\cdots,x_{k})V_{g,n}(\Gamma,x)x_{1}\cdots x_{k}dx_{1}\cdots dx_{k}\]
_where the constant \(C_{\Gamma}\in(0,1]\) only depends on \(\Gamma\)._
_Remark_.: One may see [20, Theorem 4.1] for the detailed explanation and expression for the constant \(C_{\Gamma}\). We will give the exact value of \(C_{\Gamma}\) only when required in this paper.
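For instance, when \(\Gamma=(\gamma)\) consists of a single non-separating simple closed curve on a closed surface of genus \(g\) (a standard special case, recorded here only for orientation), one has \(S_{g}\setminus\gamma\simeq S_{g-1,2}\) and the formula specializes to
\[\int_{\mathcal{M}_{g}}F^{\gamma}dX=\frac{1}{2}\int_{0}^{\infty}F(t)\,V_{g-1,2}(t,t)\,t\,dt,\]
where the factor \(C_{\Gamma}=\frac{1}{2}\) accounts for the symmetry exchanging the two boundary components of \(S_{g-1,2}\).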
### Weil-Petersson volumes
Denote \(V_{g,n}(x_{1},\cdots,x_{n})\) to be the Weil-Petersson volume of \(\mathcal{M}_{g,n}(x_{1},\cdots,x_{n})\) and \(V_{g,n}=V_{g,n}(0,\cdots,0)\). In this subsection we only list the bounds for \(V_{g,n}(x_{1},\cdots,x_{n})\) that we will need in this paper.
**Theorem 5** ([20, Theorem 1.1]).: _The initial volume \(V_{0,3}(x,y,z)=1\). The volume \(V_{g,n}(x_{1},\cdots,x_{n})\) is a polynomial in \(x_{1}^{2},\cdots,x_{n}^{2}\) with degree \(3g-3+n\). Namely we have_
\[V_{g,n}(x_{1},\cdots,x_{n})=\sum_{\alpha:\,|\alpha|\leq 3g-3+n}C_{\alpha}\cdot x ^{2\alpha}\]
_where \(C_{\alpha}>0\) lies in \(\pi^{6g-6+2n-|2\alpha|}\cdot\mathbb{Q}\). Here \(\alpha=(\alpha_{1},\cdots,\alpha_{n})\) is a multi-index and \(|\alpha|=\alpha_{1}+\cdots+\alpha_{n}\), \(x^{2\alpha}=x_{1}^{2\alpha_{1}}\cdots x_{n}^{2\alpha_{n}}\)._
**Theorem 6**.:
1. ([20, Lemma 3.2]) _For any \(g,n\geq 0\),_ \[V_{g-1,n+4}\leq V_{g,n+2}\] _and_ \[b_{0}\leq\frac{V_{g,n+1}}{(2g-2+n)V_{g,n}}\leq b_{1}\] _for some universal constants \(b_{0},b_{1}>0\) independent of \(g,n\)._
2. ([20, Theorem 3.5]) \[\frac{(2g-2+n)V_{g,n}}{V_{g,n+1}}=\frac{1}{4\pi^{2}}+O_{n}\left(\frac{1}{g}\right),\] \[\frac{V_{g,n}}{V_{g-1,n+2}}=1+O_{n}\left(\frac{1}{g}\right),\] _where the implied constants for \(O_{n}(\cdot)\) depend on \(n\) and are independent of \(g\)._
Part (2) above can also be derived by [20] of Mirzakhani-Zograf in which the precise asymptotic behavior of \(V_{g,n}\) is provided for given \(n\).
Set \(r=2g-2+n\). We also use the following quantity \(W_{r}\) to approximate \(V_{g,n}\):
\[W_{r}:=\begin{cases}V_{\frac{r}{2}+1,0}&\text{if $r$ is even,}\\ V_{\frac{r+1}{2},1}&\text{if $r$ is odd.}\end{cases} \tag{16}\]
The estimation about sum of products of Weil-Petersson volumes can be found in e.g. [20, 21, 22, 23]. Here we use the following version:
**Theorem 7** ([23, Lemma 24]).: _Assume \(q\geq 1\), \(n_{1},\cdots,n_{q}\geq 0\), \(r\geq 2\). Then there exists two universal constants \(c,D>0\) such that_
\[\sum_{\{g_{i}\}}V_{g_{1},n_{1}}\cdots V_{g_{q},n_{q}}\leq c\left(\frac{D}{r} \right)^{q-1}W_{r}\]
_where the sum is taken over all \(\{g_{i}\}_{i=1}^{q}\subset\mathbb{N}\) such that \(2g_{i}-2+n_{i}\geq 1\) for all \(i=1,\cdots,q\), and \(\sum_{i=1}^{q}(2g_{i}-2+n_{i})=r\)._
The following asymptotic behavior of \(V_{g,n}(x_{1},\cdots,x_{n})\) was firstly studied in [14, Proposition 3.1]. We use the following version in [13]. One may also see more sharp ones in [1].
**Theorem 8** ([13, Lemma 20]).: _There exists a constant \(c(n)>0\) independent of \(g\) and \(x_{i}\)'s such that_
\[\left(1-c(n)\frac{\sum_{i=1}^{n}x_{i}^{2}}{g}\right)\prod_{i=1}^{n}\frac{\sinh (x_{i}/2)}{x_{i}/2}\leq\frac{V_{g,n}(x_{1},\cdots,x_{n})}{V_{g,n}}\leq\prod_{i =1}^{n}\frac{\sinh(x_{i}/2)}{x_{i}/2}.\]
### Figure-eight closed geodesics
Let \(X\) be a hyperbolic surface. We say a closed geodesic in \(X\) is a _figure-eight closed geodesic_ if it has exactly one self-intersection point. Given a figure-eight closed geodesic \(\alpha\), it is filling in a pair of pants \(P(x,y,z)\) with three geodesic boundary of lengths \(x,y,z\) as shown in Figure 1. The length \(L(x,y,z)\) of the figure-eight closed geodesic \(\alpha\) is given by (see e.g. [10, Equation (4.2.3)]):
\[\cosh\left(\frac{L(x,y,z)}{2}\right)=\cosh(\tfrac{z}{2})+2\cosh(\tfrac{x}{2}) \cosh(\tfrac{y}{2}). \tag{17}\]
It is clear that \(L(x,y,z)\geq 2\operatorname{arccosh}3\).
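Indeed, since \(\cosh t\geq 1\) for every \(t\), (17) gives \(\cosh(L(x,y,z)/2)\geq 1+2=3\), so
\[L(x,y,z)\geq 2\operatorname{arccosh}3=2\log\left(3+2\sqrt{2}\right)\approx 3.5255.\]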
_Remark_.: In a pair of pants \(P(x,y,z)\), there are exactly three different figure-eight closed geodesics of lengths \(L(x,y,z)\), \(L(z,x,y)\) and \(L(y,z,x)\). Here \(L(x,y,z)\) is the length of the figure-eight closed geodesic winding around \(x\) and \(y\) as shown in Figure 1.
It is known (see e.g. [14, Lemma 5.2] or [10, Section 5.2]) that the length of the shortest figure-eight closed geodesic in any \(X\in\mathcal{M}_{g}\) is bounded.
**Lemma 9**.: _There exists a universal constant \(c>0\) independent of \(g\) such that for any \(X\in\mathcal{M}_{g}\), the shortest figure-eight closed geodesic in \(X\) has length \(\leq c\log g\)._
Figure 1. A figure-eight closed geodesic \(\alpha\) in the pair of pants \(P(x,y,z)\).
Outline of the proof of Lemma 9.: Let \(\gamma\subset X\) be a systolic curve. It is known that \(\ell_{\gamma}(X)\prec\log g\). Next consider the maximal collar around \(\gamma\) and then one may get a pair of pants such that its boundary contains \(\gamma\) and each of the three boundary geodesics has length \(\prec\log g\). Then the conclusion follows by (17). One may see [14, 15] for more details.
_Remark_.: From [13, Theorem 4] we know that the growth rate \(\log g\) in the upper bound in Lemma 9 holds for generic hyperbolic surfaces in \(\mathcal{M}_{g}\) as \(g\to\infty\). In this paper, we study its precise asymptotic behavior.
Recall that the _non-simple systole_\(\ell_{sys}^{ns}(X)\) of \(X\in\mathcal{M}_{g}\) is defined as
\[\ell_{sys}^{ns}(X)=\min\{\ell_{\alpha}(X);\ \alpha\subset X\text{ is a non-simple closed geodesic}\}.\]
We focus on figure-eight closed geodesics because the non-simple systole of a hyperbolic surface is always achieved by a figure-eight closed geodesic (see e.g. [15, Theorem 4.2.4]). That is,
\[\ell_{sys}^{ns}(X)=\text{length of a shortest figure-eight closed geodesic in $X$.}\]
In particular, as introduced above we have
\[\sup_{X\in\mathcal{M}_{g}}\ell_{sys}^{ns}(X)\asymp\log g.\]
### Three countings on closed geodesics
In this subsection, we mainly introduce three results about counting closed geodesics in hyperbolic surfaces. In this paper, we only consider primitive closed geodesics without orientations.
#### 2.5.1. On closed hyperbolic surfaces
First, by the Collar Lemma (see e.g. [15, Theorem 4.1.6]), a closed hyperbolic surface of genus \(g\) has at most \(3g-3\) pairwise disjoint simple closed geodesics of length \(\leq 2\operatorname{arcsinh}1\approx 1.7627\). It is also known from [15, Theorem 6.6.4] that for all \(L>0\) and \(X\in\mathcal{M}_{g}\), there are at most \((g-1)e^{L+6}\) closed geodesics in \(X\) of length \(\leq L\) which are not iterates of closed geodesics of length \(\leq 2\operatorname{arcsinh}1\). As a consequence, we have
**Theorem 10**.: _For any \(L>0\) and \(X\in\mathcal{M}_{g}\), there are at most \((g-1)e^{L+7}\) primitive closed geodesics in \(X\) of length \(\leq L\)._
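The deduction is straightforward: every primitive closed geodesic of length \(\leq L\) is either one of the at most \(3g-3\) short geodesics above, or one of the at most \((g-1)e^{L+6}\) geodesics which are not iterates of short ones, so their total number is at most
\[(3g-3)+(g-1)\,e^{L+6}=(g-1)\left(3+e^{L+6}\right)\leq(g-1)e^{L+7},\]
since \(3\leq(e-1)e^{L+6}\) for every \(L>0\).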
#### 2.5.2. On compact hyperbolic surfaces with geodesic boundaries
For compact hyperbolic surfaces with geodesic boundaries, the following result for filling closed multi-geodesics is useful.
_Definition_.: Let \(Y\in\mathcal{M}_{g,n}(L_{1},\cdots,L_{n})\) be a hyperbolic surface with boundaries. Let \(\Gamma=(\gamma_{1},\cdots,\gamma_{k})\) be an ordered \(k\)-tuple where \(\gamma_{i}\)'s are non-peripheral closed geodesics in \(Y\). We say \(\Gamma\) is _filling_ in \(Y\) if each component of the complement \(Y\setminus\cup_{i=1}^{k}\gamma_{i}\) is homeomorphic to either a disk or a cylinder which is homotopic to a boundary component of \(Y\).
In particular, a filling \(1\)-tuple is a filling closed geodesic in \(Y\).
Define the length of a \(k\)-tuple \(\Gamma=(\gamma_{1},\cdots,\gamma_{k})\) to be the total length of \(\gamma_{i}\)'s, that is,
\[\ell_{\Gamma}(Y):=\sum_{i=1}^{k}\ell_{\gamma_{i}}(Y).\]
Define the counting function \(N_{k}^{\text{fill}}(Y,L)\) for \(L\geq 0\) to be
\[N_{k}^{\text{fill}}(Y,L):=\#\left\{\Gamma=(\gamma_{1},\cdots,\gamma_{k}); \begin{array}{l}\text{$\Gamma$ is a filling $k$-tuple in $Y$}\\ \text{and $\ell_{\Gamma}(Y)\leq L$}\end{array}\right\}.\]
**Theorem 11** ([23, Theorem 4] or [23, Theorem 18]).: _For any \(k\in\mathbb{Z}_{\geq 1}\), \(0<\varepsilon<\frac{1}{2}\) and \(m=2g-2+n\geq 1\), there exists a constant \(c(k,\varepsilon,m)>0\) only depending on \(k,\varepsilon\) and \(m\) such that for all \(L>0\) and any compact hyperbolic surface \(Y\) of genus \(g\) with \(n\) boundary simple closed geodesics, the following holds:_
\[N_{k}^{\text{fill}}(Y,L)\leq c(k,\varepsilon,m)\cdot(1+L)^{k-1}e^{L-\frac{1- \varepsilon}{2}\ell(\partial Y)}.\]
_Here \(\ell(\partial Y)\) denotes the total length of the boundary closed geodesics of \(Y\)._
#### 2.5.3. On \(S_{1,2}\) and \(S_{0,4}\)
For the cases of \(S_{1,2}\) and \(S_{0,4}\), we will apply the McShane–Mirzakhani identity to give more refined counting results on certain specific types of simple closed geodesics. The McShane–Mirzakhani identity states as follows:
**Theorem 12** ([24, Theorem 1.3]).: _For \(Y\in\mathcal{M}_{g,n}(L_{1},\cdots,L_{n})\) with \(n\) geodesic boundaries \(\beta_{1},\cdots,\beta_{n}\) of length \(L_{1},\cdots,L_{n}\) respectively, we have_
\[\sum_{\{\gamma_{1},\gamma_{2}\}}\mathcal{D}(L_{1},\ell(\gamma_{1}),\ell( \gamma_{2}))+\sum_{i=2}^{n}\sum_{\gamma}\mathcal{R}(L_{1},L_{i},\ell(\gamma) )=L_{1}\]
_where the first sum is over all unordered pairs of simple closed geodesics \(\{\gamma_{1},\gamma_{2}\}\) bounding a pair of pants with \(\beta_{1}\), and the second sum is over all simple closed geodesics \(\gamma\) bounding a pair of pants with \(\beta_{1}\) and \(\beta_{i}\). Here \(\mathcal{D}\) and \(\mathcal{R}\) are given by_
\[\mathcal{D}(x,y,z)=2\log\left(\frac{e^{\frac{x}{2}}+e^{\frac{y+z}{2}}}{e^{ \frac{-x}{2}}+e^{\frac{y+z}{2}}}\right)\]
_and_
\[\mathcal{R}(x,y,z)=x-\log\left(\frac{\cosh(\frac{y}{2})+\cosh(\frac{x+z}{2})}{ \cosh(\frac{y}{2})+\cosh(\frac{x-z}{2})}\right).\]
The functions \(\mathcal{D}(x,y,z)\) and \(\mathcal{R}(x,y,z)\) have the following elementary properties:
**Lemma 13** ([23, Lemma 27]).: _Assume that \(x,y,z>0\), then the following properties hold:_
1. \(\mathcal{R}(x,y,z)\geq 0\) _and_ \(\mathcal{D}(x,y,z)\geq 0\)_._
2. \(\mathcal{R}(x,y,z)\) _is decreasing with respect to_ \(z\) _and increasing with respect to_ \(y\)_._ \(\mathcal{D}(x,y,z)\) _is decreasing with respect to_ \(y\) _and_ \(z\) _and increasing with respect to_ \(x\)_._
3. _We have_ \[\frac{x}{\mathcal{R}(x,y,z)}\leq 100(1+x)(1+e^{\frac{z}{2}}e^{-\frac{x+y}{2}}),\] _and_ \[\frac{x}{\mathcal{D}(x,y,z)}\leq 100(1+x)(1+e^{\frac{y+z}{2}}e^{-\frac{x}{2}}).\]
As a direct consequence of Theorem 12 and the monotonicity in Part (2) of Lemma 13, we have
**Theorem 14**.: _On a surface \(Y\in\mathcal{M}_{0,4}(L_{1},L_{2},L_{3},L_{4})\), the number of simple closed geodesics of length \(\leq L\) which bound a pair of pants with the two boundaries of lengths \(L_{1}\) and \(L_{2}\) has the upper bound_
\[\leq\min\left\{\frac{L_{1}}{\mathcal{R}(L_{1},L_{2},L)},\ \frac{L_{2}}{\mathcal{R}(L_{2},L_{1},L)},\ \frac{L_{3}}{\mathcal{R}(L_{3},L_{4},L)},\ \frac{L_{4}}{\mathcal{R}(L_{4},L_{3},L)}\right\}. \tag{18}\]
_On a surface \(Y\in\mathcal{M}_{1,2}(L_{1},L_{2})\), the number of simple closed geodesics of length \(\leq L\) which bound a pair of pants with the two boundaries of lengths \(L_{1}\) and \(L_{2}\) has the upper bound_
\[\leq\min\left\{\frac{L_{1}}{\mathcal{R}(L_{1},L_{2},L)},\ \frac{L_{2}}{\mathcal{R}(L_{2},L_{1},L)}\right\}. \tag{19}\]
_On a surface \(Y\in\mathcal{M}_{1,2}(L_{1},L_{2})\), the number of unordered pairs of simple closed geodesics of total length \(\leq L\) which bound a pair of pants with the boundary of length \(L_{1}\) has the upper bound_
\[\leq\min\left\{\frac{L_{1}}{\mathcal{D}(L_{1},L,0)},\ \frac{L_{2}}{\mathcal{D}(L_{2},L,0)}\right\}. \tag{20}\]
Proof.: These three bounds follow from Theorem 12 and the monotonicity of \(\mathcal{D}(x,y,z)\) and \(\mathcal{R}(x,y,z)\). For (20), we also apply the fact that \(\mathcal{D}(x,y,z)\) only depends on \(y+z\) but not on \(y,z\) respectively.
Then by applying the estimates in Lemma 13, one may get upper bounds for (18), (19) and (20).
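To illustrate how these bounds will be applied, plugging Part (3) of Lemma 13 into (19) with \((x,y,z)=(L_{1},L_{2},L)\) yields, for \(Y\in\mathcal{M}_{1,2}(L_{1},L_{2})\),
\[\frac{L_{1}}{\mathcal{R}(L_{1},L_{2},L)}\leq 100(1+L_{1})\left(1+e^{\frac{L-L_{1}-L_{2}}{2}}\right),\]
and this is the form in which these bounds enter the estimates in Section 4 (see e.g. (73)).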
## 3. Lower bound
In this section we compute the expected number of figure-eight closed geodesics of length \(\leq L\) over \(\mathcal{M}_{g}\), which yields the lower bound on the length of the non-simple systole for random hyperbolic surfaces.
Let \(X\in\mathcal{M}_{g}\) and denote
\[\mathcal{N}_{\mathrm{f}-8}(X,L):=\left\{\alpha\subset X;\begin{array}{l} \alpha\text{ is a figure-eight closed geodesic in }X\\ \text{and }\ell(\alpha)\leq L\end{array}\right\}\]
and
\[N_{\mathrm{f}-8}(X,L):=\#\mathcal{N}_{\mathrm{f}-8}(X,L)\]
to be the number of figure-eight closed geodesics in \(X\) with length \(\leq L\). Then the non-simple systole of \(X\) has length \(>L\) if and only if \(N_{\mathrm{f}-8}(X,L)=0\).
The main result of this section is as follows.
**Proposition 15**.: _For any \(L\geq 2\operatorname{arccosh}3\) and \(g>2\),_
\[\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}(X,L)]=\frac{1}{8\pi^{2}g}(L-3-4 \log 2)e^{L}+O\left(\tfrac{1}{g}L^{2}e^{\frac{1}{2}L}+\tfrac{1}{g^{2}}L^{4}e^{L}\right)\]
_where the implied constant is independent of \(L\) and \(g\)._
Let's firstly assume Proposition 15 and prove the following direct consequence which is the lower bound in Theorem 1.
**Theorem 16**.: _For any function \(\omega(g)\) satisfying_
\[\lim_{g\to\infty}\omega(g)=+\infty\text{ and }\lim_{g\to\infty}\frac{\omega(g)}{ \log\log g}=0,\]
_then we have_
\[\lim_{g\to\infty}\mathrm{Prob}_{\mathrm{WP}}^{g}\left(X\in\mathcal{M}_{g};\ \ell_{\mathrm{sys}}^{\mathrm{ns}}(X)>\log g-\log\log g-\omega(g)\right)=1.\]
Proof.: Taking \(L=L_{g}=\log g-\log\log g-\omega(g)\) in Proposition 15, it is clear that
\[\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}(X,L_{g})]\to 0,\text{ as }g\to\infty.\]
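Indeed, for \(L_{g}=\log g-\log\log g-\omega(g)\) we have \(e^{L_{g}}=\frac{g}{\log g}e^{-\omega(g)}\), so the main term in Proposition 15 is
\[\frac{1}{8\pi^{2}g}(L_{g}-3-4\log 2)e^{L_{g}}=\frac{L_{g}-3-4\log 2}{8\pi^{2}\log g}\,e^{-\omega(g)}\to 0\]
as \(g\to\infty\), since \(\omega(g)\to\infty\); and the two error terms \(\frac{1}{g}L_{g}^{2}e^{\frac{1}{2}L_{g}}\) and \(\frac{1}{g^{2}}L_{g}^{4}e^{L_{g}}\) also tend to \(0\).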
Then it follows by Markov's inequality that
\[\mathrm{Prob}_{\mathrm{WP}}^{g}\left(X\in\mathcal{M}_{g};\ N_{\mathrm{f}-8}(X,L_{g})\geq 1\right)\leq\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}(X,L_{g}) ]\to 0,\text{ as }g\to\infty.\]
This also means that
\[\lim_{g\to\infty}\mathrm{Prob}_{\mathrm{WP}}^{g}\left(X\in\mathcal{M}_{g};\ N_{\mathrm{f}-8}(X,L_{g})=0\right)=1.\]
Then the conclusion follows because for any \(X\in\mathcal{M}_{g}\), \(\ell_{sys}^{ns}(X)\) is equal to the length of a shortest figure-eight closed geodesic in \(X\).
For a figure-eight closed geodesic \(\alpha\subset X\in\mathcal{M}_{g}\), it is always filling in a unique pair of pants \(P(x,y,z)\) as shown in Figure 1. If two of the boundary geodesics in \(P(x,y,z)\) are the same simple closed geodesics in \(X\), then the completion \(\overline{P(x,y,z)}\subset X\) of \(P(x,y,z)\) is a hyperbolic torus with one geodesic boundary; otherwise \(\overline{P(x,y,z)}\subset X\) is still a pair of pants. So the complement \(X\setminus\overline{P(x,y,z)}\) may have one or two or three components (see Figure 2 for an illustration when \(g=3\)). We classify all figure-eight closed geodesics by the topology of \(X\setminus\overline{P(x,y,z)}\). Denote
\[\begin{split}\mathcal{N}_{\mathrm{f}-8}^{(g-2,3)}(X,L)&:=\left\{\alpha\in\mathcal{N}_{\mathrm{f}-8}(X,L);\ X\setminus\overline{P(x,y,z)}\cong S_{g-2,3}\right\},\\ \mathcal{N}_{\mathrm{f}-8}^{(g-1,1)}(X,L)&:=\left\{\alpha\in\mathcal{N}_{\mathrm{f}-8}(X,L);\ X\setminus\overline{P(x,y,z)}\cong S_{g-1,1}\right\},\\ \mathcal{N}_{\mathrm{f}-8}^{(g_{1},1)(g_{2},2)}(X,L)&:=\left\{\alpha\in\mathcal{N}_{\mathrm{f}-8}(X,L);\ X\setminus\overline{P(x,y,z)}\cong S_{g_{1},1}\cup S_{g_{2},2}\right\},\\ \mathcal{N}_{\mathrm{f}-8}^{(g_{1},1)(g_{2},1)(g_{3},1)}(X,L)&:=\left\{\alpha\in\mathcal{N}_{\mathrm{f}-8}(X,L);\ X\setminus\overline{P(x,y,z)}\cong S_{g_{1},1}\cup S_{g_{2},1}\cup S_{g_{3},1}\right\},\end{split}\]
and
\[N_{\mathrm{f-8}}^{(g-2,3)}(X,L) :=\#\mathcal{N}_{\mathrm{f-8}}^{(g-2,3)}(X,L),\] \[N_{\mathrm{f-8}}^{(g-1,1)}(X,L) :=\#\mathcal{N}_{\mathrm{f-8}}^{(g-1,1)}(X,L),\] \[N_{\mathrm{f-8}}^{(g_{1},1)(g_{2},2)}(X,L) :=\#\mathcal{N}_{\mathrm{f-8}}^{(g_{1},1)(g_{2},2)}(X,L),\] \[N_{\mathrm{f-8}}^{(g_{1},1)(g_{2},1)(g_{3},1)}(X,L) :=\#\mathcal{N}_{\mathrm{f-8}}^{(g_{1},1)(g_{2},1)(g_{3},1)}(X,L).\]
Then
\[N_{\mathrm{f-8}}(X,L) = N_{\mathrm{f-8}}^{(g-2,3)}(X,L)+\sum_{(g_{1},g_{2})}N_{\mathrm{f- 8}}^{(g_{1},1)(g_{2},2)}(X,L)\] \[+N_{\mathrm{f-8}}^{(g-1,1)}(X,L)+\sum_{(g_{1},g_{2},g_{3})}N_{ \mathrm{f-8}}^{(g_{1},1)(g_{2},1)(g_{3},1)}(X,L)\]
where the first sum is taken over all \((g_{1},g_{2})\) with \(g_{1}+g_{2}=g-1\) and \(g_{1},g_{2}\geq 1\); the second sum is taken over all \((g_{1},g_{2},g_{3})\) with \(g_{1}+g_{2}+g_{3}=g\) and \(g_{1}\geq g_{2}\geq g_{3}\geq 1\).
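The constraints on the genera follow from an Euler characteristic count. For example, in the case \(X\setminus\overline{P(x,y,z)}\cong S_{g_{1},1}\cup S_{g_{2},2}\),
\[2-2g=\chi(X)=\chi(S_{0,3})+\chi(S_{g_{1},1})+\chi(S_{g_{2},2})=-1+(1-2g_{1})+(-2g_{2}),\]
so \(g_{1}+g_{2}=g-1\); the case of three components gives \(g_{1}+g_{2}+g_{3}=g\) in the same way.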
We now compute \(\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f-8}}^{(g-2,3)}]\), \(\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f-8}}^{(g-1,1)}]\) and sum of all possible \(\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f-8}}^{(g_{1},1)(g_{2},2)}]\) and \(\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f-8}}^{(g_{1},1)(g_{2},1)(g_{3},1)}]\) in the following Lemmas.
**Lemma 17**.: _For any \(L\geq 2\operatorname{arccosh}3\) and \(g>2\),_
\[\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f-8}}^{(g-2,3)}(X,L)]=\frac{1}{8\pi^{ 2}g}(L-3-4\log 2)e^{L}+O\left(\tfrac{1}{g}Le^{\frac{1}{2}L}+\tfrac{1}{g^{2}}L^{4}e ^{L}\right)\]
_where the implied constant is independent of \(L\) and \(g\)._
Proof.: Instead of a figure-eight closed geodesic, we consider the unique pair of pants \(P(x,y,z)\) (with three boundary lengths equal to \(x,y,z\)) in which a figure-eight closed geodesic is filling. In each pair of pants, there are exactly three figure-eight closed geodesics. And in the pair of pants \(P(x,y,z)\), the number of figure-eight closed geodesics of length \(\leq L\) is equal to \(\mathbf{1}_{\{L(x,y,z)\leq L\}}+\mathbf{1}_{\{L(z,x,y)\leq L\}}+\mathbf{1}_{\{ L(y,z,x)\leq L\}}\) where \(L(x,y,z)\) is the length function given in (17). So the counting \(N_{f-8}^{(g-2,3)}(X,L)\) of figure-eight closed geodesics can be replaced by the counting of pairs of pants \(P(x,y,z)\)'s satisfying \(X\setminus\overline{P(x,y,z)}\cong S_{g-2,3}\). And by Mirzakhani's integration formula Theorem 4 (here \(C_{\Gamma}=1\) for the pair \(\Gamma=(x,y,z)\)), we have
\[\mathbb{E}_{\mathrm{WP}}^{g}[N_{f-8}^{(g-2,3)}(X,L)] \tag{22}\] \[=\frac{1}{V_{g}}\int_{\mathcal{M}_{g}}\sum_{\text{pairs of pants }P(x,y,z)}\mathbf{1}_{\{L(x,y,z)\leq L\}}+\mathbf{1}_{\{L(z,x,y)\leq L\}}+\mathbf{1}_{\{L(y,z,x)\leq L\}}dX\] \[=\frac{1}{6}\int_{x,y,z\geq 0}\big(\mathbf{1}_{\{L(x,y,z)\leq L\}}+\mathbf{1}_{\{L(z,x,y)\leq L\}}+\mathbf{1}_{\{L(y,z,x)\leq L\}}\big)\] \[\qquad\frac{V_{g-2,3}(x,y,z)V_{0,3}(x,y,z)}{V_{g}}xyz\ dxdydz\] \[=\frac{1}{2}\int_{x,y,z\geq 0}\mathbf{1}_{\{L(x,y,z)\leq L\}}\frac{V_{g-2,3}(x,y,z)}{V_{g}}xyz\ dxdydz\]
where in the last equation we apply that \(V_{0,3}(x,y,z)=1\) and the product \(V_{g-2,3}(x,y,z)xyz\) is symmetric with respect to \(x,y,z\). By Part (2) of Theorem 6 we have
\[\frac{V_{g-2,3}}{V_{g}}=\frac{1}{8\pi^{2}g}\cdot\left(1+O\left(\frac{1}{g} \right)\right). \tag{23}\]
This together with Theorem 8 imply that
\[\frac{V_{g-2,3}(x,y,z)}{V_{g}}xyz=\frac{1}{\pi^{2}g}\sinh(\tfrac{1}{2}x)\sinh (\tfrac{1}{2}y)\sinh(\tfrac{1}{2}z)\cdot\left(1+O(\tfrac{1+x^{2}+y^{2}+z^{2}}{ g})\right). \tag{24}\]
Applying (24) into the last line of (22), by the fact that \(L(x,y,z)>\frac{1}{2}(x+y+z)\), the remainder term can be bounded as
\[\left|\frac{1}{2}\int_{\{L(x,y,z)\leq L\}}\frac{1}{\pi^{2}g} \sinh(\tfrac{1}{2}x)\sinh(\tfrac{1}{2}y)\sinh(\tfrac{1}{2}z)O(\tfrac{1+x^{2}+ y^{2}+z^{2}}{g})dxdydz\right|\] \[\prec\frac{1}{g^{2}}\int_{\{x+y+z\leq 2L\}}(1+x^{2}+y^{2}+z^{2})e^ {\tfrac{1}{2}(x+y+z)}dxdydz\] \[\prec\frac{1}{g^{2}}L^{4}e^{L}. \tag{25}\]
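Here the last bound follows, for instance, by integrating over the level sets \(x+y+z=s\): the triangle \(\{x,y,z\geq 0,\ x+y+z=s\}\) has area of order \(s^{2}\), and on it \(1+x^{2}+y^{2}+z^{2}\leq 1+s^{2}\), so
\[\int_{\{x+y+z\leq 2L\}}(1+x^{2}+y^{2}+z^{2})e^{\frac{1}{2}(x+y+z)}dxdydz\prec\int_{0}^{2L}(1+s^{2})\,s^{2}e^{\frac{1}{2}s}ds\prec L^{4}e^{L}.\]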
And the main term is
\[\frac{1}{2}\int_{x,y,z\geq 0}\mathbf{1}_{\{L(x,y,z)\leq L\}}\frac{1}{\pi^{2}g} \sinh(\tfrac{1}{2}x)\sinh(\tfrac{1}{2}y)\sinh(\tfrac{1}{2}z)dxdydz. \tag{26}\]
We change the variables \((x,y,z)\) into \((x,y,t)\) with \(t=L(x,y,z)\). By (17),
\[\tfrac{1}{2}\sinh(\tfrac{1}{2}t)dt=\tfrac{1}{2}\sinh(\tfrac{1}{2}z)dz+*dx+*dy.\]
So
\[\frac{1}{2}\int_{x,y,z\geq 0}\mathbf{1}_{\{L(x,y,z)\leq L\}}\frac{1 }{\pi^{2}g}\sinh(\tfrac{1}{2}x)\sinh(\tfrac{1}{2}y)\sinh(\tfrac{1}{2}z)dxdydz\] \[=\int_{\mathbf{Cond}}\frac{1}{2\pi^{2}g}\sinh(\tfrac{1}{2}x)\sinh (\tfrac{1}{2}y)\sinh(\tfrac{1}{2}t)dxdydt \tag{27}\]
where the integration region \(\mathbf{Cond}\) is
\[\begin{cases}x,y,z\geq 0\\ L(x,y,z)\leq L\end{cases}\iff\begin{cases}x\geq 0,\ \cosh(\tfrac{1}{2}x)\leq \frac{\cosh(\tfrac{1}{2}t)-1}{2\cosh(\tfrac{1}{2}y)}\\ y\geq 0,\ 2\cosh(\tfrac{1}{2}y)\leq\cosh(\tfrac{1}{2}t)-1\\ 0\leq t\leq L,\ \cosh(\tfrac{1}{2}t)\geq 3\end{cases}.\]
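Indeed, for fixed \(x,y\geq 0\), (17) shows that \(t=L(x,y,z)\) is increasing in \(z\), and
\[z\geq 0\iff\cosh(\tfrac{1}{2}t)\geq 1+2\cosh(\tfrac{1}{2}x)\cosh(\tfrac{1}{2}y)\iff\cosh(\tfrac{1}{2}x)\leq\frac{\cosh(\tfrac{1}{2}t)-1}{2\cosh(\tfrac{1}{2}y)};\]
the condition \(2\cosh(\tfrac{1}{2}y)\leq\cosh(\tfrac{1}{2}t)-1\) says that the right hand side is at least \(1\), so that such an \(x\geq 0\) exists, and \(\cosh(\tfrac{1}{2}t)\geq 3\) in turn guarantees that such a \(y\geq 0\) exists.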
We consider the integral for \(x,y\) and \(t\) in order. First taking an integral for \(x\), we get
\[\int_{x\geq 0}\mathbf{1}_{\left\{\cosh(\tfrac{1}{2}x)\leq\frac{\cosh(\tfrac{1} {2}t)-1}{2\cosh(\tfrac{1}{2}y)}\right\}}\cdot\frac{1}{2\pi^{2}g}\sinh(\tfrac{1 }{2}x)dx=\frac{1}{\pi^{2}g}\left(\frac{\cosh(\tfrac{1}{2}t)-1}{2\cosh(\tfrac{1 }{2}y)}-1\right).\]
Then taking an integral for \(y\), we get
\[\int_{y\geq 0}\mathbf{1}_{\left\{2\cosh(\tfrac{1}{2}y)\leq\cosh( \tfrac{1}{2}t)-1\right\}}\cdot\sinh(\tfrac{1}{2}y)\frac{1}{\pi^{2}g}\left( \frac{\cosh(\tfrac{1}{2}t)-1}{2\cosh(\tfrac{1}{2}y)}-1\right)dy\] \[=\frac{1}{2\pi^{2}g}\left(\cosh(\tfrac{1}{2}t)-1\right)\left(2 \log(\tfrac{\cosh(\tfrac{1}{2}t)-1}{2})\right)-\frac{2}{\pi^{2}g}\left( \tfrac{\cosh(\tfrac{1}{2}t)-1}{2}-1\right)\] \[=\frac{4}{\pi^{2}g}\sinh^{2}(\tfrac{1}{4}t)\log(\sinh(\tfrac{1}{ 4}t))-\frac{2}{\pi^{2}g}\sinh^{2}(\tfrac{1}{4}t)+\frac{2}{\pi^{2}g}.\]
Finally taking an integral for \(t\), it is clear that \(\log(\sinh(\tfrac{1}{4}t))=\tfrac{1}{4}t-\log 2+O(e^{-\frac{1}{2}t})\), so we get
\[\int_{\mathbf{Cond}}\frac{1}{2\pi^{2}g}\sinh(\tfrac{1}{2}x)\sinh (\tfrac{1}{2}y)\sinh(\tfrac{1}{2}t)dxdydt\] \[=\int_{2\arccos 3}^{L}\sinh(\tfrac{1}{2}t)\bigg{(}\frac{4}{\pi^{2}g }\sinh^{2}(\tfrac{1}{4}t)\log(\sinh(\tfrac{1}{4}t))\] \[\qquad-\frac{2}{\pi^{2}g}\sinh^{2}(\tfrac{1}{4}t)+\frac{2}{\pi^{2 }g}\bigg{)}dt\] \[=\int_{2\arccos 3}^{L}\sinh(\tfrac{1}{2}t)\bigg{(}\frac{4}{\pi^{2}g }\sinh^{2}(\tfrac{1}{4}t)(\tfrac{1}{4}t-\log 2+O(e^{-\frac{1}{2}t}))\] \[\qquad-\frac{2}{\pi^{2}g}\sinh^{2}(\tfrac{1}{4}t)+\frac{2}{\pi^{2 }g}\bigg{)}dt \tag{28}\]
\[=\frac{1}{8\pi^{2}g}(L-3-4\log 2)e^{L}+O(\frac{1}{g}Le^{\frac{1}{2}L}).\]
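For the reader's convenience, the leading-order bookkeeping in the last step is as follows. Since \(\sinh(\tfrac{1}{2}t)\sinh^{2}(\tfrac{1}{4}t)=\tfrac{1}{8}e^{t}+O(e^{\frac{1}{2}t})\) and \(\sinh(\tfrac{1}{2}t)=O(e^{\frac{1}{2}t})\), the integrand equals \(\frac{1}{8\pi^{2}g}(t-4\log 2-2)e^{t}+O\left(\tfrac{1}{g}(1+t)e^{\frac{1}{2}t}\right)\), and
\[\int^{L}(t-4\log 2-2)e^{t}\,dt=(L-3-4\log 2)e^{L}+O(1),\]
which gives the stated main term together with the error \(O(\tfrac{1}{g}Le^{\frac{1}{2}L})\).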
So combining (22), (25), (27) and (28), we obtain
\[\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}^{(g-2,3)}(X,L)]=\frac{1}{8\pi^{2} g}(L-3-4\log 2)e^{L}+O\left(\frac{1}{g}Le^{\frac{1}{2}L}+\frac{1}{g^{2}}L^{4}e^{L}\right)\]
as desired.
_Remark_.: Similar computations were carried out in [1].
**Lemma 18**.: _For any \(L\geq 2\operatorname{arccosh}3\) and \(g>2\) we have_
\[\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}^{(g-1,1)}(X,L)]\prec\frac{1}{g}L ^{2}e^{\frac{1}{2}L},\]
\[\sum_{(g_{1},g_{2})}\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}^{(g_{1},1)( g_{2},2)}(X,L)]\prec\frac{1}{g^{2}}L^{2}e^{L},\]
\[\sum_{(g_{1},g_{2},g_{3})}\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}^{(g_{1 },1)(g_{2},1)(g_{3},1)}(X,L)]\prec\frac{1}{g^{3}}L^{2}e^{L},\]
_where the first sum is taken over all \((g_{1},g_{2})\) with \(g_{1}+g_{2}=g-1\) and \(g_{1},g_{2}\geq 1\); the second sum is taken over all \((g_{1},g_{2},g_{3})\) with \(g_{1}+g_{2}+g_{3}=g\) and \(g_{1}\geq g_{2}\geq g_{3}\geq 1\). The implied constants are independent of \(L\) and \(g\)._
Proof.: In a pair of pants \(P(x,y,z)\), there are exactly three figure-eight closed geodesics. From (17) we know that if a figure-eight closed geodesic in \(P(x,y,z)\) has length \(\leq L\), then \(x+y+z\leq 2L\) and \(x,y,z\leq L\). So
\[N_{\mathrm{f}-8}^{(g-1,1)}(X,L)\leq 3\cdot\#\left\{P(x,y,z);\begin{array}{l}X \setminus\overline{P(x,y,z)}\cong S_{g-1,1}\\ x,y\text{ are the same loop in }X\\ x=y\leq L,z\leq L\end{array}\right\},\]
\[N_{\mathrm{f}-8}^{(g_{1},1)(g_{2},2)}(X,L)\leq 3\cdot\#\left\{P(x,y,z);\begin{array} []{l}X\setminus\overline{P(x,y,z)}\cong S_{g_{1},1}\cup S_{g_{2},2}\\ x+y+z\leq 2L\end{array}\right\},\]
\[N_{\mathrm{f}-8}^{(g_{1},1)(g_{2},1)(g_{3},1)}(X,L)\leq 3\cdot\#\left\{P(x,y,z);\begin{array}{l}X\setminus\overline{P(x,y,z)}\cong S_{g_{1},1}\cup S_{g_{2},1}\cup S_{g_{3},1}\\ x+y+z\leq 2L\end{array}\right\}.\]
Then applying Mirzakhani's integration formula Theorem 4 and Theorem 8,
\[\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}^{(g-1,1)}(X,L)] \leq 3\int_{\begin{subarray}{c}x,z\geq 0\\ x,z\leq L\end{subarray}}\frac{V_{g-1,1}(z)V_{0,3}(x,x,z)}{V_{g}}xz\ dxdz\] \[\prec \int_{\begin{subarray}{c}x,z\geq 0\\ x,z\leq L\end{subarray}}x\sinh(\frac{1}{2}z)\frac{V_{g-1,1}}{V_{g}}dxdz\] \[\prec \frac{V_{g-1,1}}{V_{g}}L^{2}e^{\frac{1}{2}L}, \tag{29}\]
and
\[\sum_{(g_{1},g_{2})}\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}^{(g_{1},1)( g_{2},2)}(X,L)] \tag{30}\]
\[\leq 3\sum_{(g_{1},g_{2})}\int_{\begin{subarray}{c}x,y,z\geq 0\\ x+y+z\leq 2L\end{subarray}}\frac{V_{g_{1},1}(x)V_{g_{2},2}(y,z)V_{0,3}(x,y,z)}{V_{g}} xyz\ dxdydz\] \[\prec\sum_{(g_{1},g_{2})}\int_{\begin{subarray}{c}x,y,z\geq 0\\ x+y+z\leq 2L\end{subarray}}\sinh(\tfrac{1}{2}x)\sinh(\tfrac{1}{2}y)\sinh( \tfrac{1}{2}z)\frac{V_{g_{1},1}V_{g_{2},2}}{V_{g}}dxdydz\] \[\prec\sum_{(g_{1},g_{2})}\frac{V_{g_{1},1}V_{g_{2},2}}{V_{g}}L^{2 }e^{L},\]
and similarly
\[\sum_{(g_{1},g_{2},g_{3})}\mathbb{E}_{\mathrm{WP}}^{g}[N_{\mathrm{f}-8}^{(g_{ 1},1)(g_{2},1)(g_{3},1)}(X,L)]\prec\sum_{(g_{1},g_{2},g_{3})}\frac{V_{g_{1},1} V_{g_{2},1}V_{g_{3},1}}{V_{g}}L^{2}e^{L}. \tag{31}\]
Then applying Theorem 6 and Theorem 7 for \(q=2,3\) we have
\[\frac{V_{g-1,1}}{V_{g}}\prec\frac{1}{g}, \tag{32}\]
\[\sum_{(g_{1},g_{2})}\frac{V_{g_{1},1}V_{g_{2},2}}{V_{g}}\prec\frac{1}{g}\frac{ W_{2g-3}}{V_{g}}\prec\frac{1}{g^{2}}, \tag{33}\]
\[\sum_{(g_{1},g_{2},g_{3})}\frac{V_{g_{1},1}V_{g_{2},1}V_{g_{3},1}}{V_{g}} \prec\frac{1}{g^{2}}\frac{W_{2g-3}}{V_{g}}\prec\frac{1}{g^{3}}. \tag{34}\]
Then the conclusion follows from all these equations (29)-(34).
Proof of Proposition 15.: The conclusion clearly follows from (21), Lemma 17 and Lemma 18.
## 4. Upper bound
In this section, we will prove the upper bound of the length of non-simple systole for random hyperbolic surfaces. That is, we show
**Theorem 19**.: _For any function \(\omega(g)\) satisfying_
\[\lim_{g\to\infty}\omega(g)=+\infty\text{ and }\lim_{g\to\infty}\frac{\omega(g)}{ \log\log g}=0,\]
_then we have_
\[\lim_{g\to\infty}\operatorname{Prob}_{\mathrm{WP}}^{g}\big{(}X\in\mathcal{M }_{g};\ \ell_{\mathrm{sys}}^{\mathrm{ns}}(X)<\log g-\log\log g+\omega(g)\big{)}=1.\]
In order to prove Theorem 19, it suffices to show that
\[\lim_{g\to\infty}\operatorname{Prob}_{\mathrm{WP}}^{g}\big{(}X\in\mathcal{M }_{g};\ N_{\mathrm{f}-8}(X,L_{g})=0\big{)}=0 \tag{35}\]
where
\[L_{g}=\log g-\log\log g+\omega(g).\]
Instead of working with \(N_{\mathrm{f}-8}(X,L_{g})\), we consider \(N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\) defined as follows.
_Definition_.: For any \(L>1\) and \(X\in\mathcal{M}_{g}\), denote by
\[\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)=\left\{(\gamma_{1},\gamma_{2},\gamma_{3});\ \begin{array}{l}\gamma_{1},\gamma_{2},\gamma_{3}\text{ are simple closed geodesics such that}\\ X\setminus\cup_{i=1}^{3}\gamma_{i}\simeq S_{0,3}\cup S_{g-2,3},\\ \ell_{\gamma_{1}}(X)\leq L,\ \ell_{\gamma_{2}}(X)+\ell_{\gamma_{3}}(X)\leq L,\\ \text{and }\ell_{\gamma_{1}}(X),\ell_{\gamma_{2}}(X),\ell_{\gamma_{3}}(X)\geq 10\log L\end{array}\right\}\]
and
\[N_{(0,3),\star}^{(g-2,3)}(X,L)=\#\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L).\]
It follows by Equation (17) that each \((\gamma_{1},\gamma_{2},\gamma_{3})\in\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\) bounds a pair of pants \(P\) that contains a figure-eight closed geodesic of length \(\leq L+c\) for some uniform constant \(c>0\): the desired figure-eight closed geodesic is the one winding around \(\gamma_{2}\) and \(\gamma_{3}\). Then we have
\[\begin{split}&\operatorname{Prob}_{\text{\rm WP}}^{g}\left(X\in \mathcal{M}_{g};\ N_{\text{\rm{f}}-8}(X,L_{g})=0\right)\\ \leq&\operatorname{Prob}_{\text{\rm WP}}^{g}\left(X \in\mathcal{M}_{g};\ N_{(0,3),\star}^{(g-2,3)}(X,L_{g}-c)=0\right).\end{split} \tag{36}\]
For any \(L>1\), we view \(N_{(0,3),\star}^{(g-2,3)}(X,L)\) as a nonnegative integer-valued random variable on \(\mathcal{M}_{g}\). Then by the standard Cauchy-Schwarz inequality we know that
\[\operatorname{Prob}_{\text{\rm WP}}^{g}\left(X\in\mathcal{M}_{g};\ N_{(0,3), \star}^{(g-2,3)}(X,L)>0\right)\geq\frac{\mathbb{E}_{\text{\rm WP}}^{g}\left[ N_{(0,3),\star}^{(g-2,3)}(X,L)\right]^{2}}{\mathbb{E}_{\text{\rm WP}}^{g}\left[ (N_{(0,3),\star}^{(g-2,3)}(X,L))^{2}\right]}\]
implying
\[\begin{split}&\operatorname{Prob}_{\text{\rm WP}}^{g}\left(X\in \mathcal{M}_{g};\ N_{(0,3),\star}^{(g-2,3)}(X,L)=0\right)\\ &\leq\frac{\mathbb{E}_{\text{\rm WP}}^{g}\left[(N_{(0,3),\star} ^{(g-2,3)}(X,L))^{2}\right]-\mathbb{E}_{\text{\rm WP}}^{g}\left[N_{(0,3), \star}^{(g-2,3)}(X,L)\right]^{2}}{\mathbb{E}_{\text{\rm WP}}^{g}\left[N_{(0,3 ),\star}^{(g-2,3)}(X,L)\right]^{2}}.\end{split} \tag{37}\]
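Here the second-moment step is standard: writing \(N=N_{(0,3),\star}^{(g-2,3)}(X,L)\), the Cauchy–Schwarz inequality gives \(\mathbb{E}_{\text{\rm WP}}^{g}[N]^{2}=\mathbb{E}_{\text{\rm WP}}^{g}[N\cdot 1_{\{N>0\}}]^{2}\leq\mathbb{E}_{\text{\rm WP}}^{g}[N^{2}]\cdot\operatorname{Prob}_{\text{\rm WP}}^{g}(N>0)\), and (37) follows since
\[1-\frac{\mathbb{E}_{\text{\rm WP}}^{g}[N]^{2}}{\mathbb{E}_{\text{\rm WP}}^{g}[N^{2}]}=\frac{\mathbb{E}_{\text{\rm WP}}^{g}[N^{2}]-\mathbb{E}_{\text{\rm WP}}^{g}[N]^{2}}{\mathbb{E}_{\text{\rm WP}}^{g}[N^{2}]}\leq\frac{\mathbb{E}_{\text{\rm WP}}^{g}[N^{2}]-\mathbb{E}_{\text{\rm WP}}^{g}[N]^{2}}{\mathbb{E}_{\text{\rm WP}}^{g}[N]^{2}}.\]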
For any \(\Gamma=(\gamma_{1},\gamma_{2},\gamma_{3})\in\mathcal{N}_{(0,3),\star}^{(g-2,3 )}(X,L)\), denote by \(P(\Gamma)\) the pair of pants bounded by the three closed geodesics in \(\Gamma\). Set
\[\begin{split}\mathcal{A}(X,L)&=\left\{(\Gamma_{1},\Gamma_{2})\in\left(\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\right)^{2};\ P(\Gamma_{1})=P(\Gamma_{2})\right\},\\ \mathcal{B}(X,L)&=\left\{(\Gamma_{1},\Gamma_{2})\in\left(\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\right)^{2};\ \overline{P(\Gamma_{1})}\cap\overline{P(\Gamma_{2})}=\emptyset\right\},\\ \mathcal{C}(X,L)&=\left\{(\Gamma_{1},\Gamma_{2})\in\left(\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\right)^{2};\ P(\Gamma_{1})\neq P(\Gamma_{2}),\ P(\Gamma_{1})\cap P(\Gamma_{2})\neq\emptyset\right\},\\ \mathcal{D}(X,L)&=\left\{(\Gamma_{1},\Gamma_{2})\in\left(\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\right)^{2};\ P(\Gamma_{1})\cap P(\Gamma_{2})=\emptyset,\ \overline{P(\Gamma_{1})}\cap\overline{P(\Gamma_{2})}\neq\emptyset\right\}.\end{split}\]
Assume \(\Gamma_{1}=(\gamma_{11},\gamma_{12},\gamma_{13})\) and \(\Gamma_{2}=(\gamma_{21},\gamma_{22},\gamma_{23})\), as shown in Figure 3:
1. In the first picture, \(\gamma_{1,i}=\gamma_{2,4-i}(i=1,2,3)\). Then we have \(\Gamma_{1}\neq\Gamma_{2}\) and \(P(\Gamma_{1})=P(\Gamma_{2})\). Hence \((\Gamma_{1},\Gamma_{2})\in\mathcal{A}(X,L)\);
2. In the second picture, \(\overline{P(\Gamma_{1})}\cap\overline{P(\Gamma_{2})}=\emptyset\). Hence \((\Gamma_{1},\Gamma_{2})\in\mathcal{B}(X,L)\) and \(X\setminus(\Gamma_{1}\cup\Gamma_{2})\simeq S_{0,3}\cup S_{0,3}\cup S_{g-4,6}\);
3. In the third picture, \(P(\Gamma_{1})\cap P(\Gamma_{2})\neq\emptyset\). Hence \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L)\);
4. In the fourth picture, \(P(\Gamma_{1})\cap P(\Gamma_{2})=\emptyset\) and \(\gamma_{1i}=\gamma_{2i}\) \((i=1,2)\). Hence \((\Gamma_{1},\Gamma_{2})\in\mathcal{D}(X,L)\).
Denote by
\[A(X,L) =\#\mathcal{A}(X,L),\ B(X,L)=\#\mathcal{B}(X,L),\] \[C(X,L) =\#\mathcal{C}(X,L),\ D(X,L)=\#\mathcal{D}(X,L).\]
It is clear that
\[\begin{split}&\mathbb{E}_{\mathrm{WP}}^{g}\left[(N_{(0,3),\star}^{ (g-2,3)}(X,L))^{2}\right]=\mathbb{E}_{\mathrm{WP}}^{g}\left[A(X,L)\right]\\ &+\mathbb{E}_{\mathrm{WP}}^{g}\left[B(X,L)\right]+\mathbb{E}_{ \mathrm{WP}}^{g}\left[C(X,L)\right]+\mathbb{E}_{\mathrm{WP}}^{g}\left[D(X,L) \right].\end{split} \tag{38}\]
Since for any pair of pants \(P\) in \(X\), there exist at most \(6\) different \(\Gamma\)'s such that \(P=P(\Gamma)\), it follows that
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[A(X,L)\right]\leq 6\cdot\mathbb{E}_{\mathrm{WP} }^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L)\right]. \tag{39}\]
Now we split the proof of Theorem 19 into the following several subsections. The estimate for \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C(X,L)\right]\) is the hard part, while the estimates for \(\mathbb{E}_{\mathrm{WP}}^{g}\left[A(X,L)\right],\ \mathbb{E}_{\mathrm{WP}}^{g}\left[B(X,L)\right]\) and \(\mathbb{E}_{\mathrm{WP}}^{g}\left[D(X,L)\right]\) are relatively easier.
### Estimations of \(\mathbb{E}_{\mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L)\right]\)
Recall that by Proposition 15 we have
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[N_{\mathrm{f}-8}(X,L_{g})\right]\sim\frac{L _{g}e^{L_{g}}}{8\pi^{2}g} \tag{40}\]
where \(L_{g}=\log g-\log\log g+\omega(g)\) and \(\omega(g)=o(\log g)\). We will see that \(\mathbb{E}_{\mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\right]\) is of the same growth rate.
**Proposition 20**.: _Assume \(L>1\) and \(L=O\left(\log g\right)\), then_
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L)\right]=\frac {1}{2\pi^{2}g}Le^{L}\left(1+O\left(\frac{\log L}{L}\right)\right)\]
_where the implied constant is independent of \(L\) and \(g\)._
Proof.: For any \(L>1\), let \(D_{L}\subset\mathbb{R}_{\geq 0}^{3}\) be a domain defined by
\[D_{L}:=\left\{(x,y,z)\in\mathbb{R}_{\geq 0}^{3};\ x\leq L,\ y+z\leq L,\ x,y,z \geq 10\log L\right\}.\]
Assume \(\phi_{L}:\mathbb{R}_{+}^{3}\rightarrow\mathbb{R}_{\geq 0}\) is the characteristic function of \(D_{L}\), i.e.
\[\phi_{L}(u)=\begin{cases}0&u\notin D_{L},\\ 1&u\in D_{L}.\end{cases}\]
Assume \(\Gamma=(\gamma_{1},\gamma_{2},\gamma_{3})\) is an ordered triple of simple closed curves in \(X\) such that
\[X\setminus\left(\cup_{i=1}^{3}\gamma_{i}\right)\simeq S_{0,3}\cup S_{g-2,3}.\]
Applying Mirzakhani's integration formula (Theorem 4) to the ordered simple closed multi-curve \(\sum_{i=1}^{3}\gamma_{i}\) and the function \(\phi_{L}\), we have
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L)\right]\] \[=\frac{1}{V_{g}}\int\limits_{\mathcal{M}_{g}}\phi_{L}^{\Gamma}(X)dX\] \[=\frac{1}{V_{g}}\int\limits_{D_{L}}V_{0,3}(x,y,z)V_{g-2,3}(x,y,z) xyzdxdydz. \tag{41}\]
Recall that Equation (23) says that
\[\frac{V_{g-2,3}}{V_{g}}=\frac{1}{8\pi^{2}g}\left(1+O\left(\frac{1}{g}\right) \right). \tag{42}\]
By Theorem 8, for any \(0<x,y,z\leq L\), as \(g\rightarrow\infty\), we have
\[V_{g-2,3}(x,y,z)=V_{g-2,3}\frac{8\sinh\frac{x}{2}\sinh\frac{y}{2}\sinh\frac{z} {2}}{xyz}\cdot\left(1+O\left(\frac{L^{2}}{g}\right)\right) \tag{43}\]
where the implied constant is independent of \(g\) and \(L\). So we have
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L)\right]\] \[=\frac{1}{\pi^{2}g}\int\limits_{D_{L}}\sinh\frac{x}{2}\sinh\frac{ y}{2}\sinh\frac{z}{2}dxdydz\cdot\left(1+O\left(\frac{1}{g}\right)\right)\cdot \left(1+O\left(\frac{L^{2}}{g}\right)\right) \tag{44}\]
where the implied constant is independent of \(L\) and \(g\). From direct calculations we have
\[\int_{10\log L}^{L}\sinh\frac{x}{2}dx=2\left(\cosh\frac{L}{2}-\cosh(5\log L) \right)=e^{\frac{L}{2}}+O(L^{5}) \tag{45}\]
and
\[\int\limits_{\begin{subarray}{c}y+z\leq L\\ y,\,z\geq 10\log L\end{subarray}}\sinh\frac{y}{2}\sinh\frac{z}{2}dydz\] \[=\int_{10\log L}^{L-10\log L}\sinh\frac{z}{2}\int_{10\log L}^{L- z}\sinh\frac{y}{2}dydz\] \[=2\int_{10\log L}^{L-10\log L}\sinh\frac{z}{2}\left(\cosh\frac{L -z}{2}-\cosh(5\log L)\right)dz\] \[=2\int_{10\log L}^{L-10\log L}\sinh\frac{z}{2}\cosh\frac{L-z}{2} dz+O\left(e^{\frac{L}{2}}\right)\] \[=\frac{1}{2}Le^{\frac{L}{2}}+O\left(e^{\frac{L}{2}}\log L\right), \tag{46}\]
where the implied constants are independent of \(L\). From (45) and (46) we have
\[\int_{D_{L}}\sinh\frac{x}{2}\sinh\frac{y}{2}\sinh\frac{z}{2}dxdydz\] \[=\int_{10\log L}^{L}\sinh\frac{x}{2}dx\times\int\limits_{ \begin{subarray}{c}y+z\leq L\\ y,\,z\geq 10\log L\end{subarray}}\sinh\frac{y}{2}\sinh\frac{z}{2}dydz\] \[=\left(e^{\frac{L}{2}}+O\left(L^{5}\right)\right)\times\left( \frac{1}{2}Le^{\frac{L}{2}}+O\left(e^{\frac{L}{2}}\log L\right)\right)\] \[=\frac{1}{2}Le^{L}+O\left(e^{L}\log L\right). \tag{47}\]
From (44), (47) and the assumption that \(L=O\left(\log g\right)\), we obtain
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L)\right]\] \[=\frac{1}{\pi^{2}g}\cdot\left(\frac{1}{2}Le^{L}+O\left(e^{L}\log L \right)\right)\cdot\left(1+O\left(\frac{1}{g}\right)\right)\cdot\left(1+O \left(\frac{L^{2}}{g}\right)\right)\] \[=\frac{1}{2\pi^{2}g}Le^{L}\left(1+O\left(\frac{\log L}{L}\right)\right)\]
as desired.
### Estimations of \(\mathbb{E}_{\mathrm{WP}}^{g}\left[B(X,L)\right]\)
For \(B(X,L)\), we will show that
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[B(X,L_{g})\right]\sim\mathbb{E}_{\mathrm{ WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\right]^{2},\quad\text{as }g\to\infty,\]
where \(L_{g}=\log g-\log\log g+\omega(g)\) and \(\omega(g)=o(\log g)\). More precisely,
**Proposition 21**.: _Assume \(L>1\) and \(L=O(\log g)\), then_
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[B(X,L)\right]=\frac{1}{4\pi^{4}g^{2}}L^{2}e^{2L }\left(1+O\left(\frac{\log L}{L}\right)\right), \tag{48}\]
_where the implied constant is independent of \(g\) and \(L\)._
Proof.: Assume \((\Gamma_{1},\Gamma_{2})\in\mathcal{B}(X,L)\) and \(\Gamma_{1}=(\alpha_{1},\alpha_{2},\alpha_{3})\), \(\Gamma_{2}=(\beta_{1},\beta_{2},\beta_{3})\). From the definition of \(\mathcal{B}(X,L)\), it is not hard to check that
\[X\setminus(\Gamma_{1}\cup\Gamma_{2})=P(\Gamma_{1})\cup P(\Gamma_{2})\cup Y, \text{ where }Y\simeq S_{g-4,6}.\]
Define a function \(\phi_{L,2}:\mathbb{R}_{+}^{3}\times\mathbb{R}_{+}^{3}\rightarrow\mathbb{R}_{ \geq 0}\) as follows:
\[\phi_{L,2}(u,v):=\phi_{L}(u)\cdot\phi_{L}(v)\]
where \(\phi_{L}\) is defined in the proof of Proposition 20. Assume \((\gamma_{1},\gamma_{2},\cdots,\gamma_{6})\) is an ordered simple closed multi-curve in \(S_{g}\) such that
\[X\setminus\cup_{i=1}^{6}\gamma_{i}\simeq S_{0,3}^{1}\cup S_{0,3}^{2}\cup S_{g-4,6}\]
where the boundary of \(S_{0,3}^{1}\) consists of \(\gamma_{i}(1\leq i\leq 3)\) and the boundary of \(S_{0,3}^{2}\) consists of \(\gamma_{j}(4\leq j\leq 6)\). By Part (2) of Theorem 6 we have
\[\frac{V_{g-4,6}}{V_{g}}=\frac{1}{64\pi^{4}g^{2}}\left(1+O\left(\frac{1}{g} \right)\right). \tag{49}\]
By Theorem 8 we have
\[V_{g-4,6}(x_{1},...,x_{6})=V_{g-4,6}\cdot\prod_{i=1}^{6}\frac{2\sinh\frac{x_{ i}}{2}}{x_{i}}\cdot\left(1+O\left(\frac{L^{2}}{g}\right)\right) \tag{50}\]
where the implied constant is independent of \(L\) and \(g\). Applying Mirzakhani's integration formula Theorem 4 to ordered simple closed multi-curve \((\gamma_{1},\gamma_{2},\cdots,\gamma_{6})\) and function \(\phi_{L,2}\), together with (47), (49) and (50), we have
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[B(X,L)\right]\] \[=\frac{1}{V_{g}}\int_{D_{L}\times D_{L}}V_{g-4,6}(x_{1},...,x_{6 })\prod_{i=1}^{6}x_{i}dx_{1}...dx_{6}\] \[=\frac{V_{g-4,6}}{V_{g}}\left(\int_{D_{L}}8\sinh\frac{x}{2}\sinh \frac{y}{2}\sinh\frac{z}{2}dxdydz\right)^{2}\cdot\left(1+O\left(\frac{L^{2}}{g }\right)\right)\] \[=\frac{1}{64\pi^{4}g^{2}}\left(1+O\left(\frac{1}{g}\right)\right) \cdot\left(4Le^{L}+O\left(e^{L}\log L\right)\right)^{2}\cdot\left(1+O\left( \frac{L^{2}}{g}\right)\right)\] \[=\frac{1}{4\pi^{4}g^{2}}L^{2}e^{2L}\left(1+O\left(\frac{\log L}{L }\right)\right) \tag{51}\]
where the implied constant is independent of \(L\) and \(g\).
### Estimations of \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C(X,L)\right]\)
For \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L)\), the union \(\Gamma_{1}\cup\Gamma_{2}\) is not a union of disjoint simple closed geodesics: the two pairs of pants \(P(\Gamma_{1})\) and \(P(\Gamma_{2})\) intersect each other. This is the hard case. We will need the following construction, as in [20, 21, 22], to deform \(\Gamma_{1}\cup\Gamma_{2}\).
**Construction.** Fix a closed hyperbolic surface \(X\in\mathcal{M}_{g}\) and let \(X_{1},X_{2}\) be two distinct connected, precompact subsurfaces of \(X\) with geodesic boundaries, such that \(X_{1}\cap X_{2}\neq\emptyset\) and neither of them contains the other. Then the union \(X_{1}\cup X_{2}\) is a subsurface whose boundary consists of piecewise geodesics. We can construct from it a new subsurface, with geodesic boundary, by deforming each of its boundary components \(\xi\subset\partial\left(X_{1}\cup X_{2}\right)\) as follows:
1. if \(\xi\) is homotopically nontrivial, we deform \(X_{1}\cup X_{2}\) by shrinking \(\xi\) to the unique simple closed geodesic homotopic to it;
2. if \(\xi\) is homotopically trivial, we fill into \(X_{1}\cup X_{2}\) the disc bounded by \(\xi\).
Denote by \(S(X_{1},X_{2})\) the subsurface with geodesic boundary constructed from \(X_{1}\) and \(X_{2}\) as above. For any \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L)\), denote by \(S(\Gamma_{1},\Gamma_{2})=S\left(P(\Gamma_{1}),P(\Gamma_{2})\right)\) the subsurface of \(X\) constructed from \(P(\Gamma_{1})\) and \(P(\Gamma_{2})\).
By the construction, it is clear that
\[\ell(\partial S(X_{1},X_{2}))\leq\ell(\partial X_{1})+\ell(\partial X_{2}), \tag{52}\]
and by the isoperimetric inequality (see e.g. [2, Section 8.1] or [21]) we have
\[\mathrm{Area}(S(X_{1},X_{2}))\leq\mathrm{Area}(X_{1})+\mathrm{Area}(X_{2})+ \ell(\partial X_{1})+\ell(\partial X_{2}). \tag{53}\]
Recall that by the definition of \(\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\), we know that for any \(\Gamma\in\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\),
\[X\setminus\Gamma\simeq S_{0,3}\cup S_{g-2,3}.\]
Thus we may divide \(\mathcal{C}(X,L)\) into the following three pairwise disjoint parts:
\[\mathcal{C}(X,L)=\mathcal{C}_{0,4}(X,L)\cup\mathcal{C}_{1,2}(X,L)\cup \mathcal{C}_{\geq 3}(X,L) \tag{54}\]
where
\[\mathcal{C}_{0,4}(X,L) =\left\{(\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L);\ S(\Gamma_{1 },\Gamma_{2})\simeq S_{0,4}\right\},\] \[\mathcal{C}_{1,2}(X,L) =\left\{(\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L);\ S(\Gamma_{1 },\Gamma_{2})\simeq S_{1,2}\right\},\] \[\mathcal{C}_{\geq 3}(X,L) =\left\{(\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L);\ |\chi(S(\Gamma_{1 },\Gamma_{2}))|\geq 3\right\}.\]
Assume \(\Gamma_{1}=(\gamma_{11},\gamma_{12},\gamma_{13})\) and \(\Gamma_{2}=(\gamma_{21},\gamma_{22},\gamma_{23})\). As in Figure 4:
1. in the first picture, the simple closed geodesic \(\gamma_{12}\) coincides with \(\gamma_{22}\). We have \(S(\Gamma_{1},\Gamma_{2})\simeq S_{0,4}\) of geodesic boundaries \(\gamma_{11},\gamma_{12}=\gamma_{22},\gamma_{21}\) and \(\beta\). Hence \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{0,4}(X,L)\);
2. in the second picture, we have \(S(\Gamma_{1},\Gamma_{2})\simeq S_{1,2}\) of geodesic boundaries \(\gamma_{11}\) and \(\gamma_{21}\). Hence \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{1,2}(X,L)\);
3. in the third picture, we have \(S(\Gamma_{1},\Gamma_{2})\simeq S_{0,5}\) of geodesic boundaries \(\gamma_{11},\gamma_{21},\gamma_{23}\) where \(\gamma_{21}\) and \(\gamma_{23}\) appear twice in the boundary of \(S(\Gamma_{1},\Gamma_{2})\). Hence \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{\geq 3}(X,L)\).
For \(L>0\) and \(X\in\mathcal{M}_{g}\), define
\[\operatorname{Sub}_{L}(X)\stackrel{{\mathrm{def}}}{{=}}\left\{\begin{matrix}Y\subset X\text{ is a connected subsurface with geodesic boundary}\\ \text{ such that }\ell(\partial Y)\leq 2L\text{ and }\operatorname{Area}(Y)\leq 4L+4\pi\end{matrix}\right\}.\]
**Lemma 22**.: _For any \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L)\), there exists a triple \((\alpha_{1},\alpha_{2},Y)\) and a universal constant \(c>0\) such that_
1. \(Y=S(\Gamma_{1},\Gamma_{2})\in\operatorname{Sub}_{L}(X)\)_;_
2. \(\alpha_{i}\) _is a figure-eight closed geodesic contained in_ \(P(\Gamma_{i})\) _for_ \(i=1,2\)_;_
3. \((\alpha_{1},\alpha_{2})\) _is a filling 2-tuple in_ \(Y\) _and_ \(\ell(\alpha_{1}),\ell(\alpha_{2})\leq L+c\)_._
Proof.: For Part (1), it follows from (52) and (53) that for any \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L)\),
\[S(\Gamma_{1},\Gamma_{2})\in\operatorname{Sub}_{L}(X).\]
Part (2) is clear.
For Part (3), we first assume \(\Gamma_{i}=(\gamma_{i1},\gamma_{i2},\gamma_{i3})(i=1,2)\). For \(i=1,2\), let \(\alpha_{i}\) be the figure-eight closed geodesic contained in \(P(\Gamma_{i})\) winding around \(\gamma_{i2}\) and \(\gamma_{i3}\). Then from (17) we have
\[\cosh\frac{\ell(\alpha_{i})}{2}=\cosh\frac{\ell(\gamma_{i1})}{2}+2\cosh\frac{ \ell(\gamma_{i2})}{2}\cosh\frac{\ell(\gamma_{i3})}{2},\ i=1,2. \tag{55}\]
From the assumption that \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}(X,L)\), we have
\[\ell(\gamma_{i1})\leq L\text{ and }\ell(\gamma_{i2})+\ell(\gamma_{i3})\leq L,\ i= 1,2. \tag{56}\]
From (55) and (56), one may check that there exists a universal constant \(c>0\) such that \(\ell(\alpha_{1}),\ell(\alpha_{2})\leq L+c\). Now we show that \((\alpha_{1},\alpha_{2})\) is a filling 2-tuple in \(S(\Gamma_{1},\Gamma_{2})\). Suppose not; then there exists an essential, non-peripheral simple closed geodesic \(\beta\) in \(Y\) such that \(\beta\cap(\alpha_{1}\cup\alpha_{2})=\emptyset.\) Since \(\alpha_{i}\) fills \(P(\Gamma_{i})\) \((i=1,2)\), it follows that \(\beta\cap(P(\Gamma_{1})\cup P(\Gamma_{2}))=\emptyset\). Then by the construction of \(S(\Gamma_{1},\Gamma_{2})\), \(\beta\) would be disjoint from \(S(\Gamma_{1},\Gamma_{2})\), which contradicts \(\beta\subset Y=S(\Gamma_{1},\Gamma_{2})\).
The proof is complete.
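_Remark_.: In Part (3) of Lemma 22 one may take \(c=2\log 6\): by (56) we have \(\cosh(\tfrac{1}{2}\ell(\gamma_{i2}))\cosh(\tfrac{1}{2}\ell(\gamma_{i3}))\leq\cosh\left(\tfrac{1}{2}(\ell(\gamma_{i2})+\ell(\gamma_{i3}))\right)\leq\cosh(\tfrac{L}{2})\), so (55) gives
\[\cosh\left(\frac{\ell(\alpha_{i})}{2}\right)\leq 3\cosh\left(\frac{L}{2}\right)\leq 3e^{\frac{L}{2}},\qquad i=1,2,\]
and hence \(\ell(\alpha_{i})\leq L+2\log 6\).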
Similar to [25], we set the following assumption.
**Assumption** (\(\star\)).: Let \(Y_{0}\in\operatorname{Sub}_{L}(X)\) be a subsurface satisfying
1. \(Y_{0}\) is homeomorphic to \(S_{g_{0},k}\) for some \(g_{0}\geq 0\) and \(k>0\) with \(m=|\chi(Y_{0})|=2g_{0}-2+k\geq 1\);
2. the boundary \(\partial Y_{0}\) is a simple closed multi-geodesic in \(X\) consisting of \(k\) simple closed geodesics, among which there are \(n_{0}\) pairs for some \(n_{0}\geq 0\) such that each pair corresponds to a single simple closed geodesic in \(X\);
3. the interior of its complement \(X\setminus S_{g_{0},k}\) consists of \(q\) components \(S_{g_{1},n_{1}},...,S_{g_{q},n_{q}}\) for some \(q\geq 1\) where \(\sum_{i=1}^{q}n_{i}=k-2n_{0}\).
Our aim is to bound \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C(X,L)\right]\); by (54) it suffices to bound the three terms \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{\geq 3}(X,L)\right],\ \mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}(X,L)\right]\) and \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{1,2}(X,L)\right]\) separately.
#### 4.3.1. Bounds for \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{\geq 3}(X,L)\right]\)
We first bound \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{\geq 3}(X,L)\right]\) using the method in [13].
**Proposition 23**.: _Assume \(L>1\) and \(L\prec\log g\), then for any fixed small \(\epsilon>0\),_
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{\geq 3}(X,L)\right]\prec\left(L^{67}e^{2L +\epsilon L}\frac{1}{g^{3}}+\frac{L^{3}e^{8L}}{g^{11}}\right).\]
Proof.: For \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{\geq 3}(X,L)\), by Lemma 22 there exists a filling \(2\)-tuple \((\alpha_{1},\alpha_{2})\) in \(Y=S(\Gamma_{1},\Gamma_{2})\) of total length \(\leq 2L+2c\), where \(\alpha_{i}\) is a figure-eight closed geodesic filling the pair of pants \(P(\Gamma_{i})\) for \(i=1,2\). Considering the possible orderings of the three geodesics in elements of \(\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\), there are at most \(36\) pairs \((\Gamma_{1},\Gamma_{2})\) corresponding to the same triple \((\alpha_{1},\alpha_{2},Y)\). It follows that
\[C_{\geq 3}(X,L)\leq\sum_{\begin{subarray}{c}Y\in\mathrm{Sub}_{L}(\mathrm{X}); \\ 3\leq|\chi(Y)|\end{subarray}}36\cdot N_{2}^{\mathrm{fill}}(Y,2L+2c)\]
where \(N_{2}^{\mathrm{fill}}(Y,2L+2c)\) is defined in Subsection 2.5.2. Therefore we have
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{\geq 3}(X,L)\right]\leq\frac{1}{V_{g}}\int_{\mathcal{M}_{g}}\sum_{\begin{subarray}{c}Y\in\operatorname{Sub}_{L}(X);\\ 3\leq|\chi(Y)|\end{subarray}}36N_{2}^{\mathrm{fill}}(Y,2L+2c)\cdot 1_{[0,2L]}(\ell(\partial Y))dX. \tag{57}\]
Now we divide the summation above into following two parts: the first part consists of all subsurfaces \(Y\in\mathrm{Sub}_{L}(X)\) such that \(3\leq|\chi(Y)|\leq 10\); the second part consists of all subsurfaces \(Y\in\mathrm{Sub}_{L}(X)\) such that \(10<|\chi(Y)|\leq\left[\frac{4L+4\pi}{2\pi}\right]\).
For the first part, assume \(Y\simeq S_{g_{0},k}\in\mathrm{Sub}_{L}(X)\) satisfies _Assumption_ (\(\star\)) with an additional assumption that
\[3\leq m=2g_{0}-2+k\leq 10. \tag{58}\]
From [13, Proposition 34] and Theorem 11, we have that for any fixed \(0<\epsilon<\frac{1}{2}\),
\[\frac{1}{V_{g}}\int_{\mathcal{M}_{g}}\sum_{\begin{subarray}{c}Y\in\mathrm{Sub} _{L}(\mathrm{X});\\ Y\simeq S_{g_{0},k}\end{subarray}}36N_{2}^{\mathrm{fill}}(Y,2L+2c)\cdot 1_{[0,2L]}( \ell(\partial Y))dX \tag{59}\]
\[\prec\frac{1}{V_{g}}\int_{\mathcal{M}_{g}}\sum_{\begin{subarray}{c}Y\in\operatorname{Sub}_{L}(X);\\ Y\simeq S_{g_{0},k}\end{subarray}}Le^{2L-\frac{1-\varepsilon}{2}\ell(\partial Y)}\cdot 1_{[0,2L]}(\ell(\partial Y))dX\quad\text{(by Theorem 11)}\] \[=Le^{\frac{3}{2}L}\times\frac{1}{V_{g}}\int_{\mathcal{M}_{g}}\sum_{\begin{subarray}{c}Y\in\operatorname{Sub}_{L}(X);\\ Y\simeq S_{g_{0},k}\end{subarray}}e^{\frac{1}{2}L-\frac{1-\varepsilon}{2}\ell(\partial Y)}\cdot 1_{[0,2L]}(\ell(\partial Y))dX\] \[\prec Le^{\frac{3}{2}L}\times L^{66}e^{\frac{1}{2}L+\epsilon L}\frac{1}{g^{m}}\quad\text{(by [13, Proposition 34])}\] \[=L^{67}e^{2L+\epsilon L}\frac{1}{g^{m}}.\]
Since there are only finitely many pairs \((g_{0},k)\) satisfying assumption (58), taking a summation over all possible subsurfaces \(Y\) in inequality (59), we have
\[\frac{1}{V_{g}}\int_{\mathcal{M}_{g}}\sum_{\begin{subarray}{c}Y\in \operatorname{Sub}_{L}(\operatorname{X});\\ 3\leq|\chi(Y)|\leq 10\end{subarray}}36N_{2}^{\text{fill}}(Y,2L+2c) \cdot 1_{[0,2L]}(\ell(\partial Y))dX\prec L^{67}e^{2L+\epsilon L}\frac{1}{g^{3}}. \tag{60}\]
For the second part, firstly by (53) and the Gauss-Bonnet formula we know that \(|\chi(Y)|\prec L\). Since \(\ell(\partial Y)\leq 2L\), by Theorem 10 we have
\[N_{2}^{\text{fill}}(Y,2L+2c)\prec\left(|\chi(Y)|\,e^{2L}\right)^{2}\prec L^{ 2}e^{\frac{9}{2}L-\frac{1}{4}\ell(\partial Y)}. \tag{61}\]
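The second inequality in (61) can be seen as follows: since \(|\chi(Y)|\prec L\) and \(\ell(\partial Y)\leq 2L\) (the constant \(e^{4c}\) being absorbed in \(\prec\)),
\[\left(|\chi(Y)|\,e^{2L}\right)^{2}\prec L^{2}e^{4L}=L^{2}e^{\frac{9}{2}L}e^{-\frac{1}{2}L}\leq L^{2}e^{\frac{9}{2}L-\frac{1}{4}\ell(\partial Y)}.\]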
From (61) and [WX22b, Proposition 33], we have
\[\frac{1}{V_{g}}\int_{\mathcal{M}_{g}}\sum_{\begin{subarray}{c}Y\in\operatorname{Sub}_{L}(X);\\ 11\leq|\chi(Y)|\leq\left[\frac{4L+4\pi}{2\pi}\right]\end{subarray}}36N_{2}^{\text{fill}}(Y,2L+2c)\cdot 1_{[0,2L]}(\ell(\partial Y))dX\] \[\prec\frac{1}{V_{g}}\int_{\mathcal{M}_{g}}\sum_{\begin{subarray}{c}Y\in\operatorname{Sub}_{L}(X);\\ 11\leq|\chi(Y)|\leq\left[\frac{4L+4\pi}{2\pi}\right]\end{subarray}}L^{2}e^{\frac{9}{2}L-\frac{1}{4}\ell(\partial Y)}\cdot 1_{[0,2L]}(\ell(\partial Y))dX\] \[\prec L^{2}e^{\frac{9}{2}L}\times\frac{1}{V_{g}}\int_{\mathcal{M}_{g}}\sum_{\begin{subarray}{c}Y\in\operatorname{Sub}_{L}(X);\\ 11\leq|\chi(Y)|\leq\left[\frac{4L+4\pi}{2\pi}\right]\end{subarray}}e^{-\frac{1}{4}\ell(\partial Y)}\cdot 1_{[0,2L]}(\ell(\partial Y))dX\] \[\prec L^{2}e^{\frac{9}{2}L}\times Le^{\frac{7}{2}L}\frac{1}{g^{11}}\quad\text{(by [WX22b, Proposition 33])}\] \[=\frac{L^{3}e^{8L}}{g^{11}}. \tag{62}\]
Then combining (57), (60) and (62), we complete the proof.
#### 4.3.2. Bounds for \(\mathbb{E}_{\text{WP}}^{g}\left[C_{1,2}(X,L)\right]\)
One should be aware that the method in Proposition 23 does not yield the desired estimates for \(\mathbb{E}_{\text{WP}}^{g}\left[C_{1,2}(X,L)\right]\) and \(\mathbb{E}_{\text{WP}}^{g}\left[C_{0,4}(X,L)\right]\). Our aim for \(\mathbb{E}_{\text{WP}}^{g}\left[C_{1,2}(X,L)\right]\) is as follows.
**Proposition 24**.: _For \(L>1\) and large \(g\),_
\[\mathbb{E}_{\text{WP}}^{g}\left[C_{1,2}(X,L)\right]\prec\frac{e^{2L}}{g^{2}}.\]
The estimates for pairs \((\Gamma_{1},\Gamma_{2})\) with \(S(\Gamma_{1},\Gamma_{2})=Y\) provided by Lemma 22 are not good enough. We need to classify the relative position of \((\Gamma_{1},\Gamma_{2})\) in \(Y\) more precisely. We begin with the following bounds.
**Lemma 25**.: _For \(X\in\mathcal{M}_{g}\) and \(L>1\),_
\[\begin{split}&\#\Big{\{}(\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{1,2}(X,L),\text{ either }\Gamma_{1}\text{ or }\Gamma_{2}\text{ contains }\partial S(\Gamma_{1},\Gamma_{2})\Big{\}}\\ \prec&\sum_{(\gamma_{1},\gamma_{2},\gamma_{3})}1_{[ 10\log L,L]}(\ell(\gamma_{1}))\cdot 1_{[10\log L,L]}(\ell(\gamma_{2}))\cdot 1_{[0,L]}(\ell(\gamma_{3}))\\ &\cdot\left(\frac{\ell(\gamma_{1})}{\mathcal{R}(\ell(\gamma_{1}),\ell(\gamma_{2}),L)}+\frac{\ell(\gamma_{1})}{\mathcal{D}(\ell(\gamma_{1}),2L,0)}\right),\end{split} \tag{63}\]
_where \((\gamma_{1},\gamma_{2},\gamma_{3})\) are taken over all triples of simple closed geodesics on \(X\) satisfying that \(\gamma_{1}\cup\gamma_{2}\) cuts off a subsurface \(Y\simeq S_{1,2}\) in \(X\) and \(\gamma_{3}\) separates \(Y\) into \(S_{1,1}\cup S_{0,3}\)._
Proof.: For any \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{1,2}(X,L)\) such that either \(\Gamma_{1}\) or \(\Gamma_{2}\) contains \(\partial S(\Gamma_{1},\Gamma_{2})\), WLOG, one may assume that \(Y=S(\Gamma_{1},\Gamma_{2})\simeq S_{1,2}\) and \(\Gamma_{1}\) contains \(\partial Y=\gamma_{1}\cup\gamma_{2}\). Denote the remaining simple closed geodesic in \(\Gamma_{1}\) by \(\gamma_{3}\). Consider the map
\[\pi:(\Gamma_{1},\Gamma_{2})\mapsto(\gamma_{1},\gamma_{2},\gamma_{3})\]
where the union \(\gamma_{1}\cup\gamma_{2}\) cuts off \(Y\simeq S_{1,2}\) in \(X\) and \(\gamma_{3}\) separates \(Y\) into \(S_{1,1}\cup S_{0,3}\) with length
\[\ell(\gamma_{1}),\ell(\gamma_{2}),\ell(\gamma_{3})\in[10\log L,L]. \tag{64}\]
Now we count all \(\Gamma_{2}\)'s satisfying \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{1,2}(X,L)\) and \(P(\Gamma_{2})\subset Y\). Since \(\Gamma_{1},\Gamma_{2}\in\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\) and \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{1,2}(X,L)\), it is clear that \(\Gamma_{2}\) must contain at least one of \(\gamma_{1}\) and \(\gamma_{2}\).
Case-1: \(\Gamma_{2}\) contains \(\gamma_{1}\cup\gamma_{2}\) (see Figure 5 for an illustration). For this case, the remaining simple closed geodesic \(\tilde{\gamma}\) in \(\Gamma_{2}\) is of length \(\leq L\) and bounds a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1}\cup\gamma_{2}\) as in Figure 5. Then it follows by (19) that the number of such \(\tilde{\gamma}\)'s is at most
\[\frac{\ell(\gamma_{1})}{\mathcal{R}(\ell(\gamma_{1}),\ell(\gamma_{2}),L)}.\]
Case-2: \(\Gamma_{2}\) contains only one of \(\gamma_{1},\gamma_{2}\) (see Figure 6 for an illustration). WLOG, one may assume that \(\Gamma_{2}\) contains only \(\gamma_{1}\). Then the remaining two simple closed geodesics \(\alpha,\beta\) in \(\Gamma_{2}\), together with \(\gamma_{1}\), bound an \(S_{0,3}\) in \(Y\) of total length \(\ell(\alpha)+\ell(\beta)\leq 2L\), as in Figure 6. Then it follows by (20) that the number of such pairs \((\alpha,\beta)\) is at most
\[\frac{\ell(\gamma_{1})}{\mathcal{D}(\ell(\gamma_{1}),2L,0)}.\]
Combining these two cases, we have
\[\#\pi^{-1}(\gamma_{1},\gamma_{2},\gamma_{3})\leq 200\cdot\left(\frac{\ell( \gamma_{1})}{\mathcal{R}(\ell(\gamma_{1}),\ell(\gamma_{2}),L)}+\frac{\ell( \gamma_{1})}{\mathcal{D}(\ell(\gamma_{1}),2L,0)}\right).\]
Then the conclusion follows by taking a summation over all possible \((\gamma_{1},\gamma_{2},\gamma_{3})\)'s satisfying (64). This completes the proof.
_Remark_.: The coefficient \(200\) in the proof of Lemma 25 comes from the symmetry of the three boundary components of a pair of pants, the symmetry of \(\Gamma_{1}\) and \(\Gamma_{2}\), and is not essential. What we need is a universal positive constant.
**Lemma 26**.: _For \(X\in\mathcal{M}_{g}\) and \(L>1\),_
\[\begin{split}&\#\Big{\{}(\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{1,2}(X,L),\ \Gamma_{1}\ \text{only contains one}\\ &\quad\text{component of}\ \partial S(\Gamma_{1},\Gamma_{2}),\, \Gamma_{2}\ \text{only contains the other}\ \Big{\}}\\ &\prec\sum_{(\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})}1_{[10 \log L,L]}(\ell(\gamma_{1}))\cdot 1_{[10\log L,L]}(\ell(\gamma_{2}))\\ &\quad\cdot 1_{[0,L]}(\ell(\alpha_{1}))\cdot 1_{[0,L]}(\ell(\beta_{1})) \cdot\frac{\ell(\gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0)},\end{split} \tag{65}\]
_where \((\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})\) are taken over all quadruples satisfying that \(\gamma_{1}\cup\gamma_{2}\) cuts off a subsurface \(Y\simeq S_{1,2}\) in \(X\) and \(\alpha_{1}\cup\beta_{1}\) separates \(Y\) into \(S_{0,3}\cup S_{0,3}\) with \(\gamma_{1},\gamma_{2}\) belonging to the boundaries of the two different \(S_{0,3}\)'s._
Proof.: For any \((\Gamma_{1},\Gamma_{2})\) belonging to the set on the left side of (65), WLOG, one may assume that \(Y=S(\Gamma_{1},\Gamma_{2})\), \(\partial Y=\gamma_{1}\cup\gamma_{2}\), \(\Gamma_{1}\) only contains \(\gamma_{1}\) and \(\Gamma_{2}\) only contains \(\gamma_{2}\). The remaining two simple closed geodesics \(\alpha_{1},\beta_{1}\) in \(\Gamma_{1}\) separate \(Y\) into \(S_{0,3}\cup S_{0,3}\), i.e. two copies of \(S_{0,3}\). Consider the map
\[\pi:(\Gamma_{1},\Gamma_{2})\mapsto(\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1}),\]
where the union \(\gamma_{1}\cup\gamma_{2}\) cuts off \(Y\simeq S_{1,2}\) in \(X\), and \(\alpha_{1}\cup\beta_{1}\) separates \(Y\) into \(S_{0,3}\cup S_{0,3}\) such that \(\gamma_{1},\gamma_{2}\) belong to the boundaries of two different \(S_{0,3}\)'s (see Figure 7 for an illustration). Moreover, their lengths satisfy
\[\ell(\gamma_{1}),\ell(\gamma_{2}),\ell(\alpha_{1}),\ell(\beta_{1})\in[10\log L,L]. \tag{66}\]
Then we count all \(\Gamma_{2}\)'s such that \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{1,2}(X,L)\) with \(\gamma_{1}\notin\Gamma_{2},\gamma_{2}\in\Gamma_{2}\) and \(P(\Gamma_{2})\subset Y\). Such a \(\Gamma_{2}\) is uniquely determined by the remaining two simple closed geodesics \(\alpha_{2},\beta_{2}\) in \(Y\) whose union separates \(Y\) into \(S_{0,3}\cup S_{0,3}\) with \(\gamma_{1},\gamma_{2}\) in two different \(S_{0,3}\)'s, of length \(\ell(\alpha_{2}),\ell(\beta_{2})\leq L\) as shown in Figure 7.
It is clear that
\[\ell(\alpha_{2})+\ell(\beta_{2})\leq 2L.\]
Then it follows by (20) that there are at most
\[\frac{\ell(\gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0)}\]
such pairs of \((\alpha_{2},\beta_{2})\)'s. This implies that
\[\#\pi^{-1}(\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})\leq\frac{200\ell( \gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0)}.\]
Summing over all possible \((\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})\)'s satisfying (66), we complete the proof.
**Lemma 27**.: _For \(X\in\mathcal{M}_{g}\) and \(L>1\),_
\[\#\Big{\{}(\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{1,2}(X,L),\text{ both }\Gamma_{1}\text{ and }\Gamma_{2}\] \[\text{ only contain the same one component of }\partial S(\Gamma_{1},\Gamma_{2})\Big{\}}\] \[\prec \sum_{(\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})}1_{[10\log L, L]}(\ell(\gamma_{1}))\cdot 1_{[0,4L]}(2\ell(\gamma_{1})+\ell(\gamma_{2}))\] \[\cdot 1_{[0,L]}(\ell(\alpha_{1}))\cdot 1_{[0,L]}(\ell(\beta_{1})) \cdot\frac{\ell(\gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0)}, \tag{67}\]
_where \((\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})\) are taken over all quadruples satisfying that \(\gamma_{1}\cup\gamma_{2}\) cuts off a subsurface \(Y\simeq S_{1,2}\) in \(X\) and \(\alpha_{1}\cup\beta_{1}\) separates \(Y\) into \(S_{0,3}\cup S_{0,3}\) with \(\gamma_{1},\gamma_{2}\) belonging to the boundaries of two different \(S_{0,3}\)'s._
Proof.: For any \((\Gamma_{1},\Gamma_{2})\) belonging to the set on the left side of (67), WLOG, one may assume that \(Y=S(\Gamma_{1},\Gamma_{2})\), \(\partial Y=\gamma_{1}\cup\gamma_{2}\), and both \(\Gamma_{1}\) and \(\Gamma_{2}\) contain \(\gamma_{1}\) and do not contain \(\gamma_{2}\). Assume that \(\Gamma_{1}\) contains \(\gamma_{1},\alpha_{1},\beta_{1}\) and \(\Gamma_{2}\) contains \(\gamma_{1},\alpha_{2},\beta_{2}\). In this situation, we warn that \(\ell(\gamma_{2})\) _may exceed_ \(L\). Since \(P(\Gamma_{1})\cup P(\Gamma_{2})\) fills \(Y\), there is a connected component \(C\) of \(Y\setminus\left(P(\Gamma_{1})\cup P(\Gamma_{2})\right)\) such that \(C\) is topologically a cylinder and \(\gamma_{2}\) is a connected component of \(\partial C\). The other connected component of \(\partial C\), denoted by \(\eta\), is a closed piecewise-geodesic loop, freely homotopic to \(\gamma_{2}\). The geodesic arcs in \(\eta\) are subarcs of \(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2}\), as shown in Figure 8. Firstly it is clear that
\[\ell(\gamma_{2})\leq\ell(\eta)\leq\ell(\alpha_{1})+\ell(\beta_{1})+\ell( \alpha_{2})+\ell(\beta_{2}). \tag{68}\]
Since \(\Gamma_{1},\Gamma_{2}\in\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\), we have
\[\begin{cases}\ell(\gamma_{1})+\ell(\alpha_{1})+\ell(\beta_{1})\leq 2L\\ \ell(\gamma_{1})+\ell(\alpha_{2})+\ell(\beta_{2})\leq 2L\\ \ell(\gamma_{1})\geq 10\log L\end{cases}. \tag{69}\]
It follows from (68) and (69) that
\[\ell(\gamma_{2})\leq 4L-2\ell(\gamma_{1})\leq 4L-20\log L. \tag{70}\]
Consider the map
\[\pi:(\Gamma_{1},\Gamma_{2})\mapsto(\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})\]
for \((\Gamma_{1},\Gamma_{2})\) belonging to the set in the left side of (67). Here \(\gamma_{1}\cup\gamma_{2}\) cuts off \(Y\simeq S_{1,2}\) in \(X\) and \(\alpha_{1}\cup\beta_{1}\) separates \(Y\) into \(S_{0,3}\cup S_{0,3}\) such that \(\gamma_{1},\gamma_{2}\) belong to boundaries of two different \(S_{0,3}\)'s. Moreover, their lengths satisfy
\[\ell(\gamma_{1}),\ell(\alpha_{1}),\ell(\beta_{1})\in[10\log L,L],\ 2\ell( \gamma_{1})+\ell(\gamma_{2})\leq 4L. \tag{71}\]
Then we count all possible \(\Gamma_{2}\)'s such that \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{1,2}(X,L)\), \(\gamma_{1}\in\Gamma_{2},\gamma_{2}\notin\Gamma_{2}\) and \(P(\Gamma_{2})\subset Y\). Since \(\ell(\alpha_{2})+\ell(\beta_{2})\leq 2L\), it follows by (20) that there are at most
\[\frac{\ell(\gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0)}\]
such pairs of \((\alpha_{2},\beta_{2})\)'s. This implies that
\[\#\pi^{-1}(\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})\leq\frac{200\ell( \gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0)}.\]
Summing over all possible \((\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})\)'s satisfying (71), we complete the proof.
Now we are ready to prove Proposition 24.
Proof of Proposition 24.: Following Lemma 25, Lemma 26 and Lemma 27, for \(L>1\) we have
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{1,2}(X,L)\right]\] \[\prec \mathbb{E}_{\mathrm{WP}}^{g}\Bigg{[}\sum_{(\gamma_{1},\gamma_{2}, \gamma_{3})}1_{[10\log L,L]}(\ell(\gamma_{1}))\cdot 1_{[10\log L,L]}(\ell(\gamma_{2})) \cdot 1_{[0,L]}(\ell(\gamma_{3}))\] \[\cdot\left(\frac{\ell(\gamma_{1})}{\mathcal{R}(\ell(\gamma_{1}), \ell(\gamma_{2}),L)}+\frac{\ell(\gamma_{1})}{\mathcal{D}(\ell(\gamma_{1}),2L, 0)}\right)\] \[+\sum_{(\gamma_{1},\gamma_{2},\alpha_{1},\beta_{1})}\Big{(}1_{[10 \log L,L]}(\ell(\gamma_{1})\cdot 1_{[10\log L,L]}(\ell(\gamma_{2}))\] \[\cdot 1_{[0,L]}(\ell(\alpha_{1}))\cdot 1_{[0,L]}(\ell(\beta_{1})) \cdot\frac{\ell(\gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0)}\] \[+ 1_{[10\log L,L]}(\ell(\gamma_{1}))\cdot 1_{[0,4L]}(2\ell(\gamma_{1})+ \ell(\gamma_{2}))\] \[\cdot 1_{[0,L]}(\ell(\alpha_{1}))\cdot 1_{[0,L]}(\ell(\beta_{1})) \cdot\frac{\ell(\gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0)}\Big{)}\Bigg{]}, \tag{72}\]
where \(\gamma_{1}\cup\gamma_{2}\) cuts off \(Y\simeq S_{1,2}\) in \(X\), \(\gamma_{3}\) separates \(Y\) into \(S_{1,1}\cup S_{0,3}\), and \(\alpha_{1}\cup\beta_{1}\) separates \(Y\) into \(S_{0,3}\cup S_{0,3}\) with \(\gamma_{1},\gamma_{2}\) in boundaries of two different \(S_{0,3}\)'s.
For \(Y\simeq S_{1,2}\) in \(X\), the complementary subsurface \(X\setminus\overline{Y}\) can be of type either \(S_{g-2,2}\) or \(S_{g_{1},1}\cup S_{g_{2},1}\) with \(g_{1}+g_{2}=g-1\). By Mirzakhani's integration formula (Theorem 4), Theorem 5, Theorem 8, and Lemma 13, we have that for \(L>1\),
\[\begin{split}&\mathbb{E}_{\mathrm{WP}}^{g}\Bigg{[}\sum_{(\gamma_{1}, \gamma_{2},\gamma_{3})}1_{[10\log L,L]}(\ell(\gamma_{1}))\cdot 1_{[10\log L,L]}(\ell( \gamma_{2}))\cdot 1_{[0,L]}(\ell(\gamma_{3}))\\ &\cdot\left(\frac{\ell(\gamma_{1})}{\mathcal{R}(\ell(\gamma_{1}),\ell(\gamma_{2}),L)}+\frac{\ell(\gamma_{1})}{\mathcal{D}(\ell(\gamma_{1}),2L,0)}\right)\Bigg{]}\\ \prec&\frac{1}{V_{g}}\int_{0\leq x,y,z\leq L}\Big{[} (1+x)(1+e^{\frac{L-x-y}{2}})+(1+x)(1+e^{\frac{2L-x}{2}})\Big{]}V_{1,1}(z)\\ &\cdot V_{0,3}(x,y,z)\left(V_{g-2,2}(x,y)+\sum_{(g_{1},g_{2})}V_{ g_{1},1}(x)V_{g_{2},1}(y)\right)xyz\cdot dxdydz\\ \prec&\frac{1}{V_{g}}\int_{0\leq x,y,z\leq L} \Big{[}(1+x)(1+e^{\frac{L-x-y}{2}})+(1+x)(1+e^{\frac{2L-x}{2}})\Big{]}\\ &\cdot(1+z^{2})\left(V_{g-2,2}+\sum_{(g_{1},g_{2})}V_{g_{1},1}V_ {g_{2},1}\right)\sinh\frac{x}{2}\sinh\frac{y}{2}\cdot z\cdot dxdydz\\ \prec&\frac{V_{g-2,2}+\sum\limits_{(g_{1},g_{2})}V_{ g_{1},1}V_{g_{2},1}}{V_{g}}\cdot\left(L^{5}e^{L}+L^{7}e^{\frac{L}{2}}+L^{6}e^{ \frac{3}{2}L}\right),\end{split} \tag{73}\]
where \((g_{1},g_{2})\) are taken over all possible \(1\leq g_{1},g_{2}\) and \(g_{1}+g_{2}=g-1\).
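For the reader's convenience, we record the elementary one-variable estimates behind the last step; this is only a rough bookkeeping, consistent with the stated bound. Since
\[\int_{0}^{L}(1+z^{2})z\,dz\asymp L^{4},\qquad\int_{0}^{L}(1+x)e^{\frac{x}{2}}dx\asymp Le^{\frac{L}{2}},\qquad\int_{0}^{L}e^{\frac{y}{2}}dy\asymp e^{\frac{L}{2}},\]
we get, for instance,
\[\int_{[0,L]^{2}}(1+x)\big{(}1+e^{\frac{2L-x}{2}}\big{)}\sinh\frac{x}{2}\sinh\frac{y}{2}\,dxdy\prec Le^{L}+L^{2}e^{\frac{3}{2}L},\]
and multiplying by the \(z\)-factor \(L^{4}\) gives terms dominated by the right side of (73); the term involving \(e^{\frac{L-x-y}{2}}\) is handled in the same way.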
Similarly, by Mirzakhani's integration formula, i.e. Theorem 4, Theorem 5, Theorem 8, and Theorem 13, we have that for \(L>1\),
\[\begin{split}&\mathbb{E}_{\mathrm{WP}}^{g}\Big{[}\sum_{(\gamma_{1}, \gamma_{2},\alpha_{1},\beta_{1})}1_{[10\log L,L]}(\ell(\gamma_{1}))\cdot 1_{[10\log L,L]}(\ell( \gamma_{2}))\\ \cdot& 1_{[0,L]}(\ell(\alpha_{1}))\cdot 1_{[0,L]}( \ell(\beta_{1}))\cdot\frac{\ell(\gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0 )}\Big{]}\\ \prec&\frac{1}{V_{g}}\int_{0\leq x,y,z,w\leq L}(1+y) (1+e^{\frac{2L-y}{2}})V_{0,3}(x,z,w)V_{0,3}(y,z,w)\\ &\cdot\left(V_{g-2,2}(x,y)+\sum_{(g_{1},g_{2})}V_{g_{1},1}(x)V_{ g_{2},1}(y)\right)xyzw\cdot dxdydzdw\\ \prec&\frac{1}{V_{g}}\int_{0\leq x,y,z,w\leq L}(1+y) (1+e^{\frac{2L-y}{2}})\left(V_{g-2,2}+\sum_{(g_{1},g_{2})}V_{g_{1},1}V_{g_{2}, 1}\right)\\ &\cdot\sinh\frac{x}{2}\sinh\frac{y}{2}\cdot zw\cdot dxdydzdw\\ \prec&\frac{V_{g-2,2}+\sum\limits_{(g_{1},g_{2})}V_{ g_{1},1}V_{g_{2},1}}{V_{g}}\cdot\left(L^{5}e^{L}+L^{6}e^{\frac{3}{2}L}\right), \end{split} \tag{74}\]
where \((g_{1},g_{2})\) are taken over all possible \(1\leq g_{1},g_{2}\) and \(g_{1}+g_{2}=g-1\).
For the remaining term on the right side of (72), if we set
\[\mathbf{cond}=\begin{cases}10\log L\leq x\leq L\\ 0\leq y\leq 4L-2x\\ 0\leq z,w\leq L\end{cases},\]
then it follows by Mirzakhani's integration formula, i.e. Theorem 4, Theorem 5, Theorem 8, and Theorem 13 that for \(L>1\),
\[\mathbb{E}_{\text{WP}}^{g}\Big{[}\sum_{(\gamma_{1},\gamma_{2}, \alpha_{1},\beta_{1})}1_{[10\log L,L]}(\ell(\gamma_{1}))\cdot 1_{[0,4L]}(2\ell( \gamma_{1})+\ell(\gamma_{2}))\] \[\cdot 1_{[0,L]}(\ell(\alpha_{1}))\cdot 1_{[0,L]}(\ell(\beta_{1})) \cdot\frac{200\ell(\gamma_{2})}{\mathcal{D}(\ell(\gamma_{2}),2L,0)}\Big{]}\] \[\prec \frac{1}{V_{g}}\int_{\mathbf{cond}}(1+y)(1+e^{\frac{2L-y}{2}})V_ {0,3}(x,z,w)V_{0,3}(y,z,w)\] \[\cdot\left(V_{g-2,2}(x,y)+\sum_{(g_{1},g_{2})}V_{g_{1},1}(x)V_{g _{2},1}(y)\right)xyzw\cdot dxdydzdw\] \[\prec \frac{1}{V_{g}}\int_{\mathbf{cond}}(1+y)(1+e^{\frac{2L-y}{2}})\] \[\cdot\left(V_{g-2,2}+\sum_{(g_{1},g_{2})}V_{g_{1},1}V_{g_{2},1} \right)\sinh\frac{x}{2}\sinh\frac{y}{2}\cdot zw\cdot dxdydzdw\] \[\prec \frac{V_{g-2,2}+\sum_{(g_{1},g_{2})}V_{g_{1},1}V_{g_{2},1}}{V_{g }}\cdot L^{5}\cdot\int_{\begin{subarray}{c}10\log L\leq x\leq L\\ 0\leq y\leq 4L-2x\end{subarray}}(1+e^{\frac{2L-y}{2}})e^{\frac{x+y}{2}}dxdy\] \[\prec \frac{V_{g-2,2}+\sum\limits_{(g_{1},g_{2})}V_{g_{1},1}V_{g_{2},1}} {V_{g}}\cdot L^{5}\cdot\left(e^{2L-5\log L}+Le^{\frac{3}{2}L}\right), \tag{75}\]
where \((g_{1},g_{2})\) are taken over all possible \(1\leq g_{1},g_{2}\) and \(g_{1}+g_{2}=g-1\).
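As a quick check of the last double integral (not needed elsewhere), splitting the integrand into its two parts gives
\[\int_{\begin{subarray}{c}10\log L\leq x\leq L\\ 0\leq y\leq 4L-2x\end{subarray}}e^{\frac{x+y}{2}}dxdy\prec\int_{10\log L}^{L}e^{\frac{x}{2}}\cdot e^{\frac{4L-2x}{2}}dx\prec e^{2L-5\log L}\]
and
\[\int_{\begin{subarray}{c}10\log L\leq x\leq L\\ 0\leq y\leq 4L-2x\end{subarray}}e^{\frac{2L-y}{2}}\cdot e^{\frac{x+y}{2}}dxdy\prec L\int_{10\log L}^{L}e^{L+\frac{x}{2}}dx\prec Le^{\frac{3}{2}L},\]
which together recover the factor \(e^{2L-5\log L}+Le^{\frac{3}{2}L}\) above.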
By Theorem 6 and Theorem 7, we have
\[\frac{V_{g-2,2}+\sum\limits_{(g_{1},g_{2})}V_{g_{1},1}V_{g_{2},1}}{V_{g}} \prec\frac{1}{V_{g}}\left(V_{g-2,2}+\frac{W_{2g-4}}{g}\right)\prec\frac{1}{g ^{2}}. \tag{76}\]
Then combining (72), (73), (74), (75) and (76) we get
\[\mathbb{E}_{\text{WP}}^{g}\left[C_{1,2}(X,L)\right]\prec\frac{V_ {g-2,2}+\sum\limits_{(g_{1},g_{2})}V_{g_{1},1}V_{g_{2},1}}{V_{g}}\cdot(L^{5}e^ {L}+L^{7}e^{\frac{L}{2}}+L^{6}e^{\frac{3L}{2}}+e^{2L})\] \[\prec \frac{V_{g-2,2}+\frac{W_{2g-4}}{g}}{V_{g}}\cdot(L^{5}e^{L}+L^{7} e^{\frac{L}{2}}+L^{6}e^{\frac{3L}{2}}+e^{2L})\prec\frac{e^{2L}}{g^{2}}\]
as desired.
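We remark that the final step also uses the elementary fact that \(\sup_{L>1}L^{k}e^{-\frac{L}{2}}<\infty\) for each fixed \(k\geq 0\), so that
\[L^{5}e^{L}+L^{7}e^{\frac{L}{2}}+L^{6}e^{\frac{3L}{2}}\prec e^{2L}\quad\text{for }L>1.\]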
#### 4.3.3. Bounds for \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}(X,L)\right]\)
Our bound for \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}(X,L)\right]\) is as follows. The proof is similar to the one used to bound \(\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{1,2}(X,L)\right]\).
**Proposition 28**.: _For \(L>1\) and large \(g\),_
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}(X,L)\right]\prec\frac{Le^{2L}}{g^{2}}.\]
When \(S(\Gamma_{1},\Gamma_{2})\simeq S_{0,4}\), two boundary geodesics of \(S(\Gamma_{1},\Gamma_{2})\) may be the same closed geodesic in \(X\), in which case the completion \(\overline{S(\Gamma_{1},\Gamma_{2})}\simeq S_{1,2}\); otherwise \(\overline{S(\Gamma_{1},\Gamma_{2})}\simeq S(\Gamma_{1},\Gamma_{2})\simeq S_{0,4}\). Moreover, each of \(\Gamma_{1}\) and \(\Gamma_{2}\) has exactly two closed geodesics contained in the boundary of \(S(\Gamma_{1},\Gamma_{2})\). Now we define
\[\mathcal{C}_{0,4}^{0}(X,L) :=\left\{(\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{0,4}(X,L),\ \overline{S(\Gamma_{1},\Gamma_{2})}\simeq S_{0,4}\right\},\] \[\mathcal{C}_{0,4}^{1}(X,L) :=\left\{(\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{0,4}(X,L),\ \overline{S(\Gamma_{1},\Gamma_{2})}\simeq S_{1,2}\right\},\]
and set
\[C_{0,4}^{0}(X,L)=\#\mathcal{C}_{0,4}^{0}(X,L),\ C_{0,4}^{1}(X,L)=\#\mathcal{C} _{0,4}^{1}(X,L).\]
For \((\Gamma_{1},\Gamma_{2})\) in \(\mathcal{C}_{0,4}^{1}(X,L)\), view \(S(\Gamma_{1},\Gamma_{2})\) as the surface obtained by cutting \(\overline{S(\Gamma_{1},\Gamma_{2})}\) along a non-separating simple closed geodesic. Then the number of elements in \(\mathcal{C}_{0,4}^{1}(X,L)\) satisfies the same estimates as in Lemma 25, Lemma 26 and Lemma 27. Therefore, the proof of Proposition 24 yields the following.
**Proposition 29**.: _For \(L>1\) and large \(g\),_
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}^{1}(X,L)\right]\prec\frac{e^{2L}}{g ^{2}}.\]
Now we consider \(\mathcal{C}_{0,4}^{0}(X,L)\). Again we need to accurately classify elements in it according to the relative position of \((\Gamma_{1},\Gamma_{2})\) in \(S(\Gamma_{1},\Gamma_{2})\simeq S_{0,4}.\) The first one is as follows.
**Lemma 30**.: _For \(X\in\mathcal{M}_{g}\) and \(L>1\), we have_
\[\#\Big{\{}(\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{0,4}^{0}(X,L): \Gamma_{1}\cup\Gamma_{2}\text{ contains }\partial S(\Gamma_{1},\Gamma_{2})\Big{\}}\] \[\prec \sum_{(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)}1_{[0,L ]}(\ell(\gamma_{1}))\cdot 1_{[0,L]}(\ell(\gamma_{2}))\cdot 1_{[0,L]}(\ell(\gamma_{3})) \cdot 1_{[0,L]}(\ell(\gamma_{4}))\] \[\cdot 1_{[0,L]}(\ell(\eta))\cdot 1_{[0,2L-10\log L]}(\ell(\gamma_{1})+ \ell(\gamma_{2}))\cdot 1_{[0,2L-10\log L]}(\ell(\gamma_{3})+\ell(\gamma_{4}))\] \[\cdot \frac{\ell(\gamma_{3})}{\mathcal{R}(\ell(\gamma_{3}),\ell(\gamma_ {4}),L)}, \tag{77}\]
_where \((\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\) are taken over all quintuples satisfying that \(\gamma_{1}\cup\gamma_{2}\cup\gamma_{3}\cup\gamma_{4}\) cuts off a subsurface \(Y\simeq S_{0,4}\) in \(X\) and \(\eta\) bounds a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1},\gamma_{2}\)._
Proof.: For any \((\Gamma_{1},\Gamma_{2})\) belonging to the set in the left side of (77), WLOG, one may assume that \(Y=S(\Gamma_{1},\Gamma_{2})\), \(\partial Y=\{\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\}\), and \(\Gamma_{1}\) contains \(\gamma_{1},\gamma_{2}\). Then it follows that \(\Gamma_{2}\) contains \(\gamma_{3},\gamma_{4}\). Assume the remaining simple closed geodesic in \(\Gamma_{1}\) is \(\eta\) and the remaining simple closed geodesic in \(\Gamma_{2}\) is \(\xi\), as shown in Figure 9. Consider the map
\[\pi:(\Gamma_{1},\Gamma_{2})\mapsto(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\]
where \(\gamma_{1}\cup\gamma_{2}\cup\gamma_{3}\cup\gamma_{4}\) cuts off a subsurface \(Y\simeq S_{0,4}\) in \(X\) and \(\eta\) bounds a \(S_{0,3}\) along with \(\gamma_{1},\gamma_{2}\) in \(Y\). Moreover their lengths satisfy
\[\ell(\gamma_{1}),\ell(\gamma_{2}),\ell(\gamma_{3}),\ell(\gamma_{4}),\ell(\eta) \in[10\log L,L] \tag{78}\]
and
\[\ell(\gamma_{1})+\ell(\gamma_{2})\leq 2L-10\log L,\ell(\gamma_{3})+\ell(\gamma_ {4})\leq 2L-10\log L. \tag{79}\]
Then we count all possible \(\Gamma_{2}\)'s such that \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}^{0}_{0,4}(X,L)\), \(P(\Gamma_{2})\subset Y\) and \(\{\gamma_{3},\gamma_{4}\}\subset\Gamma_{2}\). We only need to count all possible \(\xi\)'s of length \(\leq L\), each of which bounds a \(S_{0,3}\) in \(Y\) along with \(\gamma_{3},\gamma_{4}.\) It follows by (18) that there are at most
\[\frac{\ell(\gamma_{3})}{\mathcal{R}(\ell(\gamma_{3}),\ell(\gamma_{4}),L)}\]
such \(\xi\)'s. So we have that
\[\#\pi^{-1}(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\leq\frac{200\ell (\gamma_{3})}{\mathcal{R}(\ell(\gamma_{3}),\ell(\gamma_{4}),L)}.\]
Then the proof is completed by taking a summation over all possible quintuples \((\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\)'s satisfying (78) and (79).
**Lemma 31**.: _For \(X\in\mathcal{M}_{g}\) and \(L>1\), we have_
\[\#\Big{\{}(\Gamma_{1},\Gamma_{2})\in\mathcal{C}^{0}_{0,4}(X,L): \Gamma_{1}\cup\Gamma_{2}\text{ contains}\] \[\text{exactly }3\text{ boundary geodesics of }S(\Gamma_{1},\Gamma_{2})\Big{\}}\] \[\prec \sum_{(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)}1_{[10 \log L,L]}(\ell(\gamma_{1}))\cdot 1_{[0,L]}(\ell(\gamma_{2}))\cdot 1_{[0,L]}(\ell( \gamma_{3}))\cdot 1_{[0,L]}(\ell(\eta))\] \[\cdot 1_{[0,4L]}(2\ell(\gamma_{1})+\ell(\gamma_{2})+\ell(\gamma_{3})+ \ell(\gamma_{4}))\cdot\frac{\ell(\gamma_{2})}{\mathcal{R}(\ell(\gamma_{2}), \ell(\gamma_{4}),L)}, \tag{80}\]
_where \((\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\) are taken over all quintuples satisfying that \(\gamma_{1}\cup\gamma_{2}\cup\gamma_{3}\cup\gamma_{4}\) cuts off a subsurface \(Y\simeq S_{0,4}\) in \(X\) and \(\eta\) bounds a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1},\gamma_{2}\)._
Proof.: For any \((\Gamma_{1},\Gamma_{2})\) belonging to the set in the left side of (80), WLOG, one may assume that \(Y=S(\Gamma_{1},\Gamma_{2})\), \(\partial Y=\{\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\}\), \(\Gamma_{1}\) contains \(\gamma_{1},\gamma_{2}\) and \(\Gamma_{2}\) contains \(\gamma_{1},\gamma_{3}\). Assume that the remaining simple closed geodesic in \(\Gamma_{1}\) is \(\eta\) and the remaining simple closed geodesic in \(\Gamma_{2}\) is \(\xi\), as shown in Figure 10.
In this case, \(\ell(\gamma_{4})\) may exceed \(L\). However, since \(P(\Gamma_{1})\cup P(\Gamma_{2})\) fills \(Y\), there is a connected component \(C\) of \(Y\setminus P(\Gamma_{1})\cup P(\Gamma_{2})\) such that \(C\) is topologically a cylinder and \(\gamma_{4}\) is a connected component of \(\partial C\). The other connected component of \(\partial C\) is the union of some geodesic arcs on \(\eta,\xi\). It follows that
\[\ell(\gamma_{4})\leq\ell(\eta)+\ell(\xi). \tag{81}\]
Since \(\Gamma_{1},\Gamma_{2}\in\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\), we have
\[\begin{cases}\ell(\gamma_{1})+\ell(\gamma_{2})+\ell(\eta)\leq 2L\\ \ell(\gamma_{1})+\ell(\gamma_{3})+\ell(\xi)\leq 2L\\ \ell(\gamma_{1})\geq 10\log L\end{cases}. \tag{82}\]
It follows from (81) and (82) that
\[\ell(\gamma_{1})+\ell(\gamma_{2})+\ell(\gamma_{3})+\ell(\gamma_{4})\leq 4L- \ell(\gamma_{1})\leq 4L-10\log L. \tag{83}\]
Consider the map
\[\pi:(\Gamma_{1},\Gamma_{2})\mapsto(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4 },\eta)\]
where \(\gamma_{1}\cup\gamma_{2}\cup\gamma_{3}\cup\gamma_{4}\) cuts off a \(Y\simeq S_{0,4}\) in \(X\) and \(\eta\) cuts off a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1}\cup\gamma_{2}\). Moreover their lengths satisfy
\[\begin{split}&\ell(\gamma_{1}),\ell(\gamma_{2}),\ell(\gamma_{3}), \ell(\eta)\in[10\log L,L],\\ & 2\ell(\gamma_{1})+\ell(\gamma_{2})+\ell(\gamma_{3})+\ell(\gamma_ {4})\leq 4L.\end{split} \tag{84}\]
Then we count all possible \(\Gamma_{2}\)'s such that \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{0,4}^{0}(X,L),\ P(\Gamma_{2})\subset Y\) and \(\gamma_{1},\gamma_{3}\in\Gamma_{2}\). We only need to count all possible \(\xi\)'s of length \(\leq L\), which
bounds a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1},\gamma_{3}.\) It follows by (18) that there are at most
\[\frac{\ell(\gamma_{2})}{\mathcal{R}(\ell(\gamma_{2}),\ell(\gamma_{4}),L)}\]
such \(\xi\)'s. So we have that
\[\#\pi^{-1}(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\leq\frac{200\ell( \gamma_{2})}{\mathcal{R}(\ell(\gamma_{2}),\ell(\gamma_{4}),L)}.\]
Then the proof is completed by taking a summation over all possible quintuples \((\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\)'s satisfying (84).
**Lemma 32**.: _For \(X\in\mathcal{M}_{g}\) and \(L>1\), we have_
\[\#\Big{\{}(\Gamma_{1},\Gamma_{2})\in\mathcal{C}^{0}_{0,4}(X,L): \Gamma_{1}\cup\Gamma_{2}\text{ contains}\] \[\text{ exactly }2\text{ boundary geodesics of }S(\Gamma_{1},\Gamma_{2})\Big{\}}\] \[\prec \sum_{(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)}1_{[10 \log L,L]}(\ell(\gamma_{1}))\cdot 1_{[10\log L,L]}(\ell(\gamma_{2}))\cdot 1_{[0,L]}( \ell(\eta))\] \[\cdot 1_{[0,4L]}(2\ell(\gamma_{1})+2\ell(\gamma_{2})+\ell(\gamma_{3}) +\ell(\gamma_{4}))\cdot\frac{\ell(\gamma_{3})}{\mathcal{R}(\ell(\gamma_{3}), \ell(\gamma_{4}),L)}, \tag{85}\]
_where \((\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\) are taken over all quintuples satisfying that \(\gamma_{1}\cup\gamma_{2}\cup\gamma_{3}\cup\gamma_{4}\) cuts off a subsurface \(Y\simeq S_{0,4}\) in \(X\) and \(\eta\) bounds a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1},\gamma_{2}\)._
Proof.: For any \((\Gamma_{1},\Gamma_{2})\) belonging to the set in the left side of (85), WLOG, one may assume that \(Y=S(\Gamma_{1},\Gamma_{2})\), \(\partial Y=\{\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\}\) and both \(\Gamma_{1},\Gamma_{2}\) contain \(\gamma_{1},\gamma_{2}\). Assume the remaining simple closed geodesic in \(\Gamma_{1}\) is \(\eta\) and the remaining simple closed geodesic in \(\Gamma_{2}\) is \(\xi\). Then both \(\eta\) and \(\xi\) bound a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1}\cup\gamma_{2}\), as shown in Figure 11.
In this case, both \(\ell(\gamma_{3})\) and \(\ell(\gamma_{4})\) may exceed \(L\). Since \(P(\Gamma_{1})\cup P(\Gamma_{2})\) fills \(Y\), there are two connected components \(C_{1},C_{2}\) of \(Y\setminus P(\Gamma_{1})\cup P(\Gamma_{2})\) such that both \(C_{1},C_{2}\) are topologically cylinders, \(\gamma_{3}\) is a connected component of \(\partial C_{1}\) and \(\gamma_{4}\) is a
connected component of \(\partial C_{2}\). The other connected components of \(\partial C_{1}\) and \(\partial C_{2}\) are unions of different geodesic arcs on \(\eta,\xi\). It is clear that
\[\ell(\gamma_{3})+\ell(\gamma_{4})\leq\ell(\eta)+\ell(\xi). \tag{86}\]
Since \(\Gamma_{1},\Gamma_{2}\in\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L),\) we have
\[\begin{cases}\ell(\gamma_{1})+\ell(\gamma_{2})+\ell(\eta)\leq 2L\\ \ell(\gamma_{1})+\ell(\gamma_{2})+\ell(\xi)\leq 2L\\ \ell(\gamma_{1})\geq 10\log L\\ \ell(\gamma_{2})\geq 10\log L\end{cases}. \tag{87}\]
Then it follows from (86) and (87) that
\[\ell(\gamma_{1})+\ell(\gamma_{2})+\ell(\gamma_{3})+\ell(\gamma_{4})\leq 4L- \ell(\gamma_{1})-\ell(\gamma_{2})\leq 4L-20\log L.\]
Consider the map
\[\pi:(\Gamma_{1},\Gamma_{2})\mapsto(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{ 4},\eta)\]
where \(\gamma_{1}\cup\gamma_{2}\cup\gamma_{3}\cup\gamma_{4}\) cuts off a \(Y\simeq S_{0,4}\) in \(X\) and \(\eta\) cuts off a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1}\cup\gamma_{2}\). Moreover their lengths satisfy
\[\begin{split}\ell(\gamma_{1}),\ell(\gamma_{2}),\ell(\eta)\in[10 \log L,L],\\ 2\ell(\gamma_{1})+2\ell(\gamma_{2})+\ell(\gamma_{3})+\ell(\gamma_ {4})\leq 4L.\end{split} \tag{88}\]
Then we count all possible \(\Gamma_{2}\)'s such that \((\Gamma_{1},\Gamma_{2})\in\mathcal{C}_{0,4}^{0}(X,L),P(\Gamma_{2})\subset Y, \gamma_{1},\gamma_{2}\in\Gamma_{2}\). We only need to count all possible \(\xi\)'s of length \(\leq L,\) each of which bounds a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1},\gamma_{2}.\) It follows by (18) that there are at most
\[\frac{\ell(\gamma_{3})}{\mathcal{R}(\ell(\gamma_{3}),\ell(\gamma_{4}),L)}\]
such \(\xi\)'s. So we have
\[\#\pi^{-1}(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\leq\frac{200 \ell(\gamma_{3})}{\mathcal{R}(\ell(\gamma_{3}),\ell(\gamma_{4}),L)}.\]
Summing over all possible quintuples \((\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\)'s satisfying (88), we complete the proof.
For \(Y\simeq S_{0,4},\) the complement subsurface \(X\setminus\overline{Y}\) can be one of five types: (1) \(S_{g-3,4};\) (2) \(S_{g_{1},1}\cup S_{g_{2},3}\) with \(g_{1}\geq 1\) and \(g_{1}+g_{2}=g-2;\) (3) \(S_{g_{1},1}\cup S_{g_{2},1}\cup S_{g_{3},2}\) with \(g_{1},g_{2},g_{3}\geq 1\) and \(g_{1}+g_{2}+g_{3}=g-1;\) (4) \(S_{g_{1},1}\cup S_{g_{2},1}\cup S_{g_{3},1}\cup S_{g_{4},1}\) with \(g_{1},g_{2},g_{3},g_{4}\geq 1\) and \(g_{1}+g_{2}+g_{3}+g_{4}=g;\) (5) \(S_{g_{1},2}\cup S_{g_{2},2}\) with \(g_{1},g_{2}\geq 1\) and \(g_{1}+g_{2}=g-2\). Set
\[\operatorname{Vol_{WP}}\left(\mathcal{M}\left(S_{g}\setminus\overline{Y},\ell (\gamma_{1})=x,\ell(\gamma_{2})=y,\ell(\gamma_{3})=z,\ell(\gamma_{4})=w\right)\right)\]
to be the Weil-Petersson volume of the moduli space of Riemann surfaces each of which is homeomorphic to \(S_{g}\setminus\overline{Y}\) with geodesic boundaries of lengths \(\ell(\gamma_{1})=x,\ell(\gamma_{2})=y,\ell(\gamma_{3})=z,\ell(\gamma_{4})=w\). Define
\[V^{\Sigma}(S_{g}\setminus\overline{Y},x,y,z,w)=\sum_{\text{all types of }S_{g}\setminus\overline{Y}}\operatorname{Vol_{WP}}\Big{(}\mathcal{M}\big{(}S_{g}\setminus\overline{Y},\ell(\gamma_{1})=x,\ell(\gamma_{2})=y,\ell(\gamma_{3})=z,\ell(\gamma_{4})=w\big{)}\Big{)}\]
and
\[V^{\Sigma}(S_{g}\setminus\overline{Y})=V^{\Sigma}(S_{g}\setminus\overline{Y},0,0, 0,0)=\sum_{\text{all types of $S_{g}\setminus\overline{Y}$}}\operatorname{Vol}_{\text{WP}}\big{(} \mathcal{M}\left(S_{g}\setminus\overline{Y}\right)\big{)}.\]
Then by Theorem 6 and Theorem 7 we have
\[V^{\Sigma}(S_{g}\setminus\overline{Y})\] \[\prec V_{g-3,4}+\sum_{g_{1}+g_{2}=g-2}V_{g_{1},1}V_{g_{2},3}+\sum_{g_ {1}+g_{2}+g_{3}=g-1}V_{g_{1},1}V_{g_{2},1}V_{g_{3},2}\] \[+\sum_{g_{1}+g_{2}+g_{3}+g_{4}=g}V_{g_{1},1}V_{g_{2},1}V_{g_{3},1 }V_{g_{4},1}+\sum_{g_{1}+g_{2}=g-2}V_{g_{1},2}V_{g_{2},2}\] \[\prec W_{2g-4}\left(1+\frac{1}{g}+\frac{1}{g^{2}}+\frac{1}{g^{3}} \right)\prec\frac{V_{g}}{g^{2}}. \tag{89}\]
Now we are ready to bound \(\mathbb{E}_{\text{WP}}^{g}\left[C_{0,4}^{0}(X,L)\right]\).
**Proposition 33**.: _For \(L>1\) and large \(g\),_
\[\mathbb{E}_{\text{WP}}^{g}\left[C_{0,4}^{0}(X,L)\right]\prec\frac{Le^{2L}}{g^ {2}}.\]
Proof.: Following Lemma 30, Lemma 31 and Lemma 32, we have that for \(L>1\),
\[\mathbb{E}_{\text{WP}}^{g}\Big{[}C_{0,4}^{0}(X,L)\Big{]}\] \[\leq \mathbb{E}_{\text{WP}}^{g}\Bigg{[}\sum_{(\gamma_{1},\gamma_{2}, \gamma_{3},\gamma_{4},\eta)}\Big{(}1_{[0,L]}(\ell(\gamma_{1}))\cdot 1_{[0,L]}(\ell(\gamma_{2}))\cdot 1 _{[0,L]}(\ell(\gamma_{3}))\cdot 1_{[0,L]}(\ell(\gamma_{4}))\] \[\cdot 1_{[0,L]}(\ell(\eta))\cdot 1_{[0,2L-10\log L]}(\ell(\gamma_{1})+\ell( \gamma_{2}))\cdot 1_{[0,2L-10\log L]}(\ell(\gamma_{3})+\ell(\gamma_{4}))\] \[\cdot \frac{\ell(\gamma_{3})}{\mathcal{R}(\ell(\gamma_{3}),\ell(\gamma_ {4}),L)}\] \[+ 1_{[10\log L,L]}(\ell(\gamma_{1}))\cdot 1_{[0,L]}(\ell(\gamma_{2})) \cdot 1_{[0,L]}(\ell(\gamma_{3}))\cdot 1_{[0,L]}(\ell(\eta))\] \[\cdot 1_{[0,4L]}(2\ell(\gamma_{1})+\ell(\gamma_{2})+\ell(\gamma_{3})+ \ell(\gamma_{4}))\cdot\frac{\ell(\gamma_{2})}{\mathcal{R}(\ell(\gamma_{2}), \ell(\gamma_{4}),L)}\] \[+ 1_{[10\log L,L]}(\ell(\gamma_{1}))\cdot 1_{[10\log L,L]}(\ell(\gamma_{2})) \cdot 1_{[0,L]}(\ell(\eta))\] \[\cdot 1_{[0,4L]}(2\ell(\gamma_{1})+2\ell(\gamma_{2})+\ell(\gamma_{3})+ \ell(\gamma_{4}))\cdot\frac{\ell(\gamma_{3})}{\mathcal{R}(\ell(\gamma_{3}), \ell(\gamma_{4}),L)}\Big{)}\Bigg{]}, \tag{90}\]
where \((\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\) are taken over all quintuples satisfying that \(\gamma_{1}\cup\gamma_{2}\cup\gamma_{3}\cup\gamma_{4}\) cuts off a subsurface \(Y\simeq S_{0,4}\) in \(X\) and \(\eta\) bounds a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1},\gamma_{2}\). Set
\[\mathbf{cond}_{1}=\begin{cases}0\leq x,y,z,w,v\leq L\\ x+y\leq 2L-10\log L\\ z+w\leq 2L-10\log L\end{cases}.\]
Then by Mirzakhani's integration formula Theorem 4, Theorem 5, Theorem 8 and Theorem 13, we have that for \(L>1\),
\[\begin{split}&\mathbb{E}_{\text{WP}}^{g}\Bigg{[}\sum_{(\gamma_{1}, \gamma_{2},\gamma_{3},\gamma_{4},\eta)}\Big{(}1_{[0,L]^{5}}\left(\ell(\gamma_{1} ),\ell(\gamma_{2}),\ell(\gamma_{3}),\ell(\gamma_{4}),\ell(\eta)\right)\\ &\cdot 1_{[0,2L-10\log L]^{2}}\left(\ell(\gamma_{1})+\ell(\gamma_{2} ),\ell(\gamma_{3})+\ell(\gamma_{4})\right)\cdot\frac{\ell(\gamma_{3})}{ \mathcal{R}(\ell(\gamma_{3}),\ell(\gamma_{4}),L)}\Bigg{)}\Bigg{]}\\ \prec&\frac{1}{V_{g}}\int_{\mathbf{cond}_{1}}(1+z)( 1+e^{\frac{L-z-w}{2}})V_{0,3}(x,y,v)V_{0,3}(z,w,v)\\ &\cdot V^{\Sigma}(S_{g}\setminus\overline{Y},x,y,z,w)\cdot xyzwv \cdot dxdydzdwdv\\ \prec&\frac{1}{V_{g}}\int_{\mathbf{cond}_{1}}(1+z)( 1+e^{\frac{L-z-w}{2}})\cdot V^{\Sigma}(S_{g}\setminus\overline{Y})\\ &\cdot\sinh\frac{x}{2}\sinh\frac{y}{2}\sinh\frac{z}{2}\sinh\frac{ w}{2}\cdot v\cdot dxdydzdwdv\\ \prec&\frac{V^{\Sigma}(S_{g}\setminus\overline{Y})}{ V_{g}}\cdot L^{3}\cdot\int_{\begin{subarray}{c}x+y\leq 2L-10\log L\\ z+w\leq 2L-10\log L\end{subarray}}(1+e^{\frac{L-z-w}{2}})e^{\frac{x+y+z+w}{2}} dxdydzdw\\ \prec&\frac{V^{\Sigma}(S_{g}\setminus\overline{Y})}{ V_{g}}\cdot L^{3}\cdot(L^{2}e^{2L-10\log L}+L^{3}e^{\frac{3L}{2}-5\log L}). \end{split} \tag{91}\]
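As a quick check of the last factorization, over the region in the third line the \((x,y)\)- and \((z,w)\)-integrals separate, and
\[\int_{x+y\leq 2L-10\log L}e^{\frac{x+y}{2}}dxdy\prec Le^{L-5\log L}=\frac{e^{L}}{L^{4}},\qquad\int_{z+w\leq 2L-10\log L}\big{(}1+e^{\frac{L-z-w}{2}}\big{)}e^{\frac{z+w}{2}}dzdw\prec\frac{e^{L}}{L^{4}}+L^{2}e^{\frac{L}{2}},\]
and the product of these two bounds is, up to constants, \(L^{2}e^{2L-10\log L}+L^{3}e^{\frac{3L}{2}-5\log L}\), which is the factor appearing in the last line of (91).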
Set
\[\mathbf{cond}_{2}=\begin{cases}0\leq y,z,v\leq L,\quad 10\log L\leq x\leq L \\ 2x+y+z+w\leq 4L\end{cases}\]
and
\[\mathbf{cond}_{3}=\begin{cases}0\leq v\leq L,\quad 10\log L\leq x,y\leq L \\ 2x+2y+z+w\leq 4L\end{cases}.\]
Similarly, by Mirzakhani's integration formula Theorem 4, Theorem 5, Theorem 8 and Theorem 13, we have
\[\begin{split}&\mathbb{E}_{\text{WP}}^{g}\Bigg{[}\sum_{(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)}\Big{(}1_{[10\log L,L]}(\ell(\gamma_{1 }))\cdot 1_{[0,L]^{3}}(\ell(\gamma_{2}),\ell(\gamma_{3}),\ell(\eta))\\ &\cdot 1_{[0,4L]}(2\ell(\gamma_{1})+\ell(\gamma_{2})+\ell(\gamma_{3} )+\ell(\gamma_{4}))\cdot\frac{\ell(\gamma_{2})}{\mathcal{R}(\ell(\gamma_{2}), \ell(\gamma_{4}),L)}\Big{)}\Bigg{]}\\ \prec&\frac{1}{V_{g}}\int_{\mathbf{cond}_{2}}(1+y)(1+e^{ \frac{L-y-w}{2}})\cdot V^{\Sigma}(S_{g}\setminus\overline{Y})\\ &\cdot\sinh\frac{x}{2}\sinh\frac{y}{2}\sinh\frac{z}{2}\sinh\frac{w}{2} \cdot v\cdot dxdydzdwdv\\ \prec&\frac{V^{\Sigma}(S_{g}\setminus\overline{Y})}{V_{g}} \cdot L^{3}\cdot\int_{\begin{subarray}{c}x,y,z\leq L\\ x+y+z+w\leq 4L-10\log L\end{subarray}}(1+e^{\frac{L-y-w}{2}})e^{\frac{x+y+z+w}{2}} dxdydzdw\\ \prec&\frac{V^{\Sigma}(S_{g}\setminus\overline{Y})}{V_{g}} \cdot L^{3}\cdot(L^{3}e^{2L-5\log L}+L^{3}e^{\frac{3L}{2}})\end{split} \tag{92}\]
and
\[\mathbb{E}_{\mathrm{WP}}^{g}\Big{[}\sum_{(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)}\Big{(}1_{[10\log L,L]^{2}}(\ell(\gamma_{1}),\ell(\gamma_{2}))\cdot 1_{[0,L]}(\ell(\eta))\] \[\cdot 1_{[0,4L]}(2\ell(\gamma_{1})+2\ell(\gamma_{2})+\ell(\gamma_{3})+\ell(\gamma_{4}))\cdot\frac{\ell(\gamma_{3})}{\mathcal{R}(\ell(\gamma_{3}),\ell(\gamma_{4}),L)}\Big{)}\Big{]}\] \[\prec \frac{1}{V_{g}}\int_{\mathbf{cond}_{3}}(1+z)(1+e^{\frac{L-z-w}{2}})\cdot V^{\Sigma}(S_{g}\setminus\overline{Y},x,y,z,w)\] \[\cdot\sinh\frac{x}{2}\sinh\frac{y}{2}\sinh\frac{z}{2}\sinh\frac{w}{2}\cdot v\cdot dxdydzdwdv\] \[\prec \frac{V^{\Sigma}(S_{g}\setminus\overline{Y})}{V_{g}}\cdot L^{3}\cdot\int_{\begin{subarray}{c}x,y\leq L\\ x+y+z+w\leq 4L-20\log L\end{subarray}}(1+e^{\frac{L-z-w}{2}})e^{\frac{x+y+z+w}{2}}dxdydzdw\] \[\prec \frac{V^{\Sigma}(S_{g}\setminus\overline{Y})}{V_{g}}\cdot L^{3}\cdot(L^{3}e^{2L-10\log L}+L^{3}e^{\frac{3L}{2}}). \tag{93}\]
Then combining (90), (91), (92) and (93) we have for \(L>1\),
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}^{0}(X,L)\right]\prec\frac{V^{\Sigma }(S_{g}\setminus\overline{Y})}{V_{g}}\cdot Le^{2L}. \tag{94}\]
Therefore, by (89) and (94) we obtain
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}^{0}(X,L)\right]=O\left(\frac{Le^{2L }}{g^{2}}\right)\]
as desired.
Now we are ready to prove Proposition 28.
Proof of Proposition 28.: Since \(\mathcal{C}_{0,4}(X,L)=\mathcal{C}_{0,4}^{0}(X,L)\cup\mathcal{C}_{0,4}^{1}(X,L)\), it follows by Proposition 29 and Proposition 33 that
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}(X,L)\right]\leq\mathbb{E}_{\mathrm{ WP}}^{g}\left[C_{0,4}^{1}(X,L)\right]+\mathbb{E}_{\mathrm{WP}}^{g}\left[C_{0,4}^{0}(X,L)\right]\prec\frac{Le^{2L}}{g^{2}}.\]
This completes the proof.
### Estimations of \(\mathbb{E}_{\mathrm{WP}}^{g}\left[D(X,L)\right]\)
For this part, we always assume that \(g>2\). We will show that as \(g\to\infty\),
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[D(X,L_{g})\right]=o\left(\mathbb{E}_{ \mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\right]^{2}\right).\]
More precisely,
**Proposition 34**.: _For \(L>1\), we have_
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[D(X,L)\right]\prec\frac{e^{2L}}{g^{2}L^{6}}.\]
Proof.: For \((\Gamma_{1},\Gamma_{2})\in\mathcal{D}(X,L)\), the two pairs of pants \(P(\Gamma_{1})\) and \(P(\Gamma_{2})\) share one or two simple closed geodesic boundary components. For the first case, assume that \(\Gamma_{1}=(\gamma_{1},\gamma_{2},\eta)\) and \(\Gamma_{2}=(\gamma_{3},\gamma_{4},\eta)\). For the second case, assume that \(\Gamma_{1}=(\gamma_{1},\alpha,\beta)\) and \(\Gamma_{2}=(\gamma_{2},\alpha,\beta)\). By the definition of \(\mathcal{N}_{(0,3),\star}^{(g-2,3)}(X,L)\), any two simple closed geodesics in \(\Gamma_{1}\) have total length \(\leq 2L-10\log L\), and the same holds for \(\Gamma_{2}\). We have
\[\begin{split}& D(X,L)\\ &\prec\sum_{(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)}1_{ [10\log L,L]}(\ell(\eta))\cdot 1_{[0,2L-10\log L]^{2}}\Big{(}\ell(\gamma_{1})+\ell( \gamma_{2}),\ell(\gamma_{3})+\ell(\gamma_{4})\Big{)}\\ &+\sum_{(\gamma_{1},\gamma_{2},\alpha,\beta)}1_{[0,L]^{4}}\Big{(} \ell(\gamma_{1}),\ell(\gamma_{2}),\ell(\alpha),\ell(\beta)\Big{)}.\end{split} \tag{95}\]
Here \((\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)\) are taken over all quintuples satisfying that \(\gamma_{1}\cup\gamma_{2}\cup\gamma_{3}\cup\gamma_{4}\) cuts off a subsurface \(Y\simeq S_{0,4}\) in \(X\) and \(\eta\) bounds a \(S_{0,3}\) in \(Y\) along with \(\gamma_{1},\gamma_{2}\); while \((\gamma_{1},\gamma_{2},\alpha,\beta)\) are taken over all quadruples satisfying that \(\gamma_{1}\cup\gamma_{2}\) cuts off a subsurface \(Y\simeq S_{1,2}\) in \(X\) and \(\alpha\cup\beta\) separates \(Y\) into \(S_{0,3}\cup S_{0,3}\) with \(\gamma_{1},\gamma_{2}\) belonging to the boundaries of the two different \(S_{0,3}\)'s. Set
\[\mathbf{cond}_{4}=\begin{cases}&0\leq v\leq L\\ &x+y\leq 2L-10\log L\\ &z+w\leq 2L-10\log L\end{cases}.\]
By Mirzakhani's integration formula Theorem 4, Theorem 5, Theorem 8 and (89), for \(L>1\) we have
\[\begin{split}&\mathbb{E}_{\mathrm{WP}}^{g}\Bigg{[}\sum_{( \gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\eta)}1_{[10\log L,L]}(\ell(\eta) )\\ &\qquad\qquad\qquad\qquad\times 1_{[0,2L-10\log L]^{2}}\Big{(}\ell( \gamma_{1})+\ell(\gamma_{2}),\ell(\gamma_{3})+\ell(\gamma_{4})\Big{)}\Bigg{]} \\ &\prec\frac{1}{V_{g}}\int_{\mathbf{cond}_{4}}V_{0,3}(x,y,v)V_{0,3 }(z,w,v)\\ &\cdot\sum_{\text{type of }S_{g}\setminus\overline{Y},Y\simeq S_{0,4}} \mathrm{Vol}\left(\mathcal{M}(S_{g}\setminus\overline{Y},\ell(\gamma_{1})=x, \ell(\gamma_{2})=y,\ell(\gamma_{3})=z,\ell(\gamma_{4})=w)\right)\\ &\cdot xyzwv\cdot dxdydzdwdv\\ &\prec\frac{1}{g^{2}}\int_{\mathbf{cond}_{4}}\sinh\frac{x}{2}\sinh \frac{y}{2}\sinh\frac{z}{2}\sinh\frac{w}{2}\cdot v\cdot dxdydzdwdv\\ &\prec\frac{e^{2L}}{g^{2}L^{6}}.\end{split} \tag{96}\]
Here, in the last inequality, we used the estimate
\[\int_{x>0,y>0,x+y\leq 2L-10\log L}\sinh\frac{x}{2}\sinh\frac{y}{2}dxdy\]
\[\begin{split}&\quad\prec\int_{x>0,y>0,x+y\leq 2L-10\log L}e^{\frac{x+y}{2}}dxdy\\ &\quad\prec\int_{0}^{2L-10\log L}\left(e^{\frac{x}{2}}\int_{0}^{2L-10\log L-x}e^{\frac{y}{2}}dy\right)dx\asymp\frac{e^{L}}{L^{4}}.\end{split}\]
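In fact this iterated integral can be evaluated explicitly: writing \(S=2L-10\log L\) (assumed positive), the inner integral equals \(2\big{(}e^{\frac{S-x}{2}}-1\big{)}\), so
\[\int_{0}^{S}e^{\frac{x}{2}}\Big{(}\int_{0}^{S-x}e^{\frac{y}{2}}dy\Big{)}dx=2Se^{\frac{S}{2}}-4e^{\frac{S}{2}}+4\asymp Le^{L-5\log L}=\frac{e^{L}}{L^{4}}\]
for \(L\) large.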
For the remaining term, it follows by Mirzakhani's integration formula Theorem 4, Theorem 5, Theorem 8 and (76) that for \(L>1\),
\[\begin{split}&\quad\mathbb{E}_{\mathrm{WP}}^{g}\left[\sum_{(\gamma_{1}, \gamma_{2},\alpha,\beta)}1_{[0,L]^{4}}\Big{(}\ell(\gamma_{1}),\ell(\gamma_{2}),\ell(\alpha),\ell(\beta)\Big{)}\right]\\ \prec&\frac{1}{V_{g}}\int_{[0,L]^{4}}V_{0,3}(x,z,w)V _{0,3}(y,z,w)\\ &\quad\cdot\left(V_{g-2,2}(x,y)+\sum_{(g_{1},g_{2})}V_{g_{1},1}( x)V_{g_{2},1}(y)\right)\cdot xyzw\cdot dxdydzdw\\ \prec&\frac{V_{g-2,2}+\sum_{(g_{1},g_{2})}V_{g_{1},1 }V_{g_{2},1}}{V_{g}}\int_{[0,L]^{4}}\sinh\frac{x}{2}\sinh\frac{y}{2}\cdot zw \cdot dxdydzdw\\ \prec&\frac{L^{4}e^{L}}{g^{2}}.\end{split} \tag{97}\]
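Here the last integral is elementary:
\[\int_{[0,L]^{4}}\sinh\frac{x}{2}\sinh\frac{y}{2}\cdot zw\,dxdydzdw=\Big{(}2\cosh\frac{L}{2}-2\Big{)}^{2}\Big{(}\frac{L^{2}}{2}\Big{)}^{2}\asymp L^{4}e^{L},\]
which, combined with (76), gives the stated bound \(L^{4}e^{L}/g^{2}\).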
Combining (95), (96) and (97), we have
\[\mathbb{E}_{\mathrm{WP}}^{g}\left[D(X,L)\right]\prec\frac{1}{g^{2}}\left( \frac{e^{2L}}{L^{6}}+L^{4}e^{L}\right)\prec\frac{e^{2L}}{g^{2}L^{6}}.\]
This completes the proof.
### Completion of the proof
Now we are ready to complete the proof of Theorem 19.
Proof of Theorem 19.: Take \(L=L_{g}=\log g-\log\log g+\omega(g)>1\) with \(\omega(g)=o(\log\log g)\). By (37) and (38) we have
\[\begin{split}&\quad\mathrm{Prob}_{\mathrm{WP}}^{g}\left(X\in \mathcal{M}_{g};\ N_{(0,3),\star}^{(g-2,3)}(X,L_{g})=0\right)\\ \leq&\frac{\left|\mathbb{E}_{\mathrm{WP}}^{g}\left[ B(X,L_{g})\right]-\mathbb{E}_{\mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g}) \right]^{2}\right|}{\mathbb{E}_{\mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3 )}(X,L_{g})\right]^{2}}\\ +&\frac{\mathbb{E}_{\mathrm{WP}}^{g}\left[A(X,L_{g}) \right]+\mathbb{E}_{\mathrm{WP}}^{g}\left[C(X,L_{g})\right]+\mathbb{E}_{ \mathrm{WP}}^{g}\left[D(X,L_{g})\right]}{\mathbb{E}_{\mathrm{WP}}^{g}\left[N_ {(0,3),\star}^{(g-2,3)}(X,L_{g})\right]^{2}}.\end{split} \tag{98}\]
By Proposition 20 and Proposition 21 we have
\[\frac{\left|\mathbb{E}_{\mathrm{WP}}^{g}\left[B(X,L_{g})\right]-\mathbb{E}_{ \mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\right]^{2}\right|}{ \mathbb{E}_{\mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\right]^{2 }}=O\left(\frac{\log L_{g}}{L_{g}}\right). \tag{99}\]
By (39) and Proposition 20, for \(L=L_{g}=\log g-\log\log g+\omega(g)\) we have
\[\frac{\mathbb{E}_{\mathrm{WP}}^{g}\left[A(X,L_{g})\right]}{\mathbb{E}_{\mathrm{ WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\right]^{2}}\prec\frac{1}{\mathbb{E}_{ \mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\right]}=O\left(\frac{ 1}{e^{\omega(g)}}\right). \tag{100}\]
By Proposition 20, Proposition 23, Proposition 24 and Proposition 28, fixing \(0<\epsilon<\frac{1}{2}\), we have
\[\frac{\mathbb{E}_{\mathrm{WP}}^{g}\left[C(X,L_{g})\right]}{\mathbb{E}_{ \mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\right]^{2}}\prec \left(\frac{L_{g}^{65}e^{\epsilon L_{g}}}{g}+\frac{L_{g}e^{6L_{g}}}{g^{9}}+ \frac{1+L_{g}}{L_{g}^{2}}\right)=O\left(\frac{1}{L_{g}}\right). \tag{101}\]
By Proposition 20 and Proposition 34 we have
\[\frac{\mathbb{E}_{\mathrm{WP}}^{g}\left[D(X,L_{g})\right]}{\mathbb{E}_{ \mathrm{WP}}^{g}\left[N_{(0,3),\star}^{(g-2,3)}(X,L_{g})\right]^{2}}=O\left( \frac{1}{L_{g}^{8}}\right). \tag{102}\]
Therefore if \(\lim\limits_{g\to\infty}\omega(g)=\infty\), by (98), (99), (100), (101) and (102), we have
\[\lim\limits_{g\to\infty}\mathrm{Prob}_{\mathrm{WP}}^{g}\left(X\in\mathcal{M}_ {g};\ N_{(0,3),\star}^{(g-2,3)}(X,L_{g})=0\right)=0.\]
It follows that (35) holds by (36). This finishes the proof of Theorem 19.
|
2309.09896 | Existence and Morse Index of two free boundary embedded geodesics on
Riemannian 2-disks with convex boundary | We prove that a free boundary curve shortening flow on closed surfaces with a
strictly convex boundary remains noncollapsed for a finite time in the sense of
the reflected chord-arc profile introduced by Langford-Zhu. This shows that
such flow converges to free boundary embedded geodesic in infinite time, or
shrinks to a round half-point on the boundary. As a consequence, we prove the
existence of two free boundary embedded geodesics on a Riemannian $2$-disk with
a strictly convex boundary. Moreover, we prove that there exists a simple
closed geodesic with Morse Index $1$ and $2$. This settles the free boundary
analog of Grayson's theorem. | Dongyeong Ko | 2023-09-18T16:00:33Z | http://arxiv.org/abs/2309.09896v2 | Existence and Morse index of two free boundary embedded geodesics on Riemannian 2-disks with convex boundary
###### Abstract.
We prove that a free boundary curve shortening flow on closed surfaces with a strictly convex boundary remains noncollapsed for a finite time in the sense of the reflected chord-arc profile introduced by Langford-Zhu. This shows that such a flow either converges to a free boundary embedded geodesic in infinite time, or shrinks to a round half-point on the boundary. As a consequence, we prove the existence of two free boundary embedded geodesics on a Riemannian 2-disk with a strictly convex boundary. Moreover, we prove that there exist free boundary embedded geodesics with Morse Index 1 and 2. This settles the free boundary analog of Grayson's theorem.
## 1. Introduction
The search for closed geodesics on surfaces by min-max methods was initiated by Birkhoff [9]. The following classical theorem on the existence of three embedded geodesics on spheres was initially proven by Lusternik-Schnirelmann [34], and the embeddedness part was proven by Grayson [19] via the curve shortening flow.
**Theorem 1.1** (Lusternik-Schnirelmann [34], Grayson [19]).: _A closed Riemannian \(2\)-sphere \((S^{2},g)\) contains at least three simple closed geodesics._
In the free boundary setting, Lusternik-Schnirelmann [34] proved that a bounded domain in \(\mathbb{R}^{n}\) with a convex and smooth boundary admits at least \(n\) distinct geodesic chords which meet the boundary orthogonally. Bos [10] extended the existence of orthogonal geodesic chords to Riemannian manifolds via Birkhoff's discrete curve shortening process. He also proved the necessity of the convexity assumption by giving an example of a non-convex domain in \(\mathbb{R}^{2}\) (See Figure 1 in [10]). The construction of free boundary (immersed) geodesics in various settings has been developed by Gluck-Ziller [18] and Zhou [42] by the discrete curve shortening process. Li [31] introduced the chord shortening flow and provided a simpler approach to prove Lusternik-Schnirelmann's existence result. Moreover, the recent work of Donato-Montezuma [14] constructed a free boundary embedded geodesic or a geodesic loop in the nonnegative sectional curvature setting via Almgren-Pitts min-max theory (See also [4]). However, to the author's knowledge, the existence of free boundary embedded geodesics on Riemannian 2-disks with convex boundary was not previously known. The following is our first main result on this:
**Theorem 1.2**.: _There are at least \(2\) free boundary embedded geodesics on a Riemannian \(2\)-disk \((D^{2},\partial D^{2},g)\) with a strictly convex boundary._
For \(a>b>0\), since the domain bounded by the ellipse \(E:=\{(x,y)|x^{2}/a^{2}+y^{2}/b^{2}=1\}\) admits only two free boundary geodesics, the existence theorem of two free boundary embedded geodesics (Theorem 1.2) is optimal.
A Riemannian metric \(g\) is called _bumpy_ if every free boundary geodesic is nondegenerate, i.e. there is no free boundary geodesic that admits a non-trivial Jacobi field. The arguments in Abraham [1] and Ambrozio-Carlotto-Sharp [3] show that the set of bumpy metrics is generic in the \(C^{r}\)-sense for \(r\geq 5\). We have
**Corollary 1.3**.: _There are at least two free boundary embedded geodesics with distinct lengths on a Riemannian \(2\)-disk endowed with a bumpy metric and strictly convex boundary._
We note here that we do not need to impose a condition ensuring the nonexistence of interior simple closed geodesics, because mass cancellation, such as neck-pinching in the limiting process, does not occur in the curve setting. This is different from the min-max construction of free boundary minimal disks in Haslhofer-Ketover [22].
We compare our result with its higher dimensional analogue. There is a conjecture on the existence of three free boundary minimal disks on Riemannian \(3\)-balls with strictly convex boundary and nonnegative Ricci curvature. Struwe [41] constructed one free boundary immersed minimal disk via a mapping approach. Grüter and Jost [20] constructed a free boundary minimal surface by developing Simon-Smith min-max theory in the free boundary setting (See also Jost [25]). We refer to [12], [16], [29], [30], [32], [33] for seminal works on the construction of free boundary minimal disks. Recently, Haslhofer-Ketover [22] proved this conjecture for generic metrics and obtained at least \(2\) free boundary minimal disks by combining ideas from min-max theory, free boundary mean curvature flow and degree theory.
To obtain the existence of a free boundary embedded geodesic, we apply the free boundary curve shortening flow for the tightening procedure. The study of free boundary curve shortening flow (FBCSF), the Neumann boundary problem of curve shortening flow, dates back to the work of Huisken [23] on free boundary mean curvature flow in the graphical setting in general dimensions. After this work, free boundary mean curvature flow has been extensively studied in [2], [39] and [40]. One of the most recent developments concerning the free boundary curve shortening flow is the full determination of its long time behavior on convex domains in \(\mathbb{R}^{2}\) by Langford-Zhu [28]. We generalize this result to surfaces with strictly convex boundary.
Huisken [24] found a simpler proof of the long-time behavior of the curve shortening flow for closed embedded curves by developing a distance comparison principle. The essence of this idea is to prove that the ratio between extrinsic and intrinsic distance is monotone, so that it does not collapse along the flow (See also [5]). Edelen [15] extended Huisken's work to the surface setting. He proved that the ratio along the flow decreases at worst exponentially, although it may fail to be monotone. Langford-Zhu [28] developed the distance comparison principle in the free boundary setting by introducing the notion of the reflected chord-arc profile. Our generalization of the reflected chord-arc profile to the surface setting shows that the chord-arc profile is bounded below by the Euclidean lower bound weighted by an exponential factor in time; our approach relies on Edelen's work. The noncollapsing property gives the following:
**Theorem 1.4**.: _Let \((N,\partial N,g)\) be a closed Riemannian surface with strictly convex boundary and \(\{\Gamma_{t}\}_{t\in[0,T)}\) be a maximal free boundary curve shortening flow starting from a properly embedded closed interval \(\Gamma_{0}\) in \(N\). Then either:_
1. \(T=\infty\)_, in which case_ \(\Gamma_{t}\) _converges smoothly as_ \(t\to\infty\) _to an embedded geodesic in_ \(N\) _which meets_ \(\partial N\) _orthogonally; or_
2. \(T<\infty\)_, in which case_ \(\Gamma_{t}\) _converges uniformly to a single round half-point_ \(z\in\partial N\)_, smoothly in the sense that the blow-up limit of the curve converges to the unit semi-circle._
Marques-Neves ([35], [36]) proved upper and lower bounds on the Morse index of min-max minimal hypersurfaces on closed manifolds of dimension \(3\leq n+1\leq 7\) by proving deformation theorems. In the smooth setting of curves, we are forced to construct an interpolating deformation between two families of simple closed curves with controlled length, where the deformation arises from the curve shortening flow. The author [26] proved the Morse index bound for simple closed geodesics on Riemannian \(2\)-spheres (see also [13]). Moreover, the author [27] proved the Morse index bound for min-max capillary embedded geodesics on certain Riemannian \(2\)-disks by constructing an interpolation based on the curve shortening flow with a strictly fixed boundary.
**Theorem 1.5**.: _Suppose \((D^{2},\partial D^{2},g)\) is a Riemannian \(2\)-disk with convex boundary endowed with a bumpy metric. Then for each \(k=1,2\), there exists a free boundary embedded geodesic \(\gamma_{k}\) with_
\[index(\gamma_{k})=k\]
_and these two geodesics satisfy \(|\gamma_{1}|<|\gamma_{2}|\)._
**Corollary 1.6**.: _For a \(2\)-Riemannian disk \((D^{2},\partial D^{2},g)\) with a convex boundary, for \(k=1,2\), there exists a free boundary embedded geodesic \(\gamma_{k}\) with_
\[index(\gamma_{k})\leq k\leq index(\gamma_{k})+nullity(\gamma_{k}).\]
By the work of Smale [38] on the diffeomorphisms of the \(2\)-sphere, the space of embedded intervals on \(D^{2}\), after identifying the point curves, retracts onto \(\mathbb{R}P^{2}\). Lusternik-Schnirelmann arguments, for instance in [11], suggest that the number of critical points of a smooth real-valued function is bounded below by one plus the maximal cup-length. In our setting, since the cohomology ring of \(\mathbb{R}P^{2}\) has maximal cup-length \(2\), there are at least \(3\) critical points. The point curves can be regarded as stable critical points, and we expect at least \(2\) free boundary embedded geodesics.
We prove the existence of two free boundary embedded geodesics by considering one- and two-parameter min-max constructions. The free boundary curve shortening flow provides a way to tighten the minimizing sequences, and the classical Lusternik-Schnirelmann argument gives the existence.
Our interpolation arguments rely on the quantitative \(F\)-distance bound in [26] along the squeezing homotopy arising from the free boundary curve shortening flow. The main modification we needed to make was the construction of a free boundary mean convex foliation when the geodesic is strictly stable. We obtain such a foliation from the first eigenfunction of the stability operator with Robin boundary condition (See also Proposition 2.4 in [22]).
The organization of the paper is as follows. In Section 2, we collect the basic notions on free boundary curve shortening flow, the reflected chord-arc profile and
the second variation of a free boundary embedded geodesic. In Section 3, we discuss the variation of the chord-arc profile on surfaces. In Section 4, we prove the non-collapsedness of the free boundary curve shortening flow at finite times. In Section 5, we consider the min-max construction of free boundary embedded geodesics. In Section 6, we prove the Morse Index bound of free boundary embedded geodesics.
## Acknowledgments
The author wishes to thank his advisor Daniel Ketover for his constant support and valuable discussions. The author also thanks Otis Chodosh for helpful explanations related to this work, and Jonathan Zhu for valuable discussions related to this work. The author was partially supported by NSF grant DMS-1906385.
## 2. Preliminaries
### Free boundary curve shortening flow
We consider an oriented Riemannian surface with boundary \((N^{2},\partial N^{2},g)\) with a convex \(C^{2}\) boundary and a properly immersed family of curves with boundary \(\gamma:M^{1}\times I\to N^{2}\). We denote by \(J\) the counterclockwise rotation by \(\pi/2\) and take the convention that the orientation of the unit normal vector field \(\nu\) is chosen so that \(\gamma^{\prime}/|\gamma^{\prime}|=J\nu\). We say that a properly immersed family of curves with boundary \(\{\Gamma_{t}\}\) is a _free boundary curve shortening flow_ if \(\gamma:M^{1}\times I\to N^{2}\) with \(\Gamma_{t}=\gamma(M,t)\) satisfies
\[\begin{cases}\partial_{t}\gamma=\kappa\nu\text{ in }\mathring{M}\times I \\ \langle\nu,\mathcal{N}^{\partial N}\rangle=0\text{ on }\partial M\times I,\end{cases}\]
where \(\kappa(\cdot,t)\) is the geodesic curvature of \(\Gamma_{t}\) with respect to the unit normal vector field \(\nu\), and \(\mathcal{N}^{\partial N}\) is the inward unit normal vector field on \(\partial N\). We define \(\kappa^{\partial N}\) to be the geodesic curvature of the boundary \(\partial N\). We consider the setting in which the \(\gamma(\cdot,t)\) are embeddings with two boundary points on \(\partial N\). Since \(\partial N\) is convex, by the maximum principle there is no loss of generality in assuming that the interior of \(\Gamma_{t}\) does not touch the boundary.
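Although we will not use it explicitly, we record the standard first-variation identity for this flow: since the normal speed is \(\kappa\) and \(\Gamma_{t}\) meets \(\partial N\) orthogonally, the endpoint contributions to the first variation of length vanish, so
\[\frac{d}{dt}|\Gamma_{t}|=-\int_{\Gamma_{t}}\kappa^{2}\,ds\leq 0,\]
i.e. the flow is indeed length-decreasing.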
We recall some notions related to the _completed chord-arc profile_ from [28]. For a given (unit-speed) parametrization of a curve \(\gamma:M\to N\) and \(x,y\in\gamma\), let \(d(x,y)\) and \(l(x,y)\) denote the distance in \(N\) and the arclength between \(x\) and \(y\), respectively. We define the _reflected distance_ \(\tilde{d}(x,y)\) and _reflected arclength_ \(\tilde{l}(x,y)\) between two points \(x,y\in M\) by
\[\tilde{d}(x,y)=\min_{z\in\partial N}(d(x,z)+d(y,z))\]
and
\[\tilde{l}(x,y)=\min_{s\in\partial\gamma}(l(x,s)+l(y,s)).\]
The _reflected chord-arc profile_\(\tilde{\psi}_{\gamma}\) of \(\gamma\) is denoted by
\[\tilde{\psi}_{\gamma}(\delta)=\inf\{\tilde{d}(x,y):x,y\in\gamma,\tilde{l}(x,y )=\delta\},\]
and the extended chord-arc profile \(\boldsymbol{\psi}_{\gamma}\) is defined by
\[\boldsymbol{\psi}_{\gamma}(\delta)=\min\{\psi_{\gamma}(\delta),\tilde{\psi}_ {\gamma}(\delta)\}.\]
We consider a connected, properly immersed curve-with-boundary \(\gamma\) in \(N\) whose endpoints are on \(\partial N\). When the doubling of \(N\) is needed, we denote the doubling of \(N\) by \(\tilde{N}\). We denote the formal double \(\mathbf{M}=(M\sqcup M)/\partial M\) and write \(\mathbf{x}=(x,sign(\mathbf{x}))\in\mathbf{M}\), where \(sign(\mathbf{x})\) indicates to which copy of \(M\) the point belongs. We also define the continuous curve \(\mathbf{\gamma}:\mathbf{M}\to N\) by \(\mathbf{\gamma}(\mathbf{x})=\gamma(x)\). Then we define the _completed arclength_ by
\[\mathbf{l}(\mathbf{x},\mathbf{y})=\begin{cases}l(\gamma(x),\gamma(y)),\text{ if }sign(\mathbf{x})= sign(\mathbf{y})\\ \tilde{l}(\gamma(x),\gamma(y)),\text{ if }sign(\mathbf{x})\neq sign(\mathbf{y}).\end{cases}\]
Also we define the _completed distance_ function \(\mathbf{d}(\mathbf{x},\mathbf{y})\) on \(\mathbf{M}\times\mathbf{M}\) by
\[\mathbf{d}(\mathbf{x},\mathbf{y})=\begin{cases}d(\gamma(x),\gamma(y)),\text{ if } sign(\mathbf{x})=sign(\mathbf{y})\\ \tilde{d}(\gamma(x),\gamma(y)),\text{ if }sign(\mathbf{x})\neq sign(\mathbf{y}).\end{cases}\]
We now define the _completed chord-arc profile_ \(\mathbf{\psi}\) of \(\Gamma\) by
\[\mathbf{\psi}(\delta)=\inf\{\mathbf{d}(\mathbf{x},\mathbf{y}):\mathbf{x},\mathbf{y}\in\mathbf{M},\mathbf{l}( \mathbf{x},\mathbf{y})=\delta\}.\]
We also denote by \(\mathbf{\psi}(\delta,t)\) the completed chord-arc profile of \(\Gamma_{t}=\gamma(M,t)\). We call the completed chord-arc profile \(\mathbf{\psi}\) _a classical profile_ if \(sign(\mathbf{x})=sign(\mathbf{y})\) and _a reflected profile_ otherwise. We define \(L=|\gamma|\) and \(\mathbf{L}=2|\gamma|\), and we denote the length functions depending on time \(t\) by \(L(t)=|\Gamma_{t}|\) and \(\mathbf{L}(t)=2|\Gamma_{t}|\).
We also follow the setting of an auxiliary function introduced in [28]. First, we will obtain the control of the chord-arc profile by a \(C^{2}\)-function \(\varphi\in C^{2}([0,1])\) satisfying the following:
1. \(\varphi(1-\eta)=\varphi(\eta)\) for all \(\eta\in[0,1]\).
2. \(0<\varphi^{\prime}<1\).
3. \(\varphi\) is strictly concave.
Note that \(\varphi(\mathbf{l}/\mathbf{L})\) is smooth away from the diagonal \(\mathbf{D}\) in \(\mathbf{M}\times\mathbf{M}\) and well-defined since \(0<\mathbf{l}(\mathbf{x},\mathbf{y})/\mathbf{L}\leq 1/2\) for \(\mathbf{x},\mathbf{y}\in\mathbf{M}\).
When considering a time-dependent auxiliary function, we use the following: we define a \(C^{2}\)-function \(\varphi:[0,1]\times[0,T)\to\mathbb{R}\) satisfying the following conditions for every time \(t\in[0,T)\).
1. \(\varphi(1-\eta,t)=\varphi(\eta,t)\) for all \(\eta\in[0,1]\).
2. \(|\partial_{\eta}\varphi(\eta,t)|<1\) for \(\eta\in[0,1]\) and \(t\in[0,T)\).
3. \(\varphi(\cdot,t)\) is strictly concave.
We consider the auxiliary functions \(Z\) and \(\tilde{Z}\) on \(M\times M\) given by
\[Z(x,y) =d(\gamma(x),\gamma(y))-\mathbf{L}\varphi\Big{(}\frac{l(\gamma(x), \gamma(y))}{\mathbf{L}}\Big{)},\] \[\tilde{Z}(x,y) =\tilde{d}(\gamma(x),\gamma(y))-\mathbf{L}\varphi\Big{(}\frac{\tilde{ l}(\gamma(x),\gamma(y))}{\mathbf{L}}\Big{)}.\]
If we consider the auxiliary function on \(M\times M\times\partial N\) given by
\[\overline{Z}(x,y,z)=d(\gamma(x),z)+d(\gamma(y),z)-\mathbf{L}\varphi\Big{(}\frac{ \tilde{l}(\gamma(x),\gamma(y))}{\mathbf{L}}\Big{)}\]
then \(\tilde{Z}(x,y)=\min_{z\in\partial N}\overline{Z}(x,y,z)\). We denote our completed auxiliary function \(\mathbf{Z}\) on \(\mathbf{M}\times\mathbf{M}\) by
\[\mathbf{Z}(\mathbf{x},\mathbf{y})=\mathbf{d}(\mathbf{x},\mathbf{y})-\mathbf{L}\varphi\Big{(}\frac{\mathbf{l}( \mathbf{x},\mathbf{y})}{\mathbf{L}}\Big{)}=\begin{cases}Z(\mathbf{x},\mathbf{y})\text{ if }sign(\mathbf{x})= sign(\mathbf{y})\\ \tilde{Z}(\mathbf{x},\mathbf{y})\text{ if }sign(\mathbf{x})\neq sign(\mathbf{y}).\end{cases} \tag{1}\]
We also denote \(d(\cdot,\cdot,t)\), \(\tilde{d}(\cdot,\cdot,t)\), \(\boldsymbol{d}(\cdot,\cdot,t)\) and \(l(\cdot,\cdot,t)\), \(\tilde{l}(\cdot,\cdot,t)\), \(\boldsymbol{l}(\cdot,\cdot,t)\) by the distances, lengths of \(\Gamma_{t}\). We consider the auxiliary functions at time \(t\) as
\[Z(x,y,t) =d(\gamma(x),\gamma(y),t)-\boldsymbol{L}(t)\varphi\Big{(}\frac{l( \gamma(x),\gamma(y),t)}{\boldsymbol{L}}\Big{)},\] \[\tilde{Z}(x,y,t) =\tilde{d}(\gamma(x),\gamma(y),t)-\boldsymbol{L}(t)\varphi\Big{(} \frac{\tilde{l}(\gamma(x),\gamma(y),t)}{\boldsymbol{L}}\Big{)}.\]
Then the auxiliary function \(\overline{Z}\) at time \(t\) is defined by
\[\overline{Z}(x,y,z,t)=d(\gamma(x),z,t)+d(\gamma(y),z,t)-\boldsymbol{L}\varphi \Big{(}\frac{\tilde{l}(\gamma(x),\gamma(y),t)}{\boldsymbol{L}}\Big{)}.\]
Then the completed auxiliary function \(\boldsymbol{Z}\) at time \(t\) is
\[\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y},t)=\boldsymbol{d}(\boldsymbol{x },\boldsymbol{y},t)-\boldsymbol{L}\varphi\Big{(}\frac{\boldsymbol{l}( \boldsymbol{x},\boldsymbol{y},t)}{\boldsymbol{L}}\Big{)}=\begin{cases}Z(x,y,t)\text{ if }sign(\boldsymbol{x})=sign(\boldsymbol{y})\\ \tilde{Z}(x,y,t)\text{ if }sign(\boldsymbol{x})\neq sign(\boldsymbol{y}).\end{cases} \tag{2}\]
### Second Variation, Morse Index and the first eigenfunction
We consider a free boundary embedded geodesic \(\gamma\) on a Riemannian 2-disk \((D^{2},\partial D^{2},g)\). Let \(f\) denote a smooth section of the normal bundle of \(\gamma\). Then the second variation of the length of \(\gamma\) is given by
\[Q(f,f) :=\int_{\gamma}(|\nabla_{\gamma}f|^{2}-Kf^{2})ds-(\kappa(p_{1})f^{2}( p_{1})+\kappa(p_{2})f^{2}(p_{2}))\] \[=-\int_{\gamma}(fLf)ds+f(p_{1})(-\nabla_{\gamma}f(p_{1})-\kappa( p_{1})f(p_{1}))+f(p_{2})(\nabla_{\gamma}f(p_{2})-\kappa(p_{2})f(p_{2})). \tag{3}\]
where \(K\) is the Gaussian curvature of \(D^{2}\), \(\kappa\) is the geodesic curvature of \(\partial D^{2}\) and \(L=\Delta_{\gamma}+K\) is the Jacobi operator of \(\gamma\). The boundary condition of the Jacobi operator is
\[(-1)^{i}\nabla_{\gamma}f(p_{i})-\kappa(p_{i})f(p_{i})=0\]
for \(i=1,2\). Let us consider the increasing sequence of eigenvalues \(\{\lambda_{i}\}\) and associated eigenfunctions \(\{\phi_{i}\}\) of the following equation with Robin boundary condition which correspond to the eigenvalues of the stability operator \(Q\):
\[\begin{cases}L\phi_{i}+\lambda_{i}\phi_{i}=0\text{ on }\gamma\\ \frac{\partial\phi_{i}}{\partial\eta}-\kappa\phi_{i}=0\text{ on }\partial\gamma, \end{cases} \tag{4}\]
where \(\eta\) is the outward unit vector at \(\partial\gamma\). We define the Morse index of \(\gamma\) to be the maximal dimension of a subspace of \(C^{\infty}(\gamma)\) on which \(Q\) is negative definite, i.e. the number of negative eigenvalues of the stability operator \(Q\). Note that the first eigenfunction \(\phi_{1}\) of (4) can be chosen to be strictly positive by standard elliptic theory.
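For later use we note that, plugging the Robin boundary condition in (4) into (3), the boundary terms vanish, and hence
\[Q(\phi_{i},\phi_{i})=-\int_{\gamma}\phi_{i}L\phi_{i}\,ds=\lambda_{i}\int_{\gamma}\phi_{i}^{2}\,ds,\]
which is consistent with identifying the Morse index with the number of negative eigenvalues of (4).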
## 3. The spatial variation of the chord-arc profile on surfaces
In this section, we calculate the variation of the chord-arc profile on surfaces; in particular, we find inequalities which hold if the auxiliary function \(\boldsymbol{Z}\) achieves a zero minimum at some pair of points \((\boldsymbol{x}_{0},\boldsymbol{y}_{0})\in(\boldsymbol{M}\times\boldsymbol{M})\setminus\boldsymbol{D}\). This is a generalization of Section 4 of [28] to surfaces. We denote by \(\alpha_{(\boldsymbol{x},\boldsymbol{y})}\) the (possibly broken) geodesic realizing \(\boldsymbol{d}(\boldsymbol{x},\boldsymbol{y})\), parametrized (with constant speed) so that \(\alpha_{(\boldsymbol{x},\boldsymbol{y})}(0)=\boldsymbol{\gamma}(\boldsymbol{x})\) and \(\alpha_{(\boldsymbol{x},\boldsymbol{y})}(1)=\boldsymbol{\gamma}(\boldsymbol{y})\). We omit the subscript \((\boldsymbol{x},\boldsymbol{y})\) for
simplicity and denote this curve by \(\alpha\). If \(sign(\boldsymbol{x})\neq sign(\boldsymbol{y})\), let \(\boldsymbol{\alpha}\) be the doubled curve of \(\alpha\) in \(\tilde{N}\), namely the curve connecting \(\boldsymbol{\gamma}(\boldsymbol{x})\) and \(z\), and \(z\) and \(\boldsymbol{\gamma}(\boldsymbol{y})\) in \(\tilde{N}\); otherwise set \(\boldsymbol{\alpha}=\alpha\). We also denote by \(\partial_{s}\) the arclength parameter on \(\boldsymbol{\gamma}\), and by \([\boldsymbol{\gamma}(\boldsymbol{x}):\boldsymbol{\gamma}(\boldsymbol{y})]\) the shorter portion of \(\boldsymbol{\gamma}\setminus\{\boldsymbol{\gamma}(\boldsymbol{x}),\boldsymbol{\gamma}(\boldsymbol{y})\}\).
Before deducing the inequalities, we observe that \(\boldsymbol{\alpha}\) and \([\boldsymbol{\gamma}(\boldsymbol{x}):\boldsymbol{\gamma}(\boldsymbol{y})]\) intersect only at their endpoints, which follows from the arguments in the proof of Theorem 4.1 in [15]. Note that the two curves \(\boldsymbol{\alpha}\) and \([\boldsymbol{\gamma}(\boldsymbol{x}):\boldsymbol{\gamma}(\boldsymbol{y})]\) realize \(\boldsymbol{d}(\boldsymbol{x},\boldsymbol{y})\) and \(\boldsymbol{l}(\boldsymbol{x},\boldsymbol{y})\), respectively. The argument applies directly in the reflected profile case.
**Proposition 3.1**.: _There is no interior intersection point between \([\boldsymbol{\gamma}(\boldsymbol{x}):\boldsymbol{\gamma}(\boldsymbol{y})]\) and \(\boldsymbol{\alpha}\)._
We denote the region bounded by \(\boldsymbol{\alpha}\) and \([\boldsymbol{\gamma}(\boldsymbol{x}):\boldsymbol{\gamma}(\boldsymbol{y})]\) by \(A_{(\boldsymbol{x},\boldsymbol{y})}\). By Proposition 3.1, \(A_{(\boldsymbol{x},\boldsymbol{y})}\) is a topological disk in \(\tilde{N}\). We first consider the classical profile case. We follow and modify the arguments in the proof of Theorem 4.1 of [15] and Proposition 4.2 of [28].
**Proposition 3.2**.: _Suppose \(0=\min_{(x,y)\in M\times M\setminus D}Z(x,y)=Z(x_{0},y_{0})\). At \((x_{0},y_{0})\), we have_
\[0\leq-4\frac{\varphi^{\prime\prime}}{\boldsymbol{L}}-\frac{\kappa(\gamma(x_{0} ))}{d}\langle\alpha^{\prime}(0),\nu(\gamma(x_{0}))\rangle+\frac{\kappa(\gamma( y_{0}))}{d}\langle\alpha^{\prime}(1),\nu(\gamma(y_{0}))\rangle-(1-\varphi^{ \prime 2})\int_{\alpha}K. \tag{5}\]
Proof.: Consider a variation \(\alpha_{x,y}\) of \(\alpha\), a family of curves obtained by moving the endpoints \(\gamma(x_{0})\) and \(\gamma(y_{0})\) in the parameters \(x\) and \(y\), with corresponding variation vector fields \(V\) and \(W\), respectively, satisfying the following:
\[V(0)=-\partial_{s}(\gamma(x_{0})),\,V(1)=0\] \[W(0)=0,\,W(1)=\partial_{s}(\gamma(y_{0})).\]
Denote \(\rho(\gamma(x),\gamma(y))=|\alpha_{x,y}|\). Then by definition, \(\rho(\gamma(x),\gamma(y))\geq d(\gamma(x),\gamma(y))\). We define \(\hat{Z}(x,y)\) by
\[\hat{Z}(x,y)=\rho(\gamma(x),\gamma(y))-\boldsymbol{L}\varphi\Big{(}\frac{l( \gamma(x),\gamma(y))}{\boldsymbol{L}}\Big{)}.\]
Note that \(\hat{Z}\geq Z\) for every \((x,y)\in(M\times M)\setminus D\) and \(\hat{Z}(x_{0},y_{0})=Z(x_{0},y_{0})\). Moreover, \(\hat{Z}\) achieves a local minimum at \((x_{0},y_{0})\), and
\[\partial_{x}\hat{Z}=\partial_{y}\hat{Z}=0 \tag{6}\]
at \((x_{0},y_{0})\). Then (6) yields
\[-\partial_{x}\rho=\partial_{y}\rho=\varphi^{\prime}\Big{(}\frac{l(\gamma(x_{0} ),\gamma(y_{0}))}{\boldsymbol{L}}\Big{)}\]
and this implies
\[\Big{\langle}-V(0),-\frac{\alpha^{\prime}(0)}{d}\Big{\rangle}=\Big{\langle}W(1 ),\frac{\alpha^{\prime}(1)}{d}\Big{\rangle}=\varphi^{\prime}\Big{(}\frac{l(x_{ 0},y_{0})}{\boldsymbol{L}}\Big{)}. \tag{7}\]
Then let us consider the second variation of \(\hat{Z}\) by the second variation formula of the length functional:
\[\partial_{x}^{2}\hat{Z} =\partial_{x}^{2}\rho-\frac{1}{\boldsymbol{L}}\varphi^{\prime\prime}\] \[=\frac{1}{d}\Big{\{}\int_{0}^{1}(\langle(V^{\perp})^{\prime},(V^{ \perp})^{\prime}\rangle-Rm(V^{\perp},\alpha^{\prime},V^{\perp},\alpha^{\prime} ))+\langle\nabla_{V}V,\alpha^{\prime}\rangle|_{0}^{1}\Big{\}}-\frac{1}{ \boldsymbol{L}}\varphi^{\prime\prime}\] \[=\frac{1}{d}(I(V^{\perp},V^{\perp})-\kappa(\gamma(x_{0}))\langle \alpha^{\prime}(0),\nu(\gamma(x_{0}))\rangle)-\frac{1}{\boldsymbol{L}}\varphi ^{\prime\prime}.\]
By the similar way, we have
\[\partial_{x}\partial_{y}\hat{Z} =\frac{1}{d}I(V^{\perp},W^{\perp})+\frac{1}{\boldsymbol{L}} \varphi^{\prime\prime}\] \[\partial_{y}^{2}\hat{Z} =\frac{1}{d}(I(W^{\perp},W^{\perp})+\kappa(\gamma(y_{0}))\langle \alpha^{\prime}(0),\nu(\gamma(y_{0}))\rangle)-\frac{1}{\boldsymbol{L}}\varphi ^{\prime\prime}.\]
By (7), we first have
\[|(V^{\perp}\pm W^{\perp})(0)|=|(V^{\perp}\pm W^{\perp})(1)|=\sqrt{1-\varphi^{ \prime 2}}. \tag{8}\]
By Proposition 3.1, there is no interior intersection point between \(\alpha\) and \([\gamma(x):\gamma(y)]\), and note that \(A_{(x,y)}\) is a topological disk. Since \(\gamma\) is embedded and separates \(N\) into two topological disks, we see that
\[\langle\nu(\gamma(x_{0})),\alpha^{\prime}(0)\rangle=\langle\nu(\gamma(y_{0})), -\alpha^{\prime}(1)\rangle \tag{9}\]
and so the two terms in (9) have the same sign. From (8) and (9), we can take \((V^{\perp}-W^{\perp})(t)\) to be the parallel transport of \((V^{\perp}-W^{\perp})(0)\) along \(\alpha\) for \(t\in[0,1]\). Now, since \(\hat{Z}\) attains a local minimum at \((x_{0},y_{0})\), we have
\[0 \leq(\partial_{x}-\partial_{y})^{2}\hat{Z} \tag{10}\] \[=\frac{1}{d}(I(V^{\perp}-W^{\perp},V^{\perp}-W^{\perp})-\kappa( \gamma(x_{0}))\langle\alpha^{\prime}(0),\nu(\gamma(x_{0}))\rangle+\kappa( \gamma(y_{0}))\langle\alpha^{\prime}(1),\nu(\gamma(y_{0}))\rangle)-4\frac{ \varphi^{\prime\prime}}{\boldsymbol{L}}.\]
Then we have
\[\frac{1}{d(x_{0},y_{0})}I(V^{\perp}-W^{\perp},V^{\perp}-W^{\perp})=-(1-\varphi ^{\prime 2})\int_{\alpha}K \tag{11}\]
since \((V^{\perp}-W^{\perp})(t)\) is a parallel transport over \(\alpha\). By applying (11) into (10), we obtain (5).
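For the reader's convenience, (11) can be checked directly. Writing \(E=V^{\perp}-W^{\perp}\), the field \(E\) is parallel along \(\alpha\) with \(|E|\equiv\sqrt{1-\varphi^{\prime 2}}\) by (8), and \(|\alpha^{\prime}|\equiv d\) with the parametrization on \([0,1]\); the following is a sketch under the sign conventions of the index form used above:

\[I(E,E)=\int_{0}^{1}\big(|E^{\prime}|^{2}-Rm(E,\alpha^{\prime},E,\alpha^{\prime})\big)\,dt=-\int_{0}^{1}K\,|E|^{2}|\alpha^{\prime}|^{2}\,dt=-(1-\varphi^{\prime 2})\,d\int_{\alpha}K,\]

since \(E^{\prime}=0\) and, on a surface, \(Rm(E,\alpha^{\prime},E,\alpha^{\prime})=K(|E|^{2}|\alpha^{\prime}|^{2}-\langle E,\alpha^{\prime}\rangle^{2})=K|E|^{2}|\alpha^{\prime}|^{2}\) for \(E\perp\alpha^{\prime}\). Dividing by \(d=d(x_{0},y_{0})\) gives (11).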
Now we see the case of the reflected profile.
**Proposition 3.3**.: _Suppose \(\boldsymbol{Z}\geq 0\) with \(0=\tilde{Z}(x_{0},y_{0})=\overline{Z}(x_{0},y_{0},z_{0})\) for some \(((x_{0},y_{0}),z_{0})\in((\hat{M}\times\hat{M})\setminus D)\times N\). Let \(t_{0}=\alpha^{-1}(z_{0})\). Then at \(((x_{0},y_{0}),z_{0})\),_
\[0\leq -(1-\varphi^{\prime 2})\int_{\alpha}K-\frac{\kappa(\gamma(x_{0}))}{ \tilde{d}}\langle\alpha^{\prime}(0),\nu(\gamma(x_{0}))\rangle+\frac{\kappa( \gamma(y_{0}))}{\tilde{d}}\langle\alpha^{\prime}(1),\nu(\gamma(y_{0}))\rangle\] \[-\frac{2\kappa^{\partial N}(z_{0})}{\left\langle\frac{\alpha^{ \prime}_{+}(t_{0})-\alpha^{\prime}_{-}(t_{0})}{d},\mathcal{N}^{\partial N}(z_ {0})\right\rangle}-\frac{4}{\boldsymbol{L}}\varphi^{\prime\prime}. \tag{12}\]
Proof.: Note that, since \(\partial N\) is convex, both pieces \(\alpha([0,t_{0}])\) and \(\alpha([t_{0},1])\) are transversal to \(\partial N\). We take the orientation of \(\partial_{z}\) near \(z_{0}\in\partial N\) which satisfies
\[\Big{\langle}\partial_{z}(z_{0}),\frac{\alpha^{\prime}_{-}(t_{0})}{\tilde{d}} \Big{\rangle}>0 \tag{13}\]
on \(\partial N\). Now we consider a variation \(\alpha_{x,y,z}\) of \(\alpha\), that is, a family of curves obtained by moving \(\alpha\) near \(\gamma(x_{0})\), \(\gamma(y_{0})\) and \(z_{0}\) in the directions \(\partial_{x}\), \(\partial_{y}\) and \(\partial_{z}\), whose corresponding vector fields \(V\), \(W\) and \(X\), respectively, satisfy the following:
\[V(0) =\partial_{s}(\gamma(x_{0}))\text{, }V(t)=0\text{ for }t\in[t_{0},1],\] \[W(t) =0\text{ for }t\in[0,t_{0}]\text{, }W(1)=\partial_{s}(\gamma(y_{0})),\] \[X(0) =X(1)=0\text{, }X(t_{0})=\partial_{z}(z_{0}).\]
We define \(\rho(x,y,z)=|\alpha_{x,y,z}|\) and define \(\hat{Z}(x,y,z)\) for \((x,y,z)\in((\mathring{M}\times\mathring{M})\setminus D)\times N\) by
\[\hat{Z}(x,y,z)=\rho(x,y,z)-\mathbf{L}\varphi\Big{(}\frac{\tilde{l}(\gamma(x), \gamma(y))}{\mathbf{L}}\Big{)}.\]
Then \(\hat{Z}\geq Z\) for every \((x,y,z)\in((\mathring{M}\times\mathring{M})\setminus D)\times N\) and \(\hat{Z}(x_{0},y_{0},z_{0})=Z(x_{0},y_{0},z_{0})\). Moreover, \((x_{0},y_{0},z_{0})\) is a local minimizer of \(\hat{Z}\) and \(\partial_{x}\hat{Z}=\partial_{y}\hat{Z}=\partial_{z}\hat{Z}=0\) there. Let us first consider the first variations. This gives
\[\partial_{x}\rho=\partial_{y}\rho=\varphi^{\prime} \tag{14}\] \[\partial_{z}\rho=0 \tag{15}\]
and we have
\[\Big{\langle}V(0),-\frac{\alpha^{\prime}(0)}{\tilde{d}}\Big{\rangle}=\Big{\langle}W(1),\frac{\alpha^{\prime}(1)}{\tilde{d}}\Big{\rangle}=\varphi^{\prime}\Big{(}\frac{l(x_{0},y_{0})}{\mathbf{L}}\Big{)} \tag{16}\] \[\Big{\langle}X(t_{0}),\frac{\alpha^{\prime}_{-}(t_{0})-\alpha^{\prime}_{+}(t_{0})}{\tilde{d}}\Big{\rangle}=0. \tag{17}\]
Then by (13) and (17), there exists \(\theta_{0}\in(0,\pi/2)\) such that
\[\Big{\langle}X(t_{0}),\frac{\alpha^{\prime}_{-}(t_{0})}{\tilde{d}}\Big{\rangle} =\Big{\langle}X(t_{0}),\frac{\alpha^{\prime}_{+}(t_{0})}{\tilde{d}}\Big{\rangle} =\cos\theta_{0}. \tag{18}\]
We calculate the second variation by \(x\):
\[\partial_{x}^{2}\hat{Z} =\partial_{x}^{2}\rho-\frac{1}{\mathbf{L}}\varphi^{\prime\prime}\] \[=\frac{1}{\tilde{d}}\Big{\{}\int_{0}^{t_{0}}(((V^{\perp})^{\prime },(V^{\perp})^{\prime})-Rm(V^{\perp},\alpha^{\prime},V^{\perp},\alpha^{ \prime}))+\langle\nabla_{V}V,\alpha^{\prime}\rangle|_{0}^{t_{0}}\Big{\}}- \frac{1}{\mathbf{L}}\varphi^{\prime\prime} \tag{19}\] \[=\frac{1}{\tilde{d}}(I(V^{\perp},V^{\perp})-\kappa(\gamma(x_{0}) )\langle\alpha^{\prime}(0),\nu(\gamma(x_{0}))\rangle)-\frac{1}{\mathbf{L}}\varphi ^{\prime\prime}.\]
Similarly we have
\[\partial_{y}^{2}\hat{Z}=\frac{1}{\tilde{d}}(I(W^{\perp},W^{\perp})+\kappa(\gamma(y_{0}))\langle\alpha^{\prime}(1),\nu(\gamma(y_{0}))\rangle)-\frac{1}{\mathbf{L}}\varphi^{\prime\prime} \tag{20}\] \[\partial_{x}\partial_{y}\hat{Z}=\frac{1}{\tilde{d}}I(V^{\perp},W^{\perp})-\frac{1}{\mathbf{L}}\varphi^{\prime\prime} \tag{21}\] \[\partial_{z}^{2}\hat{Z}=\frac{1}{\tilde{d}}(I(X^{\perp},X^{\perp})+\kappa^{\partial N}(z)\langle\alpha^{\prime}_{-}(t_{0})-\alpha^{\prime}_{+}(t_{0}),\mathcal{N}^{\partial N}(z_{0})\rangle). \tag{22}\]
Also we obtain
\[\partial_{x}\partial_{z}\hat{Z}=\partial_{x}\partial_{z}\rho=\frac{1}{\tilde{d}}\Big{\{}\int_{0}^{t_{0}}(\langle(V^{\perp})^{\prime},(X^{\perp})^{\prime}\rangle-Rm(V^{\perp},\alpha^{\prime},X^{\perp},\alpha^{\prime}))+\langle\nabla_{V}X,\alpha^{\prime}\rangle|_{0}^{t_{0}}\Big{\}}=\frac{1}{\tilde{d}}I(V^{\perp},X^{\perp}), \tag{23}\] \[\partial_{y}\partial_{z}\hat{Z}=\frac{1}{\tilde{d}}I(W^{\perp},X^{\perp}). \tag{24}\]
For the brevity of notation, we put \(c=\sqrt{1-\varphi^{\prime 2}}/\sin\theta_{0}\). From (16) and (18), we obtain
\[|(V^{\perp}\pm W^{\perp}+cX^{\perp})(0)|=|(V^{\perp}\pm W^{\perp}+cX^{\perp})( t_{0})|=|(V^{\perp}\pm W^{\perp}+cX^{\perp})(1)|=\sqrt{1-\varphi^{\prime 2}}. \tag{25}\]
By applying Proposition 3.1, we can argue the same reasoning as in the proof of Proposition 3.2. Namely, we obtain
\[\langle\nu(\gamma(x_{0})),\alpha^{\prime}(0)\rangle=\langle\nu(\gamma(y_{0})),-\alpha^{\prime}(1)\rangle \tag{26}\]
and we know two terms in (26) have the same sign again. Hence by (25) and (26), we can take \(V,W,X\) on \([0,1]\) such that \((V^{\perp}+W^{\perp}+cX^{\perp})(t)\) to be a parallel transport of \((V^{\perp}+W^{\perp}+cX^{\perp})(0)\) and \((V^{\perp}+W^{\perp}+cX^{\perp})(t_{0})\) on \([0,t_{0})\) and \([t_{0},1]\), respectively. Then we have
\[\frac{1}{\tilde{d}}I(V^{\perp}+W^{\perp}+cX^{\perp},V^{\perp}+W^{\perp}+cX^{ \perp})=-(1-\varphi^{\prime 2})\int_{\alpha}K \tag{27}\]
Since \((x_{0},y_{0},z_{0})\) is a minimizer of \(\hat{Z}\), we have
\[0\leq(\partial_{x}+\partial_{y}+c\partial_{z})^{2}\hat{Z}|_{(x_{0},y_{0},z_{0})}\] \[=\frac{1}{\tilde{d}}(I(V^{\perp}+W^{\perp}+cX^{\perp},V^{\perp}+W^{\perp}+cX^{\perp})-\kappa(\gamma(x_{0}))\langle\alpha^{\prime}(0),\nu(\gamma(x_{0}))\rangle+\kappa(\gamma(y_{0}))\langle\alpha^{\prime}(1),\nu(\gamma(y_{0}))\rangle+c^{2}\kappa^{\partial N}(z)\langle\alpha^{\prime}_{-}(t_{0})-\alpha^{\prime}_{+}(t_{0}),\mathcal{N}^{\partial N}(z_{0})\rangle)-\frac{4}{\mathbf{L}}\varphi^{\prime\prime} \tag{28}\] \[=-(1-\varphi^{\prime 2})\int_{\alpha}K-\frac{\kappa(\gamma(x_{0}))}{\tilde{d}}\langle\alpha^{\prime}(0),\nu(\gamma(x_{0}))\rangle+\frac{\kappa(\gamma(y_{0}))}{\tilde{d}}\langle\alpha^{\prime}(1),\nu(\gamma(y_{0}))\rangle+\frac{c^{2}\kappa^{\partial N}(z)}{\tilde{d}}\langle\alpha^{\prime}_{-}(t_{0})-\alpha^{\prime}_{+}(t_{0}),\mathcal{N}^{\partial N}(z_{0})\rangle-\frac{4}{\mathbf{L}}\varphi^{\prime\prime}, \tag{29}\]
where \(c=\sqrt{1-\varphi^{\prime 2}}/\sin\theta_{0}\). (28) follows by summing up (19)-(24), and (29) comes from (27). Notice that
\[\frac{1}{2}\Big{\langle}\frac{\alpha^{\prime}_{+}(t_{0})-\alpha^{\prime}_{-}(t _{0})}{\tilde{d}},\mathcal{N}^{\partial N}(z_{0})\Big{\rangle}=\sin\theta_{0}. \tag{30}\]
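Indeed, (30) follows from (18): with the parametrization used above, \(\alpha^{\prime}_{\pm}(t_{0})/\tilde{d}\) are unit vectors, and in the orthonormal frame \(\{\partial_{z}(z_{0}),\mathcal{N}^{\partial N}(z_{0})\}\) of the tangent plane at \(z_{0}\) they decompose as follows (a sketch, with \(\mathcal{N}^{\partial N}\) oriented so that the left-hand side of (30) is positive):

\[\frac{\alpha^{\prime}_{-}(t_{0})}{\tilde{d}}=\cos\theta_{0}\,\partial_{z}(z_{0})-\sin\theta_{0}\,\mathcal{N}^{\partial N}(z_{0}),\qquad\frac{\alpha^{\prime}_{+}(t_{0})}{\tilde{d}}=\cos\theta_{0}\,\partial_{z}(z_{0})+\sin\theta_{0}\,\mathcal{N}^{\partial N}(z_{0}).\]

Subtracting the two expressions and pairing with \(\mathcal{N}^{\partial N}(z_{0})\) gives (30).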
By applying (30) to (29), we obtain (12).
Now we consider the completed profile. We can still apply the arguments in [28] to prove that the first derivatives of \(\mathbf{Z}\) vanish even when it attains its minimum at boundary points. We directly apply Lemma 4.4 and Lemma 4.5 in [28] to our estimates in Proposition 3.1 and Proposition 3.2.
**Proposition 3.4**.: _Suppose \(0=\min_{\boldsymbol{M}\times\boldsymbol{M}}\boldsymbol{Z}=\boldsymbol{Z}( \boldsymbol{x}_{0},\boldsymbol{y}_{0})\) for some \((\boldsymbol{x}_{0},\boldsymbol{y}_{0})\in(\boldsymbol{M}\times\boldsymbol{M} )\setminus\boldsymbol{D}\), then there exists \((\boldsymbol{x},\boldsymbol{y})\in(\boldsymbol{M}\times\boldsymbol{M})\setminus \boldsymbol{D}\) such that \(\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y})=0\) and either of the following holds:_
1. \(\text{sign}(\boldsymbol{x})=\text{sign}(\boldsymbol{y})\) _and_ \[0\leq-4\frac{\varphi^{\prime\prime}}{\boldsymbol{L}}-\frac{\kappa(\gamma(x))}{ d}\langle\alpha^{\prime}(0),\nu(\gamma(x))\rangle+\frac{\kappa(\gamma(y))}{d} \langle\alpha^{\prime}(1),\nu(\gamma(y))\rangle-(1-\varphi^{\prime 2})\int_{\alpha}K\] _or_ 2. \(\text{sign}(\boldsymbol{x})\neq\text{sign}(\boldsymbol{y})\)_,_ \(\boldsymbol{x},\boldsymbol{y}\in\mathring{\boldsymbol{M}}\)_,_ \(\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y})=\tilde{Z}(x,y)=\overline{Z}(x, y,z)\)_,_ \(t_{0}=\alpha^{-1}(z)\)_._ \[0\leq -(1-\varphi^{\prime 2})\int_{\alpha}K-\frac{\kappa(\gamma(x))}{ \tilde{d}}\langle\alpha^{\prime}(0),\nu(\gamma(x))\rangle+\frac{\kappa(\gamma (y))}{\tilde{d}}\langle\alpha^{\prime}(1),\nu(\gamma(y))\rangle\] \[-\frac{2\kappa^{\partial N}(z)}{\left\langle\frac{\alpha^{\prime }_{+}(t_{0})-\alpha^{\prime}_{-}(t_{0})}{\tilde{d}},\mathcal{N}^{\partial N}(z )\right\rangle}-\frac{4}{\boldsymbol{L}}\varphi^{\prime\prime}.\]
## 4. Noncollapsing and the long time behavior of the flow
Based on the estimates we obtained in Proposition 3.4, we obtain the non-collapsing properties of complete chord-arc profile under the free boundary curve shortening flow on surfaces, which are generalizations of Theorem 5.3 and Theorem 5.4 in [28]. We define \([\Gamma_{t}(x):\Gamma_{t}(y)]\) as the portion of \(\Gamma_{t}\) connecting \(\Gamma_{t}(x)\) and \(\Gamma_{t}(y)\) for \(x,y\in M\). Also denote \([\boldsymbol{\Gamma}_{t}(\boldsymbol{x}):\boldsymbol{\Gamma}_{t}(\boldsymbol{y})]\) as the shorter portion of \(\boldsymbol{\Gamma}_{t}\setminus\{\boldsymbol{\Gamma}_{t}(\boldsymbol{x}), \boldsymbol{\Gamma}_{t}(\boldsymbol{y})\}\) as in Section 3. The same arguments with the proof of Proposition 5.1 in [28] follows the evolution of the chord-arc profile on surfaces.
**Proposition 4.1**.: _Assume that \(\boldsymbol{Z}(\cdot,\cdot,0)\geq 0\) and \(\boldsymbol{Z}(\cdot,\cdot,0)>0\) on off-diagonal points. We denote \(t_{0}=\sup\{t\in[0,T):Z(\cdot,\cdot,t)\geq 0\}<T\). Then there exist \(\boldsymbol{x},\boldsymbol{y}\in(\boldsymbol{M}\times\boldsymbol{M})\setminus \boldsymbol{D}\) such that \(\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y},t_{0})=0\) and either of the following holds:_
1. \(\text{sign}(\boldsymbol{x})=\text{sign}(\boldsymbol{y})\) _and_ (31) \[0\geq 4\frac{\varphi^{\prime\prime}}{\boldsymbol{L}}+(1-\varphi^{ \prime 2})\int_{\alpha}K+2\Big{(}\varphi-\varphi^{\prime}\frac{\boldsymbol{l}}{ \boldsymbol{L}}\Big{)}\int_{\Gamma_{t}}\kappa^{2}ds+\varphi^{\prime}\int_{[ \Gamma_{t}(x):\Gamma_{t}(y)]}\kappa^{2}ds-\boldsymbol{L}\partial_{t}\varphi\] _or_ 2. \(\text{sign}(\boldsymbol{x})\neq\text{sign}(\boldsymbol{y})\)_,_ \(\boldsymbol{x},\boldsymbol{y}\in\mathring{\boldsymbol{M}}\)_,_ \(\boldsymbol{Z}(\boldsymbol{x},\boldsymbol{y})=\tilde{Z}(x,y)=\overline{Z}(x, y,z)\)_._
3. (32) \[0\geq 4\frac{\varphi^{\prime\prime}}{\boldsymbol{L}}+(1-\varphi^{ \prime 2})\int_{\alpha}K+2\Big{(}\varphi-\varphi^{\prime}\frac{\boldsymbol{l}}{ \boldsymbol{L}}\Big{)}\int_{\Gamma_{t}}\kappa^{2}ds+\varphi^{\prime}\int_{[ \boldsymbol{\Gamma}_{t}(x):\boldsymbol{\Gamma}_{t}(y)]}\kappa^{2}ds-\boldsymbol {L}\partial_{t}\varphi\] \[+\frac{2\kappa^{\partial N}(z)}{\left\langle\frac{\alpha^{\prime}_{ +}(t_{0})-\alpha^{\prime}_{-}(t_{0})}{d},\mathcal{N}^{\partial N}(z)\right\rangle}.\]
We now discuss the lower bounds of the chord-arc profile on surfaces. The following lemma gives rise to the estimate of total curvature between two points along the boundary when the distance between two points is small. For \(z_{1},z_{2}\in\partial N\), denote \([z_{1}:z_{2}]\) as the smaller portion of \(\partial N\setminus\{z_{1},z_{2}\}\).
**Lemma 4.2**.: _There exists \(\epsilon_{0}=\epsilon_{0}(N)>0\) such that the following holds: Suppose \(\epsilon\in(0,\epsilon_{0})\). Let \(\Gamma\) be a curve in \((N,\partial N,g)\) which meets \(\partial N\) orthogonally at \(\partial\Gamma=\{z_{0},z_{1}\}\) with length \(L\). If \(L\leq\min(\frac{\epsilon}{100},\frac{|\partial N|}{8})\) and \(z\in\partial N\) is a point achieving \(\tilde{d}(x,y)=d(x,z)+d(z,y)\) for some \(x,y\in\Gamma\), then_
\[\int_{[z_{0}:z_{1}]}\kappa\leq\frac{2\epsilon}{5}\text{ and }\int_{[z_{0}:z]} \kappa\leq\frac{2\epsilon}{5}. \tag{33}\]
Proof.: We regard \(N\) as a convex domain of a closed manifold \(\hat{N}\). We parametrize \(\Gamma\) with arclength by \(\Gamma:[0,|\Gamma|]\to N\). Fix a point \(z_{0}\in\partial N\cap\Gamma\) and consider an exponential map \(\exp_{z_{0}}:B_{a}(0)\to\hat{N}\) for some \(a>0\). We also consider a geodesic normal coordinate \(x=(x_{1},x_{2})\) such that \(x:=X\circ\exp_{z_{0}}^{-1}\), where \(X=(X_{1},X_{2})\) is a Euclidean coordinate. In the geodesic normal coordinate, note that \(g_{ij}=\delta_{ij}+O(r^{2})\) and the Christoffel symbols satisfy \(\Gamma_{ij}^{k}=O(r)\), where \(r=\sqrt{X_{1}^{2}+X_{2}^{2}}\). Denote by \(\theta(s)=\tan^{-1}(\dot{x}_{2}(\Gamma(s))/\dot{x}_{1}(\Gamma(s)))\) the angle of \(\dot{\Gamma}\) in the geodesic normal coordinate. We consider the local curvature equation \((\nabla_{\dot{\Gamma}}\dot{\Gamma})^{\perp}=\kappa_{\Gamma}N_{\Gamma}\) in terms of local coordinates:
\[\sum_{k=1,2}\Big{(}\ddot{x}_{k}+\sum_{i,j=1,2}\dot{x}_{i}\dot{x}_{j}\Gamma_{ij }^{k}\Big{)}\frac{\partial}{\partial x_{k}}=(\kappa+O(r))\Big{(}-\dot{x}_{2} \frac{\partial}{\partial x_{1}}+\dot{x}_{1}\frac{\partial}{\partial x_{2}} \Big{)}. \tag{34}\]
From (34), we obtain
\[\frac{\partial\theta(s)}{\partial s}=\frac{\ddot{x}_{2}\dot{x}_{1}-\ddot{x}_{ 1}\dot{x}_{2}}{\dot{x}_{1}^{2}+\dot{x}_{2}^{2}}=\kappa(s)+O(r). \tag{35}\]
By (35) and the definition of the geodesic normal coordinate, there exists \(\epsilon_{0}=\epsilon_{0}(N)>0\) such that the arguments in Lemma 3.5 in [6], which bound the length of \([z_{0}:z_{1}]\) in terms of \(d(z_{0},z_{1})\), work directly. We follow the arguments in the proof of Lemma 5.2 in [28] and have
\[|[z_{0}:z_{1}]|\leq\frac{2\epsilon}{5C}\text{ and }|[z_{0}:z]|\leq\frac{2 \epsilon}{5C}. \tag{36}\]
Note that the constant in the proof of Lemma 5.2 in [28] is not sharp and we took a more strict upper bound. Then we finally obtain
\[\int_{[z_{0}:z_{1}]}\kappa^{\partial N}ds\leq C|[z_{0}:z_{1}]|\leq C\cdot \frac{2\epsilon}{5C}=\frac{2\epsilon}{5}\]
and another inequality of (33) also follows.
Note that there exists \(L_{0}=L_{0}(N,g)\) such that for a topological disk \(A\subseteq N\) with \(|\partial A|\leq L_{0}\), the isoperimetric inequality holds \(C^{\prime}|A|\leq|\partial A|^{2}\) for some explicit constant \(C^{\prime}\) by Proposition 2.1 in [7]. Denote \(K_{0}=\sup_{x\in N}|K|\).
**Theorem 4.3**.: _Suppose \(L(t)\to 0\) as \(t\to T\). There exists \(\epsilon_{1}=\epsilon_{1}(N)>0\) such that, given any \(\epsilon\in(0,\epsilon_{1})\), there exists \(c_{\epsilon}>0\) for which the following holds: let \(\{\Gamma_{t}\}_{t\in[0,T)}\) be a free boundary curve shortening flow on \(N\). Suppose \(L(0)(1+C)\leq\min(\frac{\epsilon}{100},\frac{L_{0}}{3})\), where \(C=\sup_{\partial N}\kappa^{\partial N}\). Given any \(c_{0}\in(0,c_{\epsilon})\), if the inequality_
\[\boldsymbol{\psi}(\delta,t)\geq\begin{cases}c_{0}\boldsymbol{L}(t)\Big{\{} \sin\Big{(}(\pi-\epsilon)\frac{\delta}{\boldsymbol{L}(t)}+\frac{\epsilon}{2} \Big{)}+64(\frac{\delta}{\boldsymbol{L}(t)}-\frac{1}{4})^{3}\sin\frac{\epsilon} {2}\Big{\}}e^{-K_{0}t}&\text{ if }0\leq\delta\leq\frac{\boldsymbol{L}(t)}{4}\\ c_{0}\boldsymbol{L}(t)\sin\Big{(}(\pi-\epsilon)\frac{\delta}{\boldsymbol{L}(t)}+ \frac{\epsilon}{2}\Big{)}e^{-K_{0}t}&\text{ otherwise.}\end{cases} \tag{37}\]
_holds at \(t=0\), then it holds for all \(t\in[0,T)\)._
Proof.: We take \(\epsilon_{1}=\min(\epsilon_{0},L_{0},\frac{2}{C^{\prime}K_{0}},\frac{\pi}{20})\). Take \(\varphi\in C^{2}([0,\frac{1}{2}]\times[0,T))\) as following:
\[\varphi(\zeta,t)=\begin{cases}c_{0}\{\sin((\pi-\epsilon)\zeta+\frac{\epsilon}{ 2})+64(\zeta-\frac{1}{4})^{3}\sin\frac{\epsilon}{2}\}e^{-K_{0}t}&\text{ if }\zeta\in[0,\frac{1}{4}]\\ c_{0}\{\sin((\pi-\epsilon)\zeta+\frac{\epsilon}{2})\}e^{-K_{0}t}&\text{ otherwise}\end{cases} \tag{38}\]
and extend (38) to \(\varphi\in C^{2}([0,1]\times[0,T))\) to satisfy \(\varphi(1-\zeta,t)=\varphi(\zeta,t)\) for \(\zeta\in[0,1/2]\) and \(t\in[0,T)\). We will choose \(c_{0}>0\) later in the proof. Note that \(\varphi(0,t)=0\) and \(\partial_{\zeta}\varphi(1/2,t)=0\) for every \(t\in[0,T)\). We define the auxiliary function \(\mathbf{Z}\) with \(\varphi\) defined in (38). Notice that the initial condition holds by (37) and we argue by contradiction. Denote \(t_{0}:=\sup\{t\in[0,T):\mathbf{Z}(\cdot,\cdot,t)\geq 0\}\) and assume \(t_{0}<T\). Suppose \(\mathbf{Z}(\mathbf{x},\mathbf{y},t_{0})=0\) and note that this satisfies the conditions of Proposition 4.1. Denote \(\Gamma_{t_{0}}\cap\partial N=\{z_{0},z_{1}\}\). First we estimate
\[\Theta:=\int_{\Gamma_{t_{0}}}\kappa ds\text{ and }\omega:=\int_{[\mathbf{ \Gamma}_{t_{0}}(x):\mathbf{\Gamma}_{t_{0}}(y)]}\kappa ds.\]
Denote the region surrounded by \([z_{0}:z_{1}]\) and \(\Gamma_{t_{0}}\) by \(A_{t_{0}}\).
Since \(\Gamma_{t_{0}}\) meets \(\partial N\) orthogonally, by Gauss-Bonnet theorem we have
\[\Theta =2\pi-\int_{[z_{0}:z_{1}]}\kappa^{\partial N}-\int_{A_{t_{0}}}K- 2\cdot\frac{\pi}{2} \tag{39}\] \[\geq\pi-\frac{2\epsilon}{5}-\int_{A_{t_{0}}}K,\]
where we obtain (39) by applying Lemma 4.2.
Now we deduce the lower bound of \(\omega\), considering first the case \(\operatorname{sign}(\mathbf{x})=\operatorname{sign}(\mathbf{y})\). By Proposition 3.1, \(A_{(\mathbf{x},\mathbf{y})}\) is a topological disk. We denote \(\beta=\cos^{-1}\varphi^{\prime}\); by (7), \(\beta\) is the interior angle of \(A_{(\mathbf{x},\mathbf{y})}\) between the two curves at \(x\) and \(y\). We obtain the following by the Gauss-Bonnet theorem:
\[\omega=\int_{[\mathbf{\Gamma}_{t_{0}}(\mathbf{x}):\mathbf{\Gamma}_{t_{0}}( \mathbf{y})]}\kappa ds =2\pi-2(\pi-\beta)-\int_{A_{(\mathbf{x},\mathbf{y})}}K \tag{40}\] \[=2\cos^{-1}\varphi^{\prime}-\int_{A_{(\mathbf{x},\mathbf{y})}}K.\]
Now we consider the case \(\operatorname{sign}(\mathbf{x})\neq\operatorname{sign}(\mathbf{y})\). We apply Proposition 3.1 again and obtain that \(A_{(\mathbf{x},\mathbf{y})}\) is a topological disk in \(\tilde{N}\). Moreover, denote by \(\beta=\cos^{-1}\varphi^{\prime}\) the interior angle between the two curves \([\mathbf{\Gamma}_{t_{0}}(\mathbf{x}):\mathbf{\Gamma}_{t_{0}}(\mathbf{y})]\) and \(\mathbf{\alpha}\) at \(\mathbf{x}\) and \(\mathbf{y}\), which follows from (16). We separate \(A_{(\mathbf{x},\mathbf{y})}\) by
\[A_{1}=A_{(\mathbf{x},\mathbf{y})}\cap N\text{ and }A_{2}=A_{(\mathbf{x},\mathbf{y})}\cap( \tilde{N}\setminus N).\]
Note that \(A_{1}\) and \(A_{2}\) are both topological disks. Without loss of generality, \(z_{0}\in[\Gamma_{t_{0}}(x):\Gamma_{t_{0}}(y)]\). Since \(\Gamma_{t_{0}}\) orthogonally meets \(\partial N\), by applying (18) and Gauss-Bonnet theorem we have
\[\omega=\int_{[\mathbf{\Gamma}_{t_{0}}(x):\mathbf{\Gamma}_{t_{0}}( y)]}\kappa ds =\int_{[\mathbf{\Gamma}_{t_{0}}(x):z_{0}]}\kappa ds+\int_{[z_{0}: \mathbf{\Gamma}_{t_{0}}(y)]}\kappa ds\] \[=\Big{(}2\pi-\theta_{0}-\pi/2-(\pi-\beta)-\int_{A_{1}}K-\int_{[z _{0}:z]}\kappa^{\partial N}\Big{)}\] \[+\Big{(}2\pi-(\pi-\theta_{0})-\pi/2-(\pi-\beta)-\int_{A_{2}}K- \int_{[z_{0}:z]}\kappa^{\partial N}\Big{)}\] \[=2\cos^{-1}\varphi^{\prime}-\int_{A_{(\mathbf{x},\mathbf{y})}}K-2\int_{[ z_{0}:z]}\kappa^{\partial N} \tag{41}\] \[\geq 2\cos^{-1}\varphi^{\prime}-\int_{A_{(\mathbf{x},\mathbf{y})}}K- \frac{4\epsilon}{5},\]
where we obtain (41) by applying Lemma 4.2.
By the isoperimetric inequality and our choice of \(L_{0}\), we obtain the following estimate of the area of \(A_{t_{0}}\) and \(A_{(\mathbf{x},\mathbf{y})}\):
\[|A_{t_{0}}|\leq C^{\prime}(L(t_{0})+|[z_{0}:z_{1}]|)^{2}\leq C^{\prime}\Big{(}\frac{\epsilon}{100}+\frac{\epsilon}{2}\Big{)}^{2}\leq C^{\prime}\epsilon^{2}, \tag{42}\] \[|A_{(\mathbf{x},\mathbf{y})}|\leq C^{\prime}(2L(t_{0}))^{2}\leq\frac{C^{\prime}\epsilon^{2}}{100}. \tag{43}\]
We obtain (42) by applying (36), and (43) by applying \(|\alpha|\leq|[\mathbf{\Gamma}_{t_{0}}(x):\mathbf{\Gamma}_{t_{0}}(y)]|\leq L(t_{0})\). Moreover, by (42) and (43), we have control of the total Gaussian curvature of \(A_{t_{0}}\) and \(A_{(\mathbf{x},\mathbf{y})}\):
\[\int_{A_{t_{0}}}K\leq K_{0}|A_{t_{0}}|\leq C^{\prime}K_{0}\epsilon^{2}, \tag{44}\] \[\int_{A_{(\mathbf{x},\mathbf{y})}}K\leq K_{0}|A_{(\mathbf{x},\mathbf{y})}|\leq\frac{1}{100}C^{\prime}K_{0}\epsilon^{2}. \tag{45}\]
By our choice of \(\epsilon_{1}\), (39)-(41), (44) and (45), we have
\[\Theta\geq\pi-\epsilon\text{ and }\omega\geq 2\Big{(}\cos^{-1}\varphi^{ \prime}-\frac{\epsilon}{2}\Big{)}. \tag{46}\]
By our choice of \(\varphi\), \(|\varphi^{\prime}|\leq c_{0}\{(\pi-\epsilon)+12\sin\frac{\epsilon}{2}\}\) holds. By taking sufficiently small \(c_{0}\), we have \(\cos^{-1}\varphi^{\prime}\geq\frac{\epsilon}{2}\), so the estimate of \(\omega\) in (46) is meaningful. We apply Hölder's inequality and obtain
\[\int_{\Gamma_{t}}\kappa^{2}ds\geq|\Gamma_{t}|^{-1}\Big{(}\int_{\Gamma_{t}}| \kappa|ds\Big{)}^{2}\geq\frac{2}{\mathbf{L}}\Theta^{2}\geq\frac{2}{\mathbf{L}}(\pi- \epsilon)^{2} \tag{47}\]
and
\[\int_{[\mathbf{\Gamma}_{t}(x):\mathbf{\Gamma}_{t}(y)]}\kappa^{2}ds \geq|[\mathbf{\Gamma}_{t}(x):\mathbf{\Gamma}_{t}(y)]|^{-1}\Big{(} \int_{[\mathbf{\Gamma}_{t}(x):\mathbf{\Gamma}_{t}(y)]}|\kappa|ds\Big{)}^{2}\] \[\geq\frac{1}{l}\omega^{2}\geq\frac{4}{l}\Big{(}\cos^{-1}\varphi^{ \prime}-\frac{\epsilon}{2}\Big{)}^{2}. \tag{48}\]
We apply (47) and (48) to (31) and (32); in either case we have
\[0 \geq 4\frac{\varphi^{\prime\prime}}{\mathbf{L}}+(1-\varphi^{\prime 2})\int_{ \alpha}K+2\Big{(}\varphi-\varphi^{\prime}\frac{\mathbf{l}}{\mathbf{L}}\Big{)}\int_{ \Gamma_{t}}\kappa^{2}ds+\varphi^{\prime}\int_{[\Gamma_{t}(x):\Gamma_{t}(y)]} \kappa^{2}ds-\mathbf{L}\partial_{t}\varphi\] \[\geq 4\frac{\varphi^{\prime\prime}}{\mathbf{L}}+(1-\varphi^{\prime 2}) \int_{\alpha}K+\frac{4}{\mathbf{L}}\Big{(}\varphi-\varphi^{\prime}\frac{\mathbf{l}}{\bm {L}}\Big{)}(\pi-\epsilon)^{2}+\frac{4}{l}\Big{(}\cos^{-1}\varphi^{\prime}- \frac{\epsilon}{2}\Big{)}^{2}\varphi^{\prime}-\mathbf{L}\partial_{t}\varphi\] \[\geq 4\frac{\varphi^{\prime\prime}}{\mathbf{L}}-K_{0}d+\frac{4}{\mathbf{ L}}\Big{(}\varphi-\varphi^{\prime}\frac{\mathbf{l}}{\mathbf{L}}\Big{)}(\pi-\epsilon)^{2}+ \frac{4}{l}\Big{(}\cos^{-1}\varphi^{\prime}-\frac{\epsilon}{2}\Big{)}^{2} \varphi^{\prime}-\mathbf{L}\partial_{t}\varphi\] \[=4\frac{\varphi^{\prime\prime}}{\mathbf{L}}-K_{0}\mathbf{L}\varphi+\frac {4}{\mathbf{L}}\Big{(}\varphi-\varphi^{\prime}\frac{\mathbf{l}}{\mathbf{L}}\Big{)}(\pi- \epsilon)^{2}+\frac{4}{l}\Big{(}\cos^{-1}\varphi^{\prime}-\frac{\epsilon}{2} \Big{)}^{2}\varphi^{\prime}-\mathbf{L}\partial_{t}\varphi. \tag{49}\]
If \(\epsilon\in(0,\pi/20)\), then the following holds by direct calculation:
\[(\pi-\epsilon)\cos\Big{(}(\pi-\epsilon)\frac{1}{4}+\frac{\epsilon}{2}\Big{)}> 12\sin\frac{\epsilon}{2}. \tag{50}\]
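For instance, (50) can be checked by a direct calculation: the left-hand side is decreasing in \(\epsilon\) while the right-hand side is increasing, so it suffices to evaluate at the endpoint \(\epsilon=\pi/20\) (approximate numerical values):

\[\Big{(}\pi-\frac{\pi}{20}\Big{)}\cos\Big{(}\frac{\pi-\pi/20}{4}+\frac{\pi}{40}\Big{)}=\frac{19\pi}{20}\cos\Big{(}\frac{21\pi}{80}\Big{)}\approx 2.03>0.94\approx 12\sin\frac{\pi}{40}.\]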
Moreover, since \(\cos^{-1}\varphi^{\prime}\to\pi/2\) uniformly as \(c_{0}\to 0\) in \([0,1/4]\), we can take \(c_{0}\) sufficiently small to satisfy
\[\Big{(}\cos^{-1}\varphi^{\prime}-\frac{\epsilon}{2}\Big{)}^{2}-(\pi- \epsilon)^{2}\zeta^{2}\geq 1 \tag{51}\]
for \(\zeta\in[0,1/4]\). Let us consider the following term for \(\zeta\in(0,1/4]\):
\[\varphi^{\prime\prime}+(\pi-\epsilon)^{2}\varphi+\zeta^{-1}\varphi ^{\prime}\Big{(}\Big{(}\cos^{-1}\varphi^{\prime}-\frac{\epsilon}{2}\Big{)}^{ 2}-(\pi-\epsilon)^{2}\zeta^{2}\Big{)}\] \[=c_{0}e^{-K_{0}t}\Big{(}384\Big{(}\zeta-\frac{1}{4}\Big{)}\sin \frac{\epsilon}{2}\Big{)}+c_{0}\zeta^{-1}e^{-K_{0}t}\Big{(}(\pi-\epsilon)\cos \Big{(}(\pi-\epsilon)\zeta+\frac{\epsilon}{2}\Big{)}+192\Big{(}\zeta-\frac{1 }{4}\Big{)}^{2}\sin\frac{\epsilon}{2}\Big{)}\] \[\Big{(}\Big{(}\cos^{-1}\varphi^{\prime}-\frac{\epsilon}{2}\Big{)} ^{2}-(\pi-\epsilon)^{2}\zeta^{2}\Big{)} \tag{53}\] \[=48c_{0}e^{-K_{0}t}\sin\frac{\epsilon}{2}\Big{(}4\Big{(}\zeta- \frac{1}{4}\Big{)}+1\Big{)}^{2}\geq 0. \tag{52}\]
(52) comes from (50), (51) and the monotone decreasing property of the cosine function. For \(\zeta\in[1/4,1/2]\), both of the following hold for small \(c_{0}>0\):
\[\varphi^{\prime\prime}+(\pi-\epsilon)^{2}\varphi=0 \tag{54}\] \[\Big{(}\cos^{-1}\varphi^{\prime}-\frac{\epsilon}{2}\Big{)}^{2}-(\pi-\epsilon)^{2}\zeta^{2}>0. \tag{55}\]
We also have the following identity in time derivative terms:
\[\partial_{t}\varphi+K_{0}\varphi=0. \tag{56}\]
By applying (53)-(56) into (49), we obtain the contradiction.
**Theorem 4.4**.: _Suppose \(T<\infty\) and \(L(t)\nrightarrow 0\) as \(t\to T\), so that \(\mathbf{L}_{T}:=\lim_{t\to T}\mathbf{L}(t)>0\). Let \(\{\Gamma_{t}\}_{t\in[0,T)}\) be a free boundary curve shortening flow on \(N\). If the inequality_
\[\mathbf{\psi}(\delta,t)\geq c_{0}\mathbf{L}(t)e^{\left(-\frac{4\pi^{2}}{L_{T}^{2}}-K_{0} \right)t}\sin\Big{(}\frac{\pi\delta}{\mathbf{L}(t)}\Big{)}\]
_holds at \(t=0\), then it holds for all \(t\in[0,T)\)._
Proof.: We modify the proof of Theorem 5.4 in [28]. We adopt the modified time coordinate \(\tau:=\int_{0}^{t}\frac{1}{\mathbf{L}(s)^{2}}ds\). We take \(\varphi:[0,1]\times[0,T)\) by
\[\varphi(\zeta,t)=c_{0}e^{-4\pi^{2}\tau(t)-K_{0}t}\sin(\pi\zeta).\]
As before, denote \(t_{0}:=\sup\{t\in[0,T):\mathbf{Z}(\cdot,\cdot,t)\geq 0\}\) and assume \(t_{0}<T\). Then there exists \((\mathbf{x}_{0},\mathbf{y}_{0})\in(\mathbf{M}\times\mathbf{M})\setminus\mathbf{D}\) such that \(\mathbf{Z}(\mathbf{x}_{0},\mathbf{y}_{0},t_{0})=\min_{(\mathbf{x},\mathbf{y})\in(\mathbf{M}\times\mathbf{M})\setminus\mathbf{D}}\mathbf{Z}(\mathbf{x},\mathbf{y},t_{0})\). By Proposition 4.1, in either case we have
\[0 \geq 4\frac{\varphi^{\prime\prime}}{\mathbf{L}}+(1-\varphi^{\prime 2}) \int_{\alpha}K+2\Big{(}\varphi-\varphi^{\prime}\frac{\mathbf{l}}{\mathbf{L}}\Big{)} \int_{\Gamma_{t}}\kappa^{2}ds+\varphi^{\prime}\int_{[\Gamma_{t}(x):\Gamma_{t}( y)]}\kappa^{2}ds-\mathbf{L}\partial_{t}\varphi\] \[\geq 4\frac{\varphi^{\prime\prime}}{\mathbf{L}}-K_{0}\mathbf{L}\varphi+2 \Big{(}\varphi-\varphi^{\prime}\frac{\mathbf{l}}{\mathbf{L}}\Big{)}\int_{\Gamma_{t}} \kappa^{2}ds+\varphi^{\prime}\int_{[\Gamma_{t}(x):\Gamma_{t}(y)]}\kappa^{2}ds -\mathbf{L}\partial_{t}\varphi\] \[>4\frac{\varphi^{\prime\prime}}{\mathbf{L}}-K_{0}\mathbf{L}\varphi-\mathbf{L} \partial_{t}\varphi, \tag{57}\]
where (57) follows from the strict concavity of \(\varphi(\cdot,t)\). But our choice of \(\varphi\) gives
\[4\frac{\varphi^{\prime\prime}}{\mathbf{L}}-K_{0}\mathbf{L}\varphi-\mathbf{L }\partial_{t}\varphi =-4\pi^{2}\frac{\varphi}{\mathbf{L}}-K_{0}\mathbf{L}\varphi+\mathbf{L}(4\pi^{ 2}\tau^{\prime}(t)+K_{0})\varphi\] \[=-4\pi^{2}\frac{\varphi}{\mathbf{L}}-K_{0}\mathbf{L}\varphi+\mathbf{L}\Big{(} 4\pi^{2}\frac{1}{\mathbf{L}^{2}}+K_{0}\Big{)}\varphi=0 \tag{58}\]
(57) and (58) give a contradiction. Since \(\mathbf{L}\) is a decreasing function of time \(t\), the claim of the theorem follows.
Let us denote \(\lambda:M\to\mathbb{R}\) by the arclength to the nearer endpoint in the sense of internal distance in \(\Gamma_{t}\). Together with the proof of Proposition 5.5 in [28], Theorem 4.3 and Theorem 4.4 gives the following boundary avoidance estimate.
**Proposition 4.5**.: _Let \(\{\Gamma_{t}\}_{t\in[0,T)}\) be a free boundary curve shortening flow on \(N\). Given any \(\delta>0\), there exists \(\epsilon=\epsilon(\Gamma_{0},N,\delta)>0\) such that_
\[\lambda(x,t)>\delta\Rightarrow d(\gamma(x,t),\partial N)>\epsilon.\]
As in [28], Proposition 4.5 and arguments of the proof of Theorem 6.1 in [28] gives the following long-time behavior.
**Theorem 4.6**.: _Let \((N,\partial N,g)\) be a closed Riemannian surface with convex boundary and \(\{\Gamma_{t}\}_{t\in[0,T)}\) be a maximal free boundary curve shortening flow starting from a properly embedded closed interval \(\Gamma_{0}\) in \(N\). Then either:_
1. \(T=\infty\)_, in which case_ \(\Gamma_{t}\) _converges smoothly as_ \(t\to\infty\) _to an embedded geodesic in_ \(N\) _which meets_ \(\partial N\) _orthogonally; or_
2. \(T<\infty\)_, in which case_ \(\Gamma_{t}\) _converges uniformly to a single half-round point_ \(z\in\partial N\)_, in the sense that the blow-up limit of the curve converges smoothly to the unit semi-circle._
## 5. The family of curves and tightening procedure
In this section, we formulate the min-max construction of free boundary embedded geodesics on Riemannian \(2\)-disks with convex boundary. We discuss the smooth min-max setting and obtain the existence result via proper pull-tight procedure. Theorem B in [38] (see also Appendix of Hatcher's work [21]) proves that the space of (unparametrized) embedded intervals on \(D^{2}\) whose endpoints are on \(\partial D^{2}\) relative to the space of point curves retracts onto \(\mathbb{R}P^{2}\). Let us denote the space of embedded curves by \(\Sigma\) and denote by \(\Sigma_{0}\) the space of point curves. We denote \(\mathcal{S}=\Sigma/\Sigma_{0}\). We consider two distinct nontrivial relative homology classes \(h_{1}\) and \(h_{2}\) on the space of embedded intervals:
\[h_{i}:=H_{i}(\mathcal{S},\mathbb{Z}_{2})=\mathbb{Z}_{2}.\]
Let us consider the \(\mathbb{Z}_{2}\)-Cohomology ring \(H^{*}(\mathcal{S},\mathbb{Z}_{2})\) of \(\mathcal{S}\). We denote \(\alpha\) to be a generator of the first cohomology ring. Then the cohomology ring is:
\[H^{*}(\mathcal{S},\mathbb{Z}_{2})=\mathbb{Z}_{2}[\alpha]/\alpha^{2}.\]
Let \(IV_{1}(D^{2})\) be a space of integral varifolds on \((D^{2},g)\) and endow an \(F\)-metric on the space of varifolds as in \(2.1(19)(20)\) in [37].
For each \(i=1,2\), we now define the \(i\)-sweepout. We denote an \(i\)-dimensional simplicial complex by \(X\). If
\[\Phi^{*}(\omega)\neq 0,\]
then we say that \(\Phi:X\to\mathcal{S}\) detects \(\omega\in H^{i}(\mathcal{S},\mathbb{Z}_{2})\). We define \(\Phi\) to be an \(i\)_-sweepout_ endowed with a smooth topology if \(\Phi\) detects \(i\)-th cup product \(\alpha^{i}\in H^{i}(\mathcal{S},\mathbb{Z}_{2})\). We define the _width_ of \(i\)-parameter sweepouts as
\[\omega_{i}(D^{2}):=\inf_{\Phi\in S_{i}}\sup_{x\in X}|\Phi(x)|=L_{i}, \tag{59}\]
for \(i\in\{1,2\}\). By the definition of \(i\)-sweepouts in (59), \(\omega_{1}(D^{2})\leq\omega_{2}(D^{2})\) holds.
We define a _minimizing sequence_ to be a sequence of \(i\)-sweepouts \(\{\Phi_{j}\}\) such that \(\lim_{j\to\infty}\sup_{x\in X}|\Phi_{j}(x)|=L_{i}\). We say a sequence of curves \(\Phi_{j}(x_{j})\) is a _min-max sequence_ if \(|\Phi_{j}(x_{j})|\) converges to \(L_{i}\), where \(x_{j}\in X\) and \(\{\Phi_{j}\}\) is a minimizing sequence. We define the _critical set_ \(\Lambda(\{\Phi_{j}\})\) to be the set of stationary varifolds obtained as limits of min-max sequences induced by \(\{\Phi_{j}\}\). We denote by \(W_{L_{i}}\) the set of critical geodesics, that is, the set of stationary varifolds whose support is a free boundary embedded geodesic and whose length is \(L_{i}\).
We follow Abraham's proof of the bumpy metric theorem for curves [1] and Theorem 9 in Ambrozio-Carlotto-Sharp [3] of the free boundary minimal surface version which proves the genericity of bumpy metric in \(C^{r}\)-Baire sense. The compactness of free boundary embedded geodesics with bounded length follows from arguments in Appendix A in [27].
By tightening argument via free boundary curve shortening flow of Theorem 4.6 on long-time behavior of free boundary curve shortening flow, and applying the arguments in the proof of Theorem 2.1 in [26], we have the following existence of free boundary embedded geodesics achieving the width:
**Theorem 5.1**.: _Suppose \((D^{2},\partial D^{2},g)\) to be a smooth Riemannian \(2\)-disk with a strictly convex boundary. For \(i=1,2\) and any minimizing sequence \(\{\Phi_{j}\}\) of \(i\)-sweepouts, there is a deformed minimizing sequence \(\{\hat{\Phi}_{j}\}\) of \(\{\Phi_{j}\}\) satisfying the
following property. For any \(s>0\), there is some \(0<a<L_{i}\) satisfying_
\[\{\hat{\Phi}_{j}(x)\in IV_{1}(D^{2}):|\hat{\Phi}_{j}(x)|\geq L_{i}-a\}\subset \bigcup_{\gamma\in\Lambda(\{\Phi_{j}\})\cap W_{L_{i}}}B_{s}^{F}(\gamma)\]
_for all sufficiently large \(j\), where \(B_{s}^{F}(\gamma)\) is a \(F\)-metric ball with center \(\gamma\). Moreover, the multiplicity of geodesics in the critical set is \(1\)._
**Remark 5.2**.: _We still can run the tightening procedure even initial curves do not meet orthogonally with the boundary \(\partial D^{2}\). Geometrically, we can slightly deform the curves near the boundary to make the intersection angles the right angle. Moreover, we can apply the flow in the 'weak' sense. We may consider the free boundary curve shortening flow as a Neumann boundary problems for nonlinear parabolic PDEs (See Chapter 10 of [17]). Then the weak solution of the flow has a orthogonal boundary condition and smoothness at any positive time before the maximum existence time \(t\in(0,T)\). The flow is still a \(C^{0}\)-solution at \(t\in[0,T)\)._
By applying the classical Lusternik-Schnirelmann argument, we prove that if two widths are the same, then there exists an \(S^{1}\)-cycle of free boundary embedded geodesics. This gives the existence of two free boundary geodesics on any Riemannian disk with a strictly convex boundary. Moreover, if the metric on \((D^{2},\partial D^{2},g)\) is bumpy, then we deduce the existence of two free boundary embedded geodesics with distinct lengths. The argument in the proof of Corollary 2.2 in [26] applies directly.
**Theorem 5.3**.: _Suppose \((D^{2},\partial D^{2},g)\) to be a Riemannian \(2\)-disk with strictly convex boundary. If \(w_{1}(D^{2})=w_{2}(D^{2})\), then there exist infinitely many distinct free boundary embedded geodesics in \((D^{2},\partial D^{2},g)\)._
**Corollary 5.4**.: _On the Riemannian disk endowed with a bumpy metric and strictly convex boundary, there are at least two free boundary embedded geodesics with length \(L_{1}\) and \(L_{2}\)._
## 6. Morse Index Bound
In this section, we obtain the generic Morse Index bound of free boundary embedded geodesics on Riemannian \(2\)-disk. We mainly follow the idea in Section 6 and Section 7 of [26] which proves the Morse Index of simple closed geodesics on bumpy spheres based on the interpolation technique based on quantitative \(F\)-distance estimate. We focus on the necessary modification to prove the Morse Index bound in the free boundary setting. Throughout this section, we suppose that \((D^{2},\partial D^{2},g)\) is endowed with a bumpy metric. Suppose \(D^{2}\) is embedded in some closed surface \(\tilde{D}^{2}\).
Let us fix a free boundary embedded geodesic \(\gamma\). We adopt the Fermi coordinate \(c:[0,L]\times(-h,h)\to D^{2}\) on the tubular neighborhood \(N_{h}(\gamma)\) of the fixed geodesic \(\gamma\). Moreover, we adopt the metric perturbation in Section 5.2 in [26] and follow the notation therein. By choosing a sufficiently small \(\beta>0\) and sufficiently large \(M>\sup_{D^{2}}|K|\) in Proposition 5.2 of [26] and applying the corresponding deformation (17) in [26], we obtain the strict stability of the free boundary embedded geodesic \(\gamma_{g_{\beta}}\) in the perturbed metric.
**Proposition 6.1**.: _There exists small \(\beta>0\) and \(M>\sup_{D^{2}}|K|\) satisfying the following: \(\gamma_{g_{\beta}}\) is a strictly stable geodesic and the ambient Gaussian curvature is strictly negative._
Proof.: Denote \(\partial\gamma_{g_{\beta}}=\{p_{1},p_{2}\}\). Then by the change of the second fundamental form under conformal deformation following [8], for \(i=1,2\), the boundary geodesic curvature \(\kappa_{g_{\beta}}^{\partial D^{2}}\) satisfies
\[\kappa_{g_{\beta}}^{\partial D^{2}}(p_{i})=e^{-\phi_{\beta}}\Big{(}\kappa_{g}^ {\partial D^{2}}(p_{i})-\frac{\partial\phi_{\beta}}{\partial\nu}\Big{)}=e^{- \phi_{\beta}}\kappa_{g}^{\partial D^{2}}(p_{i}).\]
Also note that \(K_{g_{\beta}}(x)=K(x)-M<0\) for \(x\in\gamma_{g_{\beta}}\) by (19) of [26], and this proves the latter conclusion of the claim. Now we prove that we can choose \(\beta\) and \(M\) such that the second variation (3) of \(\gamma_{g_{\beta}}\) is positive definite. It suffices to show that, by taking suitable \(\beta\) and \(M\), the eigenvalues \(\{\lambda_{k}\}\) with associated eigenfunctions \(\{\phi_{k}\}\) of the following equation with Robin boundary condition from (4) are all positive:
\[\begin{cases}(\Delta_{\gamma_{g_{\beta}}}+K-M)\phi_{k}+\lambda_{k}\phi_{k}=0 \text{ on }\gamma_{g_{\beta}}\\ \frac{\partial\phi_{k}}{\partial\eta}(p_{i})-e^{-\phi_{\beta}}\kappa(p_{i}) \phi_{k}(p_{i})=0\text{ for }i=1,2.\end{cases} \tag{60}\]
By the condition (iii) in Proposition 5.2 in [26], we can take \(e^{-\phi_{\beta}}\) to be arbitrarily close to \(1\) by taking sufficiently small \(\beta\). By taking sufficiently large \(M\) and sufficiently small \(\beta\), we can obtain all the eigenvalues \(\{\lambda_{k}\}\) to be strictly positive by standard elliptic theory.
We now discuss the free boundary mean convex neighborhood of a geodesic \(\gamma\) which is strictly stable and has negative ambient Gaussian curvature, in order to adapt the squeezing lemma in [26]. We adopt the idea in Proposition 2.4 of [22], which constructs the free boundary mean convex neighborhood of a free boundary embedded minimal surface in the \(3\)-ball with a strictly convex boundary. Consider the first eigenvalue \(\lambda_{1}>0\) and the associated eigenfunction \(\phi_{1}\in C^{\infty}(\gamma)\) with \(\int_{\gamma}\phi_{1}^{2}ds=1\), which solves the equation (4). Without loss of generality, we can take \(\phi_{1}\) to be strictly positive on \(\gamma\). We define
\[C_{\gamma}:=\frac{\max_{\gamma}\phi_{1}}{\min_{\gamma}\phi_{1}}\geq 1 \tag{61}\]
and call \(C_{\gamma}\) by _Harnack constant_ of \(\gamma\).
**Proposition 6.2**.: _Suppose \(\gamma\) is a strictly stable geodesic on a Riemannian \(2\)-disk \((D^{2},\partial D^{2},g)\) with strictly convex boundary and with negative ambient Gaussian curvature. Then there is a neighborhood \(N_{h}(\gamma)\) foliated by a free boundary mean convex foliation \(\{\gamma_{t}\}_{t\in[-\epsilon,\epsilon]}\) of \(\gamma\) satisfying the following:_
1. \(\gamma_{0}=\gamma\)_,_
2. \(\gamma_{t}\) _has a mean curvature vector towards_ \(\gamma_{0}\) _for_ \(t\in[-\epsilon,\epsilon]\)_,_
3. \(C_{t}:=\max_{x\in\gamma_{t}}d(x,\gamma)/\min_{x\in\gamma_{t}}d(x,\gamma)\leq 2C_{\gamma}\) _for_ \(t\in[-\epsilon,0)\cup(0,\epsilon]\)_._
Proof.: Let us denote the unit normal vector field on \(\gamma\) by \(\nu\). We consider the normal vector field \(\phi_{1}\nu\) generated by the first eigenfunction \(\phi_{1}\) on \(\gamma\) and extend it to \(X\in\mathcal{X}_{\tan}(D^{2})\). Denote by \(\psi_{t}\) the flow generated by \(X\) and define \(\gamma_{t}=\psi_{t}(\gamma)\). For small \(|t|\), we can expand the geodesic curvature
\[\kappa_{\gamma_{t}}=\lambda_{1}\phi_{1}|t|+O(t^{2}) \tag{62}\]
toward \(\gamma\). Moreover, by the definition of \(C_{\gamma}\) in (61) and our setting of \(X\),
\[\lim_{t\to 0}C_{t}=C_{\gamma}. \tag{63}\]
By (62)-(63), there exists \(\epsilon>0\) such that \(\{\gamma_{t}\}_{t\in[-\epsilon,\epsilon]}\) is a foliation satisfying the conditions (2)-(3) in the statement. The Robin boundary condition in the equation (4) gives the orthogonality at boundary of mean convex foliation.
Now we can apply the ideas to construct the squeezing homotopy via flow by Theorem 4.6 and its pullback homotopy in the general geodesic case. We obtain the following lemma:
**Lemma 6.3**.: _Let \((D^{2},\partial D^{2},g)\) be a Riemannian \(2\)-disk with a strictly convex boundary endowed with a bumpy metric. Let \(\gamma\) be a free boundary embedded geodesic, and \(X\) be a simplicial complex with finite dimension \(k\). There exists \(\delta_{0}=\delta_{0}((D^{2},\partial D^{2},g),\gamma)>0\) with the following property:_
_For \(0<\delta<\delta_{0}\), if \(\Phi:X\to IV_{1}(D^{2})\) is a continuous map in the smooth topology such that_
\[\sup\{F(\Phi(x),\gamma):x\in X\}<\delta,\]
_then there is a homotopy \(H:[0,1]\times X\to IV_{1}(D^{2})\) such that \(H(0,x)=\Phi(x)\) and \(H(1,x)=\gamma\) so that \(\Phi\) is nullhomotopic._
Now we discuss the modification in the quantitative \(F\)-distance estimate along the squeezing map. We only need to modify the arguments of Lemma 6.2 in [26] which proves the upper bound of \(F\)-distance along the squeezing homotopy in Lemma 5.4 of [26] when \(\gamma\) is a strictly stable geodesic with ambient negative curvature. We rewrite the free boundary version of Lemma 6.1 in [26] first.
**Lemma 6.4** (Lemma 6.1, [26]).: _Suppose \(\gamma\) is a strictly stable geodesic on a Riemannian \(2\)-disk \((D^{2},\partial D^{2},g)\) with a strictly convex boundary, with negative ambient Gaussian curvature in \(N_{h}(\gamma)\). Then there exists \(C=C(|\gamma|)>0\) satisfying the following property: For \(0<\epsilon<h^{2}\), if an embedded curve \(\alpha\), whose two endpoints lie on the two components of \(\partial D^{2}\cap N_{h}(\gamma)\) respectively, satisfies \(|\alpha|<|\gamma|+\epsilon\), then_
\[F(\alpha,\gamma)<C(|\gamma|)(h+\sqrt{\epsilon}).\]
For a given free boundary mean convex foliation \(\{\gamma_{t}\}_{t\in[-z,z]}\), we denote \(\gamma_{\leq z^{\prime}}\) by
\[\gamma_{\leq z^{\prime}}:=\{x\in D^{2}|x\in\gamma_{t}\text{ for some }|t|\leq z ^{\prime}\}.\]
**Lemma 6.5**.: _Let \(\gamma\) be a strictly stable geodesic on \((D^{2},\partial D^{2},g)\) with a strictly convex boundary, Gaussian curvature \(K(z)<0\) for \(z\in N_{h}(\gamma)\), and \(X\) be a \(k\)-dimensional simplicial complex. Then there exists \(\epsilon_{0}=\epsilon_{0}(\gamma)>0\) and \(C=C(\gamma)>0\) satisfying the following property: For \(0<\epsilon<\epsilon_{0}\), if \(\Phi:X\to IV_{1}(D^{2})\) is a continuous map in the smooth topology such that_
\[\sup\{F(\Phi(x),\gamma):x\in X\}<\epsilon, \tag{64}\]
_then there is a homotopy \(H:[0,1]\times X\to IV_{1}(D^{2})\) such that \(H(0,x)=\Phi(x)\), \(H(1,x)=\gamma\) and the following \(F\)-distance estimate holds along \(H\):_
\[\sup\{F(H(t,x),\gamma):x\in X\text{ and }t\in[0,1]\}<C(\gamma)\sqrt{\epsilon}. \tag{65}\]
Proof.: By Proposition 6.1, \(N_{h_{0}}(\gamma)\) can be foliated by a free boundary mean convex foliation \(\{\gamma_{t}\}_{t\in[-z_{0},z_{0}]}\) of \(\gamma\), and we may suppose the ambient Gaussian curvature in \(N_{h_{0}}(\gamma)\) is negative. Take \(\epsilon_{0}=h_{0}^{2}/10\). Then by Lemma 3.1 of [26], the Hausdorff distance between \(\Phi(x)\) and \(\gamma\) is smaller than \(h_{0}\) and \(\Phi(x)\) is supported in \(N_{h_{0}}(\gamma)\).
Now we assume the condition (64). By Lemma 3.1 in [26], \(\Phi(x)\) is supported in \(N_{\sqrt{10\epsilon}}(\gamma)\). We take \(z\in(0,z_{0})\) by \(z:=\inf\{z^{\prime}\in(0,z_{0})\,|\,N_{\sqrt{10\epsilon}}(\gamma)\subseteq\gamma_{\leq z^{\prime}}\}\). Note that \(\gamma_{\leq z}\subseteq N_{2C_{\gamma}\sqrt{10\epsilon}}(\gamma)\) by Proposition 6.2. Moreover, \(|\Phi(x)|<|\gamma|+\epsilon\) by the definition of the \(F\)-distance.
We consider the squeezing homotopy \(H\), which is the composition of the free boundary curve shortening flow and the squeezing map of graphical curves at the smaller scale. The length is monotonically decreasing along the free boundary curve shortening flow part in Lemma 6.3, and we can still apply the length bounds (40) and (41) in [26] over the squeezing map. That is,
\[|H(t,x)|<|\gamma|+\epsilon \tag{66}\]
for \(x\in X\) and \(t\in[0,1]\).
Recall \(supp(\Phi(x))\subset N_{\sqrt{10\epsilon}}\subseteq\gamma_{\leq z}\). We apply the avoidance principle of the free boundary curve shortening flow between \(\gamma_{z}\) and \(\Phi(x)\). Then we have
\[supp(H(t,x))\subset\gamma_{\leq z}\subseteq N_{2C_{\gamma}\sqrt{10\epsilon}} (\gamma). \tag{67}\]
By applying Lemma 6.4 together with (66) and (67), we obtain the \(F\)-distance upper bound (65) along the homotopy.
We denote \(W_{L_{i},j}\) to be the set of the elements in \(W_{L_{i}}\) whose support has Morse index less than or equal to \(j\). We also have the pulling-tight procedure toward free boundary embedded geodesics with Morse Index upper bound, which is the free boundary version of Theorem 2.3 of [26]:
**Theorem 6.6**.: _Let \((D^{2},\partial D^{2},g)\) be a Riemannian \(2\)-disk with a strictly convex boundary endowed with a bumpy metric. For any minimizing sequence \(\{\Phi_{j}\}\) which is an \(i\)-sweepout, there is a deformed minimizing sequence \(\{\hat{\Phi}_{j}\}\) of \(\{\Phi_{j}\}\) satisfying the following property. For any small \(s>0\), there is some \(0<a<L_{i}\) satisfying_
\[\{\hat{\Phi}_{j}(x)\in IV_{1}(D^{2}):|\hat{\Phi}_{j}(x)|\geq L_{i}-a\}\subset\bigcup_{\gamma\in\Lambda(\{\Phi_{j}\})\cap W_{L_{i},i}}B_{s}^{F}(\gamma) \tag{68}\]
_for all sufficiently large \(j\). Moreover, the multiplicity of geodesics in the critical set is \(1\)._
By applying the arguments in the remaining parts of Section 6 and Section 7 in [26], we obtain the following Morse Index bound of free boundary embedded geodesic obtained by smooth min-max construction and free boundary curve shortening flow on surfaces.
**Theorem 6.7**.: _Suppose \((D^{2},\partial D^{2},g)\) is a Riemannian \(2\)-disc with convex boundary endowed with a bumpy metric. Then for each \(k=1,2\), there exists a free boundary embedded geodesic \(\gamma_{k}\) with_
\[index(\gamma_{k})=k\]
_and these two geodesics satisfy \(|\gamma_{1}|<|\gamma_{2}|\)._
**Corollary 6.8**.: _For a Riemannian \(2\)-disk \((D^{2},\partial D^{2},g)\) with a convex boundary and for \(k=1,2\), there exists a free boundary embedded geodesic \(\gamma_{k}\) with_
\[index(\gamma_{k})\leq k\leq index(\gamma_{k})+nullity(\gamma_{k}).\] |
2303.01249 | Language-Universal Adapter Learning with Knowledge Distillation for
End-to-End Multilingual Speech Recognition | In this paper, we propose a language-universal adapter learning framework
based on a pre-trained model for end-to-end multilingual automatic speech
recognition (ASR). For acoustic modeling, the wav2vec 2.0 pre-trained model is
fine-tuned by inserting language-specific and language-universal adapters. An
online knowledge distillation is then used to enable the language-universal
adapters to learn both language-specific and universal features. The linguistic
information confusion is also reduced by leveraging language identifiers
(LIDs). With LIDs we perform a position-wise modification on the multi-head
attention outputs. In the inference procedure, the language-specific adapters
are removed while the language-universal adapters are kept activated. The
proposed method improves the recognition accuracy and addresses the linear
increase of the number of adapters' parameters with the number of languages in
common multilingual ASR systems. Experiments on the BABEL dataset confirm the
effectiveness of the proposed framework. Compared to the conventional
multilingual model, a 3.3% absolute error rate reduction is achieved. The code
is available at: https://github.com/shen9712/UniversalAdapterLearning. | Zhijie Shen, Wu Guo, Bin Gu | 2023-02-28T14:43:49Z | http://arxiv.org/abs/2303.01249v1 | Language-Universal Adapter Learning with Knowledge Distillation for End-to-End Multilingual Speech Recognition
###### Abstract
In this paper, we propose a language-universal adapter learning framework based on a pre-trained model for end-to-end multilingual automatic speech recognition (ASR). For acoustic modeling, the wav2vec 2.0 pre-trained model is fine-tuned by inserting language-specific and language-universal adapters. An online knowledge distillation is then used to enable the language-universal adapters to learn both language-specific and universal features. The linguistic information confusion is also reduced by leveraging language identifiers (LIDs). With LIDs we perform a position-wise modification on the multi-head attention outputs. In the inference procedure, the language-specific adapters are removed while the language-universal adapters are kept activated. The proposed method improves the recognition accuracy and addresses the linear increase of the number of adapters' parameters with the number of languages in common multilingual ASR systems. Experiments on the BABEL dataset confirm the effectiveness of the proposed framework. Compared to the conventional multilingual model, a 3.3% absolute error rate reduction is achieved. The code is available at: [https://github.com/shen9712/UniversalAdapterLearning](https://github.com/shen9712/UniversalAdapterLearning).
Zhijie Shen, Wu Guo, Bin Gu
The Department of Electronic Engineering and Information Science (EEIS), University of Science and Technology of China, Hefei, China
Index Terms: Automatic speech recognition, multilingual, adapter, knowledge distillation
## 1 Introduction
With the widespread of end-to-end automatic speech recognition (ASR) frameworks [1, 2, 3], multilingual ASR has become a research hotspot. This is mainly due to easy training and deployment procedures in real-world applications, especially in low-resourced scenarios.
One of the key challenges in multilingual ASR is language confusion. The most intuitive idea to address this issue is using language-specific parameters in the model training [4, 5, 6, 7]. For instance, in [4], the lower LSTM layers were shared among multiple languages as a common feature extractor, whereas the upper layers were language specific. [5] introduced the informed mixture-of-experts layers in which each expert was assigned to one language.
The adapter-based modeling technique has been successfully used for domain adaptation in computer vision [8], natural language processing [9], and machine translation [10]. Adapters effectively make domain-specific (language-specific in our case) adjustments to the activations in a network. In multilingual ASR, [6] investigated using adapter modules for nine Indian languages in an RNN-T model. Furthermore, the Adapter-and-Adjust framework was introduced in [7], where both language-specific (LSA) and common adapters were applied to an encoder-decoder network. The LSA was focused on adapting the shared network weights to a particular language, whereas the common adapter was used to learn shared knowledge. Nevertheless, the number of LSA parameters grows linearly with the number of languages, which limits large-scale multilingual modeling. Also, the common adapter is trained on imbalanced multilingual data, hence it is biased towards dominant languages.
To address the above-mentioned issues, we propose to merge the language-specific and language-agnostic information into one language-universal adapter (LUA). In our approach, the LSA and LUA are first inserted into the wav2vec 2.0 pre-trained model [11] in the training procedure. The wav2vec 2.0 pre-trained model is used due to its high performance in low-resourced ASR. The LSA captures language-specific features which are then transferred to LUA through online knowledge distillation (KD). This results in the improved robustness of LUA to data imbalance and domain (language) shifts. Note that only LUA is used for inference.
It is generally acknowledged that incorporating the language identifier (LID) is beneficial for multilingual ASR. For instance, [12, 13, 14] showed that training and testing conditioned on LID improves the performance and reduces language confusion. LID can also be applied at different positions of the network. For example, [13] simply concatenated a one-hot vector to the input features of the encoder network, whereas [14] concatenated LID to the multi-head attention inputs in the transformer model.
In this paper, in order to further reduce language confusion, we propose to use LID as multi-head attention prefixes. This is inspired by the prefix-tuning method [15]. Prefix-tuning performs a position-wise modification on the multi-head attention outputs, which is more effective than injecting LID into the input features or hidden features.
## 2 Methods
Figure 1 shows the training framework of the proposed method. It is built on a wav2vec 2.0 pre-trained model with two key novel contributions: (1) Language-universal adapter, and (2) Multi-head attention with LID prefixes.
### Fine-tuning the wav2vec 2.0 model for ASR
Wav2vec 2.0 is a transformer-based model trained to extract contextualized representations from raw audio signals [11]. This model consists of three sub-modules: a feature encoder, a transformer encoder, and a quantization module. The feature encoder is a multi-layer CNN that processes the input signal into low-level features. Based on this representation, the transformer module is applied to produce contextualized representations. The quantization module discretizes the low-level features into a trainable codebook. To train the model, parts of the low-level features are masked from the transformer module, and the objective is to identify the quantized version of the masked features based on their context.
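For reference, this masked-prediction objective of [11] can be written in simplified form as follows, where \(\mathbf{c}_{t}\) is the transformer output at a masked position, \(\mathbf{q}_{t}\) the corresponding quantized target, \(\mathbf{Q}_{t}\) a set containing \(\mathbf{q}_{t}\) and a number of distractors, and \(\kappa\) a temperature (the full pre-training loss also includes a codebook diversity term):

\[\mathcal{L}_{m}=-\log\frac{\exp(\text{sim}(\mathbf{c}_{t},\mathbf{q}_{t})/\kappa)}{\sum_{\tilde{\mathbf{q}}\in\mathbf{Q}_{t}}\exp(\text{sim}(\mathbf{c}_{t},\tilde{\mathbf{q}})/\kappa)},\]

with \(\text{sim}(\cdot,\cdot)\) denoting cosine similarity.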
The learned representations can then be used to improve ASR performance. Fine-tuning requires relatively little labeled data, allowing application to low-resourced ASR. To this end, the model pre-trained as described above is fine-tuned for ASR with speech-transcription paired data. We build a contrastive system using the wav2vec 2.0 pre-trained model, where a randomly initialized linear projection layer is added on top of the contextual encoder and the Connectionist Temporal Classification (CTC) loss [16] is minimized.
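A minimal PyTorch sketch of this fine-tuning setup is given below; the pre-trained encoder is abstracted as a generic module, and the hidden dimension, vocabulary size, blank index and names are illustrative assumptions rather than the exact configuration used in the experiments.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class CTCFineTuner(nn.Module):
    """Pre-trained encoder + randomly initialized linear projection, trained with CTC."""
    def __init__(self, encoder: nn.Module, hidden_dim: int = 768, vocab_size: int = 32):
        super().__init__()
        self.encoder = encoder              # e.g. a wav2vec 2.0 contextual encoder
        self.proj = nn.Linear(hidden_dim, vocab_size)  # randomly initialized output layer

    def forward(self, waveforms):           # waveforms: (batch, samples)
        features = self.encoder(waveforms)  # (batch, frames, hidden_dim)
        return self.proj(features)          # logits: (batch, frames, vocab_size)

def ctc_step(model, waveforms, targets, input_lengths, target_lengths):
    logits = model(waveforms)
    log_probs = F.log_softmax(logits, dim=-1).transpose(0, 1)  # (frames, batch, vocab)
    # blank index 0 is an assumption; it must match the target tokenization
    return F.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0)
```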
### Language-universal adapter
In the proposed method, adapters are inserted in each transformer layer. Similar to [7], we use two types of adapter modules: the Language-Specific Adapter (LSA) and the Language-Universal Adapter (LUA) (Fig. 1(a)). These adapters are inserted next to the multi-head attention and feed-forward blocks. LSA contains separate parameters for each language (Fig. 1(c)), whereas LUA is a single standard adapter shared across languages.
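As a concrete illustration, a single adapter block following Fig. 1(b) could be implemented as below; the bottleneck dimension and the choice of GELU are assumptions made for this sketch.

```
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: LayerNorm -> down-projection -> non-linearity -> up-projection,
    added residually to the hidden states it wraps (cf. Fig. 1(b))."""
    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 256):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim)
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x):                       # x: (batch, frames, hidden_dim)
        return x + self.up(self.act(self.down(self.norm(x))))

class LanguageSpecificAdapters(nn.Module):
    """One adapter per language; only the adapter of the input's language is active (Fig. 1(c))."""
    def __init__(self, num_languages: int, hidden_dim: int = 768, bottleneck_dim: int = 256):
        super().__init__()
        self.adapters = nn.ModuleList(
            [Adapter(hidden_dim, bottleneck_dim) for _ in range(num_languages)])

    def forward(self, x, lang_id: int):
        return self.adapters[lang_id](x)
```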
In [7], the summation of the adapters' outputs is considered the final adapter output. Here, however, we use online knowledge distillation to transfer knowledge from LSA to LUA. This enables LUA to learn both common features and language-specific features (see Algorithm 1). Each training batch consists of two forward passes and one backward pass. In the forward passes, LSA and LUA are activated to produce language-specific and language-agnostic features, respectively. In the backward pass, the model is optimized by minimizing the CTC loss corresponding to each forward pass. In addition to the CTC losses, the mean squared error (MSE) loss between the LSA and LUA feature maps is obtained for knowledge distillation:
\[\mathcal{L}_{ad}=\frac{1}{P}\sum_{i=1}^{P}\text{MSE}(\phi_{i}(\mathbf{x}_{i}), \psi_{i}(\mathbf{W}_{i}\mathbf{x}_{i}+\mathbf{b}_{i})) \tag{1}\]
where \(P\) is the number of positions to insert adapters (\(P\) is equal to twice the number of layers), \(\phi_{i}\) denotes LSA, and \(\psi_{i}\) denotes LUA. An additional linear layer (\(\mathbf{W}_{i}\) and \(\mathbf{b}_{i}\)) is also introduced after LUA and before loss calculation. This is because for \(L\) languages LSA has \(L\) times as many parameters as LUA and it is difficult for LUA to directly learn from LSA.
In addition to \(\mathcal{L}_{ad}\), the MSE loss of the predicted logits of the output layer is used for further knowledge distillation:
\[\mathcal{L}_{out}=\text{MSE}(\mathbf{z}_{\phi},\mathbf{z}_{\psi}) \tag{2}\]
where \(\mathbf{z}_{\phi}\) and \(\mathbf{z}_{\psi}\) are the predicted logits of using the backbone model and LSA, and the backbone model and LUA, re
Figure 1: The language-universal adapter learning framework: (a) The adapter modules are inserted in transformer layers. The final loss contains the adapter-based distillation loss, output-based distillation loss, CTC loss (with LSA), and CTC loss (with LUA); (b) Each adapter consists of a layer normalization, down-projection, non-linearity, and up-projection; (c) LSA, where the corresponding adapter is activated given the input’s language; (d) Multi-head attention with LID prefixes.
Finally, the combination of the CTC losses and the above-mentioned auxiliary losses is used to train the model:
\[\mathcal{L}_{mt}\!=\!-\!\mathrm{log}p_{ctc}(\mathbf{y}|\mathbf{z}_{\phi})\!-\! \mathrm{log}p_{ctc}(\mathbf{y}|\mathbf{z}_{\psi})\!+\!\alpha\mathcal{L}_{ad}\! +\!\beta\mathcal{L}_{out} \tag{3}\]
For decoding, the LSA and its corresponding linear layer are dropped and only the backbone model and LUA are used.
```
Require: \(D\): multilingual data
Require: \(\theta\), \(\phi\) and \(\psi\): backbone model, LSA and LUA
Require: \(\alpha\), \(\beta\) and \(\lambda\): loss weights and step size
  Randomly initialize \(\theta\), \(\phi\) and \(\psi\); copy the wav2vec 2.0 pre-trained encoder parameters into \(\theta\)
  while not done do
    Sample a batch of multilingual utterances \(x\sim D\)
    Compute \(\mathbf{z}_{\phi}\) and \(\phi_{i}(\mathbf{x}_{i})\) by forwarding through \(\theta\) and \(\phi\)
    Compute \(\mathbf{z}_{\psi}\) and \(\psi_{i}(\mathbf{W}_{i}\mathbf{x}_{i}+\mathbf{b}_{i})\) by forwarding through \(\theta\) and \(\psi\)
    Compute the CTC posteriors \(p_{ctc}(\mathbf{y}|\mathbf{z}_{\phi})\) and \(p_{ctc}(\mathbf{y}|\mathbf{z}_{\psi})\)
    Compute the adapter-based distillation loss \(\mathcal{L}_{ad}\) in (1)
    Compute the output-based distillation loss \(\mathcal{L}_{out}\) in (2)
    Compute the multi-task loss \(\mathcal{L}_{mt}\) from \(p_{ctc}(\mathbf{y}|\mathbf{z}_{\phi})\), \(p_{ctc}(\mathbf{y}|\mathbf{z}_{\psi})\), \(\mathcal{L}_{ad}\), \(\mathcal{L}_{out}\), \(\alpha\) and \(\beta\) in (3)
    Update the model \(u\leftarrow u-\lambda\nabla_{u}\mathcal{L}_{mt}\) for each \(u\in\{\theta,\phi,\psi\}\)
  end while
```
**Algorithm 1** Learning Language Universal Adapter
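A possible PyTorch-style sketch of one update of Algorithm 1 is shown below. The `model` interface (returning logits together with the per-position adapter outputs of Eq. (1)) and the decision to stop gradients through the LSA teacher signals are our assumptions; only the loss structure follows Eqs. (1)-(3).

```python
import torch
import torch.nn.functional as F

def training_step(model, batch, alpha=0.1, beta=0.1):
    # Forward pass with the language-specific adapters (LSA) active.
    logits_lsa, feats_lsa = model(batch.audio, adapter="lsa", lang_id=batch.lang_id)
    # Forward pass with the shared language-universal adapter (LUA) active;
    # feats_lua are assumed to already include the extra linear projection (W_i, b_i).
    logits_lua, feats_lua = model(batch.audio, adapter="lua")

    # CTC losses for both passes; targets are character indices.
    lp_lsa = F.log_softmax(logits_lsa, dim=-1).transpose(0, 1)  # (T, B, V)
    lp_lua = F.log_softmax(logits_lua, dim=-1).transpose(0, 1)
    ctc_lsa = F.ctc_loss(lp_lsa, batch.targets, batch.input_lengths, batch.target_lengths)
    ctc_lua = F.ctc_loss(lp_lua, batch.targets, batch.input_lengths, batch.target_lengths)

    # Adapter-based distillation, Eq. (1): mean MSE over the P insertion positions.
    loss_ad = torch.stack([F.mse_loss(lua, lsa.detach())
                           for lsa, lua in zip(feats_lsa, feats_lua)]).mean()
    # Output-based distillation, Eq. (2).
    loss_out = F.mse_loss(logits_lua, logits_lsa.detach())

    # Multi-task loss, Eq. (3).
    return ctc_lsa + ctc_lua + alpha * loss_ad + beta * loss_out
```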
### Multi-head attention with LID prefixes
Here, we propose a novel method to leverage LID by prepending the LID embedding vectors to the multi-head attention keys and values. In this approach, for each training sample, LID is firstly represented as one-hot vectors. It is then parametrized by an embedding layer followed by two linear layers with tanh activations, producing the prefixes:
\[\left[\mathbf{P}_{k},\mathbf{P}_{v}\right]=\mathbf{W}_{2}\tanh\left(\mathbf{ W}_{1}\text{Embed}\left(LID\right)+\mathbf{b}_{1}\right)+\mathbf{b}_{2} \tag{4}\]
Two sets of prefixes \(\mathbf{P}_{k},\mathbf{P}_{v}\in R^{d}\) are concatenated with the original key and value in time. The attention is then performed on the new prefixed keys and values and the computation of each head becomes:
\[\text{head} =\text{Attn}\left(\mathbf{x}\mathbf{W}_{q},\text{concat}\left( \mathbf{P}_{k},\mathbf{x}\mathbf{W}_{k}\right),\text{concat}\left(\mathbf{P} _{v},\mathbf{x}\mathbf{W}_{v}\right)\right)\] \[=\text{softmax}\left(\mathbf{x}\mathbf{W}_{q}\text{concat}\left( \mathbf{P}_{k},\mathbf{x}\mathbf{W}_{k}\right)^{T}\right)\begin{pmatrix} \mathbf{P}_{v}\\ \mathbf{x}\mathbf{W}_{v}\end{pmatrix}\] \[=(1-\gamma(\mathbf{x}))\text{Attn}\left(\mathbf{x}\mathbf{W}_{q},\mathbf{x}\mathbf{W}_{k},\mathbf{x}\mathbf{W}_{v}\right)+\] \[\gamma(\mathbf{x})\text{Attn}\left(\mathbf{x}\mathbf{W}_{q}, \mathbf{P}_{k},\mathbf{P}_{v}\right) \tag{5}\]
where \(N_{h}\) denotes the number of heads, \(\mathbf{P}_{k}^{(i)}\), \(\mathbf{P}_{v}^{(i)}\in R^{d/N_{h}}\) denote the prefixes, and \(\mathbf{W}_{q}^{(i)}\), \(\mathbf{W}_{k}^{(i)}\), \(\mathbf{W}_{v}^{(i)}\) are used to project inputs to queries, keys, and values respectively. Furthermore, \(\gamma(\mathbf{x})\) is a scalar that represents the sum of normalized attention weights on the prefixes:
\[\gamma(\mathbf{x})\!=\!\frac{\sum_{i}\exp\left(\mathbf{x}\mathbf{W}_{q} \mathbf{P}_{k}^{T}\right)_{i}}{\sum_{i}\exp\left(\mathbf{x}\mathbf{W}_{q} \mathbf{P}_{k}^{T}\right)_{i}\!+\!\sum_{j}\exp\left(\mathbf{x}\mathbf{W}_{q} \mathbf{W}_{k}^{T}\mathbf{x}^{T}\right)_{j}} \tag{6}\]
Therefore, this method is equivalent to performing a position-wise modification on the multi-head attention outputs. Once training is complete, the embedding and linear layers are dropped, and only \(\mathbf{P}_{k}\) and \(\mathbf{P}_{v}\) of each language are stored.
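A sketch of the LID prefix construction of Eq. (4) follows; the dimensions mirror the configuration of Subsection 3.2, while the module name, the way the output is split into key and value prefixes, and the prefix length of one are our assumptions.

```python
import torch
import torch.nn as nn

class LIDPrefix(nn.Module):
    """Re-parameterize a language id into key/value prefixes for multi-head attention (Eq. 4)."""
    def __init__(self, num_languages, d_model=768, d_hidden=800):
        super().__init__()
        self.embed = nn.Embedding(num_languages, d_hidden)
        self.proj1 = nn.Linear(d_hidden, d_hidden)          # W_1, b_1
        self.proj2 = nn.Linear(d_hidden, 2 * d_model)       # W_2, b_2 -> [P_k, P_v]

    def forward(self, lang_id):
        h = torch.tanh(self.proj1(self.embed(lang_id)))
        p_k, p_v = self.proj2(h).chunk(2, dim=-1)
        return p_k, p_v

# Inside attention, the prefixes are prepended along the time axis, e.g.
#   keys   = torch.cat([p_k.unsqueeze(1), x @ W_k], dim=1)
#   values = torch.cat([p_v.unsqueeze(1), x @ W_v], dim=1)
# After training, only the per-language p_k and p_v need to be stored.
```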
## 3 Experimental Setup
### Datasets
We use the BABEL dataset and six languages are selected including Pashto (ps), Vietnamese (vi), Haitian (ht), Lao (lo), Kurdish (ku), and Tok Pisin (tp) [17]. The dev folder of the BABEL dataset is used as the test set since "eval" has not been open-sourced. Also, 10% of the training set is used as dev data. Samples from all languages are pooled and shuffled to form the multilingual training set. All audio is resampled to 16kHz to satisfy the wav2vec 2.0 input requirements. The statistics of the dataset are presented in Table 1.
### Model configuration
The backbone wav2vec 2.0 model has the same configuration as in [18]. We fine-tune the Base and Large models which were pre-trained on 960h Librispeech data and data from 53 languages, respectively. The feature encoder is frozen during the fine-tuning process. Each LSA and LUA in the same transformer layer has the same configuration. For both Base and Large models, the adapter dimension is set to 256, and adapters and LID prefixes are applied to the top 6 transformer layers. The LID prefix re-parameterization module contains two linear projections with the output dimensions of 800 and 768(Base)/1024(Large), respectively. Characters of all target languages are concatenated to create the output vocabulary for multilingual training. Extra tokens are appended to the outputs including an unknown token, a padding token, a blank token, and a mask token, yielding 285 output nodes.
For the hyper-parameters of the multi-task loss calculation, \(\alpha\) and \(\beta\) are set to 0.1(Base)/0.05(Large) and 0.1(Base)/0.1(Large), respectively. All models are trained with a total of 140,000 updates, a learning rate of 1e-4, and a batch size of 25,600,000 tokens using the Fairseq toolkit. For rescoring, the 4-gram KenLM [19] is trained using the transcriptions from the training set for each language. Character Error Rate (CER) is given as a performance metric.
\begin{table}
\begin{tabular}{l c c c} \hline
**Language** & **Train** & **Dev** & **Test** \\ \hline Pashto & 70.6 & 7.6 & 10.0 \\ Vietnamese & 79.1 & 8.6 & 11.0 \\ Haitian & 59.8 & 7.1 & 10.6 \\ Lao & 58.5 & 7.0 & 10.6 \\ Kurdish & 37.4 & 4.4 & 10.2 \\ Tok Pisin & 35.4 & 4.0 & 10.0 \\ \hline \end{tabular}
\end{table}
Table 1: Data statistics per language (in hours).
The contrastive systems are as follows. **Mono**: the pre-trained model is fine-tuned as described in Subsection 2.1, using data from each language separately [18]. **Multi**: the model is fine-tuned directly on the pooled multilingual training set. The training hyperparameters are the same as described in Subsection 3.2.
## 4 Results
Table 2 presents the CER performance of the proposed methods and the contrastive systems. For the Base model, we first compare the monolingual (System A0) and multilingual (System A1) models. Table 2 shows that the CERs of the monolingual and multilingual systems are close to each other except for Tok Pisin. For Tok Pisin, the CER of the multilingual baseline is much worse than the monolingual baseline. This can be attributed to the imbalanced training data [6]. It is also seen that leveraging the LID prefixes in System A2 significantly reduces the CER (from 26.8% to 24.7% on average). Comparing the performance of models with LUA (System A3) and LSA (System A4) shows that LSA leads to lower CER. Table 3 also indicates that LSA increases the number of parameters by 31% (in contrast, LUA increases it by 2.5%).
System A5 (without KD) uses both LUA and LSA in training and only LUA for decoding. Our results show that System A5 is outperformed by System A4 (which uses LSA for training and decoding). However, after incorporating the KD losses \(\mathcal{L}_{ad}\) and \(\mathcal{L}_{out}\), System A7 outperforms System A4 and achieves an absolute CER reduction of 3.3% compared to the multilingual baseline (System A1).
Furthermore, for the Large model, the proposed framework (System B2) outperforms the multilingual baseline (System B1), with a 2.8% absolute performance gain. The monolingual system (System B0) can obtain a slightly better result than the proposed method at a cost of a much larger model size, as shown in Table 3.
## 5 Conclusions
This paper introduced a language-universal adapter learning framework for multilingual speech recognition, based on online knowledge distillation and multi-head attention with LID prefixes. Experiments confirmed that the proposed framework outperforms conventional multilingual approaches.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline
**Model** & **ps** & **vi** & **ht** & **lo** & **ku** & **tp** & **avg** \\ \hline \hline KD & **22.1** & **20.4** & **22.1** & **22.4** & **37.3** & **16.4** & **23.5** \\ Sum & 23.4 & 22.7 & 23.5 & 24.1 & 39.9 & 17.7 & 25.2 \\ \hline \end{tabular}
\end{table}
Table 4: CER of knowledge distillation-based (KD) and summation-based (Sum) adapters.
\begin{table}
\begin{tabular}{l c c c c} \hline
**Model** & **Mono** & **Multi** & **LSA** & **LUA** \\ \hline Base & 94M*6 & 94M & 123M & 97M \\ Large & 316M*6 & 316M & 354M & 322M \\ \hline \end{tabular}
\end{table}
Table 3: The number of parameters of the baseline models and the proposed framework.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline
**Model** & **ps** & **vi** & **ht** & **lo** & **ku** & **tp** & **avg** \\ \hline \hline A0 Mono & 23.0 & 22.1 & 22.1 & 24.8 & 40.2 & 17.8 & 25.0 \\ \hline A1 Multi & 23.6 & 22.0 & 23.5 & 23.8 & 38.5 & 29.4 & 26.8 \\ A2 A1 + LID Prefixes & 23.5 & 21.6 & 23.5 & 23.7 & 38.9 & 17.2 & 24.7 \\ A3 A2 + LUA & 23.5 & 21.9 & 23.9 & 23.2 & 38.6 & 17.3 & 24.7 \\ A4 A2 + LSA & 22.4 & 21.3 & 22.5 & 23.0 & 38.1 & 16.9 & 24.0 \\ A5 A2 + LSA + LUA & 23.3 & 21.6 & 23.7 & 23.2 & 38.8 & 16.9 & 24.6 \\ A6 A5 + \(\mathcal{L}_{ad}\) & 22.6 & 21.0 & 22.8 & 23.0 & 38.7 & 16.8 & 24.2 \\ A7 A6 + \(\mathcal{L}_{out}\) & **22.1** & **20.4** & **22.1** & **22.4** & **37.3** & **16.4** & **23.5** \\ \hline \end{tabular}
\end{table}
Table 2: CER of the proposed framework and the baselines. Here, System A3, A5, A6 and A7 use LUA for decoding, and System A4 uses LSA for decoding.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline
**Model** & **ps** & **vi** & **ht** & **lo** & **ku** & **tp** & **avg** \\ \hline \hline Prefixes & **22.1** & **20.4** & **22.1** & **22.4** & **37.3** & **16.4** & **23.5** \\ Input & 22.5 & 20.9 & 22.7 & 22.8 & 38.4 & 17.6 & 24.2 \\ Top-6 & 22.3 & 20.9 & 22.5 & 22.6 & 37.9 & 16.6 & 23.8 \\ Attention & 23.3 & 21.1 & 23.4 & 23.4 & 38.7 & 17.5 & 24.6 \\ \hline \end{tabular}
\end{table}
Table 5: CER of different approaches to leverage LID. |
2309.12508 | A Diffusion-Model of Joint Interactive Navigation | Simulation of autonomous vehicle systems requires that simulated traffic
participants exhibit diverse and realistic behaviors. The use of prerecorded
real-world traffic scenarios in simulation ensures realism but the rarity of
safety critical events makes large scale collection of driving scenarios
expensive. In this paper, we present DJINN - a diffusion based method of
generating traffic scenarios. Our approach jointly diffuses the trajectories of
all agents, conditioned on a flexible set of state observations from the past,
present, or future. On popular trajectory forecasting datasets, we report state
of the art performance on joint trajectory metrics. In addition, we demonstrate
how DJINN flexibly enables direct test-time sampling from a variety of valuable
conditional distributions including goal-based sampling, behavior-class
sampling, and scenario editing. | Matthew Niedoba, Jonathan Wilder Lavington, Yunpeng Liu, Vasileios Lioutas, Justice Sefas, Xiaoxuan Liang, Dylan Green, Setareh Dabiri, Berend Zwartsenberg, Adam Scibior, Frank Wood | 2023-09-21T22:10:20Z | http://arxiv.org/abs/2309.12508v2 | # A Diffusion-Model of Joint Interactive Navigation
###### Abstract
Simulation of autonomous vehicle systems requires that simulated traffic participants exhibit diverse and realistic behaviors. The use of prerecorded real-world traffic scenarios in simulation ensures realism but the rarity of safety critical events makes large scale collection of driving scenarios expensive. In this paper, we present DJINN - a diffusion based method of generating traffic scenarios. Our approach jointly diffuses the trajectories of all agents, conditioned on a flexible set of state observations from the past, present, or future. On popular trajectory forecasting datasets, we report state of the art performance on joint trajectory metrics. In addition, we demonstrate how DJINN flexibly enables direct test-time sampling from a variety of valuable conditional distributions including goal-based sampling, behavior-class sampling, and scenario editing.
## 1 Introduction
Accurate simulations are critical to the development of autonomous vehicles (AVs) because they facilitate the safe testing of complex driving systems [15]. One of the most popular methods of simulation is virtual replay [46], in which the performance of autonomous systems is evaluated by replaying previously recorded traffic scenarios. Although virtual replay is a valuable tool for AV testing, recording diverse scenarios is expensive and time consuming, as safety-critical traffic behaviors are rare [17]. Methods for producing synthetic traffic scenarios of specific driving behaviors are therefore essential to accelerate AV development and improve simulation quality.
Producing these synthetic traffic scenarios involves generating the joint future motion of all the agents in a scene, a task which is closely related to the problem of trajectory forecasting. Due to the complexity of learning a fully autonomous end-to-end vehicle controller, researchers often opt to split the problem into three main tasks [52]: perception, trajectory forecasting, and planning. In trajectory forecasting, the future positions of all agents are predicted up to a specified future time based on the agent histories and the road information. Due to the utility of trajectory forecasting models in autonomous vehicle systems along with the availability of standard datasets and benchmarks to measure progress [4; 53], a variety of effective trajectory forecasting methods are now available. Unfortunately, most methods produce _deterministic_ sets of trajectory forecasts _per-agent_[47; 9] which are difficult to combine to produce realistic joint traffic scenes [30].
Generative models of driving behavior have been proposed as an alternative to deterministic trajectory forecasting methods for traffic scene generation [40; 46]. These models re-frame trajectory forecasting as modeling the joint distribution of future agent states conditioned on past observations and map context. However, given that the distribution of traffic scenes in motion forecasting datasets is similar to real-world driving, modelling the data distribution does not ensure that models will generate rare, safety-critical events.
To alleviate these issues we propose DJINN, a model which generatively produces joint traffic scenarios with _flexible conditioning_. DJINN is a diffusion model over the joint states of all agents in the scene. Similar to [30], our model is conditioned on a flexible set of agent states. By modifying the conditioning set at test-time, DJINN is able to draw traffic scenarios from a variety of conditional distributions of interest. These distributions include sampling scenes conditioned on specific goal states or upsampling trajectories from sparse waypoints. Additionally, the joint diffusion structure of DJINN enables test-time diffusion guidance. Utilizing these methods enables further control over the conditioning of traffic scenes based on behavior modes, agent states, or scene editing.
We evaluate the quality of sampled trajectories with both joint and ego-only motion forecasting on the Argoverse [4] and INTERACTION [53] datasets. We report excellent ego-only motion forecasting and outperform Scene Transformer on joint motion forecasting metrics. We further demonstrate both DJINN's flexibility and compatibility with various forms of test-time diffusion guidance by generating goal-directed samples, examples of cut-in driving behaviors, and editing replay logs.
## 2 Related Work
**Trajectory Forecasting:** A wide variety of methods have been proposed to address the problem of trajectory forecasting. Two attributes which divide this area of work are the output representation type and the agents for which predictions are made. The most common class of models deterministically predict the distribution of ego agent trajectories using a weighted trajectory set either with or without uncertainties. Due to the applicability of this representation as the input for real-time self-driving planners, there are numerous prior methods of this type. Some approaches rasterize the scene into a birdview image and use CNNs to predict a discrete set of future trajectories for the ego agent [5; 3; 32]. The convolutional architecture of these methods captures local information around the agent well, but the birdview image size and resolution limit the ability to capture high speed and long-range interactions. To address these challenges, other prior approaches encode agent states directly either by using RNNs [47; 39; 27], polyline encoders [7; 9] or 1D convolutions [24]. Agent features can be combined with roadgraph information in a variety of ways including graph convolutional networks [24; 1] or attention [29; 27].
To control the distribution of predicted trajectories, several methods have utilized mode or goal conditioning. One approach is to directly predict several goal targets before regressing trajectories to those targets [54; 9; 51; 8]. An alternate approach is to condition on trajectory prototypes [3] or latent embeddings [47].
Predicting joint traffic scenes using per-agent marginal trajectory sets is challenging due to the exponential growth of trajectory combinations. Recent approaches aim to rectify this by producing joint weighted sets of trajectories for all agents in a scene. M2I [45] generates joint trajectory sets by producing "reactor" trajectories which are conditioned on marginal "influencer" trajectories. Scene Transformer [30], which uses a similar backbone architecture to our method, uses a transformer [48] network to jointly produce trajectory sets for all agents in the scene.
As an alternative to deterministic predictions, multiple methods propose generative models of agent trajectories. A variety of generative model classes have been employed including Normalizing Flows [36], GANs [10; 38] or CVRNNs [40; 46]. Joint generative behavior models can either produce entire scenarios in one shot [10; 38; 36], or produce scenarios by autoregressively "rolling-out" agent trajectories [40; 46].
**Diffusion Models:** Diffusion models, proposed by Sohl-Dickstein et al. [41] and improved by Ho et al. [12], are a class of generative models which approximate the data distribution by reversing a forward process that gradually adds noise to the data. The schedule of noise addition can be a discrete process or can be represented by a continuous-time differential equation [44; 21]. We utilize the diffusion parameterization introduced in EDM [21] in our work for its excellent performance and separation of training and sampling procedures.
This class of models have shown excellent sample quality in a variety of domains including images [12; 6], video [11; 14] and audio [22]. In addition, diffusion models can be adapted at test-time through various conditioning mechanisms. Classifier [6] and classifier-free guidance [13] have enabled powerful conditional generative models such as text conditional image models [34; 37] while editing techniques [26; 31] have enabled iterative refinement of generated samples.
One recent application of diffusion models is planning. Diffuser [20] uses diffusion models to generate trajectories for offline reinforcement learning tasks. They condition their samples using classifier guidance to achieve high rewards and satisfy constraints. Trace and Pace [35] utilizes diffusion planning for guided pedestrian motion planning. In the vehicle planning domain, Controllable Traffic Generation (CTG) [55] builds on Diffuser, using diffusion models to generate trajectories which satisfy road rule constraints. Like CTG, our method also models the future trajectories of road users using diffusion models. However, our approach differs from CTG in terms of output and our methods of conditioning. In CTG, marginal per-agent trajectory samples are combined into a joint scene representation by "rolling-out" a portion of each agent's trajectory before drawing new samples per-agent. By contrast, DJINN models the full joint distribution of agent trajectories in one shot, with no re-planning or roll-outs required. The authors of CTG condition their model exclusively on the past states of other agents and the map, and use classifier-guidance to condition their samples to follow road rules. In our method, we demonstrate conditioning on scene semantics via classifier guidance _as well as_ conditioning on arbitrary state observations, including the past or future states of each agent, and control the strength of conditioning using classifier-free guidance as demonstrated in Fig. 1.
## 3 Background
### Problem Formulation
Our work considers traffic scenarios consisting of \(A\) agents across \(T\) discrete timesteps driving on a roadway described by a set of roadgraph features \(\mathbf{M}\). Each agent \(a\in\{1,\ldots,A\}\) in the scene at time \(t\in\{1,\ldots T\}\) is represented by a state \(\mathbf{s}_{t}^{a}=\{x_{t}^{a},y_{t}^{a},\theta_{t}^{a}\}\) consisting of its 2D position \((x_{t},y_{t})\) and heading \(\theta_{t}\). The joint representation of the scene \(\mathbf{x}\) is the combination of all agents across all timesteps \(\mathbf{x}=\{s_{t}^{a}|a\in\{1,\ldots,A\}\,,t\in\{1,\ldots,T\}\}\in\mathbb{R}^ {A\times T\times 3}\). We assume scenes are distributed according to an unknown distribution \(p_{data}(\mathbf{x})\).
We introduce a model which is conditioned on the map features \(\mathbf{M}\) and can moreover be flexibly conditioned on arbitrary set of observed agent states. For the latter purpose, we consider a boolean variable \(\mathcal{O}\in\{0,1\}^{A\times T}\). We denote that a state in the scene is observed if \(\mathcal{O}_{t}^{a}=1\). Using \(\mathcal{O}\), we partition the scene into two components. The observed portion of the scene is defined as \(\mathbf{x}_{obs}=\{\mathbf{s}_{t}^{a}|\mathbf{s}_{t}^{a}\in\mathbf{x},\mathcal{ O}_{t}^{a}=1\}\) while the unobserved, latent portion is \(\mathbf{x}_{lat}=\mathbf{x}\setminus\mathbf{x}_{obs}\). Figure 1 shows five choices for \(\mathcal{O}\) and their corresponding tasks. Our ultimate goal is to learn a conditional distribution over the set of all latent agent states \(\mathbf{x}_{lat}\) given the observed states \(\mathbf{x}_{obs}\) and the map \(\mathbf{M}\), by modelling \(p(\mathbf{x}_{lat}|\mathbf{x}_{obs},\mathbf{M})\). Using this probabilistic framework, we can represent conditional
Figure 1: **Top:** Five example observation masks \(\mathcal{O}\) demonstrating potential conditioning inputs to DJINN. Each element of each mask corresponds to the boolean value of \(\mathcal{O}\) for that agent state. Individual agents are shown in rows, with timesteps as columns. **Bottom:** Generated traffic scenes corresponding to the type of observation masks above.
distributions corresponding to various trajectory forecasting tasks by modifying the observation mask \(\mathcal{O}\) and the corresponding conditioning set \(\mathbf{x}_{obs}\).
### Diffusion Models
Diffusion models [41; 12] are a powerful class of generative models built upon a diffusion process which iteratively adds noise to the data. In the continuous time formulation of this process [44; 21], this iterative addition is described by a stochastic differential equation (SDE)
\[d\mathbf{x}_{\tau}=\mu(\mathbf{x}_{\tau},\tau)d\tau-\sigma(\tau)d\mathbf{w}. \tag{1}\]
Here, \(\tau\in[0,\tau_{max}]\) where \(\tau_{max}\) is a fixed, large constant, \(\mu(\mathbf{x}_{\tau},\tau)\) is the drift function and \(\sigma(\tau)\) is the diffusion coefficient which scales standard Brownian motion \(\mathbf{w}\). Note that our work has two notions of time. Throughout we will use \(t\) to denote the "scenario time" and \(\tau\) to represent "diffusion time". We express the marginal distribution of \(\mathbf{x}_{\tau}\) at diffusion time \(\tau\) as \(p(\mathbf{x}_{\tau})\), with \(p(\mathbf{x}_{0})\) corresponding to the data distribution \(p_{data}(\mathbf{x})\). Typically, \(\mu(\mathbf{x}_{\tau},\tau)\), \(\sigma(\tau)\), and \(\tau_{max}\) are chosen such the conditional density \(p(\mathbf{x}_{\tau}|\mathbf{x}_{0})\) is available in closed form and that \(p(\mathbf{x}_{\tau_{max}})\) approximates a tractable Gaussian distribution \(\pi(\mathbf{x})\). Notably, for every diffusion SDE, there exists a corresponding probability flow (PF) ordinary differential equation (ODE) [44] whose marginal probability densities \(p(\mathbf{x}_{\tau})\) match the densities of Eq. (1)
\[d\mathbf{x}_{\tau}=\left[\mu(\mathbf{x}_{\tau},\tau)-\frac{1}{2}\sigma(\tau) ^{2}\nabla_{x}\log p(\mathbf{x}_{\tau})\right]d\tau. \tag{2}\]
Using the PF ODE, samples are generated from a diffusion model by integrating Eq. (2) from \(\tau=\tau_{max}\) to \(\tau=0\) with initial condition \(\mathbf{x}_{\tau_{max}}\sim\pi(\mathbf{x}_{\tau_{max}})\) using an ODE solver. Typically integration is stopped at some small value \(\epsilon\) for numerical stability. Solving this initial value problem requires evaluation of the _score function_\(\nabla_{\mathbf{x}_{\tau}}\log p(\mathbf{x}_{\tau})\). Since \(p(\mathbf{x}_{\tau})\) is not known in closed form, diffusion models learn an approximation of the score function \(\mathbf{s}_{\theta}(\mathbf{x}_{\tau},\tau)\approx\nabla_{\mathbf{x}_{\tau}} \log p(\mathbf{x}_{\tau})\) via score matching [16; 43; 44].
A useful property of diffusion models is the ability to model conditional distributions \(p(\mathbf{x}_{0}|y)\) at test-time using guidance. Given some conditional information \(y\), the key idea of guidance is to replace the score function in the PF ODE with an approximate _conditional_ score function \(\nabla_{\mathbf{x}_{\tau}}\log p(\mathbf{x}_{\tau}|y)\).
By using the gradient of a pretrained classifier \(p_{\phi}(y|\mathbf{x}_{\tau})\), classifier guidance [6] approximates the conditional score function through a linear combination of the unconditional score function and the classifier gradient. The parameter \(\alpha\) controls the strength of the guidance:
\[\nabla_{\mathbf{x}_{\tau}}\log p(\mathbf{x}_{\tau}|y)\approx\mathbf{s}_{ \theta}(\mathbf{x}_{\tau},\tau)+\alpha\nabla_{\mathbf{x}_{\tau}}\log p_{\phi} (y|\mathbf{x}_{\tau}). \tag{3}\]
One major drawback of classifier guidance is the need to train an external classifier. Instead, classifier-free guidance [13] utilizes a conditional score network \(\mathbf{s}_{\theta}(\mathbf{x}_{\tau},\tau,y)\). A weighted average of the conditional and unconditional scores is then used to estimate the conditional score function.
\[\nabla_{\mathbf{x}_{\tau}}\log p(\mathbf{x}_{\tau}|y)\approx\lambda\mathbf{s}_ {\theta}(\mathbf{x}_{\tau},\tau,y)+(1-\lambda)\mathbf{s}_{\theta}(\mathbf{x}_{ \tau},\tau). \tag{4}\]
Here \(\lambda\) is a scalar parameter which controls the strength of the guidance. In both cases, the approximate conditional score can be substituted into Eq. (2) to draw conditional samples from \(p(\mathbf{x}_{0}|y)\).
## 4 Djinn
Our approach models the joint distribution of agent states \(p(\mathbf{x}_{lat}|\mathbf{x}_{obs},\mathbf{M})\) conditioned on a set of observed states and the map context. For this purpose, we employ a diffusion model which diffuses directly over \(\mathbf{x}_{lat}\) - the unobserved states of each agent in the scene for \(t=\{1,\dots,T\}\). An important aspect of our method is the choice of observation mask \(\mathcal{O}\) and observation set \(\mathbf{x}_{obs}\) on which we condition. For this purpose we introduce a distribution over observation masks \(p(\mathcal{O})\) which controls the tasks on which we train our model.
In the design of our diffusion process, we follow the choices from EDM [21], setting \(\mu(\mathbf{x}_{lat,\tau},\tau)=\mathbf{0}\) and \(\sigma(\tau)=\sqrt{2\tau}\) from Eq. (2). We also utilize their score function parameterization
\[\nabla_{\mathbf{x}_{lat,\tau}}\log p(\mathbf{x}_{lat,\tau}|\mathbf{x}_{obs},\mathbf{ M},\mathbf{c})=\frac{D_{\theta}\left(\mathbf{x}_{lat,\tau},\mathbf{x}_{obs}, \mathbf{M},\mathbf{c},\tau\right)-\mathbf{x}_{lat,\tau}}{\tau^{2}}. \tag{5}\]
Here \(D_{\theta}\) is a neural network which approximates the latent portion of the noise free data \(\mathbf{x}_{lat,0}\). In addition to \(\mathbf{x}_{lat,\tau}\) and \(\tau\), in our work \(D_{\theta}\) also receives the map context \(\mathbf{M}\), the clean observed states \(\mathbf{x}_{obs}\) and \(c\), a collection of unmodelled agent features per observed agent timestep such as velocity, vehicle size, or agent type. We train our network on a modification of the objective from EDM [21]
\[\mathbb{E}_{\mathbf{x}_{0},\tau,\mathcal{O},\mathbf{x}_{lat,\tau}}\|D_{\theta }\left(\mathbf{x}_{lat,\tau},\mathbf{x}_{obs},\mathbf{M},\mathbf{c},\tau\right) -\mathbf{x}_{lat,0}\|_{2}^{2}. \tag{6}\]
Here, \(\mathbf{x}_{0}\sim p_{data}(\mathbf{x})\), \(\mathbf{x}_{\tau}\sim p(\mathbf{x}_{\tau}|\mathbf{x}_{0})=\mathcal{N}( \mathbf{x},\tau^{2}\mathbf{I})\) and \(\mathcal{O}\sim p(\mathcal{O})\). We compute our loss over \(\tau\sim p_{train}\) - a log normal distribution which controls the variance of the noise added to the data. We set the mean and variance of \(p_{train}\) according to [21].
We use the Heun \(2^{\text{nd}}\) order sampler from [21] to sample traffic scenarios with no changes to the reported hyperparameters. Empirically, we found that deterministic sampling, corresponding to integrating the PF ODE, leads to higher quality samples than using an SDE solver. Unless otherwise noted all samples are produced using \(50\) iterations of the ODE solver, which produces the highest quality samples as measured by ego and joint minADE and minFDE.
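For reference, a minimal sketch of the deterministic Heun sampler used above is given below, written directly in terms of the denoiser \(D_{\theta}\) and the score parameterization of Eq. (5). The function signature and the assumption that `sigmas` is a decreasing list of 50 noise levels (EDM's published schedule in practice) are illustrative rather than a definitive implementation.

```python
import torch

@torch.no_grad()
def heun_sample(denoiser, x_obs, M, c, shape, sigmas):
    """Integrate the PF ODE dx/dtau = (x - D(x, tau)) / tau from sigmas[0] down to ~0."""
    x = torch.randn(shape) * sigmas[0]                       # x at tau_max
    for tau, tau_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoiser(x, x_obs, M, c, tau)) / tau        # Euler slope at tau
        x_euler = x + (tau_next - tau) * d
        if tau_next > 0:                                     # second-order (Heun) correction
            d_next = (x_euler - denoiser(x_euler, x_obs, M, c, tau_next)) / tau_next
            x = x + (tau_next - tau) * 0.5 * (d + d_next)
        else:
            x = x_euler
    return x
```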
**Input Representation** An important choice for trajectory forecasting models is the reference frame for the agent states. In our work, the diffused agent states and observations \(\mathbf{x}_{obs}\) are centered around an "ego agent," which is often specified in trajectory forecasting datasets as the primary agent of interest. We transform \(\mathbf{x}_{0}\) such that the scene is centered on the last observed position of this arbitrary "ego agent" and rotated so the last observed heading of the ego agent is zero. We scale the positions and headings of all agents in each ego-transformed scene to a standard deviation of \(0.5\).
We represent the map \(\mathbf{M}\) as an unordered collection of polylines representing the center of each lane. Polylines are comprised of a fixed number of 2D points. We split longer polylines into multiple segments and pad shorter polylines to the fixed length. Each point has a boolean variable indicating whether the element is padding. Polyline points are represented in the same reference frame as the agent states and are scaled by the same amount as the agent position features.
**Model Architecture** Our score estimator network \(D_{\theta}\) is parameterized by a transformer-based architecture similar to [30]. The network operates on a fixed \([A,T,F]\) shaped feature tensor composed of one \(F\) dimensional feature vector per agent timestep. We use sinusoidal positional embeddings [48] to produce initial feature tensors. Noisy and observed agent states \(\mathbf{x}_{\tau}\), \(\mathbf{x}_{obs}\), the time indices \(t=\{1,\dots,T\}\), and diffusion step \(\tau\) are all embedded into \(F\) dimensional embeddings. \(\mathbf{x}_{lat,\tau}\) and \(\mathbf{x}_{obs}\) are padded with zeros for observed and latent states respectively prior to embedding. A shared MLP projects the concatenated positional embeddings into a \(F\) dimensional vector for each agent.
The main trunk of the network is comprised of a series of transformer layers [48]. Attention between all pairs of feature vectors is factorized into alternating time and agent transformer layers. In time transformer layers, self-attention is performed per-agent across each timestep of that agent's trajectory, allowing for temporal consistency along a trajectory. In agent transformer layers, self-attention is computed across all agents at a given time, updating each agent's features with information about the other agents at that time. We encode the map information \(\mathbf{M}\) with a shared MLP that consumes flattened per-point and per-lane features to produce a fixed size embedding per lane. Cross attention between the collection of lane embeddings and agent states incorporates map information into the agent state features. Our network is comprised of 15 total transformer layers with a fixed feature dimension of 256. We use an MLP decoder after the final transformer layer to produce our estimate of \(\mathbf{x}_{lat,0}\). A full representation of our architecture is available in Appendix A.
## 5 Guidance for Conditional Scene Generation
So far, we have outlined our method for generating joint traffic scenes using DJINN. Next, we describe how the diffusion nature of DJINN enables fine-grained control over the generation and modification of driving scenarios.
### Classifier-free Guidance
In Scene Transformer [30], a masked sequence modelling framework is introduced for goal-directed and agent-reactive scene predictions. One limitation of this approach is that conditioning is performed on precise agent states while future agent states or goals are usually uncertain. We mitigate this limitation through the use of classifier-free guidance.
We assume access to a set of precise observations \(\mathbf{x}_{obs}\), and some set of additional agent states \(\mathbf{x}_{cond}\) on which we wish to condition our sample. For instance, \(\mathbf{x}_{cond}\) may include agent goals upon which we wish to condition. Let \(\mathbf{x}^{\prime}_{obs}=\{\mathbf{x}_{obs}\cup\mathbf{x}_{cond}\}\). Based on Eq. (4), the conditional score is approximated through a weighted average of the score estimate conditioned on \(\mathbf{x}_{obs}\) and the estimate conditioned on \(\mathbf{x}^{\prime}_{obs}\):
\[\nabla_{\mathbf{x}_{lat,\tau}}\log p(\mathbf{x}_{lat,\tau}| \mathbf{x}^{\prime}_{obs})\approx \lambda\frac{D_{\theta}\left(\mathbf{x}_{lat,\tau},\mathbf{x}^{ \prime}_{obs},\mathbf{M},\mathbf{c},\tau\right)-\mathbf{x}_{lat,\tau}}{\tau^{2}} \tag{7}\] \[+(1-\lambda)\frac{D_{\theta}\left(\mathbf{x}_{lat,\tau},\mathbf{ x}_{obs},\mathbf{M},\mathbf{c},\tau\right)-\mathbf{x}_{lat,\tau}}{\tau^{2}}.\]
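In practice, Eq. (7) amounts to two denoiser evaluations per solver step, one conditioned on \(\mathbf{x}^{\prime}_{obs}\) and one on \(\mathbf{x}_{obs}\); a sketch with illustrative argument names:

```python
def guided_score(denoiser, x_lat, x_obs, x_obs_goal, M, c, tau, lam):
    """Classifier-free guided score estimate of Eq. (7); lam is the guidance weight lambda."""
    s_cond = (denoiser(x_lat, x_obs_goal, M, c, tau) - x_lat) / tau**2
    s_base = (denoiser(x_lat, x_obs, M, c, tau) - x_lat) / tau**2
    return lam * s_cond + (1.0 - lam) * s_base
```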
To facilitate classifier-free conditioning, we train DJINN on a \(p(\mathcal{O})\) representing varied conditioning tasks. These tasks include conditioning on agent history, agent goals, windows of agent states, and random agent states. A full overview of our task distribution is given in Appendix B.
### Classifier Guidance
Many driving behaviors of individual or multiple agents can be categorized by a class \(y\) based on their geometry, inter-agent interactions or map context. Examples of classes include driving maneuvers such as left turns, multi-agent behaviors such as yielding to another agent, or constraints such as trajectories which follow the speed limit. DJINN uses classifier guidance to condition scenes on these behavior classes. Given a set of example scenes corresponding to a behavior class \(y\), we train a classifier to model \(p_{\phi}(y|\mathbf{x})\). Using Eq. (3) we approximate the conditional score for conditional sampling. Importantly, due to the joint nature of our representation, classifiers for per-agent, multi-agent or whole-scene behaviors can all be used to condition sampled traffic scenes.
### Scenario Editing
One benefit of sampling traffic scenes at once instead of autoregressively is the ability to edit generated or recorded traffic scenarios through stochastic differential editing [26]. Given a traffic scene \(\mathbf{x}\), a user can manually modify the trajectories in the scene to produce a "guide" scene \(\mathbf{x}^{\prime}\) which approximates the desired trajectories in the scene.
\begin{table}
\end{table}
Table 1: Ego-only motion forecasting performance on Argoverse and INTERACTION datasets. minADE and minFDE metrics on both datasets indicate that DJINN produces ego samples which closely match the distribution of ego agent trajectories.
\begin{table}
\end{table}
Table 2: Ego-only and joint metrics comparing DJINN to a jointly trained Scene Transformer model on the Argoverse validation set. DJINN produces better joint samples than SceneTransformer when measured by minSceneADE and minSceneFDE.
The guide scene is used to condition the start of a truncated reverse diffusion process by sampling \(\mathbf{x}_{\tau_{edit}}\sim\mathcal{N}(\mathbf{x}^{\prime},\tau_{edit}\mathbf{I})\), where \(\tau_{edit}\) is an intermediate time in the diffusion process between \(0\) and \(\tau_{max}\). The edited scene is then produced by integrating the PF ODE with the same ODE solver, starting from the initial condition \(\mathbf{x}_{\tau_{edit}}\) at time \(\tau_{edit}\). Through stochastic differential editing, the guide scene is translated into a realistic traffic scene with agent trajectories which approximate the guide trajectories. We empirically find \(\tau_{edit}=0.8\) to be a good trade-off between generating realistic trajectory scenes and maintaining the information of the guide scene.
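A sketch of this editing step is shown below. We interpret \(\tau_{edit}\) as the noise standard deviation, consistent with \(p(\mathbf{x}_{\tau}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x},\tau^{2}\mathbf{I})\) used during training, and `heun_sample_from` denotes a hypothetical variant of the Heun sampler above that starts from a given state and a truncated noise schedule.

```python
import torch

def edit_scene(denoiser, x_guide, x_obs, M, c, sigmas, tau_edit=0.8):
    """Stochastic differential editing: noise the manually edited guide scene to tau_edit,
    then integrate the PF ODE back towards tau = 0 to obtain a realistic edited scene."""
    x = x_guide + tau_edit * torch.randn_like(x_guide)
    sigmas_trunc = [s for s in sigmas if s <= tau_edit]      # truncated schedule
    return heun_sample_from(denoiser, x, x_obs, M, c, sigmas_trunc)
```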
## 6 Experiments
### Motion Forecasting Performance
To measure the quality of the samples from DJINN, we evaluate our method on two popular motion prediction datasets, choosing the observation mask \(\mathcal{O}\) during training to match each dataset's forecasting task. For INTERACTION dataset [53] scenes, we observe the states of all agents over the first second of the scene and generate the next three seconds. On the Argoverse dataset [4], our model observes agent states over the first two seconds of the scene and generates the next three seconds. Training hyperparameters for both models are found in Appendix A.
We note that both INTERACTION and Argoverse metrics measure an ego-only trajectory-set using minADE and minFDE over 6 trajectories. Since DJINN produces stochastic samples of entire traffic scenes, a set of 6 random trajectories may not cover all future trajectory modes. To alleviate this, we draw a collection of 60 samples for each scenario and fit a 6 component Gaussian mixture model with diagonal covariances using EM in a method similar to [47]. We use the means of the mixture components as the final DJINN prediction for motion forecasting benchmarks.
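The reduction from 60 joint samples to a 6-mode prediction set can be sketched with scikit-learn as follows; flattening each ego trajectory into a single feature vector is our choice of representation for the fit.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ego_trajectory_set(ego_samples, n_modes=6):
    """ego_samples: (60, T, 2) array of sampled ego (x, y) trajectories."""
    n, T, d = ego_samples.shape
    flat = ego_samples.reshape(n, T * d)
    gmm = GaussianMixture(n_components=n_modes, covariance_type="diag", random_state=0)
    gmm.fit(flat)
    # Component means serve as the fixed-size trajectory set for the benchmark metrics.
    return gmm.means_.reshape(n_modes, T, d), gmm.weights_
```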
We present DJINN's performance on motion forecasting in Table 1 with Argoverse results in Table 1a and INTERACTION results in Table 1b. On INTERACTION, DJINN generates excellent ego vehicle trajectories, with similar minFDE and minADE to state of the art methods on this dataset. On the Argoverse test set we produce competitive metrics, although our results lag slightly behind top motion forecasting methods. We hypothesize that our lower performance on Argoverse is due to the lower quality agent tracks in this dataset when compared to INTERACTION.
We further analyze the _joint_ motion forecasting performance of DJINN. To this end, we measure the Scene minADE and minFDE proposed by [2] which measures joint motion forecasting performance over a collection of traffic scenes. We compare DJINN against a reproduction of Scene Transformer trained for joint motion forecasting, using their reported hyperparameters. Ego-only and Scene motion forecasting performance is shown in Table 2. Although Scene Transformer predicts slightly better ego vehicle trajectories, we demonstrate DJINN has superior joint motion forecasting capabilities.
### State-conditioned Traffic Scene Generation
While DJINN is able to draw samples for motion forecasting benchmarks by conditioning on past observations of the scene, a key benefit of our approach is the ability to flexibly condition at test-time
Figure 2: The effect of classifier-free guidance weight on the spread of trajectories for goal conditioned sampling. Samples drawn from the INTERACTION validation set conditioned using classifier-free guidance on a goal state (star). As the guidance weight increases, deviation from the goals decreases.
based on arbitrary agent states. We illustrate this test-time conditioning in Fig. 1 by generating samples from five conditional distributions which correspond to use-cases for our model.
Specifying exact agent states on which to condition can be challenging. One approach is to utilize the states of a prerecorded trajectory to produce conditioning inputs. However, if one wishes to generate a trajectory which deviates from a recorded trajectory, there is uncertainty about the exact states on which to condition. In Fig. 2, we demonstrate how classifier-free guidance can be utilized to handle user uncertainty in conditioning agent states. In this example, we set the observation set \(\mathbf{x}_{obs}\) to the first ten states of each agent's recorded trajectory. Further, we create a conditional observation set \(\mathbf{x}^{\prime}_{obs}\) by augmenting \(\mathbf{x}_{obs}\) with a goal state for each agent drawn from a normal distribution centered on the ground-truth final position of each agent, with 1m variance. We sample traffic scenes with varying levels of classifier-free guidance strength, drawing two conclusions. First, DJINN is robust to goals which do not match the recorded final agent states. Second, the strength of the classifier-free guidance weight controls the emphasis of the goal conditioning, resulting in trajectory samples which cluster more tightly around the specified goal as the guidance strength is increased. With low guidance weight, the samples are diverse, and do not closely match the specified goal position. As the weight increases, the spread of the trajectory distribution tightens, especially for fast, longer trajectories. These properties give users finer control over the distribution of traffic scenes when there is uncertainty over the conditioning states.
### Conditional Generation from Behavior Classes
We now continue to demonstrate the flexibility of our approach by considering test-time conditioning of our model on specific driving behaviors through classifier guidance. Specifically, we highlight the ability to condition DJINN on the behavior class of cut-in trajectories by conditioning our INTERACTION trained model with a cut-in classifier.
A "cut-in" occurs when one vehicle merges into the path of another, often requiring intervention by the cut-off driver. We selected this behavior to demonstrate how classifier guidance can be used with our joint representation to sample scenes conditioned on the behavior of multiple agents. We condition DJINN trained on INTERACTION using a simple cut-in classifier. To train the classifier, we first mined a dataset of cut-in behaviors trajectory pairs from the "DR_CHN_Merging_ZS" location - a highway driving scene with some cut-in examples. Each trajectory pair is comprised of an "ego" and an "other" agent. We define a positive cut-in as a case where the future state of the other agent at time \(t_{other}\) overlaps with a future state of the ego agent at time \(t_{ego}\) such that \(t_{ego}-3s<t_{other}<t_{ego}\). Further, we filter cases where the initial state of the other agent overlaps with any part of the ego
Figure 3: Examples of synthetic cut-in behaviors generated using classifier guidance. Samples are generated from the INTERACTION validation set conditioned on the first 10 agent states. Applying classifier guidance causes the other agent (green) to cut in front of the ego agent (purple). We generate trajectories for all agents in the scene, but other agent trajectories have been omitted for clarity
trajectory to eliminate lane following cases. We label a negative cut-in case as any other pair of trajectories in which the minimum distance between any pair of ego and other states is less than 5m.
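The mining heuristic can be sketched as follows. Treating two states as "overlapping" when they are within a small distance threshold, and checking only the closest pair of states, are simplifications we introduce for illustration; the time step `dt` and thresholds are assumptions.

```python
import numpy as np

def label_cut_in(ego, other, dt=0.1, overlap_thresh=1.0):
    """Label an (ego, other) trajectory pair; both arrays have shape (T, 2).
    Returns 1 for a positive cut-in, 0 for a negative, None if the pair is discarded."""
    dists = np.linalg.norm(ego[:, None, :] - other[None, :, :], axis=-1)  # (T_ego, T_other)
    # Filter lane-following cases: other agent's initial state overlaps the ego trajectory.
    if np.min(np.linalg.norm(ego - other[0], axis=-1)) < overlap_thresh:
        return None
    t_ego, t_other = np.unravel_index(np.argmin(dists), dists.shape)
    horizon = int(3.0 / dt)                                  # 3 s window
    if dists[t_ego, t_other] < overlap_thresh and t_ego - horizon < t_other < t_ego:
        return 1
    return 0 if dists.min() < 5.0 else None                  # negatives must pass within 5 m
```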
Using these heuristics, we collect a dataset of 2013 positive and 296751 negative examples. We trained a two layer MLP classifier with 128 dimensions per hidden layer. The classifier takes as input the diffused trajectories of each agent, the validity of each timestep and the diffusion time \(\tau\). Using this classifier, we generate synthetic cut-in scenarios via Eq. (3). Examples of our synthetic cut-in scenarios are found in Fig. 3. The generated scenarios clearly demonstrate our model can be conditioned to create synthetic cut-in behaviors. These synthetic examples provide evidence that given a collection of trajectories exemplifying a behavior mode, or a heuristic which can be used to generate example trajectories, DJINN can be conditioned to generate synthetic examples representing that behavior mode. This finding further expands the flexibility of our model to generate trajectory samples from valuable conditional distributions.
### Scenario Fine-Tuning
We exhibit another method of controlling the traffic scenarios generated with DJINN through fine-tuning. Since DJINN diffuses entire traffic scenes without iterative replanning, we are able to use stochastic differential editing to modify the sampled scenes. Given a recorded or sampled traffic scene, stochastic differential editing can be used to fine-tune the scene through the use of a manually specified guide. In Fig. 4, we demonstrate how DJINN can fine-tune existing scenarios to produce new scenarios with realistic trajectories but more complex interactions. Using two recorded validation set scenes from Argoverse, we aim to edit the scenes to generate more interactive trajectories between the agents. For this purpose, we generate a guide scene \(\mathbf{x}_{guide}\) by manually adjusting the trajectories in each scene so that the future paths of two of the agents will intersect. Through stochastic differential editing, we show that DJINN is able to produce realistic driving scenes which shift the guide scene trajectories to maintain their interactivity but avoid collisions between agents.
Figure 4: Two scenario fine-tuning examples (one per row) based on Argoverse validation set scenarios. **Left**: original scene with ground-truth trajectories shown for two interacting vehicles, vehicle positions at the same time index for all agents. **Middle**: a manual edit of one agent’s trajectory in each scene. One (top) replaces a right turn with a forward continuation, the other (bottom) shifts a trajectory back in space to cause a complex interaction to occur near the end of the trajectory. **Right**: the resulting stochastic differential edit of the original scenario. Both rows of the last column illustrate joint reactivity to the new trajectories arising from the edit; in the top row the left-turning vehicle yields and in the bottom row both trajectories shift to avoid collision.
Conclusions
In this work, we present DJINN - a diffusion model of joint traffic scenes. By diffusing in a joint agent state representation, DJINN can be adapted at test time to a variety of modeling tasks through guidance methods and scenario editing. The power of this scenario generation model opens exciting possibilities. Future research may expand the variety of guidance classifiers such as utilizing the classifiers proposed in [55] for traffic-rule constraint satisfaction. Another promising avenue of research is scaling DJINN for faster scenario generation. Although flexible, the diffusion structure of DJINN makes scenario generation relatively slow due to the iterative estimation of the score function. Distillation techniques such as consistency models [42] may be helpful in this regard to improve the number of score estimates required per sample. Future work may also consider scaling the length and agent count in generated scenarios to improve the complexity of behaviors which can be generated. Other areas of future work include using DJINN in a model predictive control setting (hinted at in the predictive mask of Fig. 1) in which an ego action is scored using statistics of ego-action conditioned joint trajectories from DJINN.
## Acknowledgements
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada CIFAR AI Chairs Program, Inverted AI, MITACS, the Department of Energy through Lawrence Berkeley National Laboratory, and Google. This research was enabled in part by technical support and computational resources provided by the Digital Research Alliance of Canada Compute Canada (alliancecan.ca), the Advanced Research Computing at the University of British Columbia (arc.ubc.ca), Amazon, and Oracle.
|
2301.00048 | Robustness of Variational Quantum Algorithms against stochastic
parameter perturbation | Variational quantum algorithms are tailored to perform within the constraints
of current quantum devices, yet they are limited by performance-degrading
errors. In this study, we consider a noise model that reflects realistic gate
errors inherent to variational quantum algorithms. We investigate the
decoherence of a variationally prepared quantum state due to this noise model,
which causes a deviation from the energy estimation in the variational
approach. By performing a perturbative analysis of optimized circuits, we
determine the noise threshold at which the criteria set by the stability lemma
is met. We assess our findings against the variational quantum eigensolver and
quantum approximate optimization algorithm for various problems with up to 14
qubits. Moreover, we show that certain gate errors have a significantly smaller
impact on the coherence of the state, allowing us to reduce the execution time
without compromising performance. | Daniil Rabinovich, Ernesto Campos, Soumik Adhikary, Ekaterina Pankovets, Dmitry Vinichenko, Jacob Biamonte | 2022-12-30T20:36:29Z | http://arxiv.org/abs/2301.00048v3 | # On the gate-error robustness of variational quantum algorithms
###### Abstract
Variational quantum algorithms are tailored to perform within the constraints of current quantum devices, yet they are limited by performance-degrading errors. In this study, we consider a noise model that reflects realistic gate errors inherent to variational quantum algorithms. We investigate the decoherence of a variationally prepared quantum state due to this noise model, which causes a deviation from the energy estimation in the variational approach. By performing a perturbative analysis of optimized circuits, we determine the noise threshold at which the criteria set by the stability lemma is met. We assess our findings against the variational quantum approximate optimization algorithm for 3-SAT problem instances and unstructured search with up to 10 qubits and 30 layers. Moreover, we show that certain gate errors have a significantly smaller impact on the coherence of the state, allowing us to reduce the execution time without compromising performance.
## I Introduction
Noisy Intermediate Scale Quantum (NISQ) computing [1] is constrained by limited coherence times and operation precision [2; 3; 4; 5], which restrict the number of qubits and circuit depths that can be implemented with reasonable fidelity. This limits the range of possible experimental demonstrations. The variational model of quantum computation is tailored to operate within these practical limitations [6; 7; 8], and has been shown to be computationally universal under idealized conditions [9]. Similar to machine learning, a variational algorithm employs a parameterized quantum circuit, called an ansatz, that is iteratively adjusted to minimize a cost function in a quantum-to-classical feedback loop [10]. The cost function usually takes the form of the expectation of a problem Hamiltonian, where the ground state of the problem Hamiltonian represents the solution to a given problem instance. By minimizing the cost function (energy), a variational algorithm aims to approximate the ground state of the Hamiltonian. However, this approach does not guarantee the quality of the approximate solution, which is typically measured by the overlap between the state prepared by the ansatz and the true ground state. Nonetheless, the overlap can be bounded. Using the stability lemma [9], it has been demonstrated that the bounds can be directly linked to the energy, allowing us to determine the energy threshold (upper bound) required to ensure a minimum (fixed) overlap. We refer to this as the acceptance threshold, and a state with an energy below this threshold is considered accepted by the algorithm.
Variational algorithms are designed to mitigate some of the systematic limitations of NISQ devices [8; 11; 12; 13]. However, these algorithms are still susceptible to stochastic noise. While there is some evidence that variational algorithms can benefit from a certain level of stochastic noise [14], in general, noise negatively impacts their performance by inducing decoherence and impacting solution quality.
In this paper, we investigate how errors in the form of parameter deviations impact the performance of variational algorithms when operated at their noiseless optimal parameters. We analytically demonstrate that the energy shift varies quadratically with the spread of parameter deviation, equivalent to an energy shift linear with respect to the gate error probabilities for various noise models [15]. We validate our findings using the quantum approximate optimization algorithm on two common problems: 3-SAT [16] and unstructured search [17; 18]. We also observe that the performance of the algorithm is more resilient to alterations in certain parameters. Based on these findings, we propose methods to potentially enhance performance and reduce the execution time of variational quantum algorithms.
## II Preliminaries
### Variational Quantum Approximate Optimization
The quantum approximate optimization algorithm (QAOA) [19], originally designed to approximately solve combinatorial optimization problems [16; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30], consists of ansatze circuits expressive enough to (in theory) emulate any quantum circuit [21; 22].
Consider a pseudo-Boolean function \(\mathcal{C}:\{0,1\}^{\times n}\rightarrow\mathbb{R}\), the objective of the algorithm is to approximate a bit string that minimizes \(\mathcal{C}\). To accomplish this, \(\mathcal{C}\) is first encoded as a problem Hamiltonian \(H\), diagonal in the computational basis. The ground state of \(H\) encodes the solution to the problem; in other words QAOA searches for a solution \(|g\rangle\) such that \(\langle g|H|g\rangle=\min H\).
The algorithm begins with an ansatz state \(|\psi_{p}(\mathbf{\gamma},\mathbf{\beta}))\)
prepared by a circuit of depth \(p\) -- parameterized as:
\[\left|\psi_{p}(\mathbf{\gamma},\mathbf{\beta})\right\rangle=\prod_{k=1}^{p}e^{-i\beta_{k} H_{x}}e^{-i\gamma_{k}H}\left|+\right\rangle^{\otimes n}, \tag{1}\]
with real parameters \(\gamma_{k}\in[0,2\pi)\), \(\beta_{k}\in[0,\pi)\). Here \(H_{x}=\sum_{j=1}^{n}X_{j}\) is the standard one-body mixer Hamiltonian with Pauli matrix \(X_{j}\) applied to the \(j\)-th qubit. The cost function is given by the expectation of the problem Hamiltonian with respect to the ansatz state. The algorithm minimizes this cost function to output:
\[E^{*}=\min_{\mathbf{\gamma},\mathbf{\beta}}\left\langle\psi_{p}(\mathbf{\gamma},\mathbf{\beta} )\right|H\left|\psi_{p}(\mathbf{\gamma},\mathbf{\beta})\right\rangle \tag{2}\]
\[\mathbf{\gamma}^{*},\mathbf{\beta}^{*}\in\arg\min_{\mathbf{\gamma},\mathbf{\beta}}\left\langle \psi_{p}(\mathbf{\gamma},\mathbf{\beta})\right|H\left|\psi_{p}(\mathbf{\gamma},\mathbf{\beta} )\right\rangle \tag{3}\]
Here, \(\left|\psi_{p}(\mathbf{\gamma}^{*},\mathbf{\beta}^{*})\right\rangle\) is the approximate ground state of \(H\) and hence the approximate solution to \(\mathcal{C}\). Indeed, the quality of the approximation, quantified as the overlap between the true solution and the approximate solution, is not known a priori from (2). Nevertheless one can establish bounds on this quantity using the so called stability lemma.
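For illustration, Eq. (1) and the cost of Eq. (2) can be evaluated with a brute-force statevector simulation, exploiting the fact that \(H\) is diagonal in the computational basis; this sketch is only meant to make the parameterization concrete and scales exponentially in \(n\).

```python
import numpy as np

def qaoa_state(diag_H, gammas, betas):
    """Return |psi_p(gamma, beta)> for a problem Hamiltonian given by its diagonal (length 2^n)."""
    n = int(np.log2(len(diag_H)))
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)          # |+> on every qubit
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    for gamma, beta in zip(gammas, betas):
        psi = np.exp(-1j * gamma * diag_H) * psi             # e^{-i gamma H}, H diagonal
        U = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X  # e^{-i beta X} on one qubit
        for j in range(n):
            psi = psi.reshape(2**j, 2, 2**(n - j - 1))
            psi = np.einsum("ab,ibj->iaj", U, psi).reshape(-1)
    return psi

def energy(diag_H, psi):
    """Cost function of Eq. (2): <psi| H |psi> for diagonal H."""
    return float(np.real(np.sum(np.abs(psi)**2 * diag_H)))
```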
### Stability lemma
The stability lemma states that if \(|g\rangle\) is the true ground state of \(H\) with energy \(E_{g}\) and \(\Delta\) is the spectral gap (the difference between the ground state energy and the energy of the first excited state) the following relation holds [9; 31]:
\[1-\frac{E^{*}-E_{g}}{\Delta}\leq|\langle\psi_{p}(\mathbf{\gamma}^{*},\mathbf{\beta}^{* })|g\rangle|^{2}\leq 1-\frac{E^{*}-E_{g}}{E_{m}-E_{g}} \tag{4}\]
where \(E_{m}\) is the maximum eigenvalue of \(H\). Thus to guarantee a non-trivial overlap one must ensure that \(E^{*}\leq E_{g}+\Delta\). We call the latter the acceptance condition.
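As a quick numerical illustration of the bounds in (4), the following minimal sketch evaluates the overlap interval for an assumed spectrum; the values of \(E_{g}\), \(\Delta\), \(E_{m}\) and \(E^{*}\) below are placeholders chosen for illustration and are not taken from any instance studied here.

```python
# Minimal sketch of the stability-lemma bounds in Eq. (4).
# All spectrum values below are illustrative assumptions.

def overlap_bounds(e_star, e_g, gap, e_max):
    """Return (lower, upper) bounds on |<psi*|g>|^2 from the stability lemma."""
    lower = 1.0 - (e_star - e_g) / gap
    upper = 1.0 - (e_star - e_g) / (e_max - e_g)
    return max(lower, 0.0), min(upper, 1.0)

# The acceptance condition E* <= E_g + Delta guarantees a non-trivial lower bound.
e_g, gap, e_max = 0.0, 1.0, 10.0   # assumed ground energy, spectral gap, max eigenvalue
e_star = 0.4                        # assumed optimized energy
lo, hi = overlap_bounds(e_star, e_g, gap, e_max)
print(f"overlap bounded to [{lo:.2f}, {hi:.2f}]")  # here: [0.60, 0.96]
```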
## III Variational quantum algorithms in the presence of realistic gate errors
Implementation of unitary operations depends significantly on the considered hardware. However, typically the implementation makes use of electromagnetic pulses, such as in superconducting quantum computers [32; 33], neutral atom based quantum computers [34; 35], and trapped ion based quantum computers [8; 36]. Such pulses can change the population of the energy levels that constitute a qubit or introduce phases to the quantum amplitudes, thus controlling the state of the qubits. Consequently, the main contribution to gate errors comes from variation in pulse shaping, meaning that the amplitude and timing of an electromagnetic pulse can stochastically vary. In certain experimental setups, such as ground state ion qubits, where entangling operations are performed using the radial phonon modes [37], the variability in pulse shaping is the main source of gate errors.
Angles of rotation in a typical gate operation depend on the time-averaged intensity \(I(t)\) of the electromagnetic pulse; \(\theta\propto\int I(t)dt\). Thus, variations in the pulse shaping lead to stochastic deviations of the angles of rotation from the desired values. In other words, if a circuit is composed of the parameterised gates \(\{U_{k}(\theta_{k})\}_{k=1}^{q}\); \(\theta_{k}\in[0,2\pi)\) and one tries to prepare a state \(\left|\psi(\mathbf{\theta})\right\rangle=\prod_{k=1}^{q}U_{k}(\theta_{k})\left|\psi_{0}\right\rangle\), a different state
\[\left|\psi(\mathbf{\theta}+\mathbf{\delta\theta})\right\rangle=\prod_{k=1}^{q}U(\theta _{k}+\delta\theta_{k})\left|\psi_{0}\right\rangle, \tag{5}\]
is prepared instead due to the presence of errors. Notice here that the perturbation \(\mathbf{\delta\theta}\) to the parameters is stochastic and is sampled with a certain probability density \(p(\mathbf{\delta\theta})\). This implies that the prepared state can be described by an ensemble \(\{\left|\psi(\mathbf{\theta}+\mathbf{\delta\theta})\right\rangle,p(\mathbf{\delta\theta})\}\), which we can equivalently view as a density matrix
\[\rho(\mathbf{\theta})=\int\limits_{\mathbf{\delta\theta}\in[-\pi,\pi]^{\times q}}p( \mathbf{\delta\theta})|\psi(\mathbf{\theta}+\mathbf{\delta\theta})\rangle\!\langle\psi( \mathbf{\theta}+\mathbf{\delta\theta})|d(\mathbf{\delta\theta}). \tag{6}\]
Eq. (6) represents a noise model native to the variational paradigm of quantum computing. For the rest of this paper we systematically study the effect of this noise model on the performance of QAOA for instances of 3-SAT and the unstructured search problem (see appendix A for more details on the considered problems). In particular we study the energy perturbation around \(E^{*}\) in different scenarios subsequently recovering the strength of noise under which the acceptance condition continues to be satisfied.
## IV Results
### Perturbative analysis in presence of gate errors
Consider a problem Hamiltonian \(H\) and a variational ansatz \(\left|\psi(\mathbf{\theta})\right\rangle=U_{1}(\theta_{1})\ldots U_{q}(\theta_{q}) \left|\psi_{0}\right\rangle\) used to minimize \(H\). Here the gates \(U_{k}(\theta_{k})\) have the form:
\[U_{k}(\theta_{k})=e^{iA_{k}\theta_{k}},A_{k}^{2}=\mathbb{1}, \tag{7}\]
A typical example of such an ansatz is the checkerboard ansatz, with Molmer-Sorensen (MS) gates as the entangling two qubit gates. Nevertheless, any quantum circuit can admit a decomposition in terms of operations that satisfy (7); this adds generality to this assumption.
In the presence of gate errors the prepared quantum state decoheres as \(\left|\psi(\mathbf{\theta})\right\rangle\rightarrow\rho(\mathbf{\theta})\) as per (6). To obtain the analytic form of \(\rho(\mathbf{\theta})\) we first note that
\[U_{k}(\theta_{k}+\delta\theta_{k})=U_{k}(\theta_{k})U_{k}(\delta\theta_{k})=\cos \delta\theta_{k}U_{k}(\theta_{k})+\sin\delta\theta_{k}U_{k}\left(\theta_{k}+ \frac{\pi}{2}\right).\]
This follows directly from (7). Therefore we get:
\[|\psi(\mathbf{\theta}+\mathbf{\delta\theta})\rangle\!\langle\psi(\mathbf{\theta}+\mathbf{ \delta\theta})|=\sum_{k_{1},\ldots,k_{q},m_{1},\ldots,m_{q}=0}^{1}(\cos^{2} \delta\theta_{1}\tan^{k_{1}+m_{1}}\delta\theta_{1})\ldots(\cos^{2}\delta\theta _{q}\tan^{k_{q}+m_{q}}\delta\theta_{q})|\psi_{k_{1}\ldots k_{q}}\rangle\! \langle\psi_{m_{1}\ldots m_{q}}|, \tag{8}\]
where
\[|\psi_{k_{1}\ldots k_{q}}\rangle=U_{1}(\theta_{1}+k_{1}\frac{\pi}{2})\ldots U _{q}(\theta_{q}+k_{q}\frac{\pi}{2})\,|\psi_{0}\rangle\,. \tag{9}\]
Here we make three realistic assumptions--(a) perturbations to all the angles are independent, (b) mean perturbation \(\langle\delta\theta_{k}\rangle=0\) and (c) the distribution \(p(\delta\theta_{k})\) vanishes quickly outside the range \((-\sigma_{k},\sigma_{k})\); that is, the error is localized on the scale \(\sigma_{k}\ll 1\). Note that even if assumption (b) does not hold, as long as mean value \(\langle\delta\theta_{k}\rangle\) is independent from the angle \(\theta_{k}\), one can always shift the parameters as \(\theta_{k}\rightarrow\theta_{k}-\langle\delta\theta_{k}\rangle\) to avoid non-zero mean. Otherwise, terms linear in \(\langle\delta\theta_{k}\rangle\) could contribute to the energy perturbation [38].
Substituting (8) in (6) we arrive at the expression:
\[\rho(\mathbf{\theta})=|\psi(\mathbf{\theta})\rangle\!\langle\psi(\mathbf{\theta})|+ \delta\rho, \tag{10}\]
where
\[\delta\rho\approx-\sum_{k=1}^{q}a_{k}|\psi(\mathbf{\theta})\rangle\!\langle\psi( \mathbf{\theta})|+\sum_{k=1}^{q}a_{k}|\psi_{k}\rangle\!\langle\psi_{k}|+o(\sigma_ {k}^{2}). \tag{11}\]
Here \(|\psi_{k}\rangle=|\psi_{00\ldots 1\ldots 00}\rangle\) with 1 placed in the \(k\)-th position, and
\[a_{k}\equiv\langle\sin^{2}\delta\theta_{k}\rangle=\int\sin^{2}\delta\theta_{ k}p(\delta\theta_{k})d(\delta\theta_{k})\sim\sigma_{k}^{2}. \tag{12}\]
Notice that (11) can be viewed as the action of certain noisy channel, where each of the gates is altered with probability \(a_{k}\sim\sigma_{k}^{2}\). In this sense, we call \(a_{k}\)'s gate error probabilities, though this treatment is specific to the interpretation of the noisy channel.
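A short Monte Carlo check makes the scaling \(a_{k}\sim\sigma_{k}^{2}\) in (12) concrete. The sketch below assumes, purely for illustration, that \(p(\delta\theta_{k})\) is uniform on \((-\sigma_{k},\sigma_{k})\), for which \(\langle\sin^{2}\delta\theta_{k}\rangle\approx\sigma_{k}^{2}/3\) at small \(\sigma_{k}\).

```python
# Numerical estimate of the gate error probability a_k = <sin^2(dtheta_k)> of
# Eq. (12), assuming a uniform spread on (-sigma, sigma) for illustration.
import numpy as np

rng = np.random.default_rng(0)
for sigma in (0.05, 0.075, 0.1):
    dtheta = rng.uniform(-sigma, sigma, size=1_000_000)
    a_k = np.mean(np.sin(dtheta) ** 2)
    # for small sigma, <sin^2(dtheta)> ~ <dtheta^2> = sigma^2 / 3
    print(f"sigma={sigma:.3f}  a_k={a_k:.2e}  sigma^2/3={sigma**2 / 3:.2e}")
```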
Notice that the derivation above does not require \(\mathbf{\theta}\) to be a minimum of the noiseless cost function. Let us now assume that \(\mathbf{\theta}^{*}\) is a vector of parameters such that \(|\psi(\mathbf{\theta}^{*})\rangle\) approximates the ground state of \(H\). The noise induced energy perturbation around the optimal energy \(E^{*}\) is given as:
\[\begin{split}\delta E&=\operatorname{Tr}(\rho(\mathbf{ \theta}^{*})H)-\langle\psi(\mathbf{\theta}^{*})|\,H\,|\psi(\mathbf{\theta}^{*})\rangle \\ &=\sum_{k}(\langle\psi_{k}|\,H\,|\psi_{k}\rangle-E^{*})a_{k}\leq( E_{m}-E^{*})\sum_{k}a_{k},\end{split} \tag{13}\]
which demonstrates that energy perturbation depends linearly on the gate error probabilities \(a_{k}\) (quadratic in \(\sigma_{k}\)) [15].
For the simplest case where each parameter is sampled from the same distribution (\(\sigma_{k}=\sigma\)) we can roughly estimate:
\[\delta E\leq q\sigma^{2}(E_{m}-E^{*}). \tag{14}\]
Thus, requesting an energy threshold \(E\leq E_{g}+\Delta\), we conclude that for \(\sigma\lesssim\sqrt{\frac{\Delta-(E^{*}-E_{g})}{q(E_{m}-E^{*})}}\) the acceptance condition is still satisfied.
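The tolerance just derived can be evaluated directly; the sketch below packages the estimate \(\sigma\lesssim\sqrt{(\Delta-(E^{*}-E_{g}))/(q(E_{m}-E^{*}))}\), with an assumed spectrum and gate count that are placeholders rather than values from the studied instances.

```python
# Sketch of the noise-tolerance estimate below Eq. (14); all inputs are assumptions.
import math

def sigma_threshold(e_star, e_g, gap, e_max, q):
    """Largest sigma for which q*sigma^2*(E_m - E*) keeps E below E_g + Delta."""
    headroom = gap - (e_star - e_g)
    if headroom <= 0:
        return 0.0           # acceptance condition already violated without noise
    return math.sqrt(headroom / (q * (e_max - e_star)))

# assumed: E_g = 0, gap = 1, E_m = 10, E* = 0.4, circuit with q = 300 gates
print(f"{sigma_threshold(0.4, 0.0, 1.0, 10.0, 300):.4f}")  # ~0.0144
```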
While our perturbative analysis holds for all variational algorithms, we substantiate our findings numerically using QAOA. In particular we solve instances of 3-SAT and unstructured search problems to study the behaviour of energy perturbation around \(E^{*}\) caused by the presence of gate errors.
#### Constant perturbation
We begin with a simplified version of the noise model proposed in (6). We ran QAOA for 100 uniformly generated 3-SAT instances of 6, 8, and 10 variables with 26, 34 and 42 clauses respectively. All the instances were selected to have a unique satisfying assignment. The instances were minimized by QAOA sequences of 15, 25 and 30 layers respectively in order to obtain expectation values well below the energy gap. In order to numerically verify the behaviour of the energy perturbation, we vary all optimal parameters by a constant angle \(\delta\). Figure 1 illustrates the shift in the energy for the minimized instances, which shows a quadratic dependence of the perturbed energy \(\delta E\) on the shift \(\delta\). This is natural to expect since the parameters deviate from the local minimum, where the linear contribution must vanish (a rigorous expression showing the quadratic behavior is derived in appendix B).
Similar to the case of 3-SAT, for the problem of unstructured search we perturb optimal parameters of the circuit by an angle \(\delta\) and plot corresponding energy in Fig. 2. Again, as expected, for small values of \(\delta\) the energy perturbation is quadratic which comes from the fact that the deviation happens around the minimum.
#### Stochastic perturbation
We now consider the complete noise model in (6) and verify our analytical prediction as shown in (14). For each 3-SAT instance, we randomly sample perturbations \(\delta\) to each of the gates from a uniform distribution on the interval \((-\sigma,\sigma)\) and average the obtained energy. Then we average energies over instances of the same number of qubits as depicted in Fig. 3. It is seen that for small values of noise the energy scales as \(\delta E\propto\sigma^{2}\), as per (14), which is equivalent to linear dependence on the gate error probabilities \(a_{k}\). It is also seen that a value of \(\sigma\sim 0.075\) can never violate the acceptance condition, as the corresponding energy error never exceeds the gap \(\Delta\geq 1\). For a smaller number of qubits and gates the threshold value of \(\sigma\) increases.
For unstructured search, we average the energy over \(\delta\) sampled for each gate from the uniform distribution \((-\sigma,\sigma)\). We again recover that \(\delta E\propto\sigma^{2}\), as depicted in Fig. 4. It is seen that the same threshold \(\sigma\sim 0.075\) now increases the energy by no more than \(0.6\), which guarantees a \(40\%\) overlap with the target state.
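The qualitative behaviour can also be reproduced on a laptop without the full 6-10 qubit instances. The following self-contained toy example builds a depth-2 QAOA for a 2-qubit unstructured search from dense matrices, finds optimal angles numerically, and then applies the stochastic perturbation; it is an illustrative reconstruction of the experiment, not the code used to produce the figures, and the problem size and depth are assumptions chosen only to keep it small.

```python
# Toy reproduction of the stochastic-perturbation experiment: 2-qubit
# unstructured search (ground state |11>), depth-2 QAOA, uniform angle noise.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
diag_H = np.array([1.0, 1.0, 1.0, 0.0])           # H = 1 - |11><11|
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
plus = np.full(4, 0.5, dtype=complex)              # |+>^{x2}

def mixer(beta):
    u = np.cos(beta) * I2 - 1j * np.sin(beta) * X  # exp(-i*beta*X)
    return np.kron(u, u)                           # exp(-i*beta*(X1 + X2))

def energy(angles, p=2):
    gammas, betas = angles[:p], angles[p:]
    psi = plus.copy()
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * diag_H) * psi       # phase-separator layer
        psi = mixer(b) @ psi                       # mixer layer
    return float(np.real(psi.conj() @ (diag_H * psi)))

p = 2
best = min((minimize(energy, rng.uniform(0, np.pi, 2 * p), method="Nelder-Mead")
            for _ in range(20)), key=lambda r: r.fun)
for sigma in (0.02, 0.05, 0.1):
    shifts = rng.uniform(-sigma, sigma, size=(2000, 2 * p))
    dE = np.mean([energy(best.x + s) for s in shifts]) - best.fun
    print(f"sigma={sigma:.2f}  dE={dE:.2e}  dE/sigma^2={dE / sigma**2:.2f}")
```

A roughly constant ratio \(\delta E/\sigma^{2}\) across the three noise levels reproduces the quadratic scaling discussed above.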
### Perturbation to individual parameters
Here we consider a modified version of (6), where parameters are perturbed one at a time while the rest are kept intact. The effect of this model on the energy is illustrated in Figures 5 and 6 for \(n=10\) qubits. Similar results were also obtained for \(n=6\) and \(n=8\) qubits. The results are numerical and are yet to be explained
Figure 3: Average energy shift of 100 uniformly generated 3-SAT instances of 6, 8 and 10 qubits with clause to variable ratio of 4.2 and unique satisfying assignment. The shifts are obtained by the perturbation of \(\mathbf{\gamma}^{*},~{}\mathbf{\beta}^{*}\) by \(\delta\) uniformly sampled from the range \((-\sigma,\sigma)\). Error bars depict standard error. Polynomial fits of data indicate that in the range \(\sigma\in[0,0.1]\)\(\delta E\propto\sigma^{2}\).
Figure 2: Energy shift for the problem of unstructured search obtained by perturbing of the ansatz state as \(|\psi_{p}(\mathbf{\gamma}^{*}+\delta,\mathbf{\beta}^{*}+\delta)\rangle\). Polynomial fits for data points of 6, 8 and 10 qubits follow quadratic curves in the ranges \(\delta\in[0,0.02],~{}[0,0.01],~{}[0,0.008]\) respectively.
analytically. We observe that perturbations to certain angles have a significantly smaller effect on the energy. Thus we can infer that reducing the values of such angles would not have a significant effect on performance but will reduce the execution time of the circuit, that is \(t_{exec}=\sum_{k=1}^{p}(\beta_{k}+\gamma_{k})\). Alternatively, increasing depth to \(p+1\) while limiting the maximum execution time to that of the original circuit, \(t_{exec}^{p+1}\leq t_{max}^{p+1}=t_{exec}^{p}\), one can potentially improve performance.
Reducing the execution time is important to quantum algorithms, since variational parameters are proportional to the time required to execute the gates experimentally. NISQ era devices suffer from limited coherence, thus reducing execution times can lead to more efficient hardware utilization [39; 40]. We test these ideas in the setting of unstructured search, as depicted in Fig. 7. Here we demonstrate the optimized QAOA energies for 6 qubits at multiple depths with execution time limited to \(t_{max}\). The highlighted green and orange rectangles correspond to the two groups of optimal angles that minimize the energy at each depth, as presented in [17]. Green rectangles also indicate the depth and \(t_{exec}\) at which the ansatz will not be able to decrease its energy by either increasing depth or \(t_{max}\). Following the observations of Fig. 5, by slightly reducing \(t_{max}\) the optimizer may reduce the parameters to which the energy is less sensitive. This results in a slight energy increase as illustrated in Fig. 7 where to the left of the green rectangles we can observe darkening gradients.
By contrast, orange rectangles highlight longer execution times corresponding to different sets of angles that also minimize the energy for a given number of layers. Therefore, if the optimization routine finds the solution corresponding to the orange rectangle, setting \(t_{max}\) to be slightly less than the \(t_{exec}\) of the orange rectangle will lead the optimizer to find angles corresponding to the green rectangle. This will amount to a considerable reduction in execution time. Alternatively, increasing the number of layers while keeping \(t_{max}\) constant may reduce the energy.
In general, for an arbitrary problem Hamiltonian one cannot be sure if optimization has returned the ideal set of angles (the green ones in our example). For this reason, one might employ several strategies based on Fig. 7 to achieve a minimum threshold energy. These include reducing \(t_{max}\) until the energy starts degrading, or increasing the depth at fixed \(t_{max}\) until performance stagnates.
## V Discussion
In this study, we considered a noise model where variational gate parameters are stochastically perturbed, and we demonstrated how this perturbation affects the optimised energy \(E^{*}\). Through a perturbative analysis, we found that the change in energy \(\delta E\) due to the presence of the gate errors behaves quadratically with respect to the spread of parameter deviations, which is equivalent to linear dependence on the gate error probabilities. Using this result, we derived upper bounds on the amount of perturbation that can be tolerated while still satisfying the acceptance condition and achieving a fixed overlap between the target state and the state prepared by the noisy variational circuit.
Our analytical findings are confirmed by numerical simulations of the quantum approximate optimisation algorithm (QAOA) for two common problems - 3-SAT and unstructured search - using different modifications of the considered noise model. Our numerical results further showed that the algorithmic performance is more resilient to perturbations of certain variational parameters. Based on this observation, we proposed a strategy to improve performance and reduce the execution time of variational quantum algorithms. Specifically, we showed that the performance of QAOA (with execution time \(t_{exec}\)) is not affected when limiting the maximum execution time to \(t_{max}=t_{exec}-\epsilon\) for \(\epsilon\ll t_{exec}\). We also demonstrated that in some cases reducing \(t_{max}\) can lead to significant reductions in \(t_{exec}\), while increasing the depth of the algorithm can lead to an energy reduction while fixing \(t_{exec}\).
Whereas our study primarily focused on energy perturbations around the noiseless optimum \(\mathbf{\theta}^{*}\), in practice, one has to train the algorithm in the presence of noise, which can change the optimal angles \(\mathbf{\theta}^{*}\) to \(\mathbf{\theta}^{*}+\mathbf{\delta}\mathbf{\theta}^{*}\), where the shift \(\mathbf{\delta}\mathbf{\theta}^{*}\) increases with the strength of the noise. However, using perturbation theory around the noiseless optimum, one can estimate \(\mathbf{\delta}\mathbf{\theta}^{*}=O(\sigma^{2})\) in the regime of weak noise, and the corresponding change in the energy is \(\mathrm{Tr}(\rho(\mathbf{\theta}^{*}+\mathbf{\delta}\mathbf{\theta}^{*})H)-\mathrm{Tr}( \rho(\mathbf{\theta}^{*})H)=O(\sigma^{4})\). Therefore, in the regime of weak noise, one can safely use the noiseless optimum \(\mathbf{\theta}^{*}\). For detailed calculations, please refer to appendix C.
## Acknowledgement
D.R., E.C., S.A., E.P., D.V. acknowledge support from the research project, Leading Research Center on Quantum Computing (agreement No. 014/20).
|
2310.01417 | The Feasibility of Electric Air Taxis: Balancing Time Savings and CO$_2$
Emissions -- A joint case study of respective plans in Paris | This paper evaluates the sustainability of Advanced Air Mobility (AAM) in
urban and regional mobility, using Paris as a case study. Paris is committed to
eco-friendly transportation and has introduced AAM, including electric Vertical
Take-Off and Landing (eVTOL) air taxis for the 2024 Olympic Games. We assess
eVTOL energy consumption and CO$_2$ emissions on urban and regional routes,
comparing them with cars, public transport, and helicopters. Urban eVTOLs save
around 23 minutes over cars and 22 minutes over public transport on 50 km
routes. For regional routes (300 km), eVTOLs save 76 minutes over cars and 69
minutes over trains. However, eVTOLs' eco-friendliness depends on context. In
urban areas, they consume more energy than electric cars, but beat traditional
helicopters by 47%. For regional travel, eVTOLs outperform helicopters and some
cars but lag behind electric vehicles and trains. To maximize AAM's
sustainability in Paris, stakeholders must consider real-world operations and
integrate eVTOLs into the broader transportation system. This approach can lead
to greener urban and regional transportation. | Nabil Hagag, Bastian Hoeveler | 2023-09-11T12:41:42Z | http://arxiv.org/abs/2310.01417v1 | # The Feasibility of Electric Air Taxis:
###### Abstract
This paper presents a comprehensive evaluation of the sustainability of Advanced Air Mobility (AAM) within urban and regional mobility infrastructure, utilizing Paris as a prominent case study. Driven by ambitious environmental targets, Paris aims to transform its transportation landscape into a cleaner, safer ecosystem. Collaborating with public and private stakeholders, the region has positioned AAM as a promising facet of future mobility, highlighted by the world's first scheduled commercial electric Vertical Take-Off and Landing (eVTOL) air taxi service during the 2024 Olympic Games. The study's main goal is to assess the energy consumption and CO\({}_{2}\) emissions of AAM aircraft across typical flight missions, encompassing urban and regional routes. A comparison is drawn between eVTOL performance and conventional modes such as cars, public transport, and helicopters. Key findings reveal intriguing insights. On urban routes spanning 50 km, eVTOLs offer noteworthy time savings of around 23 minutes compared to cars and 22 minutes compared to public transport. Moreover, concerning specific scenarios, eVTOLs demonstrate substantial time savings for regional routes of 300 km--averaging 76 minutes compared to cars and 69 minutes compared to trains. Regarding CO\({}_{2}\) emissions, a contrast emerges between urban and regional contexts. Urban eVTOL operations are relatively less eco-friendly due to higher energy consumption than electric cars. While multicopters consume 47% less CO\({}_{2}\) than traditional helicopters, they surpass petrol cars by 13%, diesel cars by 19%, and electric cars by up to 256%. In contrast, for regional travel, Lift-and-Cruise 1 eVTOLs consume 77% less CO\({}_{2}\) than average helicopters, 46% less than petrol cars, 44% less than diesel cars, but emit 68% more than electric vehicles and 96% more than electric trains. In summary, eVTOLs exhibit significant time savings and CO\({}_{2}\) reductions on regional routes, yet their overall environmental performance hinges on mission specifics. To harness AAM's full potential for Paris's sustainability goals, policymakers, manufacturers, and researchers should explore diverse configurations, account for real-world operations, and seamlessly integrate eVTOLs into the broader transportation framework. This approach can pave the way for greener, more efficient urban and regional transportation futures.
Advanced Air Mobility, Urban Air Mobility, Regional Air Mobility, Electric Vertical Take-off and Landing Vehicle, Air Taxi, Sustainability, Time Saving, Energy Demand, CO\({}_{2}\) Emission, Paris
## 1 Introduction
Advanced Air Mobility (AAM) is an air transport system concept that integrates new, transformational aircraft designs and flight technologies into existing and modified airspace operations [1]. Electric Vertical Take-off and Landing (eVTOL) vehicles in particular are the focus of this new transport technology. Considering the growing sustainability awareness of potential customers, eVTOL concepts (e.g. air taxis) are advertised as being largely free of emissions and contributing to the reduction of greenhouse gas (GHG) emissions, while being quiet enough to operate in urban or regional environments without disturbing residents.
The Paris Climate Action Plan, launched in 2018, outlines a comprehensive strategy to reduce GHG emissions by improving energy efficiency in buildings, transportation, and waste management. Based on this, the municipality of Paris wants to be a carbon-neutral city, powered completely by renewable energy by 2050 [2].
Paris will host the Olympic Games 2024 and the world eagerly awaits the possibility of witnessing the first-ever commercial air taxi flight during this prestigious event [3]. This groundbreaking moment would not only enhance accessibility for travelers but also showcase the aviation industry's commitment to sustainability and time-saving convenience.
Additionally, Paris is building a new metro line that will provide a direct connection from the city to the airport by 2030 [4]. This crucial infrastructure improvement will greatly simplify airport access for citizens and tourists and introduces another convenient and time-efficient transportation option. Paris' clear focus is on innovative and sustainable transport solutions. [2]
However, are eVTOLs as sustainable in terms of time efficiency as assumed, especially when additionally considering their energy demand? The deployment of AAM in Paris raises questions about its sustainability, particularly in terms of energy demand and resulting CO\({}_{2}\) emissions. According to the International Energy Agency, aviation is responsible for approximately 2.5% of global CO\({}_{2}\) emissions [5]. While eVTOL aircraft may offer a more sustainable alternative to traditional helicopters, their energy demand and carbon footprint during operation must still be considered in their assessment.
The Paris Region is home to 18.3% of the French population with around 12.3 million inhabitants and is a gateway to Europe and the world. It is easy to access with three international airports and seven TGV high-speed train stations that connect it to all of the world's major economic centers [6]. Paris Region accounts for 70% of French train traffic, with five million passengers traveling by train in France every day, including 3.5 million in the Paris Region. The Gare du Nord, one of the ten main train stations in Paris, is the busiest station in Europe, with over 200 million passengers per year [6].
In the Paris region, the average number of daily trips per person is 3.8, but this number hides strong disparities. Parisians themselves travel the most, with an average of 4.3 trips per day, but cover the shortest distances, around 12 km. Conversely, residents of the outer suburbs travel farther, around 24 km. It is the working population who travels the most, with an average of 4.3 trips per day. However, the daily time budget for travel is the same regardless of location, at 1h30 per day. Even in the outer suburbs, some people only travel within their local area, which balances the time budgets of those who go to Paris [7].
### Aim
In this paper we address the question whether eVTOLs can be a sustainable solution for urban or regional transportation in the Paris region. Specifically, the study will focus on the time savings, but also on the energy demand and carbon footprint of eVTOL aircraft during operation, by conducting a joint case study of respective plans for AAM in Paris. In this regard, there are two essential aspects which contribute to the success of the air taxi technology:
_1) Do air taxis reduce commute travel time compared to conventional transportation solutions?_
_2) Do electric air taxis decrease the carbon footprint of travel compared to conventional transportation solutions?_
### Related Work
As part of DLR's internal project, HorizonUAM, the focus lies on evaluating the potential and challenges presented by air taxis and Urban Air Mobility (UAM) concepts. A study conducted within this project utilizes a drone traffic scenario generator and 4D trajectory planning technology, tested within the urban landscape of Hamburg, Germany. Through a comparative analysis of travel times and distances, the research underscores a noteworthy 50% reduction in travel time and an impressive up to 16% decrease in route length for air taxis compared to conventional taxis. [8]
The ASSURED-UAM project, led by Lukasiewicz Research Network - Institute of Aviation, represents a significant initiative to seamlessly integrate UAM with Air Traffic Management and urban transportation systems. This integration seeks to uphold UAM's acceptability, safety, and sustainability, thereby providing a valuable reference point. Through the analysis of energy efficiency parameters, the project offers a framework for contrasting UAM passenger transportation with conventional ground-based methods. This approach draws from insights obtained from urban mobility evaluations, facilitating a comprehensive assessment of UAM's energy efficiency within the broader
transportation landscape. Addressing environmental ramifications, discernible trends emerge. Notably, smaller aircraft with modest payloads exhibit lower carbon footprints, particularly during operations compared to larger UAS. The interplay between carbon emissions and a nation's electricity mix becomes evident, with a clear correlation between fossil fuel contribution and carbon footprint. Furthermore, nuanced insights emerge regarding aircraft specifications, operational concepts, and infrastructure, each influencing carbon footprints across different phases of an aircraft's lifecycle. [9]
In summary, the amalgamation of these studies underscores a comprehensive understanding of UAM's sustainability implications, as well as its potential energy efficiency benefits. These insights can be harnessed to critically evaluate electric air taxis' carbon emissions, particularly in relation to traditional modes of transportation such as helicopters, gasoline, diesel, hybrid, and electric cars, public transport, and electric trains. The collective research showcases the urgency and global significance of devising environmentally sound urban mobility solutions.
## 2 State of Art
The following chapter provides an overview of the latest concepts and advancements in the operation of AAM. This chapter also aims to offer a comprehensive understanding of the existing battery technologies used in eVTOLs and their essential role in an environmentally friendly aerial transportation.
### Advanced Air Mobility
AAM refers to the use of aircraft in urban and regional areas to address traffic congestion and enhance overall mobility [10]. With the advancements in technology and the integration of Artificial Intelligence, AAM is expected to become a reality in Europe within the next 3-5 years [11]. While the term encompasses a broader range of use cases, this paper primarily focuses on the passenger transport aspects of AAM.
In this context, two main use cases for AAM are discussed: Urban Air Mobility (UAM) and Regional Air Mobility (RAM), as depicted in Fig. 1. Each of these scenarios presents a unique opportunity to revolutionize urban and regional transportation and offer fast, efficient, and environment friendly alternatives to conventional commuting methods.
Within the UAM scenario, eVTOL vehicles are designed to travel within a city, covering distances of about 50 km. This allows passengers to bypass long traffic jams and move swiftly to their destinations, contributing to smoother urban mobility.
On a larger scale, the RAM transport use case involves eVTOL vehicles traveling over distances of up to 300 km within highly urbanized areas. This holds the potential for valuable traffic relief and improved overall mobility in densely populated cities.
Each of these use cases presents its own set of challenges and opportunities that require careful evaluation and consideration to realize the full potential of AAM [10].
### eVTOL Types
There are currently over 850 VTOL concepts with a large variety of technological maturity and different configurations [12]. Most of these concepts are propelled by electric motors supplied by battery systems. Some of them use a hybrid-electric approach where combustion engines act as generators. Essentially, the architectures used for these various concepts can be differentiated by the use of a wing for highly efficient cruise flight or being wingless [13]. In Fig. 2, the four most common types of eVTOL aircraft architectures are shown and compared with respect to their forward and vertical lift efficiency. The choice of architecture already suggests the trade-off between the different cruise and hover efficiencies.
The multicopter configuration has high hover lift efficiency and low disc loading due to its high number of rotors. This means that it is able to take off and land vertically, but is less efficient during horizontal flight than other types due to high power demand.
Lift-and-cruise configurations have higher cruise efficiency and are therefore able to fly longer distances compared to multicopters. This configuration is able to transition from vertical take-off to horizontal flight, allowing it to take advantage of both modes.

Tilt-rotor or tilt-wing configurations have lower hover lift efficiency and higher disc loading than multicopter or lift-and-cruise models, resulting in higher power demand and lower hover efficiency. However, these models are better suited for longer distances due to their ability to fly faster and their longer range.

Vectored-thrust or fixed-wing configurations have low hover lift efficiency and high disc loading, but are highly efficient in forward flight. However, they are less efficient in vertical take-off and landing than tilt-rotors or lift-and-cruise models.
Figure 1: AAM based on UAM and RAM
Figure 2: eVTOL configurations [14]
### Lithium-ion Battery
Batteries play a critical role in the operation of eVTOLs, as they provide the power required for the electric motors to lift the aircraft off the ground and maintain flight. The state of the art in battery technology for eVTOLs is rapidly evolving, with research focused on increasing energy density, reducing weight, and improving safety. Lithium-ion batteries are commonly employed in the current generation of eVTOL aircraft due to their high energy density and well-established performance characteristics [12]. At the current state of the art, this battery type offers a gravimetric energy density between 150 and 350 Wh/kg and proven reliability.
### Energy Demand of eVTOL
The energy demand of eVTOLs depends on various factors such as their weight, design, propulsion system, and operational mode. Generally, eVTOLs require considerably more energy during the vertical take-off and landing phase than during horizontal flight. Additionally, the energy demand is further affected by the duration of hovering as well as by weather conditions. Therefore, eVTOL manufacturers are continuously exploring ways to improve the efficiency of their vehicles through the use of lightweight materials, advanced propulsion systems, and optimized operational procedures. The energy demand of eVTOLs is a crucial factor in determining their commercial viability, as it directly affects operating costs, range, and environmental impact. [16, 17]
To determine the power and energy needed for hover flight of eVTOLs, it is necessary to calculate the thrust. [18] Based on the jet theory, thrust calculation equations can be derived, as described in [19]:
\[E_{H}=MTOW\cdot\,\mathrm{g}\cdot\frac{1}{\tau_{C}}\cdot\nu_{real}\cdot\frac{ 1}{\eta_{c}}\cdot t_{H} \tag{1}\]
Equation 2 determines the required power demand during cruise flight for all types of eVTOL. The formula for the required propulsion power during cruise flight differs from the power required in hover flight as it considers the glide ratio [19]:
\[E_{C}=MTOW\cdot\,\mathrm{g}\cdot\frac{1}{\tau_{C}}\cdot\nu_{real}\cdot\frac{1}{\eta_{c}}\cdot t_{C} \tag{2}\]
The energy capacity of the battery depends on the total mass of the battery pack and the effective energy density:
\[E_{B}=e_{A}\cdot BM\text{ with }\mu_{A}=\frac{BM}{MW} \tag{3}\]
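The relations above can be assembled into a small calculation sketch of the kind implemented in the macro tool mentioned later. The interpretation of \(\tau_{C}\) as a glide (lift-to-drag) ratio and \(\eta_{c}\) as a propulsive efficiency, as well as every numerical input below, are assumptions made purely for illustration and are not parameters of the eVTOLs evaluated in this paper.

```python
# Sketch of the printed energy relations: E = MTOW*g*(1/tau)*v*(1/eta)*t
# (the form shared by Eqs. 1 and 2) and the battery capacity of Eq. 3.
# Symbol interpretations and all numbers are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def flight_energy_kwh(mtow_kg, tau, v_ms, eta, t_s):
    """Energy of one flight phase in kWh, following the printed formula."""
    e_joule = mtow_kg * G * (1.0 / tau) * v_ms * (1.0 / eta) * t_s
    return e_joule / 3.6e6

def battery_capacity_kwh(e_a_wh_per_kg, battery_mass_kg):
    """E_B = e_A * BM (Eq. 3), returned in kWh."""
    return e_a_wh_per_kg * battery_mass_kg / 1000.0

# assumed multicopter-like mission: 90 s of hover plus 10 min of cruise
e_hover = flight_energy_kwh(2000, tau=3.0, v_ms=10.0, eta=0.75, t_s=90)
e_cruise = flight_energy_kwh(2000, tau=8.0, v_ms=27.8, eta=0.80, t_s=600)
e_batt = battery_capacity_kwh(250, 400)  # assumed 250 Wh/kg cells, 400 kg pack
print(f"hover {e_hover:.1f} kWh, cruise {e_cruise:.1f} kWh, battery {e_batt:.0f} kWh")
```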
### Energy Demand of Ground Vehicle
The energy demand for vehicles is a crucial factor in determining their environmental impact and overall efficiency. Petrol and diesel-powered vehicles have long dominated the automotive landscape, primarily relying on their internal combustion engines to generate power. The energy demand for these vehicles is closely linked to their fuel consumption, measured in litres per 100 kilometres (l/100 km).
In contrast, electric-powered vehicles, such as electric cars and trains, represent an eco-friendlier alternative. The energy demand for these vehicles is measured in kilowatt-hours per 100 kilometres (\(\mathrm{kWh/100}\) km). Electric vehicles rely on batteries to store electrical energy, which powers electric motors to drive the wheels. Their energy efficiency is considerably higher than traditional internal combustion engine vehicles, as they convert a larger portion of the energy from the grid into actual propulsion.
Equation 4 determines the energy demand for all types of ground vehicles, such as cars and electric vehicles, but also public transportation by tram, bus or metro:
\[E_{D}=\frac{s}{100}\ast h_{\nu}\ast f_{c} \tag{4}\]
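A matching sketch of Eq. (4) is shown below. Interpreting \(h_{\nu}\) as the average consumption per 100 km and \(f_{c}\) as a fuel-to-energy conversion factor (about 8.9 kWh per litre of petrol, 1.0 for electric vehicles) is an assumption made here for illustration, as are the consumption figures.

```python
# Sketch of Eq. (4): E_D = s/100 * h_v * f_c.
# Interpretation of h_v and f_c and the numbers used are assumptions.

def ground_energy_kwh(distance_km, h_v_per_100km, f_c=1.0):
    """Energy demand of a ground vehicle over a given distance, in kWh."""
    return distance_km / 100.0 * h_v_per_100km * f_c

print(ground_energy_kwh(50, 7.0, 8.9))   # assumed petrol car, 7 l/100 km
print(ground_energy_kwh(50, 17.0))       # assumed electric car, 17 kWh/100 km
```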
### _Carbon Dioxide Equivalent / \(\mathrm{CO_{2}}\) (eq)_
Based on the Eurostat definition, carbon dioxide equivalent (\(\mathrm{CO_{2}}\) equivalent or \(\mathrm{CO_{2}}\) (eq)) is a metric measure used to compare the emissions from various greenhouse gases on the basis of their global-warming potential, by converting amounts of other gases to the equivalent amount of \(\mathrm{CO_{2}}\) with the same global warming potential. This potential is a measure of the atmospheric warming caused by a gas compared to \(\mathrm{CO_{2}}\), for which the factor is 1. [20]
The carbon footprint of electric vehicles depends on the energy mix used to generate the electricity. If the electricity comes from renewable sources, such as wind or solar power, the carbon footprint of electric vehicles can be close to zero. In practical terms, certain emissions associated with the production, transportation, and installation of renewable energy infrastructure may still have to be accounted for, thus affecting the total emissions over the lifecycle. However, if the electricity comes from fossil power plants, such as coal-fired plants, the carbon footprint of electric vehicles remains substantial at around 50 grams of \(\mathrm{CO_{2}}\) per kilometre, which is still better than conventional vehicles. [25]
The \(\mathrm{CO_{2}}\) emissions depend on the energy demand and the energy source. Even fully electric UAM vehicles are not completely free of carbon emissions, as the source of the electricity used to charge the batteries has an essential impact on the carbon footprint of the vehicle. [24]
Recent studies have shown that the primary energy demand and CO\({}_{2}\) emissions of eVTOLs are notably lower than those of conventional aircraft. According to a report by Roland Berger, eVTOLs reduce CO\({}_{2}\) emissions by up to 50% compared to conventional helicopters [21]. Additionally, a study by the University of Michigan found that eVTOLs can reduce energy demand by up to 40% compared to ground-based electric cars [22]. These results demonstrate the potential for eVTOLs to improve sustainability.
The study by Carnegie Mellon University examines the energy demand and GHG emissions of a very small quadcopter drone used for last-mile deliveries. The model showed that an electric quadcopter drone transporting a package of 0.5 kg consumes 0.08 MJ/km and causes 70 g of CO\({}_{2}\)(eq), considering the electric energy mix in the United States. Comparisons with other vehicles show that drones can reduce the energy consumption by 94% and 31% and the GHG emissions by 84% and 29% per package delivered by replacing diesel trucks and electric vans, respectively [23].
To obtain realistic CO\({}_{2}\) emissions for a UAM operation based on the energy demand, flight scenarios are designed for the Paris use case. The following formula describes the calculation of the CO\({}_{2}\) emissions based on the hover and cruise energy demand:
\[\textit{CO2 emission}=\textit{energy demand}\ (E_{H}+E_{C})\cdot\textit{energy mix} \tag{5}\]
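Equation (5) chains directly onto the energy figures above. The sketch below uses placeholder mission energies, together with grid emission factors of the magnitude quoted later for the French and European mixes (Table 7).

```python
# Sketch of Eq. (5): CO2(eq) from mission energy demand and grid emission factor.
# The mission energies are placeholders; the grid factors follow Table 7.

def co2_kg(hover_kwh, cruise_kwh, grid_g_per_kwh):
    """Operational CO2-equivalent in kg for one mission."""
    return (hover_kwh + cruise_kwh) * grid_g_per_kwh / 1000.0

mission = dict(hover_kwh=2.0, cruise_kwh=12.0)    # assumed eVTOL mission
print(co2_kg(**mission, grid_g_per_kwh=226))      # average European mix
print(co2_kg(**mission, grid_g_per_kwh=57))       # French mix
```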
## 3 Methodology
This chapter describes the approach to determine the CO\({}_{2}\) emissions of helicopters and UAM vehicles for relevant routes in the context of the Paris 2024 Olympic Games.
### Selected Flight Mission Profile
In order to determine the energy demand of eVTOLs, the flight scenario is divided into two main phases, namely hover and cruise. The energy demand of the selected eVTOLs is evaluated by analysing the calculated results based on the selected properties under optimal conditions as shown in the following figure.
The energy demand calculations are performed by integrating all equations and data into a macro calculation tool, based on previous research findings. The calculation tool allows for the selection of different eVTOL configurations and flight mission profiles.
### Selected Flight Routes
In this paper, the starting point is the Place de la Concorde in the city centre of Paris. According to the City of Paris, which provides real-time traffic updates through its website, traffic congestion is often high between 2 PM and 4 PM in the city centre.
Using Google Maps, the average distance and travel time over seven days are calculated to ensure a representative analysis. This step is repeated for both use cases, UAM (max. 50 km) and RAM (max. 300 km).
The first set of missions contains UAM routes between places that are relevant for the Olympic Games. They start at Place de la Concorde and go to 8 relevant points of interest in Paris with distances between 2 km and 29 km.
### Selected eVTOLs
The initial stage of the investigation involves determining the characteristics for various eVTOL vehicles.
For the UAM use case with a range of up to 50 km, a multicopter is a suitable choice due to its inherent ability to take off and land vertically, enabling operations within constrained urban spaces. While other configurations such as vectored-thrust, lift-and-cruise, and tilt-rotor can also achieve VTOL capability, the use of a multicopter configuration may offer advantages in terms of maneuverability and adaptability to urban environments. However, these alternative configurations might present challenges related to their maneuverability within densely populated urban areas.
In contrast, for the RAM scenario, a lift-and-cruise model is a better choice. These models have the ability to take off and land vertically, but can then transition into a more efficient cruise mode, allowing them to cover longer distances at higher speeds. This makes them more suitable for longer, regional flights that require higher speeds and greater efficiency.
### Selected Helicopters
A total of seven helicopters have been assessed for their CO\({}_{2}\) emissions per passenger-kilometer during flight. The helicopters under investigation are detailed in Table 6. This range encompasses helicopters like the Robinson R44, accommodating 3 passengers and possessing a maximum take-off mass of 1089 kg, all the way up to the medium-sized Airbus Helicopter H145, with a capacity of up to 10 passengers and a maximum take-off mass of 3900 kg. With the exception of the Robinson R44, which utilizes gasoline, all helicopters employ kerosene as fuel. For reference, the combustion of 1 gallon of kerosene emits 9.9 kg of CO\({}_{2}\), while the same quantity of gasoline emits 8.8 kg of CO\({}_{2}\)[25].
The helicopters' hourly burn rate and cruise speed are detailed in their flight manuals. Just like with the UAMs, flight time for the helicopters is determined by dividing the direct distance by the cruise speed, without accounting for airspace layout, terrain, or approach and departure procedures. Furthermore, an additional allowance for factors such as run-up times, taxiing, route deviations during cruise, and approach and departure procedures is factored into the calculations.
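The helicopter estimate described here can likewise be expressed in a few lines. The burn rate, seat count and the overhead allowance for run-up, taxiing and route deviations used below are assumptions for illustration; only the kerosene factor of 9.9 kg CO\({}_{2}\) per gallon is taken from the text above.

```python
# Sketch of the helicopter CO2-per-passenger-km estimate: flight-manual burn
# rate and cruise speed, plus an assumed overhead factor for run-up, taxiing
# and routing. Burn rate, seats and overhead are illustrative assumptions.

def heli_co2_per_pax_km(burn_gal_per_h, cruise_kmh, pax,
                        co2_kg_per_gal=9.9, overhead=1.2):
    """kg CO2(eq) per passenger-kilometre at cruise, including an overhead."""
    co2_per_km = burn_gal_per_h / cruise_kmh * co2_kg_per_gal * overhead
    return co2_per_km / pax

print(f"{heli_co2_per_pax_km(17, 200, 4):.2f} kg CO2(eq) per pax-km")
```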
### Selected Energy Mix
The CO\({}_{2}\) emissions generated by the operation of air taxis are closely related to the electricity mix of the country in which the operation takes place. The data collected serves as the basis for determining energy demand and CO\({}_{2}\) emissions. In this regard, the average energy demand of the chosen mode of transport and the emission factor of the average European electricity mix are considered. These data are combined with suitable calculation methods to determine the associated CO\({}_{2}\) emissions.
| Helicopter type | PAX seats | Cruise speed [km/h] |
| --- | --- | --- |
| R44 | 3 | 200 |
| R66 | 4 | 200 |
| H120 | 4 | 223 |
| H125 | 6 | 260 |
| H135 | 7 | 253 |
| H145 | 10 | 247 |
| Bell 206 | 3 | 223 |

Table 6: Helicopter type configurations
| Destination (RAM, < 300 km) | Flight range [km] |
| --- | --- |
| Beauvais | 69 |
| Rouen | 101 |
| Orleans | 103 |
| Reims | 140 |
| Le Havre | 179 |
| Le Mans | 185 |
| Lille | 205 |
| Calais | 240 |

Table 4: Selected flight destinations on regional level
All distances are determined under the assumption of a direct flight path without considering airspace structure, routing restrictions and topography influence.
Figure 7: RAM mission use case (<300 km)
### Selected Travel Emission Calculations
All emission values utilized in this study are derived from the "Travel Emissions Calculator" provided by GoClimate, which estimates carbon emissions generated by different travel methods based on distance [27]. The calculator then provides the estimated carbon emissions for each mode of transport such as petrol, diesel, or electric car, as well as train and subway. It is essential to note that the emission data used in the calculator is based on internal data, and therefore, the results may vary depending on the geographical location of the user.
The utilization of the GoClimate Travel Emissions Calculator allows us to obtain reliable and standardized emission values for various transportation modes, forming a solid foundation for evaluating the environmental impact of travel in our study.
## 4 Results
This chapter presents the findings regarding time savings and CO\({}_{2}\) emissions associated with eVTOLs, focusing on the selected routes and mission profile within the context of Paris and France. As mentioned in Chapter 3.3, the analysis covered two representative eVTOL configurations: a multicopter for the UAM use case and a lift-and-cruise model for the RAM use case.
### Time saving
The results show that the average time savings is approximately 23 minutes when using a multicopter compared to a car, while compared to public transportation, the average time savings is 22 minutes. These results were obtained considering only the pure flight time and assuming direct flight paths for the selected urban destination in Paris. Furthermore, no boarding time was considered.
On regional level, the diagram below shows the average time savings. When using a lift-and-cruise model, the average time savings is approximately 76 minutes compared to a car, while compared to public transportation, the average time savings is 69 minutes. These results were obtained considering only the pure flight time and assuming direct flight paths for the selected regional destinations in France departing from Paris. Again, no boarding time was considered.
### CO\({}_{2}\) Emission Multicopter vs. Cars
In this section, the results of the comparison of CO\({}_{2}\) emissions originating from the energy demand of multicopters and conventional passenger cars (powered by petrol, diesel, hybrid, and electric) in an urban and regional setting are shown.
It is evident that multicopters for the use case UAM generally exhibit higher CO\({}_{2}\) emission values per passenger compared to conventional cars.
| Area | Energy mix [g CO\({}_{2}\)/kWh] |
| --- | --- |
| Sweden | 13 |
| France | 57 |
| Germany | 508 |
| Europe | 226 |

Table 7: Energy mix by different regions [26]
Figure 8: UAM Time saving: eVTOL vs. Car vs. Train
Figure 9: RAM Time saving: eVTOL vs. Car vs. Train
The highest emission value of 4.18 kg CO\({}_{2}\) (eq) by a multicopter is recorded for the distance from Place de la Concorde to Charles de Gaulle Airport, covering a distance of 28.94 km. Comparatively, the electric car requires only 1.15 kg CO\({}_{2}\) (eq) for this distance, the hybrid car 2.14 kg CO\({}_{2}\) (eq), the diesel car 3.45 kg CO\({}_{2}\) (eq), and the petrol car 3.62 kg CO\({}_{2}\) (eq). Interestingly, on the shortest distance to the Eiffel Tower, spanning only 2.8 km, the multicopter emits a relatively low absolute amount of 0.45 kg CO\({}_{2}\) (eq), which is nevertheless higher than that of the conventional cars: 0.31 kg CO\({}_{2}\) (eq) for the petrol car, 0.29 kg CO\({}_{2}\) (eq) for the diesel car, 0.18 kg CO\({}_{2}\) (eq) for the hybrid car, and 0.10 kg CO\({}_{2}\) (eq) for the electric car. The electric vehicle records the lowest CO\({}_{2}\) emissions for this route.
The following figure illustrates the resulting percentage comparison of CO\({}_{2}\) emissions between multicopter and different types of car per passenger concerning their respective total emissions. It is evident that for the aforementioned example of traveling from Place de la Concorde to Charles de Gaulle Airport, using an eVTOL for the flight route results in approximately 16% higher CO\({}_{2}\) equivalent emissions compared to the driving route with a petrol-powered car. When compared to a diesel-powered car, the eVTOL emits around 21% more CO\({}_{2}\), approximately 96% more than a hybrid vehicle, and a striking 364% more than an electric vehicle. Regarding the shortest distance to the Eiffel Tower, similar trends are observed. The eVTOL produces about 46% more CO\({}_{2}\) equivalent emissions than a gasoline car, roughly 53% more than a diesel car, approximately 247% more than a hybrid vehicle, and a significant 459% more emissions compared to an all-electric car.
Figure 11: Use Case: UAM – percentage CO2 emission comparison multicopter vs. car
Figure 10: Use Case: UAM – total CO\({}_{2}\) emission comparison multicopter vs. car
On average across all 8 flight missions, a multicopter emits approximately 123% more CO\({}_{2}\) equivalent emissions compared to a car running on petrol, roughly 129% more than a car powered by diesel, approximately 209% more than a hybrid vehicle, and a staggering 388% more than an electric car.
In relation to the regional use case (RAM), the highest emission value of 16.82 kg CO\({}_{2}\) (eq) by the lift-and-cruise eVTOL is recorded for the route from Place de la Concorde to Calais, covering a direct distance of 240.39 km. Comparatively, the electric car requires only 10.43 kg CO\({}_{2}\) (eq) for this distance, the hybrid car 19.37 kg CO\({}_{2}\) (eq), the diesel car 31.29 kg CO\({}_{2}\) (eq), and the petrol car 32.78 kg CO\({}_{2}\) (eq).
The shortest distance is between Paris and the city of Beauvais with a direct flight distance of 69.46 km. For this route, the CO\({}_{2}\) emission per passenger is calculated at 5.12 kg CO\({}_{2}\) (eq). Comparatively, the electric car requires only 3.40 kg CO\({}_{2}\) (eq) for this distance, which accounts for 48% of the ratio. Compared to a diesel car, the eVTOL configuration emits nearly 50% less CO\({}_{2}\). Moreover, it results in 19% less emissions than a hybrid car, but 51% more emissions than an electric car.
Figures 13 and 14 below also present the total and percentage CO\({}_{2}\) emission comparison between the lift-and-cruise configuration and different car types at the regional level.
Figure 14: Use Case: RAM – percentage CO\({}_{2}\) emission comparison lift-and-cruise 1 vs. car
Figure 12: Use Case: UAM – average percentage CO\({}_{2}\) emission comparison eVTOL vs. car
Figure 13: Use Case: RAM – total CO2 emission comparison lift-and-cruise 1 vs. car
On average across the 8 selected regional flight routes, an eVTOL can save around 47% of CO\({}_{2}\) emissions compared to a petrol car, approximately 44% compared to a diesel car, and approximately 10% compared to a hybrid car. However, when compared to electric vehicles, eVTOL emissions are approximately 67% higher.
These findings indicate that, on a regional level, eVTOLs present notable advantages in reducing CO\({}_{2}\) emissions when compared to conventional petrol and diesel cars. Nevertheless, they still emit notably more CO\({}_{2}\) than electric vehicles on average for the 8 selected regional flight routes.
### CO2 emission eVTOL vs. train
In this section, the results of the comparison of CO\({}_{2}\) emissions from energy demand between eVTOLs and public transportation at urban and regional level are presented. The following figure shows the total CO\({}_{2}\) emissions at urban level between multicopter eVTOLs and public transportation (average value of metro, tram and bus, normalized to 2 PAX).
From the quantitative results it is evident that the total CO\({}_{2}\) emissions generated by eVTOLs are considerably higher than those produced by public transportation.
The following figure presents a percentage comparison of CO\({}_{2}\) emissions between multicopter eVTOLs and public transportation.
The CO\({}_{2}\) emission results show that at urban level an eVTOL, compared to public transportation normalized to 2 passengers, exhibits considerably higher CO\({}_{2}\) emissions. For the flight route from Place de la Concorde to Charles de Gaulle Airport, eVTOLs produce approximately 226% more CO\({}_{2}\) emissions. Similarly, for the route to the Eiffel Tower, eVTOLs generate nearly 362% more CO\({}_{2}\), i.e., roughly 3.6 times higher emissions, which remains notably high. On average across the selected urban routes for one passenger, a multicopter eVTOL produces about 2.6 times more CO\({}_{2}\) emissions compared to public transportation.
In the context of the regional use case, the comparison between lift-and-cruise eVTOLs and electric trains unequivocally demonstrates that eVTOLs emit notably more CO\({}_{2}\) than an electric train with 2 passengers.
Figure 16: Use Case: UAM – total CO\({}_{2}\) emission comparison multicopter vs. public transportation
Figure 17: Use Case: UAM – percentage CO\({}_{2}\) emission comparison multicopter vs. public transportation
Figure 15: Use Case: RAM – average percentage CO\({}_{2}\) emission comparison eVTOL vs. car
This difference in CO\({}_{2}\) emissions highlights the substantial environmental advantage of electric trains over lift-and-cruise eVTOLs for regional travel. Electric trains offer a much cleaner and greener alternative, substantially reducing CO\({}_{2}\) emissions and contributing to a more sustainable transportation system.
### CO2 emission eVTOL vs. helicopter
The following results show the CO2 emissions per km and passenger of all investigated helicopter types and UAM vehicle configurations.
The R66 has the lowest CO2 emissions of any helicopter with 0.17 kg/passenger-km. Due to the turbine engine and the higher start-up fuel consumption of the R66, the R44 with 0.21 kg/km is used for the subsequent investigations as being more representative. The H135 has the highest fuel consumption per passenger and flight distance, while offering considerably higher redundancy, safety and flight performance compared to the R66. Among the UAM configuration types, the quadcopter has the highest CO2 emissions with 0.66 kg/km, while the vectored-thrust and tilt-rotor concepts have the lowest with 0.08 kg/km each.
Considering the additional energy demand for take-off and landing, the multicopter is used as the representative for UAM flights in the subsequent investigations, as it is the best set-up for short flights. This type avoids the additional transition phase required by other eVTOL types.
The following figure illustrates the equivalent CO2 emissions for a multicopter concept compared to the average emissions of seven selected helicopters for urban use cases. It is clear that the fuel consumption during run-up results in higher CO2 emissions for the selected helicopters. For the shortest route from Place de la Concorde to the Eiffel Tower, the average emission from the helicopters is around 0.57 kg CO2 (eq), resulting in a 12% reduction in CO2 emissions with the multicopter. For the longest UAM-distance to Charles de Gaulle Airport, the helicopters emit around 8.55 kg CO2 (eq) per passenger on average, whereas the multicopter indicates a 51% potential reduction in CO2 emissions.
The following figure illustrates the equivalent CO2 emissions for a lift-and-cruise concept compared to the average emissions of seven selected helicopters for regional use cases. For the shortest route from Place de la Concorde to Beauvais, the average emission from the helicopters is around 20.91 kg CO2 (eq), resulting in a 75% reduction in CO2 emissions with the lift-and-cruise eVTOL. For the longest RAM-distance to Calais, the helicopters emit around 72.38 kg CO2 (eq) per passenger on average, whereas the lift-and-cruise eVTOL indicates a 77% potential reduction in CO2 emissions.
None of the investigations consider the efficiency advantage of the helicopter when cabin heating or defrosting is required: this will cost additional electric energy for the UAM concept, while the helicopter can use the engine heat exchanger.
## 5 Discussion
Several key elements will be discussed with respect to the sustainability of air taxis. Firstly, this paper acknowledges that while the analysis focused on time efficiency and CO\({}_{2}\) emissions of air taxis, sustainability encompasses various factors beyond these two aspects. Additional sustainability factors such as noise pollution, land usage for vertiports, or the overall infrastructure impact should be considered to provide a comprehensive evaluation of the overall sustainability of air taxis.
Secondly, from the perspective of time saving, it is necessary to discuss the assumption of simplified vertiport operations and crucial factors such as boarding and de-boarding times. In real-world scenarios, the time taken for passengers to reach the vertiport from their location (door-to-vertiport time) and the time required for boarding and de-boarding procedures can substantially impact the overall travel time. Therefore, a thorough approach, encompassing the entire travel process from the passenger's point of origin to the final destination, is essential to provide a more accurate and realistic assessment of the time-saving potential of eVTOLs in urban and regional transportation settings.
Thirdly, the paper suggests that a more in-depth comparison of CO\({}_{2}\) emissions could be achieved through a detailed analysis of the entire Product Life Cycle, which would consider emissions across all stages of an air taxi's life, from manufacturing and operation to end-of-life disposal. This comprehensive approach would provide a more detailed understanding of the environmental impact of air taxis and aid in identifying areas for improvement.
Another crucial point raised is the limited scope of the analysis, which only considered the multicopter and lift-and-cruise configurations. To provide a more holistic picture, the paper suggests including various configurations, such as tilt-rotors and vectored thrust, in the calculations and comparing their performance. Different flight models may exhibit varying energy efficiencies and emissions profiles, which could influence the overall sustainability assessment.
Additionally, an essential aspect to consider is that the energy demand of eVTOLs was only calculated for a simplified mission profile (cf. Fig 5) for the two scenarios UAM and RAM, respectively. To obtain a more realistic representation, further calculations should account for various operating conditions, such as loiter times and flight altitudes. Also, redundancy, flight safety and reliability were not considered in this investigation. The data shows that these performance indicators have an essential impact on the fuel consumption of helicopters. Another factor that was not considered is winter operations capability or all-weather capability. It is anticipated that under these conditions, the energy demand will be considerably higher, consequently leading to increased CO\({}_{2}\) emissions.
Furthermore, the paper underscores the significance of tailoring CO\({}_{2}\) emission calculations to reflect the specific energy mix of the country in which the flight routes are situated. The choice to utilize the average European electricity mix in the analysis was driven by the aim to establish a broad foundation that aligns with the overarching integration of UAM into major urban areas across Europe. This approach facilitates a comprehensive understanding of the potential sustainability impact of UAM on a pan-European scale, guiding policy decisions and frameworks for UAM adoption in diverse metropolitan contexts. However, the paper recommends a more refined approach by utilizing the distinct French electricity mix, boasting a lower CO\({}_{2}\) intensity of 58 g/kWh, for flights conducted within France. This adaptable approach acknowledges the variance in energy sources between countries, ensuring the accuracy of sustainability data tailored to regional nuances. The dynamic nature of these energy sources also underscores the influence of contemporary political and environmental contexts.
These points emphasize the need for a broader analysis and more realistic operating conditions to obtain a comprehensive understanding of the sustainability and environmental impact of eVTOLs.
The paper brings attention to key considerations for a comprehensive evaluation of the sustainability of air taxis. It emphasizes the need to explore and include various sustainable factors beyond time efficiency and CO\({}_{2}\) emissions, conduct product-lifecycle analysis, broaden the scope of flight models, and customize emission calculations based on the specific energy mix of the region. Addressing these points would lead to a more nuanced understanding of the environmental impact of air taxis and pave the way for promoting sustainable aviation solutions.
## 6 Conclusion
In this paper, the time savings and CO\({}_{2}\) emissions of two eVTOL configurations for urban and regional transportation were evaluated. The analysis was compared to conventional transportation modes such as cars, public transportation and helicopters. In the realm of CO\({}_{2}\) emissions, the analysis uncovers a notable divergence between urban and regional scenarios.
Urban eVTOL operations aim to offer a greener and more efficient alternative to conventional transportation. However, due to their elevated energy consumption, they are less ecologically friendly than certain other modes of transportation. Specifically, for the UAM use case, a Multicopter emits 1.65 kg of CO\({}_{2}\) equivalent per person on an average UAM mission. When compared to other means of transport, the Multicopter consumes 47% less than the average conventional helicopter, making it a more environmentally conscious choice for urban air mobility. However, it does consume 13% more than a petrol car, 19% more than a diesel car, 92% more than a hybrid car, 143% more than metro-bus-tram, and 256% more than an electric car.
In a regional context, eVTOLs, such as the Lift-and-Cruise 1, showcase substantial CO\({}_{2}\) reductions compared to average conventional helicopters, petrol, diesel, and hybrid vehicles. Specifically, the Lift-and-Cruise 1 eVTOL consumes 77% less CO\({}_{2}\) equivalent than the average conventional helicopter, 46% less than a petrol car, 44% less than a diesel car, and 9% less than a hybrid car. This highlights the favourable CO\({}_{2}\) balance of eVTOLs relative to these conventional transportation modes. However, it is crucial to note that eVTOLs emit significantly more CO\({}_{2}\) than electric alternatives on the selected regional routes: 68% more than electric vehicles and 96% more than electric trains.
Consequently, while the adoption of eVTOLs can significantly reduce the carbon footprint compared to using conventional helicopters and certain ground vehicles, it is essential to consider the broader spectrum of available transportation modes, as ground-based electric vehicles and public transportation still offer lower CO\({}_{2}\) emissions for urban and regional transit.
The following key findings can be highlighted:
* eVTOLs can save more than 20 min on average compared to cars and public transportation at the urban level (<50 km)
* eVTOLs can save substantial time at the regional level (< 300 km): around 76 minutes compared to cars and 69 minutes compared to trains
* electric VTOL concepts can reduce the operational CO\({}_{2}\) emissions compared to combustion engine driven helicopter flights
* eVTOLs are more environmentally friendly than helicopters but not than other transport options in urban areas, as they consume more energy
* eVTOLs can save CO\({}_{2}\) emissions compared to combustion engine cars on regional routes
* Electric trains are the most environmentally friendly alternative for regional transport
These results highlight the need to continue research and development to improve the environmental performance of eVTOL technology and prioritize the introduction of greener transportation alternatives to reduce the environmental impact of greenhouse gas emissions. The most obvious impacts are energy requirements during takeoff, startup, and shutdown, especially for UAM flights, and lower CO\({}_{2}\) emissions during cruise for regional flights.
As battery technology advances, eVTOLs are likely to become increasingly efficient, paving the way for a cleaner and more sustainable mode of transportation in the future. However, it is also prudent to focus on renewable energy sources. In summary, the carbon footprint of UAM vehicles is largely dependent on the type of energy source used to power them.
Efforts to promote and invest in efficient transportation and greener technologies have an important role to play in achieving a more sustainable and environmentally conscious future.
## Acknowledgement
The research presented in this paper is part of the research activity on the project HorizonUAM carried out by the Department of Unmanned Aerial Systems (UAS) at the Institute of Flight Guidance by the German Aerospace Centre (DLR).
I extend my gratitude to Atul Kumar, Nicolas Brieger, Markus Engelhardt, Thuysi Dao, and Venuska Mazza Rodrigues Dias for their valuable contributions to this paper. I would also like to express my appreciation to all other participants who engaged in discussions that contributed to shaping this critical narrative, as their collective input was essential in developing a comprehensive understanding of the complexities and implications surrounding AAM and eVTOLs.
## Competing Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Figure 22: CO\({}_{2}\) emissions comparison for RAM: Lift-and-cruise vs. alternatives |
2309.13170 | Investigating Efficient Deep Learning Architectures For Side-Channel
Attacks on AES | Over the past few years, deep learning has been getting progressively more
popular for the exploitation of side-channel vulnerabilities in embedded
cryptographic applications, as it offers advantages in terms of the amount of
attack traces required for effective key recovery. A number of effective
attacks using neural networks have already been published, but reducing their
cost in terms of the amount of computing resources and data required is an
ever-present goal, which we pursue in this work. We focus on the ANSSI
Side-Channel Attack Database (ASCAD), and produce a JAX-based framework for
deep-learning-based SCA, with which we reproduce a selection of previous
results and build upon them in an attempt to improve their performance. We also
investigate the effectiveness of various Transformer-based models. | Yohaï-Eliel Berreby, Laurent Sauvage | 2023-09-22T20:16:40Z | http://arxiv.org/abs/2309.13170v1 | # Investigating Efficient Deep Learning Architectures For Side-Channel Attacks on AES
###### Abstract
Over the past few years, deep learning has been getting progressively more popular for the exploitation of side-channel vulnerabilities in embedded cryptographic applications, as it offers advantages in terms of the amount of attack traces required for effective key recovery. A number of effective attacks using neural networks have already been published, but reducing their cost in terms of the amount of computing resources and data required is an ever-present goal, which we pursue in this work.
This project focuses on the ANSSI Side-Channel Attack Database (ASCAD). We produce a JAX-based framework for deep-learning-based SCA, with which we reproduce a selection of previous results and build upon them in an attempt to improve their performance. We also investigate the effectiveness of various Transformer-based models.
Deep Learning Cryptography Side-Channel Attacks Profiling Attacks
## 1 Introduction
### Context: Side-Channel Attacks
Side-Channel Attacks (or SCA) are a class of cyberattacks that exploit weaknesses specific to implementations of would-be secure systems. For information recovery, they may rely on correlation between the targeted data and variations in timing [14], power consumption and electromagnetic (EM) emissions [15], sound emission [16], and other characteristics of a system. They may rely on the system's failure behavior, through Differential Fault Analysis [13]. If a suitable side-channel vulnerability exists in its implementation(s), even a fully theoretically-secure algorithm can be cracked.
The implementer of a system may try to minimize information leakage through _countermeasures_. These may include performing sensitive operations in constant time; always executing the same code regardless of the input fed to the system; avoiding the direct manipulation of sensitive data through _masking_; etc. They may also include physical defense mechanisms, such as EM shielding.
Side-channel attacks may be _profiling_ or _non-profiling_, the difference being that the attacker has access to a copy of the target device in the _profiling_ case. In this work, we will focus on _profiling_ attacks, into which machine learning techniques have been making headway. In particular, neural networks have been studied due to their ability to recover information even from highly-protected implementations.
As new attacks are published, increasingly-effective countermeasures are developed, rendering further attacks more costly both in terms of computing power required to train the models, and in terms of the amount of data that must be collected in order to apply them. As such, enhancing the information-recovery capabilities of neural networks used for SCA is of major interest.
So far, in practice, even neural networks are typically unable to reliably recover the correct value of a given key byte from a single trace. As such, guesses made over multiple traces are combined to obtain a more reliable one. The
minimum number of traces required for reliable recovery of the target variable is commonly referred to as "guessing entropy" in the literature.
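To make this combination step concrete, the sketch below accumulates a network's per-trace log-probabilities over all 256 key-byte hypotheses and tracks the rank of the correct key; the helper `intermediate` (mapping a key hypothesis and a plaintext byte to the predicted label, e.g. an S-box output) and all variable names are illustrative assumptions rather than part of any particular attack code base.

```python
import numpy as np

def key_ranks(log_probs, plaintexts, true_key, intermediate):
    """Combine per-trace guesses by accumulating log-likelihoods per key hypothesis.

    log_probs   : (n_traces, 256) array of log-softmax network outputs
    plaintexts  : (n_traces,) known plaintext bytes
    true_key    : the correct key byte (known when evaluating on the attack set)
    intermediate: callable mapping (key_hypothesis, plaintext_byte) -> label in [0, 255]
    Returns the rank of the correct key after each additional trace.
    """
    scores = np.zeros(256)
    ranks = []
    for lp, p in zip(log_probs, plaintexts):
        for k in range(256):
            scores[k] += lp[intermediate(k, p)]
        order = np.argsort(scores)[::-1]            # hypotheses sorted by accumulated score
        ranks.append(int(np.where(order == true_key)[0][0]))
    return ranks

# The guessing entropy is then estimated by averaging such rank curves over many
# random subsets/orderings of attack traces and reading off the number of traces
# needed for the average rank of the true key to reach zero.
```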
In this project, we focus on power consumption and EM traces. We use the ASCAD project, described below, as a source of data and as a starting point.
### Overview of ASCAD
ASCAD (ANSSI SCA Database) [1] is a collection of power trace databases for side-channel attacks. Since the introduction of its first version, it has enjoyed significant popularity in the SCA community as a benchmark for deep-learning-based attacks.
Thus far, it comprises two main versions, both targeting AES and collected on different microcontrollers and AES implementations. The project comes with a set of pre-trained models targeting each database, as well as code to train them.
Each set of databases is provided with "raw" traces, covering some portion of the encryption/decryption process, and "extracted" traces, covering a subset of the former which is known to leak relevant information. Extracted traces come with precomputed _labels_, corresponding to intermediate variables manipulated during the encryption/decryption process from which the key can be recovered. It is significantly easier to recover a well-chosen intermediate variable then post-process it to compute the key than to try to recover the key directly.
Traces may be synchronized or desynchronized. In the synchronized case, a given timestep always corresponds to the same instant in the encryption or decryption process, whereas such an instant may be represented at different time steps in the desynchronized case. Desynchronization implies the need for some degree of shift invariance or equivariance in the network's feature extraction process.
#### 1.2.1 ASCADv1 - Implementation on ATMega8515
Described in [1], ASCADv1 has two campaigns (themselves ambiguously named v1 and v2 - we refer to them as "fixed key" and "variable key") targeting a software AES implementation on ATMega8515 [1], which uses boolean masking.
The fixed-key campaign uses the same key for profiling and attack sets, whereas the variable-key campaign uses a random key for profiling and a fixed one for attack.
They are structured as shown in Table 1.
The README file for the variable-key campaign mentions that the traces are "not synchronized". The traces do appear synchronized _to some degree_; the magnitude of the desynchronization is not made explicit.
Additionally, the variable key dataset exhibits significant qualitative differences from the fixed key dataset, as one can see on Figure 1.
These differences were not clearly documented: no mention of the variable-key dataset was made in [1], and no explanation was provided in the repository.
They were noticed by other researchers, but clarification thus far has been incomplete [Com] regarding the precise method of measurement for each campaign. Nevertheless, the difference in goals behind each campaign is to be noted:
"The fixed key campaign was measured with a strong incentive to get a clean signal in order to make sure that simple attacks could be performed, while this effort was not stressed in the random key campaign, which aims at being more challenging. This explains the notable difference of sampling rate and signal quality that you observed."
- @rb-anssi (Ryad Benadjila)
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Version** & **N\({}^{\circ}\) Profiling traces** & **N\({}^{\circ}\) Attack traces** & **SPET** & **SPRT** \\ \hline v1 / fixed key & 50,000 & 10,000 & 700 & 100,000 \\ \hline v2 / variable key & 200,000 & 100,000 & 1400 & 250,000 \\ \hline \end{tabular}
\end{table}
Table 1: ASCADv1 (ATMega) datasets (SPET/SPRT = Samples Per Extracted/Raw Trace)
The lack of clarity around the nature of the datasets, along with the relative scarcity of papers exploiting the variable-key one, represented a significant source of confusion at the beginning of this project. It was originally incorrectly assumed that published attacks focused on the variable-key dataset, as the performance of a network on the fixed-key dataset said little about its ability to generalize to different keys.
#### 1.2.2 ASCADv2 - Implementation on STM32F303RCT7
Described in [14], ASCADv2 has a single campaign, with a total of 800,000 traces. The "extracted" dataset features a profiling set of 500,000 traces using random keys, and an attack set of 10,000 traces using random keys as well. The paper claims 700,000 profiling traces, 20,000 validation traces and 50,000 test traces, but the data linked on the project's GitHub appears to be different from the one it describes.
The AES implementation it targets [15] uses affine masking [16] as a side-channel countermeasure instead of the simpler boolean masking technique used on the ATMega8515.
Except for data exploration, our tests were not applied to ASCADv2 over the course of this project. We had initially hoped to do so, but ASCADv1 by itself presented difficulties that we wanted to resolve first.
Figure 1: Comparison of traces between ASCAD fixed-key and variable-key ATMega datasets
### Prior work on ASCAD
Ever since ASCAD's introduction, there has been a notable body of research on efficient architectures for attacks on the database, in addition to the ones put forth in the original paper. We chose to focus on a handful of them, described below.
#### 1.3.1 Original ASCADv1 networks
[1] explored Multi-Layer Perceptrons (MLPs), and several Convolutional Neural Network (CNN) architectures: VGG-16 [15], ResNet-50 [16] and Inception-v3 [15].
CNNs and MLPs had comparable performance in the synchronized case, but CNNs vastly outperformed MLPs in the presence of desynchronization, owing to the translation equivariance of convolutions. Among CNNs, a VGG-16-inspired network had the best accuracy.
Training setup:
* Optimizer: RMSProp
* Learning rate schedule: constant at \(10^{-5}\)
* Preprocessing: none (unscaled, uncentered traces)
#### 1.3.2 "Methodology for Efficient CNN Architectures in Profiling Attacks"
[15] put forth a disciplined methodology to build efficient CNNs for profiling attacks. It uses relatively shallow architectures, with short filters, leverages batch normalization [14] and a one-cycle learning rate schedule.
In parallel, the authors used gradient [13] and activation visualization to pinpoint the timesteps considered to be of interest by the network, and compared them to Signal-to-Noise Ratio (SNR) analyses on intermediate variables of interest.
For ASCADv1, its best network in the synchronized case has 3,930 times fewer parameters than the previous state of the art, and has a greater information recovery capability, requiring 191 traces for a zero-entropy key recovery, against a previous best of 1,146. Only the fixed-key dataset was investigated by the authors.
Training setup:
* Optimizer: Adam [14]
* Learning rate schedule: linear one-cycle with maximum of \(10^{-3}\)
* Preprocessing: point-wise (synchronized) / none (desynchronized)
#### 1.3.3 "Pay Attention to Raw Traces: A Deep Learning Architecture for End-to-End Profiling Attacks"
The approach proposed in [13] (P.A.R.T.) sets itself apart from others by being able to take "raw" traces as input - that is, traces covering a significant part of the encryption/decryption cycle, and not just a small window in which there is high information leakage. In that respect, it does away with the need for the selection of Points Of Interest (POIs) prior to network training and evaluation.
In order to be able to achieve that goal without running into the so-called "curse of dimensionality" and the accompanying explosion in resource consumption, P.A.R.T. first employs a so-called "junior encoder", which encodes overlapping time windows with a width of 1-2 clock cycles into a much shorter sequence:
**Synchronized setting.** Two sequences of time windows are encoded by locally-connected layers with stride equal to their width, a single filter and an offset of half their width between them. They are then concatenated. This significantly reduces the dimensionality of the output fed to subsequent layers.
**Desynchronized setting.** A stride of 1 is used for the first layer, which leads to no dimensionality reduction besides that ensuing from the lack of padding. This reduction is achieved by stacking convolutional layers with kernels of length 3 and a stride of 1 still, and max-pooling layers. The final number of channels is 128. Data augmentation is applied by randomly shifting the input in addition to its original desynchronization.
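As an illustration of this kind of shift-based augmentation, a minimal JAX sketch is shown below; the trace length, window size and maximum shift are placeholder values, not the ones used in the original work.

```python
import jax
import jax.numpy as jnp

def random_shift(rng, trace, max_shift, window):
    """Crop a fixed-size window from a longer trace at a random offset,
    emulating additional desynchronization between traces."""
    offset = jax.random.randint(rng, shape=(), minval=0, maxval=max_shift + 1)
    return jax.lax.dynamic_slice(trace, (offset,), (window,))

# Example: a batch of 16 traces of length 10000, cropped to windows of length 8000
rng = jax.random.PRNGKey(0)
traces = jnp.zeros((16, 10000))
rngs = jax.random.split(rng, traces.shape[0])
augmented = jax.vmap(random_shift, in_axes=(0, 0, None, None))(rngs, traces, 2000, 8000)
```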
After the junior encoder, a "senior encoder" is used to combine features across time, using bidirectional LSTMs. It is followed by a simple multiplicative attention mechanism and a feed-forward classifier.
Once fully converged, the authors' networks can recover a key from both datasets in under 10 traces.
N.B.: In this paper, the ASCADv1 (ATMega) fixed-key dataset is described as "ASCAD v1", and the variable-key dataset as "ASCAD v2".
#### 1.3.4 Original ASCADv2 network
The neural network proposed in [14] is a derivative of ResNet [10]. It uses multi-task learning to recover information about the masking parameters and about each key byte simultaneously, with a common trunk for feature extraction followed by branches specific to each predicted variable.
### Transformer and derivatives
The introduction of the Transformer architecture in [13], whose fundamental characteristic is the pervasive use of attention, represented a major leap forward in deep learning. Since then, its derivatives have been successfully applied to Natural Language Processing (NLP) [15, 16, 17, 18], Computer Vision [19, 20], speech recognition [19, 2], symbolic computation [14], etc., regularly producing new state-of-the-art results. The vast flexibility of this family of architectures made it a natural candidate for evaluation in the SCA context, which, to our knowledge, had not been done before.
Transformer models take a (potentially variable-length) sequence of _tokens_ as input, and return a sequence with the same shape. To perform classification, this sequence may then be aggregated into a single token, for example through global average pooling, or through extraction of the embedding corresponding to a special, learned token, such as BERT's [15][18].
Below, we detail some of the considerations specific to Transformers.
### Positional encoding
By itself, the attention mechanism cannot distinguish between positions in the input sequence. As such, additional features, called _positional encoding_, are typically added or concatenated to the input embeddings before they are fed to a Transformer trunk. They may be learnable, or deterministic, for example using Fourier features as in the original Transformer paper.
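A minimal sketch of such deterministic Fourier features for a 1-D trace is given below; the number of bands, the frequency range and the normalization are illustrative choices rather than the exact scheme of any specific paper.

```python
import jax.numpy as jnp

def fourier_features(length, num_bands, max_freq):
    """Deterministic sine/cosine positional features for a 1-D sequence.

    Returns an array of shape (length, 2 * num_bands) that can be concatenated
    to (or, if dimensions match, added to) the input embeddings."""
    pos = jnp.linspace(-1.0, 1.0, length)[:, None]              # normalized positions
    freqs = jnp.linspace(1.0, max_freq / 2.0, num_bands)[None, :]
    angles = jnp.pi * pos * freqs
    return jnp.concatenate([jnp.sin(angles), jnp.cos(angles)], axis=-1)

# e.g. 64 bands for a 1400-sample extracted trace -> a (1400, 128) positional array
pe = fourier_features(1400, num_bands=64, max_freq=1400)
```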
A significant number of such features may be required for the model to accurately leverage positional information; it is typically on the order of a few hundreds. This represents a significant overhead if the input tokens have low dimensionality.
Some models, such as CvT [21] or Audiomer [19], do away with positional encodings altogether by leveraging the spatially-aware inductive bias of convolutions, applying them to tokens repeatedly within the Transformer trunk.
#### 1.5.1 Training setup
A major drawback of Transformer models thus far has been their high cost of training. Though they do not necessarily have more trainable parameters than comparable alternatives, the most popular Transformers do, and were designed with very-large-scale training setups in mind. Derivative works rely on fine-tuning more often than not, due to the unfeasibility of training those models from scratch on a modestly-sized infrastructure.
One reason for this high cost is the amount of data required by Transformers to generalize well, which is typically extremely large: BERT [18] was pretrained on 3.3 billion words, and GPT-3 [18] on nearly 500 billion tokens; ViT [19] performs much worse than ResNet when pretrained on a dataset of 9 million images, but better when over 90 million images are used. This may be explained by the flexibility of the attention mechanism, which is both a strength and a weakness, as it lacks the inductive bias that led CNNs to revolutionize computer vision in the early 2010s.
To train Transformers in the face of a small dataset, data augmentation is nearly a requirement. While various modalities of data augmentation have been extensively studied and implemented in the vision and audio subfields of machine learning, they may be delicate to implement in the SCA context - beyond simple desynchronization - without destroying the relevant information carried by traces.
Additionally, Transformers are trained with adaptive optimizers such as Adam [14], LARS [21] or LAMB [20]. Use of a learning rate schedule including warm-up is a necessity to ensure stability, and proper hyperparameter tuning is recommended. Detailed training guidelines may be found in [17].
#### 1.5.2 Taming quadratic complexity
Another reason for Transformers' high training cost is the \(O(n^{2})\) time and space complexity of the self-attention mechanism with respect to the length of the input sequence, which typically renders it impractical for sequences longer than a few hundred or a few (low single digits) thousand entries.
A variety of methods exist to work around the latter problem; we describe them below.
**Artificial reduction of the sequence length.** This is the standard, most common workaround. For signal processing, this may be done by splitting the input into potentially-overlapping patches of a set size [15], often with additional preprocessing such as Short-Time Fourier Transforms (STFTs) [16, 17].
**Performing self-attention on a fixed-size latent array.** Perceiver [18] and Perceiver IO [18] by DeepMind use a novel architecture, wherein the model's trunk transforms a latent array whose initial value is learned, and whose size is independent of the input sequence. Cross-attention is performed between the input and the latent array several times across a forward pass, but less often than self-attention on the latent array. The resulting model has \(O(m(m+n))\) complexity, where \(m\) is the length of the latent array and \(n\) the length of the input. This architecture is particularly relevant for multi-modality processing (video, audio, sound) with a single model, as modalities can be distinguished using their additional dimensions in the embeddings of the corresponding data.
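The central trick can be sketched as a single-head cross-attention step in which a short latent array attends to a long input sequence, so the cost scales with \(m\times n\) rather than \(n^{2}\); all dimensions and weight matrices below are placeholders.

```python
import jax
import jax.numpy as jnp

def cross_attention(latents, inputs, w_q, w_k, w_v):
    """One single-head cross-attention step: m latent tokens attend to n input
    tokens, so this step costs O(m * n) rather than O(n^2).

    latents: (m, d_lat), inputs: (n, d_in)
    w_q: (d_lat, d), w_k: (d_in, d), w_v: (d_in, d_v)"""
    q = latents @ w_q                               # (m, d)
    k = inputs @ w_k                                # (n, d)
    v = inputs @ w_v                                # (n, d_v)
    scores = (q @ k.T) / jnp.sqrt(q.shape[-1])      # (m, n)
    weights = jax.nn.softmax(scores, axis=-1)
    return weights @ v                              # (m, d_v): updated latent array
```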
**Reducing the complexity of the attention mechanism itself.** This approach has notably been proposed in the Reformer [19] and Performer [15] papers. Reformers use a memory-efficient training process by computing attention scores separately for each query, to avoid storing a full attention matrix, and leverage locality-sensitive hashing to only compute attention scores against queries that are relatively close to a given value; they achieve quasi-linear (\(O(n\log n)\)) complexity. Performers approximate kernelizable attention mechanisms using a method called FAVOR+ (_Fast Attention Via Positive Orthogonal Random Features_) and achieve \(O(n)\) complexity. In addition to its linear complexity, FAVOR+ has the advantage of being a drop-in replacement for regular attention, with no adjustments required besides switching out the attention function. Note that the Audiomer models [14] use FAVOR+ in order to be computationally tractable.
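To illustrate the general idea behind linearized attention (here with a simple positive feature map rather than FAVOR+'s positive orthogonal random features), note that the \(n\times n\) attention matrix never needs to be materialized:

```python
import jax
import jax.numpy as jnp

def linear_attention(q, k, v, feature_map=lambda x: jax.nn.elu(x) + 1.0):
    """Kernelized attention in O(n) time and memory.

    q, k: (n, d), v: (n, d_v). Instead of softmax(q k^T) v, compute
    phi(q) [phi(k)^T v] with a positive feature map phi, so the (n, n)
    attention matrix is never formed."""
    q, k = feature_map(q), feature_map(k)
    kv = k.T @ v                                    # (d, d_v), built in O(n)
    normalizer = q @ k.sum(axis=0)                  # (n,)
    return (q @ kv) / (normalizer[:, None] + 1e-6)
```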
### Discussion and project direction
As mentioned before, we decided to evaluate the effectiveness of Transformer models in the SCA context. After the initial literature review, the project's initial aim was to build upon P.A.R.T. [13], replacing the senior encoder's LSTMs with Transformer blocks using FAVOR+ attentions to achieve linear complexity, and adapting the architecture's tenets to ASCADv2.
We decided to proceed using JAX [16], which is being positioned by Alphabet as a long-term replacement for Tensorflow, offers significant flexibility due to its ability to perform automatic differentiation (autodiff) on Numpy-like Python code, and delivers high performance thanks to Just-In-Time (JIT) compilation leveraging the XLA compiler. Since JAX, by itself, doesn't offer common neural network primitives such as batch normalization, convolutional layers, trainable parameter representation, etc., we used DeepMind's Haiku library [15] on top of it. Haiku is a thin abstraction layer, and is completed by composing parts of the accompanying ecosystem; for example, optimizers are not included in Haiku, and may be found in Optax [16]. This compositional, build-as-you-go setup involved significantly more friction than using Keras/Tensorflow, and the quickly-evolving, young nature of the JAX ecosystem meant that documentation was often scarce or outdated. In counterpart, this proved to be an excellent learning opportunity, and it was possible to directly leverage Google/DeepMind's latest research projects.
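For illustration, a minimal Haiku/Optax training step in this compositional style might look as follows; the network size, optimizer settings and trace length are arbitrary placeholders, not the configurations used in our experiments.

```python
import haiku as hk
import jax
import jax.numpy as jnp
import optax

def forward(traces):
    # toy classifier over the 256 possible values of the intermediate variable
    return hk.nets.MLP([128, 128, 256])(traces)

net = hk.without_apply_rng(hk.transform(forward))
opt = optax.adam(1e-3)

def loss_fn(params, traces, labels):
    logits = net.apply(params, traces)
    return jnp.mean(optax.softmax_cross_entropy(logits, jax.nn.one_hot(labels, 256)))

@jax.jit
def train_step(params, opt_state, traces, labels):
    loss, grads = jax.value_and_grad(loss_fn)(params, traces, labels)
    updates, opt_state = opt.update(grads, opt_state, params)
    return optax.apply_updates(params, updates), opt_state, loss

params = net.init(jax.random.PRNGKey(0), jnp.zeros((1, 700)))
opt_state = opt.init(params)
```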
Regarding the training environment, some of the projects we intended to build upon incurred significant training overhead (we measured 30 min / epoch on a single NVIDIA V100 GPU for an ASCADv2 CNN, with training being prolonged to 300 epochs in the corresponding paper, and Transformers are known to be costly to train and memory-hungry). As such, parallel training on powerful GPUs was a must. To this end, our team secured a resource allocation of 10,000 GPU hours on the Jean Zay supercomputer, part of the CNRS's IDRIS scientific computing platform. This allowed us to train on up to eight 32GB V100 GPUs per node simultaneously, with an upper limit of forty GPUs used simultaneously. Access was only fully acquired in early December; before then, training was conducted on a single NVIDIA GTX 1660 Ti.
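Data-parallel training across the GPUs of a node then amounts to averaging gradients over devices; a minimal sketch with `jax.pmap` (reusing the `net`, `loss_fn` and `opt` objects from the previous sketch, and assuming parameters are replicated and batches are sharded along a leading device axis) is shown below.

```python
import functools
import jax
import optax

@functools.partial(jax.pmap, axis_name="devices")
def parallel_train_step(params, opt_state, traces, labels):
    loss, grads = jax.value_and_grad(loss_fn)(params, traces, labels)
    # average gradients (and the reported loss) over all participating GPUs
    grads = jax.lax.pmean(grads, axis_name="devices")
    loss = jax.lax.pmean(loss, axis_name="devices")
    updates, opt_state = opt.update(grads, opt_state, params)
    return optax.apply_updates(params, updates), opt_state, loss

# params and opt_state are replicated over devices, and each batch is sharded as
# traces.shape == (jax.local_device_count(), per_device_batch, trace_length).
```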
## 2 Our contribution
A significant portion of the time allocated to this project was dedicated to becoming familiar with its foundations: Side-Channel Attacks, the ASCAD project and surrounding literature, the JAX/Haiku ecosystem, Transformer models and their variations, the Jean Zay scientific computing platform, parallel training, regularization techniques, etc.
In addition to this foundational work, we ran experiments using the following architectures (all reimplementations and adaptations were performed over the course of this project):
* Reimplementation of [Ben+20]'s best synchronized CNN architecture (VGG-16-based) in JAX from Tensorflow
* Reimplementation of [Zai+19]'s best synchronized architecture in JAX from Tensorflow, and experiments with variations of it
* Adaptation of Perceiver [Jae+21b] / Perceiver IO [Jae+21a]
* Adaptation of Performer [Cho+21]
* Adaptation of Audiomer [Sah+22] in PyTorch (too time-consuming to port to JAX)
Across these experiments, we tried various learning rate schedules, preprocessing schemes, input embeddings, optimizers (notably RMSProp, Adam and LAMB), and optimizer hyperparameters. They were carried out primarily using, and in parallel with the development of, the deep-learning software project associated with this work. In addition to code specific to the experiments listed above, it involved developing:
* Training, preprocessing & augmentation utilities, among which:
* A learning rate finder, inspired by [Smi18]
* Multi-GPU training with gradient averaging
* Gradient visualization
* Stochastic Weight Averaging [Izm+19]
* Model checkpointing
* Trace preprocessing (point-wise and global)
* Training progress monitoring & associated plots
* A reproduction of the Signal-to-Noise ratio analyses shown in [Ben+20], whose code was not included in paper's repository
* An exploratory notebook to highlight the properties of Fourier positional encoding
* Various scripts to train models on the Jean Zay platform
We list some of our findings below.
### Reproducing ASCADv1
One of the first things we noticed while trying to reproduce [Ben+20]'s results was how difficult it was for the network to converge.
We initially thought this was a problem with our code, but even with the authors' original code, the categorical cross-entropy loss goes from 5.5451 on the training set and 5.5452 on the validation set at epoch 5, to 5.5448 on the training set and _5.5453_ on the validation set at epoch 20.
After prolonging training runs, we found that the original network usually experienced a sharp drop in the loss around the 30th epoch, in spite of the learning rate remaining unchanged. However, we did not witness such a drop in our JAX reimplementation until we added batch normalization [IS15] within convolutional blocks and in the final classifier. This may be due to minute differences in the implementations of the RMSProp optimizer between Optax and Tensorflow, coupled with the network's inherent difficulty to converge on this task.
### Reproducing [Zai+19]
As previously mentioned, [Zai+19] made no mention of the ASCADv1 variable-key dataset. We found that, unlike the original ASCADv1 VGG-based network, the authors' best network for the fixed-key dataset did not converge on
the variable-key dataset, likely owing to its very low complexity, which may be insufficient to model the interactions between different key bytes and the target variable.
After reimplementing this network within our framework, we experimented with the learning rate schedule. We quickly noticed that even minute changes in learning rate at various stages of training would greatly affect the outcome - sometimes preventing convergence altogether.
Since the appropriate learning rate is affected by the dataset and architecture, we decided to implement a learning rate finder, in order to be able to get LR upper bounds to use with custom schedules. We could not find a ready-made JAX implementation, so we wrote our own. It is loosely based on the approach described in [16], implemented in the fastai library [15]. The learning rate is increased exponentially (as in fastai, and unlike in [16], where the increase is linear) over one epoch; the maximum usable learning rate is then chosen as the maximum value before which the loss stagnates or increases. When it starts exploding, the graph is automatically truncated for readability. See Figure 2. We used this tool pervasively in our notebooks throughout the rest of our experiments.
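A sketch of the underlying sweep-and-select logic, independent of any particular framework, is given below; the bounds, the number of steps and the selection heuristic are illustrative simplifications of what the actual tool does.

```python
import numpy as np

def finder_lr(step, num_steps, lr_min=1e-7, lr_max=10.0):
    """Exponentially interpolate between lr_min and lr_max over one sweep epoch."""
    t = step / max(num_steps - 1, 1)
    return lr_min * (lr_max / lr_min) ** t

def suggest_max_lr(lrs, losses, smoothing=0.98):
    """Smooth the recorded losses with an EMA and pick a candidate maximum LR.
    Here we simply return the LR at the smoothed-loss minimum; in practice one
    inspects the (truncated) curve and picks a value somewhat before the loss
    starts to increase."""
    ema, smoothed = None, []
    for loss in losses:
        ema = loss if ema is None else smoothing * ema + (1.0 - smoothing) * loss
        smoothed.append(ema)
    return lrs[int(np.argmin(smoothed))]
```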
Once we had a learning rate finder, we started experimenting with various schedules, trying to achieve super-convergence [14]. We tried a cosine schedule, but noticed that its peaks hurt the network's performance. We eventually settled on an exponentially-decayed cosine schedule, with a period of one-fifth of the total training time, and a half-life of half the total training time. We observed significantly lower (on the order of 50% lower) training and test set losses with this setup when training over 50 epochs, compared to the one-cycle schedule used in [11], and accordingly lower guessing entropy. We were able to do so even at a batch size of 400 (with which we scaled the learning rate linearly), across 8 GPUs.
We plot the network's training curve using this schedule on Figure 3.
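One plausible reading of this schedule, written as a plain function of the training step with the one-fifth period and half-training half-life mentioned above (the base learning rate is a free parameter), is sketched below.

```python
import jax.numpy as jnp

def exp_decayed_cosine(step, total_steps, base_lr):
    """Cosine oscillation with period total_steps / 5, damped by an exponential
    envelope whose half-life is total_steps / 2."""
    period = total_steps / 5.0
    half_life = total_steps / 2.0
    envelope = 0.5 ** (step / half_life)
    cosine = 0.5 * (1.0 + jnp.cos(2.0 * jnp.pi * step / period))
    return base_lr * envelope * cosine
```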
### Is Categorical Cross-Entropy the right loss function?
While running experiments, we noticed a peculiar phenomenon, which was also described in [10]: it is possible for our networks to keep improving with regards to their guessing entropy, while the validation loss goes _up_ and the training loss goes down - which would normally indicate overfitting.
We hypothesize that this is due to categorical cross-entropy being an inadequate choice of loss function for the problem at hand. It does not adequately model the impact of outliers on the final guess, nor the importance of the relative order of guesses, which matters more than the raw magnitude of their log-likelihoods.
We found the proposal in [11] of a loss function specifically tailored to the SCA context alluring, but ran into issues when trying to reproduce results using the project's GitHub repository, as the code could not run as-is. Due to time constraints, we did not attempt to port it to JAX.
Figure 2: Learning rate finder output (EMA = Exponential Moving Average)
Figure 3: Training plot of our reimplementation of [Zai+19]’s best CNN in the synchronized case on ASCADv1. The bidirectional impact of the learning rate on the network’s ability to learn can be seen. “EMA loss” is an exponential moving average of the training loss to account for its high variance across batches.
### Training Transformers
A list of Transformer architectures we explored was given above. The tests conducted with them were particularly resource-intensive, always requiring us to use as many GPUs as possible in parallel in order to get feedback at an acceptable pace.
**Perceiver IO.** Adapting Perceiver IO [11] was relatively straightforward, as the project's GitHub repository comes with an encoder suited for audio processing that we were able to repurpose. As the model is known to be able to directly attend to individual pixels in the ImageNet setting, we used patches of size 1. We used Fourier positional encoding with 64 bands, as well as learnable encoding of a comparable size. We initially used the project's default optimizer settings and learning rate scheduling scheme. As the loss did not decrease to a satisfactory value, even on the training set, we lowered the dropout rate to 0 and attempted various maximum learning rates, to no avail: we still observed no convergence with Fourier embeddings. The model did, however, converge with learnable embeddings, but with poor generalization.
**Performer.** For Performer [10], we extracted the FAVOR+ mechanism (of which an implementation was published by Google) and built our own Transformer on top of it. We used two to six Transformer blocks, with four to eight heads, and a query/key/value size of 64 to 128. For input embedding, we either extracted patches of the input traces as-is, or used a shallow set of convolutional layers to encode them. Once again, Fourier positional embeddings were ineffective. With learnable positional embeddings, the model did learn on the training set, but did not generalize to the validation set within 50 epochs.
**Audiomer.** We did a brief test with the Audiomer [14] architecture. Because of tight coupling between the model's architecture and its input size, we could not adapt it to our 1400-dimensional traces in a straightforward way. As a result, we resampled the traces to the length of 8192 timesteps expected by the model. The model's loss diverged with its default precision, which used 16-bit floats. With 32-bit floats, it remained stable. However, we found that, with other parameters left at default values and a max learning rate of \(10^{-3}\), the model did not learn to be better than a random classifier after 300 epochs, even on the training set. We realize that resampling may have introduced artefacts, but the lack of convergence may also indicate that regularization was too strong (with a dropout rate of 0.2), or that the model's complexity was too low.
In spite of our efforts, the results we obtained were not encouraging. Given our previous observations regarding the difficulty of achieving convergence, it seems that application of this family of models to the problem may be ineffective without careful architectural considerations, and further hyperparameter tuning with regards to the length of the optimizer's warm-up period, the weight decay factor, Adam/LAMB's \(\beta_{1}\) and \(\beta_{2}\) parameters, the dropout rate, etc. This is especially likely as Transformers are known to be inherently delicate to train [13].
## 3 Conclusion and future work
We developed our own software project to perform Side-Channel Attacks with deep learning using JAX. With it, we were able to reproduce architectures proposed in [1] and [11]. We optimized aspects of the training process, such as learning rate scheduling, and conducted our own data exploration (gradient visualization, signal-to-noise analyses). We discovered that [11]'s models did not generalize to a methodologically-rigorous setting in which the key is unknown on the attack dataset. We tried applying various Transformer-based models on the ASCADv1 variable-key dataset, with limited success.
More careful input preprocessing may be required in order to apply Transformers to the SCA context. We encourage the investigation of techniques such as STFTs, especially for high-resolution and raw traces, as spectral representation is often advantageous for deep learning in a signal-processing context [12, 13]. We also highlight wavelet decomposition [1] as a possible lead to follow.
In parallel, instead of trying to port the problem to a full Transformer architecture, we suggest trying to incorporate attention gradually into state-of-the-art SCA models. We did not have time to pursue our original goal of building directly upon [12]; similarly to the approach detailed in it, we suggest attempting to use a locally-connected layer as a means of embedding individual clock cycles before feeding them to a Transformer.
## Code Availability
Our code was made available to the team we worked with at Telecom Paris's LTCI, in the form of a private GitHub repository and corresponding environment on the Jean Zay platform, so that it may be used for further research.
## Acknowledgements
Thanks to Prof. Laurent Sauvage for his supervision and guidance, and to Arnaud Varillon for insights into side-channel attacks.
## Update History
The primary content of this manuscript was completed on February 3, 2022, and originally appeared on the website of the institution's Embedded Systems Concentration. Any revisions after this date pertain to formatting and minor corrections.
|
2309.07105 | Global becomes local: Efficient many-body dynamics for global master
equations | This work makes progress on the issue of global- vs. local- master equations.
Global master equations like the Redfield master equation (following from
standard Born- and Markov- approximation) require a full diagonalization of the
system Hamiltonian. This is especially challenging for interacting quantum
many-body systems. We discuss a short-bath-correlation-time expansion in
reciprocal (energy) space, leading to a series expansion of the jump operator,
which avoids a diagonalization of the Hamiltonian. For a bath that is coupled
locally to one site, this typically leads to an expansion of the global
Redfield jump operator in terms of local operators. We additionally map the
local Redfield master equation to a novel local Lindblad form, giving an
equation which has the same conceptual advantages of traditional local Lindblad
approaches, while being applicable in a much broader class of systems. Our
ideas give rise to a non-heuristic foundation of local master equations, which
can be combined with established many-body methods. | Alexander Schnell | 2023-09-13T17:25:27Z | http://arxiv.org/abs/2309.07105v3 | # Global becomes local: Efficient many-body dynamics for global master equations
###### Abstract
This work makes progress on the issue of global- vs. local master equations. Global master equations like the Redfield master equation (following from standard Born- and Markov approximation) require a full diagonalization of the system Hamiltonian. This is especially challenging for interacting quantum many-body systems. We discuss a short-bath-correlation-time expansion in reciprocal (energy) space, leading to a series expansion of the jump operator, which avoids a diagonalization of the Hamiltonian. For a bath that is coupled locally to one site, this typically leads to an expansion of the global Redfield jump operator in terms of local operators. We additionally map the local Redfield master equation to an approximate Lindblad form, giving an equation which has the same conceptual advantages of traditional local Lindblad approaches, while being applicable in a much broader class of systems. Our ideas give rise to a non-heuristic foundation of local master equations, which can be combined with established many-body methods.
_Introduction.--_ Foundational questions of the theory of open quantum systems have recently experienced a resurgence due to the high control and resolution of recent experiments on open quantum matter [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13] and a new push from the theory side to study the dynamics of open quantum many-body systems [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. While the limit of ultraweak system-bath coupling is well understood [31; 32; 33], leading to the quantum-optical master equation in Lindblad form, the underlying secular approximation is often violated by genuine quantum many-body systems where close degeneracies in the many-body spectrum are expected from exponentially large densities of states and vanishing finite-size gaps. Hence one has to resort to finite system-bath coupling master equations where a multitude of master equations have been proposed [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67], each with their advantages and drawbacks [57; 58].
Nevertheless, the well-established Redfield master equation [31; 32; 33; 59] still often outperforms other approaches [57; 58; 60] (despite known problems regarding positivity violation and inaccurate steady states [58; 60; 61; 62; 63; 64; 65; 66; 67], which however can be cured efficiently [53][68]). However, it belongs to the class of global master equations [69], meaning that it involves the global eigenstates and -energies of the system. This is challenging for extended quantum many-body systems especially far from equilibrium, where an effective low-energy theory in terms of free quasiparticles is not possible. Also it prevents one from directly applying established many-body methods [26]. Alternatively, one therefore often resorts to local Lindblad equations [69], where for site-locally coupled baths one introduces local Lindblad jump operators [16; 17; 18; 69; 70; 71; 72; 73; 74]. Nevertheless, such approaches often violate consistency with statistical mechanics [58]. Additionally, for out-of-equilibrium systems, e.g. coupled to multiple baths, and strongly coupled systems, the specific details of the environment and how the system is coupled to it influence the steady state [2; 53; 64; 65], so that one needs microscopically derived equations. Here we show an alternative scheme that maps the Redfield master equation into a local form, which on leading order can be further approximated to Lindblad form with local jump operators.
_Redfield master equation.--_ Consider a typical open quantum system setup with total Hamiltonian \(H_{\rm tot}=H_{\rm S}+H_{\rm SB}+H_{\rm B}\), where \(H_{\rm S}\) and \(H_{\rm B}\) are Hamiltonians that act on the system and bath Hilbert space only and \(H_{\rm SB}\) is the system-bath interaction Hamiltonian, which for simplicity we assume is a direct product \(H_{\rm SB}=v\otimes B\) of system- and bath coupling operators \(v\) and \(B\) respectively. Performing Born- and Markov approximations, the dynamics of the reduced density matrix of the system \(\varrho(t)={\rm Tr}_{\rm B}\varrho_{\rm tot}(t)\), is given by the Redfield quantum
Figure 1: (a) Sketch of the Redfield jump operator \(u\) in Eq. (2). In the interaction picture, an initially local coupling operator \(\tilde{v}(0)\) is spreading in space. This operator is convoluted with the bath-correlation function (BCF) \(C(t)\). (b) Convergence of the Taylor series around \(\varepsilon_{0}=-2J\) (top), \(2J\) (bottom) for \(W(E)\) of a pure ohmic bath at temperature \(T=J\) (solid blue line). (c) Sketch of the system under investigation: An XXZ chain is connected at both ends to heat baths of temperature \(T_{l}\), \(T_{r}\) with coupling operators \(v_{l}\), \(v_{r}\). Additionally, an external field gradient \(h_{i}\) in the \(z\)-direction is applied.
master equation [31; 32; 33; 59; 76; 77]
\[\partial_{t}\varrho(t)=-i\left[H_{\rm S},\varrho(t)\right]+\left([u\varrho(t),v ]+{\rm h.c.}\right). \tag{1}\]
Here [76; 53; 77]
\[u=\int_{0}^{\infty}{\rm d}\tau\tilde{v}(-\tau)C(\tau) \tag{2}\]
is an operator which convolutes the coupling operator in the interaction picture \(\tilde{v}(t)=\exp(iH_{\rm S}t)v\exp(-iH_{\rm S}t)\) with the bath-correlation function \(C(\tau)=\langle\tilde{B}(\tau)B\rangle_{\rm B}\), where \(\tilde{B}(t)=\exp(iH_{\rm B}t)B\exp(-iH_{\rm B}t)\). Throughout the manuscript we set \(\hbar=k_{\rm B}=1\).
While the Redfield equation (1) generally leads to more accurate results when compared to heuristic Lindblad descriptions (like local Lindblad equations), it has two major deficiencies that prohibit its straightforward application to genuine quantum many-body systems: 1) Since it does not obey Lindblad structure, its dynamics will generally violate positivity, even though it has been shown that this issue can be mitigated relatively easily by making connections to statistical mechanics [53]. And 2) in order to practically implement the Redfield equation (1) one generally introduces the system's eigenbasis \(H_{\rm S}=\sum_{k}E_{k}|k\rangle\langle k|\), giving \(u=\sum_{kq}\hat{v}_{kq}W(E_{k}-E_{q})\)[78] with eigenspace-projected coupling operator \(\hat{v}_{kq}=|k\rangle\langle k|v|q\rangle\langle q|\) and Fourier-Laplace transform \(W(E)=\int_{0}^{\infty}{\rm d}\tau{\rm e}^{-iE\tau}C(\tau)\) of the bath correlation function. Nevertheless, this is problematic since it requires a full diagonalization of the many-body Hamiltonian, which is generally a hard task. And even then, if the coupling operator \(v\) can be represented as a sparse matrix (for example if \(v\) is an operator that is local in space), the matrix \(u\) is generally a dense matrix, making the numerical solution of the equation memory- and time-consuming. Consequently, one might call the Redfield equation (1) a global master equation since it involves the global eigenstates of the system. Put differently, this generally hinders us from even 'writing down' the global Redfield master equation for a generic many-body system. In the following, we propose a way around this problem by performing an expansion of the function \(W(E)\) around a given transition energy \(E=\varepsilon_{0}\). Note that, different from here, the term 'global master equation' is sometimes also used specifically for the quantum optical master equation [31; 32; 33], a Lindblad master equation that can be derived from this Redfield equation after rotating-wave approximation.
_Short correlation-time expansion--_To write the Redfield master eq. (1) in local form, we use ideas similar to derivations of quantum Brownian motion [31] where one expands \(\tilde{v}(-\tau)\) in first order for short times \(\tau\), which is valid for short bath-correlation times, cf. Fig. 1(a). However here we expand to arbitrary order and we use an additional freedom to first rewrite Eq. (2) by inserting \(1={\rm e}^{-i(E-\varepsilon_{0})\tau}|_{E=\varepsilon_{0}}\) as
\[u=\int_{0}^{\infty}{\rm d}\tau{\rm e}^{-i([H_{\rm S},\cdot]- \varepsilon_{0}\cdot)\tau}[v]\;{\rm e}^{-iE\tau}|_{E=\varepsilon_{0}}C(\tau). \tag{3}\]
By expanding the first exponential in a series in \(\tau\), which is valid as long as \(C(\tau)\) decays on a bath-correlation timescale that is short against the timescale of operator spreading of \(\tilde{v}(-\tau)\) (typically confined by Lieb-Robinson bounds [79; 80], cf. Fig. 1(a)), then using \(-i\tau=\frac{\partial}{\partial E}\) and exchanging the integral and the derivative, we find a Taylor-like expansion
\[u\simeq u^{(N)}=W(\varepsilon_{0})v+\sum_{n=1}^{N}\frac{W^{(n)}( \varepsilon_{0})}{n!}\big{(}[H_{\rm S},\cdot]-\varepsilon_{0}\cdot\big{)}^{n} [v] \tag{4}\]
with \(n\)th derivative \(W^{(n)}(\varepsilon_{0})\) and \(([H_{\rm S},\cdot]-\varepsilon_{0}\cdot)^{n}\) being the \(n\)th concatenation of the application of the superoperator. As we show in the supplemental material (SM) [78], in energy space, this can be understood as an expansion of the Redfield jump operator \(u\) around \(E=\varepsilon_{0}\), i.e. at transition energies \(E\) that are close to \(\varepsilon_{0}\) (see Fig. 1(b)). We also formally prove [78] that the series in Eq. (4) converges as long as all relevant transition energies are in the convergence radius of the scalar Taylor series around \(\varepsilon_{0}\). Eq. (4) has the significant advantage that if the expansion is truncated, a diagonalization of the system Hamiltonian is not required, and if \(v\) and \(H_{\rm S}\) have a sparse representation, the nested commutators also will have such a representation. Especially in the case where \(v\) is a site-local operator and \(H_{\rm S}\) involves nearest-neighbor terms only, Eq. (4) can be understood as an expansion in local operators, where the leading term is a site-local operator, and each of the next terms couples to one additional neighboring site. Also note that by setting \(N=1\) and \(\varepsilon_{0}=0\), we recover the standard result for quantum Brownian motion [31].
In order to understand what the transition energies must be small compared to, let us consider a pure ohmic bath, \(H_{\rm B}=\sum_{\alpha}\omega_{\alpha}b_{\alpha}^{\dagger}b_{\alpha}\), \(B=\sum_{\alpha}c_{\alpha}(b_{\alpha}^{\dagger}+b_{\alpha})\), with a continuum of modes \(\alpha\) and spectral density \(J(E)=\sum_{\alpha}c_{\alpha}^{2}[\delta(E-\omega_{\alpha})-\delta(E+\omega_{\alpha})]=\gamma E\) with dissipation strength \(\gamma\). In this case \(W(E)\) has no imaginary part and takes the form \(W(E)=J(E)/[\exp(E/T)-1]\), where \(T\) is the temperature of the bath. Due to this vanishing imaginary part, the Redfield master equation actually gives rise to a valid description of the reduced dynamics at second order in the system-bath coupling at all times, since the correction term that was found in Ref. [53] vanishes. Expanding around \(\varepsilon_{0}=0\), we find \(W(0)=\gamma T\), \(W^{\prime}(0)=-\gamma/2\), \(W^{\prime\prime}(0)=\gamma/(6T)\); we observe that the terms in Eq. (4) represent an expansion in \(\Delta/T\), where \(\Delta=E-\varepsilon_{0}\) is the deviation of the transition energy \(E\) (due to the commutator structure) from \(\varepsilon_{0}\) for a hypothetical quantum jump that is induced by \(u\). This
is also confirmed by our intuition that at high temperatures \(T\) the bath-correlation function decays rapidly and for \(T\to\infty\) tends to a delta function, which leads to the leading \(N=0\) term in Eq. (4), as illustrated in Fig. 1(a).
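As a purely illustrative numerical sketch of how the truncated jump operator of Eq. (4) can be assembled without diagonalizing \(H_{\rm S}\), the snippet below builds \(u^{(2)}\) from nested commutators of sparse (or dense) matrices, using the ohmic-bath coefficients \(W(0)=\gamma T\), \(W^{\prime}(0)=-\gamma/2\), \(W^{\prime\prime}(0)=\gamma/(6T)\) quoted above; the operators \(H\) and \(v\) (e.g. an XXZ chain with a site-local coupling) are assumed to be given, and the parameter values are only examples in units of \(J\).

```python
from math import factorial

def truncated_jump_operator(H, v, W_derivs, eps0):
    """u^(N) = W(eps0) v + sum_n W^(n)(eps0)/n! ([H, .] - eps0)^n [v], cf. Eq. (4).

    H, v     : system Hamiltonian and (site-local) coupling operator,
               e.g. scipy.sparse matrices or dense arrays
    W_derivs : [W(eps0), W'(eps0), W''(eps0), ...] truncated at order N
    """
    u = W_derivs[0] * v
    term = v.copy()
    for n, Wn in enumerate(W_derivs[1:], start=1):
        term = H @ term - term @ H - eps0 * term   # one more application of [H, .] - eps0
        u = u + (Wn / factorial(n)) * term
    return u

# pure ohmic bath expanded around eps0 = 0 (coefficients quoted in the text):
gamma, T = 0.25, 2.2
W_derivs = [gamma * T, -gamma / 2.0, gamma / (6.0 * T)]
# u2 = truncated_jump_operator(H, v, W_derivs, eps0=0.0)
```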
In Fig. 2(a) we plot the typical relaxation dynamics of an XXZ spin chain with external field in the \(z\)-direction [cf. Fig. 1(c)] that we will introduce later in the paper. The solid lines show the exact relaxation dynamics of the many-body eigenstates under the Redfield master equation for an intermediate temperature \(T=2.2J\) with nearest-neighbor interaction constant \(J\). The dashed green lines show the approximate Redfield dynamics with the operator \(u^{(N)}\) in Eq. (4) where the expansion is cut off at order \(N=4\) and expansion energy \(\varepsilon_{0}=0.6J\). We observe good agreement both for the dynamics and for the steady state. Fig. 2(b),(c) show the trace distance, \(||\varrho||=\mathrm{Tr}\sqrt{\varrho^{2}}\), of the steady state error for the approximate Redfield master equation at order \(N=1\) (b), \(N=4\) (c) to the exact Redfield steady state [additional plots for other truncation orders \(N\) are shown in the SM [78]]. We observe that, as expected, the expansion of the operator \(u\) converges well at high temperatures \(T\gg J\) for a wide range of expansion energies \(\varepsilon_{0}\) with steady state errors below \(10^{-2}=1\%\) for \(T>2J\) at order \(N=4\). Nevertheless, also at intermediate temperatures \(T\sim J\) good results are found if the expansion energy \(\varepsilon_{0}\) is chosen to be high, \(\varepsilon_{0}\approx 2J\). An intuitive explanation on why higher values of \(\varepsilon_{0}\) perform better, can be found from Fig. 1(b) where we depict the convergence of the Taylor series for \(T=J\): If \(\varepsilon_{0}=-2J\) (top panel) the linear term \(N=1\) leads to negative rates \(W(E)\) at transition energies \(E\gtrsim 0.5J\), which can cause the Redfield dynamics to become unstable. For \(\varepsilon_{0}=2J\) (bottom panel) such negativities only occur at much higher transition energies \(E\gtrsim 2.5J\) which are unlikely at \(T=J\). Note that the error plots in Fig. 2(b),(c) qualitatively do not change by increasing the system size \(L\). Hence, for a given bath model, the expansion parameter \(\varepsilon_{0}\) can be optimized at small system sizes and can then be used to study large systems. In the SM [78] we additionally show the approximate Redfield dynamics for a different bath model, a Lorentz-Drude bath. For such a bath, the imaginary part of \(W(E)\) is nonzero. We find similar good performance as long as the Drude cutoff energy is relatively high.
_Approximate Lindblad form_-- This analytical form of \(u\) also allows us to approximate the dynamics with a Lindblad form, which is relevant since it allows us to unravel the dynamics with standard quantum-trajectory algorithms [69; 81; 82; 83]. We follow Ref. [50] and first rewrite Eq. (1) in the form \(\partial_{t}\varrho=-i\left[H_{\mathrm{eff}},\varrho\right]+\mathcal{D}[\varrho]\), with effective Hamiltonian \(H_{\mathrm{eff}}=H_{\mathrm{S}}-\frac{i}{2}(vu-u^{\dagger}v)\) and dissipator \(\mathcal{D}[\varrho]=\left(u\varrho v-\frac{1}{2}\{vu,\varrho\}+\mathrm{h.c.}\right)\). We observe that the dissipator has the structure \(\mathcal{D}[\varrho]=D_{L_{+}}[\varrho]-D_{L_{-}}[\varrho]\) with Lindblad superoperator form \(D_{L}[\varrho]=\left(L\varrho L^{\dagger}-\frac{1}{2}\{L^{\dagger}L,\varrho\}\right)\) and jump operators \(L_{\pm}=\left(u\pm W(\varepsilon_{0})v\right)/\sqrt{2W(\varepsilon_{0})}\), where we assume that \(W(\varepsilon_{0})\) is real (a derivation and the expressions for complex \(W(\varepsilon_{0})\) are given in the SM [78]) and nonzero, \(W(\varepsilon_{0})\neq 0\). Note that the dissipator comprises a negative 'rate', which is why the Redfield dissipator \(\mathcal{D}\) violates Lindblad form. However, by using the expansion in Eq. (4), we observe that \(L_{-}\) vanishes at leading order \(N=0\), and since it appears twice in the dissipator, by neglecting it we omit terms of order \((\Delta/T)^{2}\). Thus, the expansion presented here also sheds light on the recently proposed truncation [50].
The Redfield eq. (1) can therefore be approximately rewritten in Lindblad form
\[\partial_{t}\varrho\approx-i\left[H_{\mathrm{eff}},\varrho\right]+\left(L_{+ }\varrho L_{+}^{\dagger}-\frac{1}{2}\{L_{+}^{\dagger}L_{+},\varrho\}\right). \tag{5}\]
We plot the corresponding error in Fig. 2(d). This form is highly desirable because it allows for stochastic unraveling with standard quantum trajectory algorithms; moreover, the resulting jump operator \(L_{+}\approx\left(u^{(N)}+W(\varepsilon_{0})v\right)/\sqrt{2W(\varepsilon_{0})}\) is sparse, which is a great advantage over the potentially full-rank exact many-body operator \(u\). This Lindblad form with local jump operators also allows for a combination with other approaches
Figure 2: (a) Population dynamics of the many-body eigenstates \(k\) of the XXZ model in Eq. (6) coupled to two ohmic baths at the same temperature \(T_{l}=T_{r}=T=2.2J\). Solid: exact Redfield dynamics, dashed: Redfield with approximate operator \(u^{(N)}\), Eq. (4), with \(\varepsilon_{0}=0.6J,N=4\), dotted: approximate Lindblad form, Eq. (5), dashed-dotted: local Lindblad, Eq. (7). Similar plot for the coherences in the SM [78]. (b), (c) Trace distance \(d\) of the steady state of the approximate Redfield and the exact Redfield equation as a function of temperature \(T\) and expansion energy \(\varepsilon_{0}\) for order \(N=1\) (b) and \(N=4\) (c). (d) As in (c) but for the approximate Lindblad form in Eq. (5). Other parameters \(\gamma=0.25J,L=6,\Delta=0.7,h=J,\delta=-0.07\), initial state is a pure state \(|\psi(0)\rangle=[(|\uparrow\rangle+|\downarrow\rangle)/\sqrt{2}]^{\otimes L}\).
like mean-field equations or tensor network methods [83]. Also note that there exist quantum-trajectory unravelings of the Redfield equation itself [84; 85; 86; 87], which can easily be combined with our local Redfield equation.
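To illustrate how such local jump operators can be assembled numerically, the following sketch (our own illustration; the expansion coefficients multiplying the nested commutators are those of Eq. (4) and are supplied by the user, and \(W(\varepsilon_{0})\) is assumed real and nonzero) builds \(u^{(N)}\) and the jump operator \(L_{+}\) of Eq. (5):

```python
import numpy as np

def nested_commutators(H, v, order):
    """Return [v, [H, v], [H, [H, v]], ...] up to the given order."""
    ops = [v]
    for _ in range(order):
        ops.append(H @ ops[-1] - ops[-1] @ H)
    return ops

def approximate_jump_operator(H, v, coeffs, W_eps0):
    """u^(N) = sum_n coeffs[n] * ad_H^n(v), with coeffs taken from Eq. (4);
    L_+ = (u^(N) + W(eps0) * v) / sqrt(2 W(eps0)) as in Eq. (5)."""
    commutators = nested_commutators(H, v, len(coeffs) - 1)
    u_N = sum(c * op for c, op in zip(coeffs, commutators))
    L_plus = (u_N + W_eps0 * v) / np.sqrt(2.0 * W_eps0)
    return u_N, L_plus
```

For a nearest-neighbor \(H_{\mathrm{S}}\) and a single-site \(v\), each additional commutator enlarges the operator support by only one site, so \(u^{(N)}\) and \(L_{+}\) stay local and can be stored as sparse matrices.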
_Benchmark against local Lindblad equation--_ Equation (5) provides an alternative to the widely used local Lindblad equations while, at the same time, avoiding some of their typical problems [69]. To benchmark it, we study an XXZ model with an external field \(h_{i}\) in the \(z\)-direction,
\[H_{\mathrm{S}}=-J\sum_{i=1}^{L-1}\left(\sigma_{i}^{x}\sigma_{i+1 }^{x}+\sigma_{i}^{y}\sigma_{i+1}^{y}+\Delta\sigma_{i}^{z}\sigma_{i+1}^{z} \right)+\sum_{i=1}^{L}h_{i}\sigma_{i}^{z}, \tag{6}\]
where the field \(h_{i}=h+(i-1)\delta\) can be tuned to have a gradient for \(\delta\neq 0\), cf. the sketch in Fig. 1(c). We couple the system to two pure ohmic baths at temperatures \(T_{l}\) and \(T_{r}\) via the leftmost and rightmost site operators \(v_{l}=\sigma_{1}^{x}\) and \(v_{r}=\sigma_{L}^{x}\), respectively. Here, the total dissipator is the sum of the dissipators of the individual baths. Using a standard local Lindblad approach yields the master equation [69]
\[\partial_{t}\varrho=-i\left[H_{\mathrm{S}},\varrho\right]+\sum_{i=1,L}\left(\gamma_{i}^{-}D_{\sigma_{i}^{-}}[\varrho]+\gamma_{i}^{+}D_{\sigma_{ i}^{+}}[\varrho]\right), \tag{7}\]
with rates \(\gamma_{i}^{\pm}=\mathrm{Re}[W(\pm 2h_{i})]/2\). For pure ohmic baths there is no Lamb shift contribution to the Hamiltonian since \(W(E)\) is real. The XXZ chain with a local Lindblad equation has attracted a lot of attention because it can have exactly solvable steady states [16; 17].
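For concreteness, a minimal construction of the Hamiltonian in Eq. (6) and of the local Lindblad rates \(\gamma_{i}^{\pm}\) could look as follows (our own sketch; dense matrices, so only suitable for small \(L\)):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator at site i (0-based) in an L-site chain."""
    full = np.eye(1, dtype=complex)
    for j in range(L):
        full = np.kron(full, op if j == i else np.eye(2))
    return full

def xxz_hamiltonian(L, J, Delta, h, delta):
    """H_S of Eq. (6) with tilted field h_i = h + (i-1)*delta."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        H -= J * (site_op(sx, i, L) @ site_op(sx, i + 1, L)
                  + site_op(sy, i, L) @ site_op(sy, i + 1, L)
                  + Delta * site_op(sz, i, L) @ site_op(sz, i + 1, L))
    for i in range(L):
        H += (h + i * delta) * site_op(sz, i, L)
    return H

def W_ohmic(E, gamma, T):
    """W(E) = gamma*E / (exp(E/T) - 1), with the E -> 0 limit gamma*T."""
    return gamma * T if E == 0 else gamma * E / np.expm1(E / T)

def local_rates(h_i, gamma, T):
    """Local Lindblad rates of Eq. (7): (gamma_minus, gamma_plus)."""
    return W_ohmic(-2 * h_i, gamma, T) / 2, W_ohmic(+2 * h_i, gamma, T) / 2
```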
As we observe in Fig. 2(a), the populations of the many-body eigenstates are only poorly described by the local Lindblad master equation. This poor performance is confirmed by Fig. 3(a), where we plot the distance measure \(d\) between the steady states of the local Lindblad and the Redfield equations, and again for the approximate Redfield and Lindblad master equations from Fig. 2(c),(d), now for fixed \(\varepsilon_{0}=0.6J\). Indeed, this is expected, since one way to derive the local Lindblad eq. (7) microscopically [69] relies on the limit \(J\ll h\), which is violated here. Nevertheless, such local jumps have attracted considerable attention also in regimes where there is no microscopic justification [69], as well as in the context of collisional models [88; 89]. Our approximate Lindblad eq. (5) provides an alternative form that has similar properties to that equation, but can overcome some of its flaws: As we show in Fig. 3(b), it predicts a steady-state magnetization profile that is much closer to the actual thermal result predicted by the Redfield equation [78]. In Fig. 3(c) we additionally show the site-resolved magnetization current [69], \(j_{i}(t)=-i\langle[H_{S}^{i,i+1},\sigma_{i}^{z}]\rangle\), with \(H_{S}^{i,i+1}\) denoting the nearest-neighbor interaction term in Eq. (6) at site \(i\). We observe that even though the system is connected to two heat baths at identical temperature \(T_{l}=T_{r}=T\), the local Lindblad master equation, due to the tilted external field \(h_{i}\), predicts a finite steady-state current through the system, which is unphysical. The Redfield equation does not suffer from such unphysical behavior, even after approximating \(u\approx u^{(N)}\). The approximate Lindblad eq. (5) slightly violates the correct steady-state behavior; however, the violation is less severe than that predicted by the 'traditional' local Lindblad approach. Additionally, in the SM [78] we show that the approximate Lindblad eq. exhibits thermalization of the many-body eigenstates and therefore fulfills the three criteria for a consistent Markovian master equation that were formulated in Ref. [60]: it obeys 1) complete positivity, 2) local conservation laws, and 3) shows thermalization.
Figure 3: (a) Distance measure \(d\) as in Fig. 2(c),(d) but for fixed \(\varepsilon_{0}=0.6J\), and additionally for the local Lindblad eq. (7). (b) Steady state average magnetization \(\langle\sigma_{i}^{z}\rangle\) in the \(z\)-direction and (c) local magnetization current \(j_{i=1}(t)\) at the leftmost site, \(i=1\), as a function of time \(t\) for the different approaches for the parameters as in Fig. 2(a). (d) Snapshots of the magnetization profile \(\langle\sigma_{i}^{z}\rangle\) for the approximate- (dashed-dotted) and the traditional local Lindblad (dotted) master equation for chain length \(L=19\) at three different times (red, green, blue). The results were obtained using quantum trajectory unravelings with \(N_{\mathrm{traj}}=1900\) trajectories. Parameters as in Fig. 2(a), but \(N=1\), \(\varepsilon_{0}=0.5J\) and \(|\psi(0)\rangle=|\!\uparrow,\dots,\uparrow\rangle\).
Figure 4: Site-resolved magnetization current (a) for the approximate Lindblad and (b) for the traditional local Lindblad equation. Results obtained from quantum trajectory unraveling for the parameters of Fig. 3(d).
_Quantum trajectories for approximate Lindblad form_--We combine the approximate Lindblad form, Eq. (5), and the local Lindblad Eq. (7), with standard quantum trajectory methods [69; 81; 82; 83]. Since both forms yield jump operators that have a sparse representation, they allow us to treat extremely large systems. In Figs. 3(d) and 4 we demonstrate this by increasing the length of the spin chain to \(L=19\), where the Hilbert space dimension is \(2^{L}=524288\). With global master equations such system sizes are numerically out of reach, since the computational time and memory for the diagonalization of \(H_{\text{S}}\), the storage of the dense jump operator \(u\), and the propagation with the dense operator \(u\) typically limit one to Hilbert space dimensions of \(\sim 10^{4}\) on current hardware [87; 53]. We observe again that the magnetization profile \(\langle\sigma_{i}^{z}\rangle\) in Fig. 3(d) predicted by the traditional local Lindblad approach deviates strongly from the more accurate approximate Lindblad dynamics. The traditional local Lindblad equation also overestimates the magnetization currents, plotted in Fig. 4, during the initial relaxation dynamics. At later times, there are current oscillations in the middle of the chain that are not captured by the local Lindblad eq., and its steady state again suffers more strongly from finite currents, even though the system should reach equilibrium. Note that our approach is conceptually much simpler than the combination of the Redfield eq. with matrix-product-operator methods proposed in recent works [26], while being able to simulate similar system sizes. By combining our approach with matrix product state methods for quantum trajectories, we expect that even larger system sizes are attainable.
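For reference, a single first-order Monte-Carlo wave-function update for the approximate Lindblad form reads as follows (our own schematic sketch; production code would use sparse matrices and higher-order integrators):

```python
import numpy as np

def trajectory_step(psi, H_nh, L_plus, dt, rng):
    """One first-order quantum-trajectory step for Eq. (5).
    H_nh is the non-Hermitian generator: the (Hermitian) H_eff of the text
    minus (i/2) * L_plus^dagger L_plus."""
    L_psi = L_plus @ psi
    jump_prob = dt * np.vdot(L_psi, L_psi).real
    if rng.random() < jump_prob:        # quantum jump
        psi = L_psi
    else:                               # no-jump, non-Hermitian drift
        psi = psi - 1j * dt * (H_nh @ psi)
    return psi / np.linalg.norm(psi)

# usage: rng = np.random.default_rng(0); psi = trajectory_step(psi, H_nh, L_plus, 1e-3, rng)
```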
_Summary._--Even though the Redfield master equation is typically considered a global master equation, we emphasize that for coupling to local baths with a short bath-correlation time, the jump operator will remain localized in space. We provide a Taylor-like expansion of the Redfield jump operator in terms of local operators, and show convergence for high and intermediate bath temperatures for ohmic baths. This additionally has the advantage that a diagonalization of the full Hamiltonian is not required. However, there might exist other expansions with better convergence, especially in the case of spectral densities with a low cutoff. We show that our method can be further approximated by a Lindblad form with local jump operators, outperforming the traditional local Lindblad ansatz. We combine our results with quantum trajectories to simulate the Redfield dynamics for Hilbert space dimensions that were previously challenging. Our method can be augmented with mean-field- or matrix-product methods to solve the Redfield equation for extended many-body systems.
We acknowledge discussions with Igor Lesanovsky, Tobias Becker, Juzar Thingna, Francesco Petiziol, Gabriel Landi, Michiel Wouters, and Zala Lenarcic. We are grateful for comments on the manuscript by Andre Eckardt and Archak Purkayastha.
|
2309.10479 | RECALL+: Adversarial Web-based Replay for Continual Learning in Semantic
Segmentation | Catastrophic forgetting of previous knowledge is a critical issue in
continual learning typically handled through various regularization strategies.
However, existing methods struggle especially when several incremental steps
are performed. In this paper, we extend our previous approach (RECALL) and
tackle forgetting by exploiting unsupervised web-crawled data to retrieve
examples of old classes from online databases. In contrast to the original
methodology, which did not incorporate an assessment of web-based data, the
present work proposes two advanced techniques: an adversarial approach and an
adaptive threshold strategy. These methods are utilized to meticulously choose
samples from web data that exhibit strong statistical congruence with the no
longer available training data. Furthermore, we improved the pseudo-labeling
scheme to achieve a more accurate labeling of web data that also considers
classes being learned in the current step. Experimental results show that this
enhanced approach achieves remarkable results, particularly when the
incremental scenario spans multiple steps. | Chang Liu, Giulia Rizzoli, Francesco Barbato, Andrea Maracani, Marco Toldo, Umberto Michieli, Yi Niu, Pietro Zanuttigh | 2023-09-19T09:50:30Z | http://arxiv.org/abs/2309.10479v2 | # RECALL+: Adversarial Web-based Replay for Continual Learning in Semantic Segmentation
###### Abstract
Catastrophic forgetting of previous knowledge is a critical issue in continual learning, typically handled through various regularization strategies. However, existing methods struggle especially when several incremental steps are performed. In this paper, we extend our previous approach (RECALL) and tackle forgetting by exploiting unsupervised web-crawled data to retrieve examples of old classes from online databases. Unlike the original approach, which did not perform any evaluation of the web data, here we introduce two novel approaches based on adversarial learning and adaptive thresholding to select from the web data only those samples that closely resemble the statistics of the no-longer-available training ones. Furthermore, we improve the pseudo-labeling scheme to achieve a more accurate labeling of the web data that also considers the classes being learned in the current step. Experimental results show that this enhanced approach achieves remarkable results, especially when multiple incremental learning steps are performed.
Continual Learning, Semantic Segmentation, Web-based Replay, Self-teaching.
## I Introduction
Continual learning strategies allow machine learning models to be trained incrementally over multiple steps instead of relying on a single-shot learning approach with a large dataset [1]. This capability is crucial in practical applications where privacy concerns or licensing constraints may render the original training data unavailable when new tasks are introduced. The problem has been widely investigated for image classification and several approaches have been proposed to deal with the difficulties in learning new classes while preserving the knowledge of old ones [2, 3, 4]. In particular, when a model is forced to learn a new task without additional constraints, the optimization will result in the so-called catastrophic forgetting phenomenon: the model tends to overfit the new data, forgetting the knowledge of old concepts. In recent years, the problem has been investigated in more challenging tasks, such as semantic segmentation. The standard approaches for class-incremental semantic segmentation take inspiration from classification works, extending knowledge-distillation objectives to dense predictions [5, 6]. While such approaches can reduce the rate at which old knowledge is lost, they face issues when multiple incremental steps are considered or when the semantic content of the classes varies.
In our conference paper (RECALL, Replay-based continual learning in semantic segmentation [7]), we approached the problem from a unique perspective. Rather than relying on additional losses or regularization strategies, our proposal involved generating examples of old classes through generative methods or utilizing web-crawled data. This framework effectively resulted in an exemplar-free replay-based solution. Considering that web examples lack ground-truth labels, we generated pseudo-labels using a side labeling module, which requires minimal extra storage. In addition, a self-inpainting strategy was adopted to reduce the background shift by re-labeling the background region with the predictions of the previous model.
Our initial approach had limitations in selecting data that are closely aligned with the initial distribution. To further improve performance and strengthen the control over the web-crawled data, in this paper, we propose an image selection technique that combines an adversarial and a thresholding strategy to obtain images as close as possible to the original training dataset distribution. In particular, we train a discriminator network to distinguish between in-distribution and out-of-distribution samples and use it to preserve only the images that are able to fool it. The thresholding strategy, instead, is based on the pixel-class distribution, computed from the training dataset ground truth maps. We use the distribution function of the pixels of each class to extract class-specific thresholding parameters. On top of the filtering strategies, we also introduce a refined inpainting strategy, where the knowledge of new classes is propagated to the replay samples, significantly reducing the _background_ shift. An example of this procedure is reported in Figure 2.
Fig. 1: Replay images of previously seen classes are retrieved by a web crawler and then filtered by a domain discriminator, after which the network is incrementally trained with a mixture of new and replay data.
Our contributions can be summarized as follows: 1) we present RECALL+, an exemplar-free replay-based approach for continual semantic segmentation that leverages web-crawled images; 2) we propose a selection technique for web images that reduces the domain gap with real-world images and no-longer-available training data; 3) we devise a knowledge inpainting strategy that introduces information about the current task into the replay images; 4) the proposed approach achieves state-of-the-art results in a wide range of scenarios, especially when performing multiple incremental steps.
We remark that the conference version of this work did not include the selection strategy for the web-crawled images, and the information related to the current task in the replay images was not exploited. This extended version addresses these limitations by: (i) designing a combined adversarial learning and thresholding scheme for image selection, and (ii) proposing a new knowledge-updating strategy.
## II Related Works
**Continual Learning (CL).** Many different techniques have been proposed to tackle catastrophic forgetting in continual learning. The first possible strategy is to use dynamic architectures, either by allowing the growth of new network branches during the incremental steps [8, 9], or by assuming that some network weights are available for certain tasks only [10, 11, 12]. Some approaches introduce additional loss terms to regularize training [4, 13] or distill knowledge from the model at previous steps [9, 14, 15, 16].
The task becomes simpler if we relax the assumption that no previous samples can be used: rehearsal-based approaches store a set of samples of past tasks that can be exploited to stabilize the training [2, 17]. A viable solution to exploit rehearsal without storing previous samples is to rely on generative models [18, 19, 20] to generate artificial samples. Generative replay strategies typically exploit GANs [18, 19, 21] or auto-encoders [20].
This work offers a further perspective on the replay strategy: we adopt web-crawled images to prevent forgetting, avoiding both the storage of samples to preserve previous knowledge [2, 17] and the use of trained generative models to synthesize images [18, 19, 21], thus reducing memory and computation time.
**CL in Semantic Segmentation.** Class-incremental continual learning has been widely studied in the image classification field, and only recently has it been tackled in the more challenging semantic segmentation task [22]. Early approaches utilized knowledge distillation and regularization techniques such as parameter freezing or class re-weighting schemes [22, 23, 15, 24] to ensure the introduction of new classes while preserving the knowledge from previously learned ones. Subsequently, regularization and contrastive mechanisms at the feature level were explored to improve class-conditional capabilities and the preservation of spatial relationships [25, 16, 26, 5]. Further research proposed new objective functions suited for CL, among which: a class-similarity-weighted loss function to relate new classes with previously seen ones [27]; a structure-preserving loss to maintain the discriminative ability of previous classes [28]; a biased-context-insensitive consistency loss that rectifies the context of old classes with respect to new ones [29]. Moreover, similar to classification tasks, dynamic architecture methods have been proposed for CL in semantic segmentation: [12] incorporates model compensation with a re-parameterization technique to preserve model complexity; [30] embeds a dynamic balance parameter calculated from the ratio of new classes to merge frozen and trained branches. In addition to the aforementioned advancements, CL in semantic segmentation has also been extended to other expanding fields, such as weakly supervised segmentation [31, 32], leveraging CL through transformers [6, 33], and exploring CL within distributed learning frameworks [34].
**Webly-Supervised Learning** is a new research direction where large amounts of web data are used to train deep learning models [35, 36, 37]. It has also been employed for semantic segmentation; however, a critical challenge is that image [38, 39] and video [40] data from the web come with only weak image-level class labels, while pixel-level semantic labeling is missing. Current research directions include understanding how to query and select images [41] and how to exploit weakly supervised data [42, 43] (e.g., computing pseudo-labels). To our knowledge, however, the only approach exploiting web data in continual learning as a replay strategy is the conference version of this work [7].
## III Problem Formulation
The work focuses on the semantic segmentation task, i.e., pixel-wise classification of an input image. Formally, given a class set \(\mathcal{C}=\{b,c_{1},\ldots,c_{C-1}\}\) containing \(C\) classes including a _background_ class \(b\), an image \(\mathbf{X}\in\mathcal{X}\subset\mathbb{R}^{H\times W\times 3}\) is processed by a deep learning model \(M\), that typically exploits an encoder-decoder architecture, \(M=D\circ E\) to produce a
Fig. 2: Comparison of predictions between the model trained with knowledge self-inpainting and without inpainting. Inpainting the replay images accelerates convergence and improves segmentation accuracy.
segmentation map \(\hat{\mathbf{Y}}\in\mathcal{Y}\subset\mathcal{C}^{H\times W}\), obtained via an \(\arg\max\) operator.
In the standard supervised deep learning setting, the model is tuned in a single stage on the training set \(\mathcal{T}\subset\mathcal{X}\times\mathcal{Y}\), where all classes and samples are available. In the class-incremental setting, instead, the training is assumed to happen over a sequence of _steps_\(k=0,\ldots,K\) each providing a (possibly) different subset \(\mathcal{T}_{k}\) of \(\mathcal{T}\) to the algorithm.
We identify with \(k=0\) the initial step, where the network is trained on the subset \(\mathcal{C}_{0}\subset\mathcal{C}\) of all classes. At convergence we obtain the model \(D_{0}\circ E_{0}=M_{0}:\mathcal{X}\mapsto\mathbb{R}^{H\times W\times|\mathcal{ C}_{0}|}\).
In each of the subsequent steps \(k>0\) we add a new set of classes \(\mathcal{C}_{k}\) to the group of learned classes \(\mathcal{C}_{0\rightarrow(k-1)}\) up to the previous step, leading to \(\mathcal{C}_{0\rightarrow k}=\mathcal{C}_{0\rightarrow(k-1)}\cup\mathcal{C}_ {k}\). Given the incremental setting, we assume that \(\mathcal{C}_{0\rightarrow(k-1)}\cap\mathcal{C}_{k}=\varnothing\).
Importantly, since the encoder is fixed and only the decoder is trained during the incremental steps, the model after each step \(k\) is defined as \(D_{k}\circ E_{0}=M_{k}:\mathcal{X}\mapsto\mathbb{R}^{H\times W\times| \mathcal{C}_{0\rightarrow k}|}\).
We investigate the performance of our approach in two standard settings [16, 22, 24]: _Disjoint_ and _Overlapped_. In both settings, only the pixels belonging to the classes of the current task \(\mathcal{T}_{k}\subset\mathcal{T}\) are labeled. Nonetheless, in the former scenario, images of the current task contain only pixels from the current and the old sets of classes \(\mathcal{C}_{0\rightarrow k}\), whereas, in the latter scenario, pixels can belong to any set of classes, even the future ones.
## IV General Architecture
In the incremental learning setting, when performing an incremental training step \(k\), only samples related to the new classes \(\mathcal{C}_{k}\) are assumed to be available. Following the simplest approach, we could initialize our model's weights from the previous step (\(M_{k-1}\), \(k\geq 1\)) and learn the segmentation task over classes \(\mathcal{C}_{0\rightarrow k}\) by optimizing the standard objective \(\mathcal{L}_{ce}(M_{k};\mathcal{C}_{0\rightarrow k},\mathcal{T}_{k})\) with data from the current training partition \(\mathcal{T}_{k}\). However, simple fine-tuning leads to catastrophic forgetting, being unable to preserve previous knowledge.
### _Architecture of the Replay Block._
To tackle this issue, a web-based replay strategy is proposed. Our goal is to retrieve task-related knowledge of past classes to be blended into the ongoing incremental step, all without accessing training data of previous iterations. To this end, we introduce a Replay Block, whose target is twofold. Firstly, it has to provide images resembling instances of classes from previous steps, by retrieving them from an available source (e.g., a web database). Secondly, it has to obtain reliable semantic labels of those images, by resorting to learned knowledge from past steps. The Replay Block's image retrieval task is executed by what we call Source Block:
\[S:\;\mathcal{C}_{k}\mapsto\mathcal{X}_{\mathcal{C}_{k}}^{rp} \tag{1}\]
This module takes as input a set of classes \(\mathcal{C}_{k}\) (background excluded) and provides images whose semantic content can be ascribed to those categories (e.g., \(\mathbf{X}^{rp}\in\mathcal{X}_{\mathcal{C}_{k}}^{rp}\)). In this work, we focus on the usage of web-crawled data for this block, as detailed in Sec. V.
The Source Block provides unlabeled image data (if we exclude the weak image-level classification labels), and for this reason, we introduce an additional Label Evaluation Block \(\{L_{\mathcal{C}_{k}}\}_{\mathcal{C}_{k}\subset\mathcal{C}}\), which aims at annotating the examples provided by the replay module. This block is made of separate instances \(L_{\mathcal{C}_{k}}=D_{\mathcal{C}_{k}}^{H}\circ E_{0}\), each denoting a segmentation model that classifies a specific set of semantic categories \(\mathcal{C}_{k}\cup\{b\}\) (i.e., the classes in \(\mathcal{C}_{k}\) plus the background):
\[L_{\mathcal{C}_{k}}:\mathcal{X}_{\mathcal{C}_{k}}\rightarrow\mathbb{R}^{H \times W\times(|\mathcal{C}_{k}\cup\{b\}|)} \tag{2}\]
All \(L_{\mathcal{C}_{k}}\) modules share the encoder section \(E_{0}\) from the initial training step so that only a minimal portion of the segmentation network is stored for each block's instance (see [7] for more details). Notice that a single instance recognizing all classes could be used, leading to an even more compact representation, but it experimentally led to lower performance [7].
Provided that \(S\) and \(L_{\mathcal{C}_{k}}\) are available, replay training data can be collected for classes in \(\mathcal{C}_{k}\). A query to \(S\) outputs a generic image example \(\mathbf{X}_{\mathcal{C}_{k}}^{rp}=S(\mathcal{C}_{k})\), which is then associated with its prediction \(\hat{\mathbf{Y}}_{\mathcal{C}_{k}}^{rp}=\underset{c\in\mathcal{C}_{k}\cup\{b \}}{\arg\max}\,L_{\mathcal{C}_{k}}(\mathbf{X}_{\mathcal{C}_{k}}^{rp})[c]\). By retrieving multiple replay examples, we build a replay dataset \(\mathcal{R}_{\mathcal{C}_{k}}=\{(\mathbf{X}_{\mathcal{C}_{k}}^{rp},\mathbf{Y}_ {\mathcal{C}_{k}}^{rp})_{n}\}_{n=1}^{N_{r}}\), where \(N_{r}\) is a fixed hyperparameter empirically set (see Section VI).
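As a rough sketch of this procedure (our own illustration; `query_web_images` stands in for the web crawler and `helper` for a trained \(L_{\mathcal{C}_{k}}\) module, both hypothetical names), building \(\mathcal{R}_{\mathcal{C}_{k}}\) could look like:

```python
import torch

@torch.no_grad()
def build_replay_set(class_names, query_web_images, helper, n_replay):
    """Collect web images for the given classes and pseudo-label them
    with the helper decoder (background + classes of that step)."""
    replay = []
    for name in class_names:
        for image in query_web_images(name, max_images=n_replay):
            logits = helper(image.unsqueeze(0))        # 1 x (|C_k|+1) x H x W
            pseudo_label = logits.argmax(dim=1).squeeze(0)
            replay.append((image, pseudo_label))
    return replay
```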
### _Self-teaching Strategies._
To deal with the background shift phenomenon, we propose a novel inpainting mechanism to introduce knowledge from the previous and current models into both the current step images and the web-crawled data. While in the conference version [7], background inpainting was used only on background regions of current samples, here we use it also on web-crawled data.
**Background Self-Inpainting.** The first strategy, derived from the conference version, aims at reducing the background shift and, at the same time, at bringing a regularization effect similar to knowledge distillation [22, 24]. Despite serving the same purpose, it differs in the implementation.
At every step \(k\), we take the background region of each ground truth map of the training set \(\mathcal{T}_{k}\) and we label it with the associated prediction from the model at the previous step \(M_{k-1}\) (see Fig. 4). We call this procedure _background inpainting_ since the background regions in label maps are changed according to a self-teaching scheme based on the prediction of the old model. More formally, we replace each original label map \(\mathbf{Y}\) available at step \(k>0\) with its inpainted version \(\mathbf{Y}^{bi}\):
\[\mathbf{Y}^{bi}[h,w]\!=\!\begin{cases}\mathbf{Y}[h,w]&\text{if }\mathbf{Y}[h,w]\! \in\!\mathcal{C}_{k}\\ \underset{c\in\mathcal{C}_{0\rightarrow k-1}}{\arg\max}\,M_{k-1}(\mathbf{X})[h, w][c]&\text{otherwise}\end{cases} \tag{3}\]
where \((\mathbf{X},\mathbf{Y})\in\mathcal{T}_{k}\), while \([h,w]\) denotes the pixel coordinates. Labels at step \(k=0\) are not inpainted, as in that case there are no previous classes. When background inpainting is performed, each set \(\mathcal{T}_{k}^{bi}\subset\mathcal{X}\times\mathcal{Y}_{\mathcal{C}_{0 \rightarrow k}}\) (\(k>0\)) contains all samples of \(\mathcal{T}_{k}\) after being inpainted.
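In tensor form, Eq. (3) simply replaces the non-current-class (background) pixels of the ground truth with the old model's prediction; a minimal PyTorch sketch (our own, assuming integer label maps) is:

```python
import torch

@torch.no_grad()
def background_inpaint(image, label, old_model, current_classes):
    """Eq. (3): keep pixels labeled with classes of the current step C_k and
    replace the remaining (background) pixels with the argmax prediction of
    the previous-step model M_{k-1}, whose outputs cover only old classes."""
    old_pred = old_model(image.unsqueeze(0)).argmax(dim=1).squeeze(0)
    keep = torch.isin(label, torch.as_tensor(list(current_classes), device=label.device))
    return torch.where(keep, label, old_pred)
```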
**Inpainting Web Data with Current Classes' Knowledge.** Images downloaded from the web typically also contain instances of the classes currently being learned. As
an example, in images of the _bicycle_ or _sofa_ classes, it is likely that there are also instances of the _person_ class. The helper decoders can only label already learned classes: e.g., if we are learning the _person_ class, its instances appearing in the replay data are ignored during training. To exploit this information as well, we let the main model update the labeling of the replay images during training, a procedure we call _knowledge self-inpainting_ (see Fig. 4). Notice that, in the case of knowledge self-inpainting, the model is already familiar with the information from the replayed classes. This introduces the possibility of expanding the old-class pixels through self-inpainting, which could mislead the model and lead to a drop in performance. To address this concern, we introduce a third term in the knowledge inpainting process. This term acts as a constraint, ensuring that the background pixel labels predicted by the current model for the old classes remain unchanged. This constraint prevents the model from being misled by self-inpainting, thereby maintaining performance stability (see Section VIII).
By defining \(\mathbf{Y^{max}}[h,w]=\underset{c\in\mathcal{C}_{0\to k}}{\arg\max}\,M_{k}(\mathbf{X})[h,w][c]\), i.e., the label predicted by the current model, the knowledge self-inpainting strategy can be defined as:
\[\mathbf{Y}^{ki}[h,w]\!=\!\begin{cases}\mathbf{Y}^{TP}[h,w]&\text{if }\mathbf{Y}^{TP}[h,w]\!\in\!\mathcal{C}_{0\to k-1}\\ \mathbf{Y}^{max}[h,w]&\text{if }\mathbf{Y}^{TP}[h,w]\!\notin\!\mathcal{C}_{0\to k-1}\wedge\mathbf{Y}^{max}[h,w]\in\mathcal{C}_{k}\\ \mathbf{Y}^{TP}[h,w]&\text{if }\mathbf{Y}^{TP}[h,w]\!\notin\!\mathcal{C}_{0\to k-1}\wedge\mathbf{Y}^{max}[h,w]\notin\mathcal{C}_{k}\end{cases} \tag{4}\]
Notice how the labels are updated only in the second case of Equation (4), i.e., for background pixels that the current model labels with one of the classes currently being learned.
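A corresponding sketch for Eq. (4) (again our own illustration; `pseudo_label` plays the role of \(\mathbf{Y}^{TP}\) and `model` of the current \(M_{k}\)) overwrites only replay pixels that are not old classes and that the current model assigns to a class in \(\mathcal{C}_{k}\):

```python
import torch

@torch.no_grad()
def knowledge_inpaint(image, pseudo_label, model, old_classes, new_classes):
    """Eq. (4): leave old-class pixels untouched; among the remaining pixels,
    adopt the current model's prediction only where it falls in C_k."""
    pred = model(image.unsqueeze(0)).argmax(dim=1).squeeze(0)
    device = pseudo_label.device
    is_old = torch.isin(pseudo_label, torch.as_tensor(list(old_classes), device=device))
    pred_is_new = torch.isin(pred, torch.as_tensor(list(new_classes), device=device))
    update = (~is_old) & pred_is_new
    return torch.where(update, pred, pseudo_label)
```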
### _Incremental Training with Replay Block._
The training procedure of the proposed approach is summarized in Algorithm 1 and depicted in Fig. 3. Let us focus on a generic incremental step \(k\), where only samples of classes in \(\mathcal{C}_{k}\) from partition \(\mathcal{T}_{k}\) are available. Firstly, the Replay Block is used to retrieve unlabeled web data for all the past classes of steps \(0\) to \(k-1\). Then, the domain discriminator followed by the semantic selection technique is applied to the replay images of each incremental class set \(\mathcal{C}_{i},i=0,...,k-1\). The replay training dataset for step \(k\) is the union of the replay sets of all previous steps: \(\mathcal{R}^{SC}_{\mathcal{C}_{0\to(k-1)}}=\bigcup\limits_{i=0}^{k-1} \mathcal{R}^{SC}_{\mathcal{C}_{i}}\). Once we have assembled \(\mathcal{R}^{SC}_{\mathcal{C}_{0\to(k-1)}}\), by merging it with \(\mathcal{T}^{bi}_{k}\) we get an augmented step-\(k\) training dataset \(\mathcal{T}^{rp}_{k}=\mathcal{T}^{bi}_{k}\cup\mathcal{R}^{SC}_{\mathcal{C}_{0 \to(k-1)}}\). This new set, in principle, includes annotated samples containing both old and new classes, thanks to the replay data. Therefore, we can effectively learn the segmentation model \(M_{k}\) through the cross-entropy objective \(\mathcal{L}_{ce}(M_{k};\mathcal{C}_{0\to k},\mathcal{T}^{rp}_{k})\) on the replay-augmented training data. This mitigates the bias toward new classes, thus preventing catastrophic forgetting.
Then, we fine-tune the domain discriminator using the \(\mathcal{T}_{k}\cup\mathcal{R}_{\mathcal{C}_{0\to(k-1)}}\) set (see Sec. V-A) and optimize the decoder \(D^{H}_{\mathcal{C}_{k}}\) to segment images from \(\mathcal{T}_{k}\) by minimizing \(\mathcal{L}_{ce}(L_{\mathcal{C}_{k}};\mathcal{C}_{k}\cup\{b\},\mathcal{T}_{k})\). These stages are not necessary for the current step, but will be exploited for the future ones.
During a standard incremental training stage, we follow a mini-batch gradient descent scheme, where batches of annotated training data are sampled from \(\mathcal{T}^{rp}_{k}\). However, to
Fig. 4: Background self-inpainting and knowledge self-inpainting processes. At step \(k\), the background self-inpainting technique updates the past knowledge (i.e., _horse_) on the current step training images before the training step starts. During training, the knowledge self-inpainting adds the label information of the classes being learned (i.e., _person_) to the web-downloaded images.
Fig. 3: Overview of RECALL+: class labels from the past steps are retrieved by Source Block, which consists of a domain-discriminator and filters the duplicate and near-duplicate images for each class. Then these selected images are further filtered by a CDF-based thresholding strategy. Finally, the segmentation network is incrementally trained with both replay data and new class data.
guarantee a proper stream of information, we opt for an interleaving sampling policy, rather than a random one. In particular, at a generic iteration of training, a batch of data \(\mathcal{B}^{rp}\) supplied to the network is made of \(r_{new}\) samples from the current training partition \(\mathcal{T}_{k}^{bi}\) and \(r_{old}\) replay samples from \(\mathcal{R}_{\mathcal{C}_{0\to(k-1)}}^{SC}\). The ratio between \(r_{new}\) and \(r_{old}\) controls the proportion of replay and new data (details are discussed in [7]). We need, in fact, to carefully balance new data with respect to replay data, so that enough information about new classes is provided within the learning process while we concurrently assist the network in recalling knowledge acquired in past steps to prevent catastrophic forgetting.
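A simple way to realize this interleaving (our own sketch; the datasets are assumed to be lists of image-label pairs) is to draw a fixed number of new and replay samples for every batch:

```python
import random

def interleaved_batches(new_data, replay_data, r_new, r_old, num_batches):
    """Yield batches mixing r_new current-step samples with r_old replay samples."""
    for _ in range(num_batches):
        batch = random.sample(new_data, r_new) + random.sample(replay_data, r_old)
        random.shuffle(batch)
        yield batch
```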
## V Image Selection Strategies
We assume that images are collected from a generic photo-sharing website. For the results we use Flickr, which proved to be a good choice for multiple reasons: 1) images from Flickr are user-uploaded pictures that implicitly contain real-world noise, making the model more robust during deployment; 2) differently from machine learning training datasets, in which images may be acquired in a controlled setup, these images are collected in-the-wild, thus covering a wide range of pose and lighting conditions, making the dataset more unbiased and diverse, but also less reliable; 3) using randomly selected uploaded images additionally helps reduce the domain gap between the training data and real-world deployment.
Querying for photos on the web via the class label name yields uncontrolled results that include images useful for training, but also images without the expected content or with other anomalies (see Fig. 5 and Fig. 7), making them useless or even misleading. In this section, we present the strategies introduced in this extended version to select suitable pictures for training from the web data. To this aim, we combine an adversarial learning approach with a threshold-based selection mechanism. Firstly, we use the adversarial learning strategy to filter out images whose statistical distribution does not resemble that of the training dataset. Then, an adaptive threshold filtering strategy based on the predicted segmentation keeps only the images that have enough pixels predicted as belonging to the selected class.
### _Adversarial Learning for Image Selection._
It is crucial for a training dataset to be unbiased, to have enough diversity and to properly capture the statistical properties of the different classes.
To get the desired unbiased and diverse images, we downloaded them from the Flickr website, since it contains billions of images uploaded by ordinary users. Then, to ensure diversity, we discarded duplicate images by simple thresholding based on the peak signal-to-noise ratio (PSNR) between pairs of images.
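The duplicate check amounts to a plain pairwise PSNR comparison; a minimal version (our own sketch, with an illustrative threshold value) is:

```python
import numpy as np

def psnr(img_a, img_b):
    """Peak signal-to-noise ratio between two images with values in [0, 255]."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def remove_duplicates(images, psnr_threshold=40.0):
    """Keep an image only if its PSNR w.r.t. every already kept image stays below the threshold."""
    kept = []
    for img in images:
        if all(psnr(img, ref) < psnr_threshold for ref in kept):
            kept.append(img)
    return kept
```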
Following the download and verification process, we obtained a collection of 10K unique images for each class. We remark that a practical system should download images on-the-fly during training; however, we downloaded a fixed set to ensure repeatability of the results across multiple experiments. Yet, this does not guarantee that these images have statistical properties suitable for the training. We assume that the images in the Pascal dataset used for supervised training had the desired characteristics, and we aim to find web images that closely resemble the no longer available Pascal ones.
**Adversarial Training Strategy.** For this purpose, we train a discriminator network to distinguish between Pascal images and web-downloaded ones.
Rather than training an individual discriminator network for each incremental step, which would lead to a memory consumption increase proportional to the number of steps, we utilize a single discriminator and employ the following incremental training approach: at step \(0\), the negative samples are selected from the web images for classes in \(\mathcal{C}_{0}\) and the positive samples are taken from the Pascal data, also for classes in \(\mathcal{C}_{0}\); for each incremental step \(k>0\), the images of the currently learned classes are selected using the same strategy; however, for the no-longer-available positive samples of the old classes, we use web images that prove effective at deceiving the discriminator. These images do not increase the memory burden since they can be easily obtained through web queries at each step \(k\).
Given an input \(\mathbf{X}\), the discriminator produces confidence scores (logits) \(z=[z_{p},z_{rp}]\in\mathbb{R}_{0+}^{2}\) for training-dataset images and web-replay ones, respectively; negative values are excluded since \(z\) is passed through a ReLU activation function.
Normally, the discriminator compares \(z_{p}\) and \(z_{rp}\) to produce a classification outcome: if \(z_{p}>z_{rp}\), \(\mathbf{X}\) is classified as original training data, and vice versa. However, this relationship is not strong enough to select the images that will be used to update the discriminator; therefore, we introduce the set of _core_ Replay Images \(\mathcal{R}_{\mathcal{C}_{0\to(k-1)}}^{core}\) defined as follows:
\[\mathcal{R}_{\mathcal{C}_{0\to(k-1)}}^{core}=\{\mathbf{X}_{\mathcal{C}_{0\to(k -1)}}^{rp}\mid z_{p}>\alpha\:z_{rp}\} \tag{5}\]
Here the hyper-parameter \(\alpha\) controls the ratio between the two scores and we set it much larger than \(1\) (for the results
we empirically set \(\alpha=100\)). This means that only the replay samples that mislead the discriminator with very high confidence are chosen in order to achieve a more accurate training for the old classes.
To summarize, in each incremental step \(k\) the discriminator is trained using \(\mathbf{X}_{\mathcal{C}_{0\to k}}^{rep}\) as negative samples, and \(\mathbf{X}_{\mathcal{C}_{k}}\bigcup\mathcal{R}_{\mathcal{C}_{0\to(k-1)}}^{ core}\) as positive samples. Notice that, \(\mathbf{X}_{\mathcal{C}_{0\to(k-1)}}^{rep}\bigcap\mathcal{R}_{\mathcal{C}_{0 \to(k-1)}}^{core}=\varnothing\). The network and implementation procedure are detailed in Section VI. The discriminator is intentionally trained to achieve reasonably good accuracy, but not excessively high. This allows us to utilize it as a filter: when we obtain a new image from the web, we input it to the discriminator and retain only the images that successfully deceive the discriminator and are classified as original training set images. This approach enables us to preserve the images that show properties closely resembling those of the original training set. Some examples of the selected and discarded images are shown in Fig. 5. It can be observed that the discriminator-selected images exhibit a larger diversity.
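In code, the selection rule of Eq. (5) reduces to a ratio test on the two non-negative logits (our own sketch, assuming the discriminator outputs \([z_{p},z_{rp}]\) after the final ReLU):

```python
import torch

@torch.no_grad()
def select_core_replay(images, discriminator, alpha=100.0):
    """Keep only web images whose 'training-like' score dominates: z_p > alpha * z_rp."""
    core = []
    for img in images:
        z_p, z_rp = discriminator(img.unsqueeze(0)).squeeze(0)
        if z_p > alpha * z_rp:
            core.append(img)
    return core
```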
### _Image Selection with Semantic Content_
Despite reducing the domain shift between replay and training images, the domain discriminator is not able to distinguish between a meaningful sample and a less relevant one (both in the semantic and in the spatial sense). As visible in Fig. 7, some web-selected _airplane_ images lack useful information even for a human observer. To filter out non-helpful images, we rely on the currently trained segmentation network to further analyze their information content. The rationale is to preserve only the images that have a reasonable number of pixels belonging to the expected class.
Different objects have different sizes, and using a fixed threshold for all classes proved to lead to unsatisfactory performance. To tackle this issue, we analyzed the probability distribution functions of the fraction of pixels of each class in the corresponding images to get a reference for thresholding. That is, for each class \(c\) we computed the Cumulative Distribution Function (CDF, \(\mathcal{F}_{c}\)) of the distribution \(\mathcal{P}_{(\mathbf{X},\mathbf{Y})\sim\mathcal{T}_{k}|c\in\mathcal{C}_{0 \to k}}^{c}[\mathbf{Y}=c]\) and we used it to extract suitable thresholds for the object sizes according to a quantile strategy. Therefore, we computed the threshold value \(t_{c}^{size}\) for the number of pixels of class \(c\) in an image corresponding to that class as follows:
\[t_{c}^{size}=\mathcal{F}_{c}^{-1}(0.5) \tag{6}\]
The samples considered acceptable by our strategy are those whose fraction of pixels in the corresponding class belongs to \([t_{c}^{size},1]\). An example is reported in Fig. 6 (a): the plot shows the CDF curves for three representative classes. In around \(80\%\) of the _bottle_ images the object pixels are no more than \(20\%\) of the total image size. On the other hand, over half of the _bus_ images have buses with a relative size of more than \(30\%\), which shows how large the difference from one class to another can be. Fig. 6 (b)-(d) shows some examples of the pixels of three classes at different thresholds.
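Since \(\mathcal{F}_{c}^{-1}(0.5)\) is simply the per-class median of the pixel-fraction distribution, the thresholds of Eq. (6) and the acceptance test can be sketched as follows (our own illustration, operating on precomputed per-image pixel fractions and numpy label maps):

```python
import numpy as np

def class_size_thresholds(pixel_fractions_per_class, quantile=0.5):
    """For each class c, t_c = F_c^{-1}(quantile): the quantile of the fraction
    of image pixels occupied by class c over the training images."""
    return {c: float(np.quantile(fracs, quantile))
            for c, fracs in pixel_fractions_per_class.items()}

def accept_image(pred_label, class_id, thresholds):
    """Accept a web image if the predicted pixels of class_id cover at least t_c."""
    frac = float(np.mean(pred_label == class_id))
    return frac >= thresholds[class_id]
```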
## VI Implementation Details
Following other works in continual semantic segmentation, we use DeepLab-V3 [44] with ResNet-101 [45] as our base architecture. Nonetheless, notice that the proposed strategy is independent of the backbone architecture. The encoder has been initialized using a model pretrained on ImageNet [46], and all the parameters are trained during the initial step \(0\). For the following incremental steps, we freeze the encoder part, thus only the main decoder is trained, together with the additional \(\{D_{\mathcal{C}_{k}}^{H}\}_{k}\) helper decoders, which are essential for annotating replay samples in the future steps. For a fair comparison, we used the same training parameters as in the conference version [7] for the initial model \(M_{0}\). Following [16], we crop images to \(512\times 512\) and apply data augmentation (i.e., randomly scaling the images by a factor from 0.5 to 2.0 and random left-right flipping). We adopt SGD for weight optimization, with the initial learning rate set to \(5\times 10^{-4}\) and decreased
Fig. 5: Examples of filtered and accepted images according to our discriminator, notice that the excluded samples resemble each other, especially in object position and orientation.
Fig. 6: Visualization of the pixels with different ratios. The CDF curves of different classes differ from each other.
Fig. 7: Examples of airplane images that provide erroneous input from a semantic point of view. We can recognize four cases: missing class, wrong class, object too small, and non-dominant class. These cases are tackled by CDF-based filtering.
by polynomial decay with power \(0.9\). We train the model for \(|\mathcal{C}_{k}|\times 1000\) steps in both the disjoint and overlapped setups. The knowledge self-inpainting is performed once, when the training progress reaches around 60%. Each helper decoder \(D_{C_{k}}^{H}\) is trained with a polynomially decaying learning rate starting from \(2\times 10^{-4}\) and ending at \(2\times 10^{-6}\) for \(|\mathcal{C}_{k}|\times 1000\) steps. The interleaving ratio \(r_{old}/r_{new}\) is set to \(1\). For the web images, we downloaded a set of \(10000\) images for each class (this allowed us to perform all the tests with the same data, thus ensuring reproducibility) and applied to them the selection strategies of Section V. For the experiments on Pascal VOC 2012 [47], we exploit the first 500 selected replay images of each class for training (the first 100 replay images for ADE20K [48]).
The discriminator of the adversarial module consists of a pre-trained EfficientNet-B0 encoder [49] followed by a three-layer fully connected network with dimensions \(1000\), \(256\) and \(2\). Training images are resized to \(224\times 224\). We employed the dataset images used for the training as positive samples and web-downloaded images as negative samples. A chunk of \(20\%\) of the training set is left out for validation. The discriminator is trained for 10 epochs using SGD with an initial learning rate of \(1\times 10^{-3}\), decreased by a factor of \(0.8\) every 2 epochs. The confidence parameter \(\alpha\) is set to 100 during the discriminator training. The entire framework is implemented in PyTorch [50] and trained on a single NVIDIA GTX 1080 Ti. Training time varies depending on the setup, with the longest run taking about \(10\) hours.
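The described discriminator can be assembled in a few lines (our own sketch based on torchvision's EfficientNet-B0; the exact arrangement of the fully connected head is an assumption consistent with the dimensions given above):

```python
import torch.nn as nn
from torchvision import models

class DomainDiscriminator(nn.Module):
    """EfficientNet-B0 encoder followed by a 1000-256-2 fully connected head,
    with a final ReLU so that both logits are non-negative."""
    def __init__(self):
        super().__init__()
        self.encoder = models.efficientnet_b0(weights="DEFAULT")  # 1000-d output
        self.head = nn.Sequential(nn.Linear(1000, 256), nn.ReLU(),
                                  nn.Linear(256, 2), nn.ReLU())

    def forward(self, x):  # x: (B, 3, 224, 224)
        return self.head(self.encoder(x))
```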
## VII Experimental Results
In this section, we present the experimental evaluation on the Pascal VOC 2012 [47] and ADE20K [48] datasets. Following previous works tackling this topic [14, 15, 22, 24], we start by analyzing the performance on three widely used incremental scenarios, i.e., the addition of the last class (19-1), the addition of the last 5 classes at once (15-5) and the addition of the last 5 classes sequentially (15-1). Moreover, we report the performance on three more challenging scenarios in which 10 classes are added sequentially one by one (10-1), in 2 batches of 5 elements each (10-5) and all at once (10-10). Classes for the incremental steps are selected in alphabetical order. The naive fine-tuning approach (FT) represents the lower limit to the accuracy of an incrementally learned model, while the joint training on the complete dataset in one step corresponds to the upper bound. We also report the results of a simple Store and Replay (S&R) method, where at each incremental step we store a certain number of true samples for newly added classes, such that their size on average matches the size of the helper decoders needed by RECALL+. As a comparison, we include two methods extended from classification (i.e., LwF [9] and its single-headed version LwF-MC [2]) and some of the most relevant methods designed for continual segmentation (i.e., ILT [22], CIL [23], MiB [24], SDR [16], RECALL [7], PLOP [25], RCIL [12], REMI [27], SPPA [28], RBC [29], AWT [52] and EWF [30]). Exhaustive quantitative results in terms of mIoU are shown in Table I for the Pascal dataset and Table II for ADE20K. For each setup we report the mean accuracy for the initial set of classes, for the classes in the incremental steps and for all classes, computed after the overall training. In addition, since the performance of the joint-trained model differs across setups, we also report another metric, namely "\(\Delta\)" (the smaller the better), to evaluate the proposed approach. In more detail, \(\Delta\) is given by the difference between the joint and the incremental models (Joint and _all_, respectively, in both Tables I and II).
**Addition of the last class.** In the first basic experiment, the model is trained over the first 19 classes during the initial step and then a single incremental step on the _tv/monitor_ class is performed. Looking at Table I, fine-tuning leads to a large performance drop. Using replay data from the web and inpainting strategies our approach is able to preserve past knowledge and properly learn the new classes. Notice how there is a performance improvement of about \(5\%\) in the disjoint scenario and \(2\%\) in the overlapped one, showing how the new image selection strategies allowed for a much better data replay than in the previous conference version. The overall mIoU is better than most competitors even if a couple of the most recent ones outperform our approach, but as already pointed out our approach is optimized for more realistic settings with many incremental steps.
**Addition of last 5 classes.** Then we considered two more challenging settings where 15 classes are learned in the initial step and the last 5 in an incremental way, in one shot (15-5) or sequentially one at a time (15-1), respectively, leading to a more severe catastrophic forgetting.
Taking a closer look at the results in Table I (upper mid and right sections), FT has very low performances in this case and continual learning methods are required. Again the proposed approach effectively tackles catastrophic forgetting. The trend can be observed both in the disjoint and overlapped settings in both the 15-5 setup and more evidently in the 15-1 one. Notice how exploiting web replay samples proves to effectively restore knowledge of past classes and the additional refinements introduced in this journal version allow for a large performance increase in all settings, compared to RECALL [7].
When comparing with competitors, it is possible to notice how our approach scales much better when multiple incremental steps are performed: in the 15-5 setting, results are similar to those of the best competitors (nevertheless, according to the \(\Delta\) metric, which reflects the best-case scenario, our approach stands out as the top performer). In the 15-1 setting (where 5 incremental steps are performed), we clearly outperform all competitors.
**Addition of last 10 classes.** For further analysis, we consider even more challenging settings where the initial step contains only 10 classes. The other 10 can be learned in a single step (10-10), in 2 steps of 5 classes (10-5), or one-by-one (10-1). Especially the latter two settings, where the learning happens in multiple stages, prove to be very challenging for most continual learning approaches. The introduction of replay data allows for a very large performance improvement, outperforming competitors in the settings with more learning steps. In this case, the gap with [7] is a bit smaller, but notice that in the most challenging setting (10-1), the new approach has the best performance in both the overlapped and disjoint settings, with a noticeable gap with respect to the conference
version and other competitors. As a final remark, notice that, on the one hand, the approach exploits additional external data but, on the other hand, this is unsupervised data that is easy to obtain from the web. Furthermore, we do not exploit any additional provisions in the network training, such as extra modules or loss functions.
**ADE20K.** Table II shows the results on the challenging ADE20K dataset [48]. The first considered setting is 100-50 (2 steps). Here the web-crawled images allow us to outperform competitors. The 100-10 setting has 6 steps: despite the baseline network performance being worse than that of some competitors, our approach still achieves competitive performance, as proved by the \(\Delta\) metric, which is the second best. For the more challenging 100-5 setting (11 steps), RECALL+ achieves the state of the art (w.r.t. the \(\Delta\) metric), outperforming the other methods, which verifies the robustness of RECALL+ in tasks with a large number of incremental steps. We also notice that for the 50-50 task, RECALL+ is slightly weaker than some competitors. We conjecture that both the frozen backbone and the large number of new classes in every single step reduce the performance.
**Qualitative evaluation.** We further show the qualitative results of the different methods in the 15-1 disjoint setting in Fig. 8. In the first row, PLOP and REMI are confused by the background, which causes mislabeling of the _person_. RCIL and RECALL preserve the knowledge of _person_, while their capability on the _bike_ seems to be diminished. Compared with the other methods, RECALL+ not only prevents mislabeling, but also preserves most of the _bike_ and _person_ area w.r.t. the label predicted by the joint-trained model. For the second row, PLOP and REMI lose control of the _dining table_, and a few spurious _potted plant_ pixels appear in the RCIL prediction. RECALL preserves the _person_ well, but the information about the _dining table_ is also forgotten. By using the proposed image-selection strategy, diverse replay images help RECALL+ preserve the information about the _dining table_ and prevent the prediction from becoming coarser. Finally, for the last row, similar to the former situation, PLOP and REMI give coarser predictions, and RCIL completely mislabels the _chair_ as _potted plant_. Compared with RECALL, benefiting from the replay images and powerful helpers, the prediction of RECALL+ is more similar to the ground truth label, showing the effectiveness of the proposed image-selection strategy.
## VIII Ablation Study
To further validate the effectiveness of the various components of our approach, we perform some ablation studies. Firstly, we discuss the impact of the background and knowledge self-inpainting techniques introduced in Section IV-B. Then, we analyze the proposed image selection methods to evaluate their impact on the achieved accuracy.
**Inpainting Strategies.** We start by analyzing the contribution of the background self-inpainting and knowledge self-inpainting in the most challenging task, i.e., the 10-1 setting. Results are presented in Table III: the background inpainting technique provides a solid contribution to the preservation of past knowledge, as expected, since it forces the optimization to focus also on data of the past classes, playing a role similar to the knowledge distillation strategies used in other approaches. By inpainting the past knowledge in the new images, we achieve an improvement of 1.9% (from 64.0% to 65.9%) and 10.9% (from 45.4% to 56.3%) on old and new classes, respectively. The knowledge self-inpainting on replay data brings a smaller improvement of \(0.9\%\) (from 65.0% to 65.9%) and \(0.7\%\) (from 55.6% to 56.3%) on old and new classes, respectively. This is expected since it concerns a more limited number of classes and exploits the model for the new classes, which is still being learned in the current step and hence not completely reliable. Notice that, differently from background self-inpainting, knowledge self-inpainting does not affect the labels of old classes. Since the predictions tend to become coarser as learning progresses, performing inpainting of the replay images without any constraint reduces the performance (see Table IV). Specifically, in the table, we studied the effect of introducing in Equation (4) a further constraint on the classes already present (i.e., the third condition).
**Image Selection Methods.** Table V shows the ablation studies concerning the proposed image selection strategies, i.e., the adversarial learning-based (AL) selection strategy and the CDF-based threshold (CTH) selection strategy. In addition, a fixed threshold strategy (FTH) is also listed as a reference: FTH uses a fixed label-size threshold of \(25\%\) of the image area to perform the selection.
The baseline without any selection strategy leads to an accuracy of \(51.6\). Notice that the self-inpainting techniques are not adopted here.
Let us consider AL first: incorporating the AL filtering strategy brings an improvement of \(2.5\%\) w.r.t. the baseline, which indicates that a dataset of more accurately selected images is beneficial to the model. Using CTH alone causes a slight drop of \(0.3\%\) in performance, while using FTH alone yields an improvement of \(2.2\%\), which indicates that the replay images have to contain a sufficiently large label area to preserve the knowledge. When combining AL with FTH and CTH respectively, AL+CTH gains an improvement of \(1.9\%\) w.r.t. AL+FTH, showing that thresholding the semantic content without accounting for the different sizes of the various object classes may degrade the performance.
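To make the fixed-threshold (FTH) baseline concrete, the following is a minimal sketch of the area-based filter described above, assuming each candidate web image comes with a per-pixel pseudo-label map predicted for its expected class; the 25% threshold is the value quoted in the text, while the function name and data layout are illustrative assumptions.

```python
import numpy as np

def fth_keep_image(pseudo_label, expected_class, area_threshold=0.25):
    """Fixed-threshold (FTH) selection sketch: keep a web-crawled image only if
    the pixels predicted as the expected class cover at least `area_threshold`
    of the image area. `pseudo_label` is an (H, W) array of class indices."""
    class_fraction = np.mean(pseudo_label == expected_class)
    return class_fraction >= area_threshold

# Toy usage on a dummy 4x4 prediction map for class id 3 (5/16 = 0.3125 >= 0.25).
dummy_pred = np.array([[3, 3, 0, 0],
                       [3, 3, 0, 0],
                       [3, 0, 0, 0],
                       [0, 0, 0, 0]])
print(fth_keep_image(dummy_pred, expected_class=3))  # True
```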
## IX Conclusions
In this paper, we tackle catastrophic forgetting by proposing an efficient yet effective replay strategy based on web-crawled images. We extract from the web the images most suitable for training using a selection strategy that exploits two main insights: an adversarial learning strategy, which aims at selecting images with statistics resembling those of the original training data, and an analysis of the semantic prediction, which filters out images whose semantic labels do not match the expected semantic content. In addition, we extend the background self-inpainting technique to the web-downloaded images, further improving the performance. The extensive experimental evaluation proves the effectiveness and robustness of the proposed approach. Future research will explore how to exploit unsatisfactory web-crawled data instead of simply discarding it. Moreover, weak supervision will be considered as a strategy to replace pixel-wise labelling and to tackle more challenging settings where only image-level labels are available in the incremental steps.
Fig. 8: Qualitative results comparison on disjoint incremental setup 15-1 on the Pascal VOC 2012 validation dataset. |
2301.13752 | Quantum mechanics at high school: an online laboratory on wave-particle
duality | The interest in studying quantum mechanics is always increasing in our
society and schools. Especially in the latter case, this leads researchers to
implement suitable actions to meet social needs of knowledge of quantum
physics. We present an online laboratory on wave-particle duality for high
school students (17-19 years old). The activity has been carried out in the
period December 2021 - May 2022 at the Physics Department of the University of
Cagliari and more than 100 students from different high schools in Sardinia
have been involved. We will show the design of the activity and the experiments
performed. We will show and discuss qualitatively results about a satisfaction
questionnaire. A brief discussion about motivational issues will be done. | Matteo Tuveri, Daniela Fadda, Carlo Maria Carbonaro | 2023-01-31T16:39:12Z | http://arxiv.org/abs/2301.13752v1 | # Quantum mechanics at high school: an online laboratory on wave-particle duality
###### Abstract
The interest in studying quantum mechanics is steadily increasing in our society and schools. Especially in the latter case, this leads researchers to implement suitable actions to meet the social need for knowledge of quantum physics. We present an online laboratory on wave-particle duality for high school students (17-19 years old). The activity was carried out between December 2021 and May 2022 at the Physics Department of the University of Cagliari, and more than 100 students from different high schools in Sardinia were involved. We show the design of the activity and the experiments performed, and we qualitatively discuss the results of a satisfaction questionnaire. A brief discussion of motivational issues is also provided.
## 1 Introduction
Quantum mechanics is all around us, and the interest in studying this subject is increasing in our society. For example, topics related to quantum physics are now part of high school programs. Newspapers, TV shows, and science communication profiles on social media often talk about the quantum technologies around us. Learning and being informed about quantum mechanics and its applications, both in research and in everyday life, is important for cultural reasons and to become conscious citizens [1]. From this point of view, researchers play an important role in society: they have to implement suitable actions to meet the social need for knowledge of quantum physics.
Starting from schools, many strategies can be used to approach the quantum world, focusing on technological aspects [2, 3] or on historical and informal ones [5]. The educational content of these approaches can address different subjects, including conceptual and linguistic aspects [4, 6], where language, both natural and mathematical, is used as an instrument to introduce the peculiar features of the quantum world [7].
Another possibility is to focus on one of the main conceptual issues of quantum mechanics, namely wave-particle duality, and to develop suitable learning strategies that highlight the manifestation of the dual nature of matter and light.
In this paper we present a laboratory on wave-particle duality for high school students (17-19 years old). The experimental activities were carried out between December 2021 and May 2022 at the Physics Department of the University of Cagliari. More than one hundred students from different high schools in Sardinia were involved; their participation was online due to the pandemic. Our main aim is to show the design of the activity and the experiments performed. Inspired by previous research in this field [1], we also wrote a research questionnaire to understand how the online laboratory affected students' motivation and interest in physics, their vision of the scientific method, and the influence of the laboratory on their understanding of physics and of the concepts studied at school. A detailed analysis will appear in a forthcoming paper; here, the research methodology and a qualitative analysis of the data are presented.
## 2 Methods
The main topic of the laboratory was wave-particle duality. We focused on waves (mechanical and electromagnetic) and their properties, as well as on particular macroscopic properties and phenomenology of matter (such as scattering). Four different experiments dealing with the undulatory and particle properties of matter were shown and discussed.
The first experiment dealt with mechanical waves propagating in a fluid. In this case, researchers focused on diffraction as a key phenomenon to introduce the dual behavior of matter according to the experimental set-up and conditions. The second experiment was a flipper-like apparatus, with marbles hitting a screen after passing through a slit. This was meant to explain and show the particle behavior of matter, that is, that massive particles and, in general, macroscopic objects (with a size of the order of centimeters or more) do not diffract. Also in this case, researchers focused on the phenomenon of elastic scattering and on the relationship between the size of the marbles and that of the slit. The third experiment concerned the diffraction of light. Red light emitted by a laser (with a wavelength of about 650 nm) passed through some lenses and a slit to be coherently collimated into a beam. The slit can be opened or closed until its size becomes comparable with the laser wavelength; then, diffraction occurs. Finally, in the fourth experiment, researchers showed electron diffraction through a suitable experimental set-up (the electron diffraction system built by Phywe). In this case, the diffraction manifests as rings on a fluorescent screen. This experiment shows that, under suitable conditions, that is, for an electron passing through a slit (the graphite planes) of dimensions comparable with its wavelength, even what is typically thought of as a particle manifests an undulatory phenomenology. To also show that in this process the electron does not lose its charge, we used a magnet to move the diffraction pattern across the screen.
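To make the condition "a slit of dimensions comparable with the electron wavelength" more concrete, the short sketch below evaluates the non-relativistic de Broglie wavelength \(\lambda=h/\sqrt{2m_{e}eV}\) for a couple of accelerating voltages. The voltages are assumed values typical of didactic electron-diffraction tubes (the settings actually used in the laboratory are not reported here), and the graphite interplanar spacings quoted in the comments are standard reference values.

```python
import math

h = 6.626e-34    # Planck constant (J s)
m_e = 9.109e-31  # electron mass (kg)
e = 1.602e-19    # elementary charge (C)

def de_broglie_wavelength(voltage_V):
    """Non-relativistic de Broglie wavelength (m) of an electron accelerated
    through voltage_V volts: lambda = h / sqrt(2 * m_e * e * V)."""
    return h / math.sqrt(2.0 * m_e * e * voltage_V)

# Assumed accelerating voltages, in the kV range of typical teaching tubes.
for kv in (2.0, 4.0):
    lam = de_broglie_wavelength(kv * 1e3)
    print(f"{kv:.0f} kV -> lambda ~ {lam * 1e12:.1f} pm")

# Graphite interplanar spacings acting as the 'slits' are about 213 pm and 123 pm,
# close enough to lambda (a few tens of pm) for Bragg diffraction rings to appear.
```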
The methodological structure of the laboratory was as follows. First, an introductory game was proposed using the "Quizziz" platform to qualitatively probe students' expectations about the phenomena shown during the activity. The questions had no evaluation intent; rather, they just measured the students' feelings or previous knowledge of the subject (especially for students attending the last years of high school). This activity lasted ten minutes. After that, the experimental activities started (duration: 40 minutes). The laboratory ended with a general recap of the physics involved, and the results of the introductory game were discussed in the light of the phenomena observed. We leave a detailed discussion of the pedagogical approach to a future paper, where further details will be given.
The total duration of the activity was about one hour. Contents were targeted to the audience: the younger the participants, the fewer technicalities and details were included in the discussion. Despite the online environment, a certain level of interaction with the class, mediated by the teacher, was guaranteed.
The participants in the synchronous online sessions were 104 high-school students attending the last three years of Lyceums in Sardinia (in the metropolitan area of Cagliari; 1 "humanities" and 5 "scientific" schools). The total number of participants was obtained by summing the in-class counts made by teachers once the synchronous session started. Teachers and students attended the online laboratory from their classrooms, and a Zoom connection was established, with cameras filming the researchers and the experiments connected to a laptop with a Raspberry Pi system. Every class attended the laboratory separately, so the total number of meetings was 6.
We wrote a satisfaction questionnaire to investigate students' feedback on their experience with the online laboratory (2 items), on the influence of the laboratory on their understanding of physics and of the concepts studied at school (2 items), on their vision of science and of the scientific method (3 items), and on the interaction with researchers (2 items). Students could answer using a 5-point Likert scale, from 1 (completely disagree) to 5 (completely agree). For each class, data were collected between 15 days and one month after the end of the synchronous meeting. The questionnaire was written in Italian and implemented in Microsoft Forms. The teacher distributed it to the students as a link via email. Students' participation was voluntary, with no positive or negative inducements. The questionnaire was anonymous; no information on gender or class was collected. The number of answers collected was 104.
In the following, we show and discuss only the qualitative results related to students' satisfaction in the four domains cited above.
## 3 Results
Concerning students' feedback on their experience with the online laboratory, most of them (78.8%) affirmed that the topics of the lab were interesting. About half of them (51.1%) thought the lab fostered their curiosity about its topics, whereas one third of the sample (30.8%) stayed neutral on this item. Concerning the influence of the laboratory on their understanding of physics and of the concepts studied at school, 46.2% of students affirmed that, thanks to the lab, they could explore the physics phenomena they were studying at school, while 29.8% of the sample was neutral. The capacity of the lab to engage students in studying physics was rated as good by 66.3% of students.
Concerning their vision of science and of the scientific method, 68.2% of students affirmed that the lab helped them understand the importance of taking, analyzing, collecting, and interpreting data. Most of them (67.3%) affirmed that the lab helped them understand how to carry out scientific research. The same happened when we asked students whether the lab allowed them to think about and explain the observed phenomena: in this case, 57.5% of students agreed with this item. Concerning the interaction with researchers, the majority of them (78.8%) affirmed that the interactions with researchers were useful. Finally, the item "attending the remote lab with the researcher helps me in understanding the experiment" was positively rated by 65.3% of the sample.
## 4 Discussion and conclusions
The qualitative results on students' interest and curiosity towards physics, as well as on their motivation to participate in the online activity, are encouraging. Moreover, students appreciated interacting with researchers even in an online environment. Most of them also affirmed that the interaction with researchers was helpful in understanding the physics behind the experiments. This result suggests that interaction is a key point in learning and in outreach activities, too. Another interesting result is that students affirmed that our initiative helped them understand the scientific method. The laboratory also seemed to have a fair influence on students' understanding of the physics concepts studied at school.
Some critical issues emerged: the teacher-mediated interaction between researchers and students did not encourage a constant and active participation of the latter in the lecture. Students appeared to be afraid of possible judgement from their teacher if they said something wrong while talking with the researchers. This is a crucial point to be addressed in order to find strategies for implementing online physics learning in schools within the curricular timetable. A possible solution is to include teachers in the design of the activity, thus making them co-authors of the project. In this sense, if they already know what the researchers will show, they can explain concepts to their class during the synchronous activity, e.g., when internet connection problems arise or whenever they think it is needed.
In the future, we hope to increase the sample to improve the statistics and to better understand the efficacy of our methodology, possibly introducing a quantitative measure of students' learning of the concepts proposed in the laboratory through pre- and post-questionnaires. We are also planning to implement the laboratory in a proper education and learning platform, allowing us to study all the steps of participants' online learning. This platform will allow us to follow the students also in the asynchronous phase, when they re-elaborate the supplementary learning material uploaded on the platform and focus on the content of the laboratory. All these activities are left for a future study.
###### Acknowledgements.
The authors acknowledge the faculty members, teachers, and students who participated in the study.
|
2309.08236 | Future Research Perspective on the Interfacial Physics of Non-Invasive
Glaucoma Testing in Pathogen Transmission from the Eyes | Non-contact Tonometry (NCT) is a non-invasive ophthalmologic technique to
measure intraocular pressure (IOP) using an air puff for routine glaucoma
testing. Although IOP measurement using NCT has been perfected over many years,
various phenomenological aspects of interfacial physics, fluid structure
interaction, waves on corneal surface, and pathogen transmission routes to name
a few are inherently unexplored. Research investigating the interdisciplinary
physics of the ocular biointerface and of the NCT procedure is sparse and hence
remains to be explored in sufficient depth. In this perspective piece, we
introduce NCT and propose future research prospects that can be undertaken for
a better understanding of the various hydrodynamic processes that occur during
NCT from a pathogen transmission viewpoint. In particular, the research
directions include the characterization and measurement of the incoming air
puff, understanding the complex fluid-solid interactions occurring between the
air puff and the human eye for measuring IOP, investigating the various waves
that form and travel; tear film breakup and subsequent droplet formation
mechanisms at various spatiotemporal length scales. Further, from ocular
disease transmission perspective, the disintegration of the tear film into
droplets and aerosols poses a potential pathogen transmission route during NCT
for pathogens residing in nasolacrimal and nasopharynx pathways. Adequate
precautions by opthalmologist and medical practioners are therefore necessary
to conduct the IOP measurements in a clinically safer way to prevent the risk
associated with pathogen transmission from ocular diseases like conjunctivitis,
keratitis and COVID-19 during the NCT procedure. | Durbar Roy, Saptarshi Basu | 2023-09-15T08:14:43Z | http://arxiv.org/abs/2309.08236v1 | Future Research Perspective on the Interfacial Physics of Non-Invasive Glaucoma Testing in Pathogen Transmission from the Eyes
###### Abstract
Non-contact Tonometry (NCT) is a non-invasive ophthalmologic technique to measure intraocular pressure (IOP) using an air puff for routine glaucoma testing. Although IOP measurement using NCT has been perfected over many years, various phenomenological aspects of interfacial physics, fluid-structure interaction, waves on the corneal surface, and pathogen transmission routes, to name a few, remain essentially unexplored. Research investigating the interdisciplinary physics of the ocular biointerface and of the NCT procedure is sparse and hence remains to be explored in sufficient depth. In this perspective piece, we introduce NCT and propose future research prospects that can be undertaken for a better understanding of the various hydrodynamic processes that occur during NCT from a pathogen transmission viewpoint. In particular, the research directions include the characterization and measurement of the incoming air puff, understanding the complex fluid-solid interactions occurring between the air puff and the human eye for measuring IOP, investigating the various waves that form and travel, and the tear film breakup and subsequent droplet formation mechanisms at various spatiotemporal length scales. Further, from an ocular disease transmission perspective, the disintegration of the tear film into droplets and aerosols poses a potential pathogen transmission route during NCT for pathogens residing in the nasolacrimal and nasopharynx pathways. Adequate precautions by ophthalmologists and medical practitioners are therefore necessary to conduct IOP measurements in a clinically safer way and to prevent the risk associated with pathogen transmission from ocular diseases like conjunctivitis, keratitis, and COVID-19 during the NCT procedure.
## I Introduction
Non-contact tonometry (NCT)[1] is a widely used technique for measuring intraocular pressure (IOP)[2; 3], a key ophthalmologic diagnostic indicator for various ocular conditions, including glaucoma[4; 5; 6; 7]. Early detection of glaucoma helps ophthalmologists treat conditions originating from high IOP. Excess and irregular IOP can cause stress in the optic nerve, damaging nerve fibers and resulting in the formation of peripheral blind spots, tunnel vision, and permanent blindness[8]. Intraocular pressure (IOP) refers to the fluid pressure inside the anterior chamber of the eye, which is maintained by the balance between the inflow and outflow of aqueous humor, a clear watery fluid that circulates within the eye[2; 3]. IOP is an important parameter in evaluating ocular health. It is used as a screening test for several conditions that can cause irreversible damage to the optic nerve and result in vision loss[9]. IOP is generally measured as a gauge pressure (i.e., pressure above the atmospheric pressure) in mm of Hg. Given the importance of non-invasive procedures like NCT in ocular health diagnostics, it is important to investigate the safety of various ophthalmologic procedures from a clinical and mechanistic perspective. Previous studies show[10; 11] that micro-aerosol and drop formation can occur from the corneal tear film during the NCT procedure. The drops and aerosols generated can open new pathogen transmission routes for microorganisms present in the nasolacrimal and nasopharynx pathways during non-invasive eye procedures. Pathogens present in human tears can spread through drops and aerosols originating from tear film destabilization, caused by the complex hydrodynamic interaction between the corneal tear film and the incoming air puff, as was shown in our previous studies[10; 12]. Further in-depth research into the various mechanistic processes is hence required to understand pathogen transmission during NCT from a quantitative perspective. This article highlights some of the research directions that can be pursued in the future to expand our understanding of NCT and of pathogen transmission during NCT.
Tonometry in general is a diagnostic test to measure the intraocular pressure (IOP)[13; 1] and is an essential tool in diagnosing and managing various ocular conditions. IOP measurements are performed using specialized instruments called tonometers, which can be subdivided into two major classes, contact and non-contact[14; 15]. Contact tonometry involves touching the cornea with a small, flat probe to measure the force needed to flatten a specific corneal area. The most commonly used contact tonometer, and the gold standard, is the Goldmann applanation tonometer[16; 17]. The test requires a topical anesthetic (local anesthesia) to numb the eye's surface before gently applying the probe to the cornea[18]. Non-contact tonometry, on the other hand, uses a puff of air to measure IOP. The non-contact test is performed using a device called a non-contact tonometer or air-puff tonometer[19]. The patient is seated in front of the device, and a puff of air is directed at the cornea. The device then measures the IOP based on the corneal deflection in response to the air puff. While both contact and non-contact tonometry are accurate methods of measuring IOP, there are some differences between the two techniques[19]. Contact tonometry is considered the gold standard for IOP measurement, as it provides more precise and reliable readings. However, it requires anesthetic drops and may be uncomfortable for some patients. Non-contact tonometry is a more comfortable alternative and is helpful in screening patients who cannot tolerate contact tonometry.

It is important to note that IOP readings can vary throughout the day, just like blood pressure, and a single measurement may not be sufficient to diagnose a condition such as glaucoma [20; 21]. Hence, a series of IOP measurements over time may be necessary to accurately assess changes in IOP and determine the best course of treatment. For repeated temporal measurements of IOP, non-contact methods are more practical and comfortable for the patients. Further, in a general hospital setting, the non-contact mode is more efficient and reliable for handling a large volume of patients. NCT is a safe and non-invasive procedure that can be performed in a doctor's office or clinic. It is a crucial part of routine eye exams, particularly for patients at high risk of developing glaucoma. Early detection and management of elevated IOP can help prevent vision loss and preserve ocular health. Contact and non-contact tonometry are accurate methods of measuring IOP, and the choice of technique depends on the individual patient's needs and preferences.
The human eye, an essential part of the sensory nervous system, is one of our body's most complex organs and involves many physical, biochemical, and physiochemical processes [8; 22]. The human eye consists of various substances in different states of matter, such as solids, liquids, gels, colloids, and soft materials, which perform essential physiological processes and functions. For example, intraocular pressure (IOP) is maintained by the fluid pressure of the aqueous humour in the eye's anterior chamber. The vitreous humour, another clear gel found inside the posterior chamber of our eyes, helps provide essential nutrients and maintains the shape of the eye. The corneal tear film is another important fluid responsible for several physiological processes such as protection against infection, removal of free radicals, and lubrication of the ocular surface [23; 24]. Further, the tear film also provides a smooth optical surface for light refraction. A thorough mechanistic understanding of the various processes and ophthalmologic conditions that occur inside our eyes is still elusive and requires in-depth future investigation and analysis. Ophthalmologic measurements like IOP using NCT rely on several unexplored hydrodynamic processes and pose a challenge for scientists, engineers, and medical professionals. Understanding fluid mechanics in the context of the human eye is fascinating and has significant implications for the accuracy of measuring devices like tonometers, and hence it is also important from a clinical perspective.
The measurement of IOP using NCT is a transient hydrodynamic process and involves the interaction of a high-speed air puff (velocity scale of the order of \(5m/s\)) with the cornea [10]. The critical interplay of external air pressure and intraocular pressure governs the corneal dynamics and response. Further, corneal mechanical properties like elasticity, stiffness, and viscoelasticity play a crucial role in determining the corneal displacement profile and response time scale [8]. The amount of pressure required to flatten a specific area of the Cornea is used to calculate the IOP. Normal IOP ranges between 10 and 21 mmHg (millimeters of mercury) above atmospheric pressure but can vary between individuals and even throughout the day. Factors such as age [15], genetics, and body position can all influence IOP levels. Higher IOP values are associated with an increased risk of developing glaucoma. Fig. 1 shows a schematic representation of a human eye cross-section. The components labeled are the Cornea, Iris, Lens, Pupil, Aqueous Humour, Suspensory Ligaments, Ciliary muscle, Eye muscle, Vitreous Humour, Sclera, Retina, and optic nerve. The Cornea is a transparent protective front part of the eye covering the anterior chamber, Iris, and pupil. The deformation and response of the Cornea under external loading are used to determine IOP. The important fluid elements present in our eye are the tear film, Aqueous Humour, and Vitreous Humour (refer to Figs. 1 and 2(a)). Several hydrodynamic mechanisms regulate IOP, including the production and outflow of aqueous humor in the anterior chamber (the region between the Iris and the Cornea) [25]. The ciliary body, a structure located behind the Iris and close to the ciliary muscle (refer to Fig. 1), is responsible for producing aqueous humor [26]. The fluid then flows through the pupil into the anterior chamber (fluid influx to the anterior chamber). Some fraction of it is drained from the eye via the trabecular meshwork and Schlemm's canal (fluid efflux from the anterior chamber). In some cases, the outflow of aqueous humor can become obstructed, leading to an increase in IOP [27]. Some of the processes involving the dynamics of the tear film, aqueous humour, and vitreous humour under various conditions have been studied by mathematicians, physicists, and engineers over many years [28; 29; 30; 31; 32]. However, similarly comprehensive works in the context of non-contact tonometry are relatively sparse [10]. Some recent works in the last decade have probed the fluid-solid interaction between the impinging air puff and the Cornea from a computational perspective using discretization schemes like finite volume, finite element, and arbitrary Lagrangian-Eulerian frameworks [33; 34; 35]. Fig. 2 schematically depicts various fluid mechanical phenomena in the context of the human eye (refer to Fig. 2(a)) and of IOP measurements in general (refer to Fig. 2(b), (c)). For example, consider the saccadic eye movements shown in Fig. 2(a), which are responsible for transient flow field structures in the posterior chamber containing liquefied vitreous humour (an ophthalmologic condition in which the vitreous humour is liquefied or replaced after vitrectomy) [36]. The acceleration of gravity, depicted by \(g\), points vertically downward, referring to the upright configuration of the eye. Flow structures also exist in the anterior chamber containing aqueous humour.
Flows in the aqueous humour are generated due to the combined effects of tear film evaporation; buoyancy-driven flow generated due to temperature gradients between the ambient (\(T_{a}\)) and the iris \(T_{b}\)\(\sim\)\(310K>T_{a}\); and aqueous humour secretion by the ciliary bodies (refer Fig. 2(a)). These unique flow patterns in the vitreous and aqueous humour can be important for patient drug delivery. Fig. 2(b) depicts some of the hydrodynamic processes and the air puff flow field during NCT. The air puff consists of a leading vortex followed by a trailing jet. Further the tear film sheet ejection is also depicted schematically initiated by the leading vortex. The trailing jet on impinging the corneal surface causes surface deflection. The corneal surface at various time instants (\(t=t_{1},t_{2}\)) is shown schematically. Fig. 2(c) depicts capillary waves radially propagating outwards on the surface of the tear film attached to the corneal surface. The tear film ejection as a
the optic nerve and loss of peripheral vision. In general, glaucoma is asymptomatic. However, specific symptoms such as eye pain, redness, and headaches can be associated with elevated IOP values [37]. If left untreated, high IOP levels can result in irreversible vision loss. Treatment options for elevated IOP depend on the underlying cause and severity of the condition. Sometimes, lifestyle modifications such as exercise and a healthy diet can help lower IOP levels. Medications such as beta-blockers, prostaglandin analogs, and carbonic anhydrase inhibitors can also be used to lower IOP [38; 39; 40]. In more severe cases, surgical procedures such as trabeculectomy [41; 42; 43], laser trabeculoplasty [44; 45; 46; 47], or drainage implants [48; 49; 50; 51] may be necessary. Understanding the fluid mechanical phenomena involved in regulating IOP and identifying risk factors for elevated IOP can help prevent vision loss associated with ocular conditions. Regular monitoring of IOP is hence recommended, and NCT is one of the standard safe procedures.
When a puff of air from the tonometer nozzle is directed toward the cornea, a small indentation on the corneal surface is formed. The amount of indentation depends on various flow field parameters of the impinging air puff, such as velocity, pressure, and the distance between the air nozzle and the cornea. The corneal indentation required to estimate IOP will also depend on the anterior chamber's hydrodynamics. The flow field in the aqueous humour plays a critical role in determining the IOP. For a normal upright condition of the eye, at ambient temperatures lower than the average body temperature \(T_{b}\)\(\sim\)37\({}^{\circ}C\), buoyancy effects establish convective flow fields and generate velocity scales of the order of \(\sim\)0.1\(mm/s\) in the Aqueous Humour (refer to Fig. 2(a)) [31]. If the ambient temperature is higher than the body temperature \(T_{b}\), there can be significant changes in the corneal tear film. Most importantly, the corneal tear film becomes thinner and sticky as the concentration of mucus and lipids increases due to higher evaporation rates. Further, due to the heat transfer into the eye from the high-temperature ambient, the convection rolls in the aqueous humour qualitatively reverse their direction compared to the case where the environment temperature is lower than \(T_{b}\). The sudden interaction of the air puff with the cornea sets the aqueous humour into transient dynamics for a short time during the measurement process. Hence, a proper estimation of the IOP should account for the dynamic nature of the flow field in the aqueous humour rather than only the hydrostatic pressure considered in most IOP estimations. The indentation of the cornea causes a displacement of the aqueous humor. This displacement generates a series of waves that propagate through the fluid. The laws of continuum mechanics govern the propagation of these waves. The aqueous humor is an incompressible fluid, meaning its density cannot be changed significantly by applying external pressure. The incompressibility is due to the large bulk modulus of liquids in general and of the aqueous humor in particular. Therefore, any fluid
Figure 1: Schematic showing the cross sectional view of human eye
Figure 2: (a) Schematic showing various flow fields that can exist in human eye under various conditions [28; 31; 32; 36]. (b) Deformation of the corneal surface during a typical NCT procedure and possible tear film ejection from watery eyes [10]. (c) Schematic depicting the capillary waves/ripples propagating on the tear film surface on top of corneal surface. Hydrodynamic instabilities cause disintegration of tear sheet into droplets.
displacement must be accommodated by a corresponding displacement elsewhere in the fluid. This displacement generates a pressure wave that propagates throughout the fluid. The speed of this wave depends on the bulk modulus and density of the fluid. During NCT, the pressure wave propagates through the aqueous humor and reaches the cornea. The cornea acts as a boundary condition for the wave, reflecting part of the wave into the aqueous humor and transmitting part into the eye. The transmitted wave propagates through the eye and is eventually dissipated as it encounters various structures in the eye. During the air puff ejection process, the air puff tonometer device also emits a collimated light beam that gets reflected from the cornea's surface to a measuring photocell sensor [52]. The instrument is calibrated to measure the time delay for maximum reflection during the cornea's maximum deformation. The time delay of maximum reflection is calibrated against the force required to deform the cornea, hence measuring the IOP. The time delay is intricately related to the deformation of the cornea, which in turn is related to the various hydrodynamic interactions that occur during the measurement process. In a recent study, a group showed that a correlation exists between the tear film thickness and the IOP measured during NCT [53]. A higher value of the tear film thickness was shown to cause higher IOP measurements. Further, in our recent study, we also deciphered the mechanism of tear film destabilization that leads to droplet generation and possibly a pathogen transmission mechanism through corneal tear film disintegration and aerosolization [10; 12]. The tear film disintegration process and aerosol generation were later confirmed computationally in a recent work of Zhou et al. [54]. The mechanics of NCT are, therefore, complex and include phenomena on various spatiotemporal scales. The measurement of the IOP depends on accurately estimating the applanation distance (distance between the tonometer nozzle exit and the cornea), the flow field characteristics of the incoming jet, the corneal elastic properties, and the properties of the aqueous humour. The understanding of these fluid-solid interactions is crucial for the optimization of the NCT instrument and the accuracy of the measurements. Further understanding of the fundamental fluid-solid interactions during NCT can help medical practitioners deal with and prevent ophthalmologic complications like corneal perforation [55]. In this perspective, we provide some research directions that could improve our understanding of the phenomena at hand.
## II Research directions
Figure 3 depicts the time sequence of the various fluid mechanical processes occurring during non-contact tonometry [10], classified temporally into different phases. Some of the important phases of the non-contact tonometry process are the air puff ejection from the tonometer nozzle and the subsequent propagation of the air puff towards the cornea. As the air puff approaches the cornea, an initial tear sheet expansion occurs due to pressure gradients, followed by the corneal deflection and the propagation of capillary waves on the corneal tear film surface. The capillary waves, in addition to the unsteady external air flow field, expand the tear film into a 3D tear sheet that breaks into droplets due to Rayleigh-Taylor and Rayleigh-Plateau instabilities [10]. Each of the phases shown in Figure 3 could be a research field of its own. Some of the most important research directions are mentioned below.
### Understanding the incoming air puff
The air puff non-contact tonometer measures the force required to flatten a small area of the corneal surface with a puff of air. The air puff generated by the non-contact tonometer is an essential part of the measurement process. Understanding how its structure and flow field features affect the measurement process is crucial. The air puff generated by the non-contact tonometer is a short air burst directed towards the eye [52]. The force of the air puff is calibrated to be strong enough to deform a small area of the Cornea but not strong enough to cause any damage or discomfort. The air puff is typically generated by a small compressor that is built into the tonometer. When the measurement button is pressed, the compressor releases a brief burst of air that is directed toward the eye. The force of the air puff is calibrated based on the thickness and curvature of the Cornea. The tonometer measures the force required to flatten the Cornea and uses this measurement to calculate the intraocular pressure. While the air puff generated by the non-contact tonometer is not harmful, it can be startling to some patients. The sudden rush of air can cause the eye to blink, and some patients may feel a slight discomfort or pressure sensation. A properly optimized air puff design can help mitigate the discomfort. However, these effects are temporary and generally go away quickly. It is important to note that the air puff tonometer may not be suitable for all patients and age groups. Patients with certain eye conditions, such as corneal abnormalities or injuries, may not be able to tolerate the air puff test. Additionally, patients anxious or nervous about the test may find it challenging to keep their eyes open or remain still during the test. The air puff generated by the non-contact tonometer is essential to the intraocular pressure measurement process. While it may cause temporary discomfort or startle some patients, it is generally a safe and effective way to measure the pressure inside the eye. Although the measurement process of IOP using the air puff is being perfected by various device manufacturers, several aspects related to the structure of the incoming puff/jet are still unknown due to trade secrets and require further experimental and theoretical investigations. In our recent work [10], using smoke flow visualization methods, we showed that the air puff essentially consists of a leading vortex followed by a trailing jet for the tonometer we used (refer to our previous work for details). Further, we could also measure the approximate air puff velocity using high-speed shadowgraphy [56] and particle tracking methods [57]. Characterizing the incoming air puff is essential to properly understand the initial flow field features and characteristics to which the Cornea is subjected during IOP measurements. The incoming air puff could be characterized quantitatively with very high accuracy using high-speed particle image velocimetry [58; 59], particle tracking [57] and shadowgraphy methods [56]. Further combining experimental with
theoretical and computational studies will help scientists, engineers, and medical practitioners to understand the overall phenomena rigorously, which can help clinical practice.
### Fluid solid interaction between the air puff and the eye
The air puff interacting with the eye deforms the cornea. The cornea's deformation is calibrated and directly related to the IOP values used by ophthalmologists. However, this process is not as simple as it sounds, as a complex fluid-solid interaction occurs during the measurement process. When the puff of air is directed at the cornea, it causes a deformation of the cornea's surface. This deformation creates a propagating wave, which travels through the cornea and the aqueous humor. The aqueous humor is a clear, watery fluid that fills the space between the cornea and the eye's lens [26; 27]. The wave generated by the air puff propagates through the aqueous humor and interacts with the eye's lens. This interaction causes the lens to move slightly, which can affect the measurement of the IOP. The movement of the lens is known as the Ocular Response Analyzer (ORA) effect [60; 61; 62; 63]. Some tonometers use a dual-pulse system to compensate for the ORA effect. This system uses two puffs of air, one that is stronger than the other. The first puff is used to generate the propagating wave, and the second puff is used to measure the IOP. The difference between the two puffs allows for compensating the ORA effect. In addition to the ORA effect, a fluid-solid interaction occurs between the cornea and the aqueous humor. The fluid in the aqueous humor can act as a cushion and absorb some of the energy from the air puff. The energy loss can lead to an underestimation of the IOP. Some tonometers use a correction factor that considers the cornea's properties, such as its thickness and curvature, to compensate for the ORA effect. This correction factor helps to ensure that the measurement of the IOP is as accurate as possible [64; 15; 65]. The complex fluid-solid interaction that occurs during non-contact tonometry measurements can affect the measurement of the IOP, and various methods are used to compensate for its effects. Eye care professionals need to be aware of these effects and use the appropriate techniques to obtain accurate measurements of the IOP. The interaction between the air puff and the human eye can be
Figure 3: Various fluid mechanical processes occurring during the non-contact tonometry process, labelled according to their temporal evolution. The red dotted arrow depicts the order of events in time (A-F) [10].
mapped to a fluid-solid interaction (FSI) framework, which is quite a general approach. Some recent numerical works are aligned in this direction [33, 34, 35, 66]. However, such computational models need to be corroborated with experimental measurements. Further, the computational model's initial conditions of the incoming jet should be taken from the kind of experimental/theoretical investigations mentioned in the previous section II. A. for determining the initial flow field structure of the incoming air puff/jet.
### Understanding various types of waves formed during the interaction
The air puff interacting with the cornea and the tear film causes the formation of various types of waves, such as surface waves, capillary waves, shear waves, and Lamb waves, to name a few. Understanding these waves is essential for accurate measurements of the IOP. The surface wave, or S-wave, is a type of wave that forms on the cornea and can travel through the entire eye during the measurement process. This wave is created by the sudden displacement of the air that is directed at the cornea. The surface wave is a small, high-frequency wave [67] that spreads outwards from the point where the air puff hits the cornea. Being highly energetic, high-frequency S-waves dissipate quickly. In contrast, the capillary wave [68] is a low-frequency wave that forms on the surface of the tear film covering the cornea. The capillary wave is caused by the interaction of the air puff with the tear film surface, and surface tension plays a significant role in determining its characteristics. The capillary wave is slower and more long-lasting than the surface wave. The acoustic wave is another low-frequency wave that forms in the eye's aqueous humor [69]. This wave is created by the impact of the air puff on the cornea and travels through the aqueous humor towards the lens of the eye. Another low-frequency elastic wave that forms on the surface of the cornea and on the entire spherical portion of the eye is the Lamb wave [70]. These waves have different properties, such as frequency and amplitude, and can be used to measure different aspects of the cornea and the eye. For example, the capillary wave can be used to measure the thickness of the tear film on the cornea, the acoustic wave can be used to measure the depth of the eye's anterior chamber, and the Lamb waves could be used to measure corneal elasticity [71]. Understanding these waves is essential for accurate measurements of the IOP and for diagnosing and managing various ocular conditions in general.
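As a rough numerical companion to the discussion of capillary waves above, the sketch below evaluates the classical inviscid dispersion relation for gravity-capillary waves on a liquid layer of finite depth, \(\omega^{2}=(gk+\sigma k^{3}/\rho)\tanh(kh)\); the \(\tanh(kh)\) factor couples the wave speed to the film thickness \(h\), which is why such ripples carry information about the tear film thickness. The surface tension, density, and thickness used here are assumed, water-like, tear-film-scale values (viscous damping, which is significant for micrometric films, is neglected), not measurements from the NCT literature.

```python
import math

def capillary_wave_speed(wavelength, sigma=0.045, rho=1000.0, depth=5e-6, g=9.81):
    """Phase speed (m/s) of an inviscid gravity-capillary wave on a liquid layer
    of finite depth, from omega^2 = (g*k + sigma*k^3/rho) * tanh(k*depth).
    Default parameters are assumed, tear-film-like values in SI units."""
    k = 2.0 * math.pi / wavelength
    omega_sq = (g * k + sigma * k**3 / rho) * math.tanh(k * depth)
    return math.sqrt(omega_sq) / k

# The capillary term dominates at these scales, so the waves are strongly
# dispersive: the phase speed changes markedly with wavelength and with depth.
for lam in (10e-6, 100e-6, 1e-3):
    print(f"lambda = {lam * 1e6:7.1f} um -> c ~ {capillary_wave_speed(lam):.3f} m/s")
```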
### Tear film interaction and dynamics, breakup and subsequent droplet formation mechanics
The corneal tear film is a thin layer of fluid that covers the Cornea and is essential for maintaining the optical properties of the eye. The tear film comprises multiple fluid layers (mucus, aqueous, and lipids) [28]. Fig. **?**(a) schematically depicts the various layers present in the tear film. The mucus layer of thickness \(0.2-0.5\mu m\) is the first layer just next to the Cornea, followed by a thicker aqueous layer of thickness \(2.5-5.0\mu m\). An extremely thin lipid layer of thickness \(0.02-0.05\mu m\) lies just next to the aqueous layer and is the outermost layer exposed to the surrounding air. The tear film thickness varies between individuals, ranging from dry eyes to watery eyes [72, 73, 74]. The tear film profile is almost uniform except at the base, where it meets the lower eyelid. The tear film thickness at the bottom-most point is the maximum. The interaction between the air puff and the tear film (especially the bottom part) can lead to sheet formation and subsequent breakup in the case of watery eyes, affecting the accuracy of the IOP measurement. The deformation of the corneal surface due to the incoming air puff leads to various types of waves on the corneal surface, as discussed in section II.C above. These waves can cause disturbances in the tear film, leading to sheet formation followed by several hydrodynamic instabilities and resulting in tear sheet breakup. The breakup of the tear film can affect the measurement of the IOP in two ways. First, the breakup can decrease the force required to flatten the Cornea, which can lead to a further underestimation of the IOP. Second, the breakup of the tear film can cause irregularities in the corneal surface, which can affect the accuracy of the measurement. Some tonometers use a double-shot technique to compensate for the breakup of the tear film, similar to that used for the ORA compensation effects discussed in section II.B. This technique involves directing two air puffs at the Cornea, one stronger than the other. The first puff is used to generate the surface and capillary waves, and the second puff is used to measure the IOP. The difference between the two puffs allows for the compensation of the effects of the tear film breakup. In addition to the double-shot technique, other methods are used to reduce the effects of the tear film breakup. One method involves using a sodium hyaluronate solution [75] to coat the Cornea before the measurement. This solution helps to stabilize the tear film and reduce its breakup. Another method involves using a video-based imaging system to monitor the tear film breakup during the measurement [10]. This system can provide information on the stability and quality of the tear film, which can help to ensure the accuracy of the IOP measurement. The tear film is a crucial barrier in the eye that helps to protect against infection. During non-contact tonometry, the interaction between the air puff and the tear film can lead to its breakup, increasing the risk of pathogen transmission and infection. When the tear film breaks up, the underlying corneal surface is exposed, which can create a pathway for pathogen transmission. Pathogens such as bacteria and viruses can enter the eye through the exposed corneal surface, potentially leading to infections such as conjunctivitis [76], keratitis, or endophthalmitis [77]. The risk of pathogen transmission is further increased when the tonometer probe comes in close proximity to the aerosols generated from the tear film, resulting in the formation of fomites.
Various precautions can be taken to reduce the risk of pathogen transmission during non-contact tonometry. These include ensuring that the tonometer probe is properly sterilized before and after each use. Disposable tonometer probes can also be used to prevent cross-contamination. In addition, proper hand hygiene should be observed before and after performing non-contact tonometry.
Proper hygiene can reduce the risk of transferring pathogens from the hands to the tonometer probe or from the probe to the eye. The COVID-19 pandemic has highlighted the risks of pathogen transmission during non-contact tonometry [10]. The interaction between the air puff and the tear film can lead to its breakup, increasing the risk of COVID-19 transmission and infection. The SARS-CoV-2 virus is primarily transmitted through respiratory droplets and aerosols. However, the virus has also been found in tears [78], which suggests that transmission through the eyes is possible [10, 11, 12]. Various precautions can be taken to reduce the risk of transmission of diseases like COVID-19 during non-contact tonometry. These include using personal protective equipment (PPE) such as face masks, gloves, and eye protection. It is also important to note that the risk of COVID-19 transmission during non-contact tonometry may vary depending on the prevalence of the virus in the community. In areas with high transmission rates, additional precautions may be necessary to reduce the risk of transmission. Therefore, understanding the tear film interaction and dynamics, together with the subsequent droplet formation, will be important for investigating the NCT process and its associated pathogen transmission routes and mechanisms. Several experimental techniques from fluid mechanics, such as interferometry [79], shadowgraphy [56], and laser Doppler anemometry [80], to mention a few, could be used to study the tear film dynamics and its disintegration into droplets quantitatively.
## III Conclusion
In conclusion, we introduced a non-invasive eye procedure called non-contact tonometry, highlighting the various processes that occur during IOP measurement. Further, we listed a series of research directions highlighting open and unexplored areas. Some of the research directions highlighted in this work include the characterization and measurement of the incoming air puff, understanding the fluid-solid interaction problem between the impinging jet and the Cornea, understanding the various kinds of waves that form and travel on the eye during the NCT measurement process, and the tear film breakup and subsequent droplet formation mechanisms at various spatiotemporal scales.
## Conflict of interest
The authors declare no conflict of interest.
## Data availability statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
|
2309.17316 | Robust Stochastic Optimization via Gradient Quantile Clipping | We introduce a clipping strategy for Stochastic Gradient Descent (SGD) which
uses quantiles of the gradient norm as clipping thresholds. We prove that this
new strategy provides a robust and efficient optimization algorithm for smooth
objectives (convex or non-convex), that tolerates heavy-tailed samples
(including infinite variance) and a fraction of outliers in the data stream
akin to Huber contamination. Our mathematical analysis leverages the connection
between constant step size SGD and Markov chains and handles the bias
introduced by clipping in an original way. For strongly convex objectives, we
prove that the iteration converges to a concentrated distribution and derive
high probability bounds on the final estimation error. In the non-convex case,
we prove that the limit distribution is localized on a neighborhood with low
gradient. We propose an implementation of this algorithm using rolling
quantiles which leads to a highly efficient optimization procedure with strong
robustness properties, as confirmed by our numerical experiments. | Ibrahim Merad, Stéphane Gaïffas | 2023-09-29T15:24:48Z | http://arxiv.org/abs/2309.17316v1 | # Robust Stochastic Optimization via Gradient Quantile Clipping
###### Abstract
We introduce a clipping strategy for Stochastic Gradient Descent (SGD) which uses quantiles of the gradient norm as clipping thresholds. We prove that this new strategy provides a robust and efficient optimization algorithm for smooth objectives (convex or non-convex), that tolerates heavy-tailed samples (including infinite variance) and a fraction of outliers in the data stream akin to Huber contamination. Our mathematical analysis leverages the connection between constant step size SGD and Markov chains and handles the bias introduced by clipping in an original way. For strongly convex objectives, we prove that the iteration converges to a concentrated distribution and derive high probability bounds on the final estimation error. In the non-convex case, we prove that the limit distribution is localized on a neighborhood with low gradient. We propose an implementation of this algorithm using rolling quantiles which leads to a highly efficient optimization procedure with strong robustness properties, as confirmed by our numerical experiments.
## 1 Introduction
Stochastic gradient descent (SGD) [73] is the core optimization algorithm at the origin of most stochastic optimization procedures [45, 21, 43]. SGD and its variants are ubiquitously employed in machine learning in order to train most models [46, 6, 48, 78, 11, 55]. The convergence properties of SGD are therefore subjects of major interest. The first guarantees [62, 28] hold under strong statistical assumptions which require data to follow light-tailed sub-Gaussian distributions and provide error bounds in expectation. With the recent resurgence of interest for robust statistics [35, 23, 49, 71], variants of SGD based on clipping are shown to be robust to heavy-tailed gradients [29, 80], where the gradient samples are only required to have a finite variance. The latter requirement has been further weakened to the existence of a \(q\)-th moment for some \(q>1\) in [77, 65]. In this paper, we go further and show that another variant of clipped SGD with proper thresholds is robust both to heavy tails _and_ outliers in the data stream.
Robust statistics appeared in the 60s with the pioneering works of Huber, Tukey and others [81, 39, 37, 76, 30]. More recently, the field found new momentum thanks to a series of works about robust scalar mean estimation [16, 1, 42, 53] and the more challenging multidimensional case [33, 17, 52, 59, 20, 22, 50, 25]. These paved the way to the elaboration of a host of robust learning algorithms [32, 71, 49, 51, 67] which have to date overwhelmingly focused on the batch learning setting. We consider the setting of streaming stochastic optimization [10, 12, 57], which raises an additional difficulty coming from the fact that algorithms can see each sample only once
and must operate under an \(\mathcal{O}(d)\) memory and complexity constraint for \(d\)-dimensional optimization problems. A limited number of papers [80, 60, 26] propose theoretical guarantees for robust algorithms learning from streaming data.
This work introduces such an algorithm that learns from data on the fly and is robust both to heavy tails and outliers, with minimal computational overhead and sound theoretical guarantees.
We consider the problem of minimizing a smooth objective
\[\min_{\theta\in\mathbb{R}^{d}}\mathcal{L}(\theta):=\mathbb{E}_{\zeta}[\ell( \theta,\zeta)] \tag{1}\]
using observations \(G(\theta,\zeta_{t})\) of the unknown gradient \(\nabla\mathcal{L}(\theta)\), based on samples \((\zeta_{t})_{t\geq 0}\) received sequentially that include corruptions with probability \(\eta<1/2\). Formulation (1) is common to numerous machine learning problems where \(\ell\) is a loss function evaluating the fit of a model with parameters \(\theta\) on a sample \(\zeta\), the expectation \(\mathbb{E}\) is w.r.t the unknown uncorrupted sample distribution.
We introduce quantile-clipped SGD (QC-SGD) which uses the iteration
\[\theta_{t+1}\!=\!\theta_{t}-\alpha_{\theta_{t}}\beta G(\theta_{t},\zeta_{t}) \ \ \text{with}\ \ \alpha_{\theta_{t}}\!=\!\min\Big{(}1,\frac{\tau_{\theta_{t}}}{\|G(\theta_{t},\zeta_{t})\|}\Big{)}, \tag{2}\]
where \(\beta>0\) is a constant step size and \(\alpha_{\theta_{t}}\) is the clipping factor with threshold chosen as the \(p\)-th quantile \(\tau_{\theta_{t}}=Q_{p}(\|\widetilde{G}(\theta_{t},\zeta_{t})\|)\) with \(\widetilde{G}(\theta_{t},\zeta_{t})\) an uncorrupted sample of \(\nabla\mathcal{L}(\theta_{t})\) and \(p\in(0,1)\) (details will follow). Quantiles are a natural choice of clipping threshold that makes it possible to handle heavy tails [75, 9] and corrupted data. For instance, the trimmed mean offers a robust and computationally efficient estimator of a scalar expectation [53]. Since the quantile \(Q_{p}(\|\widetilde{G}(\theta_{t},\zeta_{t})\|)\) is not observable, we introduce a method based on rolling quantiles in Section 5 which keeps the procedure \(\mathcal{O}(d)\) in both memory and computational complexity.
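For concreteness, a single step of Iteration (2) can be sketched in a few lines of NumPy. The snippet below is only illustrative: it assumes the threshold \(\tau_{\theta_{t}}\) is supplied by an oracle (Section 5 replaces it with a rolling-quantile estimate), and the function name is chosen here for exposition.

```python
import numpy as np

def qc_sgd_step(theta, grad, tau, beta):
    """One update of Iteration (2): clip the norm of the stochastic gradient
    at the threshold tau, then take a constant-step-size step.

    theta : current iterate, shape (d,)
    grad  : gradient sample G(theta, zeta_t), shape (d,)
    tau   : clipping threshold (ideally the p-th quantile of the norm of an
            uncorrupted gradient sample; assumed given here)
    beta  : constant step size
    """
    alpha = min(1.0, tau / (np.linalg.norm(grad) + 1e-12))  # clipping factor alpha_theta
    return theta - beta * alpha * grad
```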
Contributions.Our main contributions are as follows:
* For small enough \(\eta\) and well-chosen \(p\), we show that, whenever the optimization objective is smooth and strongly convex, QC-SGD converges _geometrically_ to a limit distribution such that the deviation around the optimum achieves the _optimal_ dependence on \(\eta\).
* In the non-corrupted case \(\eta=0\) and with a strongly convex objective, we prove that a coordinated choice of \(\beta\) and \(p\) ensures that the limit distribution is sub-Gaussian with constant of order \(\mathcal{O}(\sqrt{\beta})\). In the corrupted case \(\eta>0,\) the limit distribution is sub-exponential.
* For a smooth objective (non-convex) whose gradient satisfies an identifiability condition, we prove that the total variation distance between QC-SGD iterates and its limit distribution vanishes sub-linearly. In this case, the limit distribution is such that the deviation of the objective gradient is optimally controlled in terms of \(\eta\).
* Finally, we provide experiments to demonstrate that QC-SGD can be easily implemented by estimating \(Q_{p}(\|\widetilde{G}(\theta_{t},\zeta_{t})\|)\) with rolling quantiles. In particular, we show that the iteration is indeed robust to heavy tails and corruption on synthetic stochastic optimization tasks.
Our theoretical results are derived thanks to a modelling through Markov chains and hold under an \(L_{q}\) assumption on the gradient distribution with \(q>1\).
Related works.Convergence in distribution of the Markov chain generated by constant step size SGD, with respect to the Wasserstein metric, was first established in [27]. Another geometric convergence result was derived in [87] for non-convex, non-smooth, but quadratically growing objectives, where a convergence statement with respect to a weighted total variation distance is given
and a CLT is established. These papers do not consider robustness to heavy tails or outliers. Early works proposed stochastic optimization and parameter estimation algorithms which are robust to a wide class of noise distributions [56, 68, 69, 72, 79, 19, 18, 61], where asymptotic convergence guarantees are stated for large sample sizes. Initial evidence of the robustness of clipped SGD to heavy tails was given by [88] who obtained results in expectation. Subsequent works derived high-confidence sub-Gaussian performance bounds under a finite variance assumption [29, 80] and later under an \(L_{q}\) assumption [77, 65] with \(q>1\).
Robust versions of Stochastic Mirror Descent (SMD) are introduced in [60, 44]. For a proper choice of the mirror map, SMD is shown to handle infinite variance gradients without any explicit clipping [85]. Finally, [26] studies heavy-tailed and outlier robust streaming estimation algorithms of the expectation and covariance. On this basis, robust algorithms for linear and logistic regression are derived. However, the involved filtering procedure is hard to implement in practice and no numerical evaluation of the considered approach is proposed.
Agenda.In Section 2 we set notations, state the assumptions required by our theoretical results and provide some necessary background on continuous state Markov chains. In Section 3, we state our results for strongly convex objectives including geometric ergodicity of QC-SGD (Theorem 1), characterizations of the limit distribution and deviation bounds on the final estimate. In Section 4, we remove the convexity assumption and obtain a weaker ergodicity result (Theorem 2) and characterize the limit distribution in terms of the deviations of the objective gradient. Finally, we present a rolling quantile procedure in Section 5 and demonstrate its performance through a few numerical experiments on synthetic data.
## 2 Preliminaries
The model parameter space is \(\mathbb{R}^{d}\) endowed with the Euclidean norm \(\|\cdot\|\), \(\mathcal{B}(\mathbb{R}^{d})\) is the Borel \(\sigma\)-algebra of \(\mathbb{R}^{d}\) and we denote by \(\mathcal{M}_{1}(\mathbb{R}^{d})\) the set of probability measures over \(\mathbb{R}^{d}\). We assume throughout the paper that the objective \(\mathcal{L}\) is smooth.
**Assumption 1**.: _The objective \(\mathcal{L}\) is \(L\)-Lipschitz-smooth, namely_
\[\mathcal{L}(\theta^{\prime})\leq\mathcal{L}(\theta)+\langle\nabla\mathcal{L}( \theta),\theta^{\prime}-\theta\rangle+\frac{L}{2}\|\theta-\theta^{\prime}\|^{2}\]
_with \(L<+\infty\) for all \(\theta,\theta^{\prime}\in\mathbb{R}^{d}\)._
The results from Section 3 below use the following
**Assumption 2**.: _The objective \(\mathcal{L}\) is \(\mu\)-strongly convex, namely_
\[\mathcal{L}(\theta^{\prime})\geq\mathcal{L}(\theta)+\langle\nabla\mathcal{L}( \theta),\theta^{\prime}-\theta\rangle+\frac{\mu}{2}\|\theta-\theta^{\prime}\| ^{2}\]
_with \(\mu>0\) for all \(\theta,\theta^{\prime}\in\mathbb{R}^{d}\)._
An immediate consequence of Assumption 2 is the existence of a unique minimizer \(\theta^{\star}=\operatorname*{argmin}_{\theta\in\mathbb{R}^{d}}\mathcal{L}(\theta)\). The next assumption formalizes our corruption model.
**Assumption 3** (\(\eta\)-corruption).: _The gradients \((G(\theta_{t},\zeta_{t}))_{t\geq 0}\) used in Iteration (2) are sampled as \(G(\theta_{t},\zeta_{t})=U_{t}\widetilde{G}(\theta_{t})+(1-U_{t})\widetilde{G} (\theta_{t},\zeta_{t})\) where \(U_{t}\) are i.i.d Bernoulli random variables with parameter \(\eta<1/2,\,\widetilde{G}(\theta_{t})\sim\mathcal{D}_{\mathcal{O}}(\theta_{t})\) with \(\mathcal{D}_{\mathcal{O}}(\theta_{t})\) an arbitrary distribution and \(\widetilde{G}(\theta_{t},\zeta_{t})\sim\mathcal{D}_{\mathcal{I}}(\theta_{t})\) follows the true gradient distribution and is independent from the past given \(\theta_{t}\)._
Assumption 3 is an online analog of the Huber contamination model [36, 39] where corruptions occur with probability \(\eta\) and where the distribution of corrupted samples is not fixed and may depend on the current iterate \(\theta_{t}\). The next assumption requires the true gradient distribution to be unbiased and diffuse.
**Assumption 4**.: _For all \(\theta\), non-corrupted gradient samples \(\widetilde{G}(\theta,\zeta)\sim\mathcal{D}_{\mathcal{I}}(\theta)\) are such that_
\[\widetilde{G}(\theta,\zeta)=\nabla\mathcal{L}(\theta)+\varepsilon_{\theta}, \tag{3}\]
_where \(\varepsilon_{\theta}\) is a centered noise \(\mathbb{E}[\varepsilon_{\theta}|\theta]=0\) with distribution \(\delta\nu_{\theta,1}+(1-\delta)\nu_{\theta,2}\) where \(\delta>0\) and \(\nu_{\theta,1},\nu_{\theta,2}\) are distributions over \(\mathbb{R}^{d}\) such that \(\nu_{\theta,1}\) admits a density \(h_{\theta}\) w.r.t. the Lebesgue measure satisfying_
\[\inf_{\|\omega\|\leq R}h_{\theta}(\omega)>\varkappa(R)>0\]
_for all \(R>0\), where \(\varkappa(\cdot)\) is independent of \(\theta\)._
Assumption 4 imposes a weak constraint, since it is satisfied, for example, as soon as the noise \(\varepsilon_{\theta}\) admits a density w.r.t. Lebesgue's measure. Our last assumption formalizes the requirement of a finite moment for the gradient error.
**Assumption 5**.: _There is \(q>1\) such that for \(\widetilde{G}(\theta,\zeta)\sim\mathcal{D}_{\mathcal{I}}(\theta),\) we have_
\[\mathbb{E}\big{[}\|\varepsilon_{\theta}\|^{q}\mid\theta\big{]}^{1/q}=\mathbb{ E}\big{[}\big{\|}\widetilde{G}(\theta,\zeta)-\nabla\mathcal{L}(\theta)\big{\|}^{q} \mid\theta\big{]}^{1/q}\leq A_{q}\|\theta-\theta^{\star}\|+B_{q} \tag{4}\]
_for all \(\theta\in\mathbb{R}^{d},\) where \(A_{q},B_{q}>0\). When \(\mathcal{L}\) is not strongly convex, we further assume that \(A_{q}=0.\)_
The bound (4) captures the case of arbitrarily high noise magnitude through the dependence on \(\|\theta-\theta^{\star}\|.\) This is consistent with common strongly convex optimization problems such as least squares regression. For non-strongly convex \(\mathcal{L}\), we require \(A_{q}=0\) since \(\theta^{\star}\) may not exist.
**Definition 1**.: _If \(X\) is a real random variable, we say that \(X\) is \(K\)-sub-Gaussian for \(K>0\) if_
\[\mathbb{E}\exp(\lambda^{2}X^{2})\leq e^{\lambda^{2}K^{2}}\quad\text{for}\quad |\lambda|\leq 1/K. \tag{5}\]
_We say that \(X\) is \(K\)-sub-exponential for \(K>0\) if_
\[\mathbb{E}\exp(\lambda|X|)\leq\exp(\lambda K)\quad\text{for all}\quad 0\leq \lambda\leq 1/K. \tag{6}\]
The convergence results presented in this paper use the following formalism of continuous state Markov chains. Given a step size \(\beta>0\) and a quantile \(p\in(0,1),\) we denote by \(P_{\beta,p}\) the Markov transition kernel governing the Markov chain \((\theta_{t})_{t\geq 0}\) generated by QC-SGD, so that
\[\mathbb{P}(\theta_{t+1}\in A\mid\theta_{t})=P_{\beta,p}(\theta_{t},A)\]
for \(t\geq 0\) and \(A\in\mathcal{B}(\mathbb{R}^{d})\). The transition kernel \(P_{\beta,p}\) acts on probability distributions \(\nu\in\mathcal{M}_{1}(\mathbb{R}^{d})\) through the mapping \(\nu\rightarrow\nu P_{\beta,p}\) which is defined, for all \(A\in\mathcal{B}(\mathbb{R}^{d})\), by \(\nu P_{\beta,p}(A)=\int_{A}P_{\beta,p}(\theta,A)d\nu(\theta)=\mathbb{P}( \theta_{t+1}\in A\mid\theta_{t}\sim\nu).\) For \(n\geq 1,\) we similarly define the multi-step transition kernel \(P_{\beta,p}^{n}\) which is such that \(P_{\beta,p}^{n}(\theta_{t},A)=\mathbb{P}(\theta_{t+n}\in A\mid\theta_{t})\) and acts on probability distributions \(\nu\in\mathcal{M}_{1}(\mathbb{R}^{d})\) through \(\nu P_{\beta,p}^{n}=(\nu P_{\beta,p})P_{\beta,p}^{n-1}.\) Finally, we define the total variation (TV) norm of a signed measure \(\nu\) as
\[2\|\nu\|_{\mathrm{TV}}=\sup_{f:|f|\leq 1}\int f(\theta)\nu(d\theta)=\sup_{A\in \mathcal{B}(\mathbb{R}^{d})}\nu(A)-\inf_{A\in\mathcal{B}(\mathbb{R}^{d})}\nu( A).\]
In particular, we recover the TV _distance_ between \(\nu_{1},\nu_{2}\in\mathcal{M}_{1}(\mathbb{R}^{d})\) as \(d_{\mathrm{TV}}(\nu_{1},\nu_{2})=\|\nu_{1}-\nu_{2}\|_{\mathrm{TV}}.\)
## 3 Strongly Convex Objectives
We are ready to state our convergence result for the stochastic optimization of a strongly convex objective using QC-SGD with \(\eta\)-corrupted samples.
**Theorem 1** (Geometric ergodicity).: _Let Assumptions 1-5 hold and assume there is a quantile \(p\in[\eta,1-\eta]\) such that_
\[\kappa:=(1-\eta)p\mu-\eta L-(1-p)^{-\frac{1}{q}}A_{q}(1-p(1-\eta))>0. \tag{7}\]
_Then, for a step size \(\beta\) satisfying_
\[\beta<\frac{1}{4}\frac{\kappa}{\mu^{2}+6L^{2}+16\eta^{-\frac{2}{q}}A_{q}^{2}}, \tag{8}\]
_the Markov chain \((\theta_{t})_{t\geq 0}\) generated by QC-SGD with parameters \(\beta\) and \(p\) converges geometrically to a unique invariant measure \(\pi_{\beta,p}\): for any initial \(\theta_{0}\in\mathbb{R}^{d},\) there is \(\rho<1\) and \(M<\infty\) such that after \(T\) iterations_
\[\left\|\delta_{\theta_{0}}P_{\beta,p}^{T}-\pi_{\beta,p}\right\|_{\mathrm{TV}} \leq M\rho^{T}\big{(}1+\|\theta_{0}-\theta^{\star}\|^{2}\big{)},\]
_where \(\delta_{\theta_{0}}\) is the Dirac measure located at \(\theta_{0}.\)_
The proof of Theorem 1 is given in Appendix C.2 and relies on the geometric ergodicity result of [58, Chapter 15] for Markov chains with a geometric drift property. A similar result for quadratically growing objectives was established by [87] and convergence w.r.t. Wasserstein's metric was shown in [27] assuming uniform gradient co-coercivity. However, robustness was not considered in these works. The restriction \(p\in[\eta,1-\eta]\) comes from the consideration that other quantiles are not estimable in the event of \(\eta\)-corruption. Condition (7) is best interpreted for the choice \(p=1-\eta\) in which case it translates into \(\eta^{1-1/q}\leq\mathcal{O}(\mu/(L+A_{q}))\) implying that it is verified for \(\eta\) small enough within a limit fixed by the problem conditioning. A similar condition with \(q=2\) appears in [26, Theorem E.9] which uses a finite variance assumption.
The constants \(M\) and \(\rho\) controlling the geometric convergence speed in Theorem 1 depend on the parameters \(\beta,p\) and the initial \(\theta_{0}\). Among choices fulfilling the convergence conditions, it is straightforward that greater step size \(\beta\) and \(\theta_{0}\) closer to \(\theta^{\star}\) lead to faster convergence. However, the dependence in \(p\) is more intricate and should be evaluated through the resulting value of \(\kappa\). We provide a more detailed discussion about the value of \(\rho\) in Appendix B.
The choice \(p=1-\eta\) appears to be ideal since it leads to optimal deviation of the invariant distribution around the optimum \(\theta^{\star}\) which is the essence of our next statement.
**Proposition 1**.: _Assume the same as in Theorem 1 and condition (7) with the choice \(p=1-\eta\). For step size \(\beta\) satisfying (8), \(q\geq 2\), and additionally:_
\[\beta\leq\eta^{2-2/q}/\kappa, \tag{9}\]
_for \(\theta\sim\pi_{\beta,1-\eta}\), we have the following upper bound:_
\[\mathbb{E}\|\theta-\theta^{\star}\|^{2}\leq\Big{(}\frac{6\eta^{1-1/q}B_{q}}{ \kappa}\Big{)}^{2}.\]
Proposition 1 is proven in Appendix C.3. An analogous result holds for \(q\in(1,2)\) but requires a different proof and can be found in Appendix C.4. Proposition 1 may be compared to [87, Theorem 3.1] which shows that the asymptotic estimation error can be reduced arbitrarily using a small step size. However, this is impossible in our case since we consider corrupted gradients.
The performance of Proposition 1 is best discussed in the specific context of linear regression where gradients are given as \(G(\theta,(X,Y))=X(X^{\top}\theta-Y)\) for samples \(X,Y\in\mathbb{R}^{d}\times\mathbb{R}\) such that \(Y=X^{\top}\theta^{\star}+\epsilon\) with \(\epsilon\) a centered noise. In this case, a finite moment of order \(k\) for the data implies order \(k/2\) for the gradient corresponding to an \(\eta^{1-2/k}\) rate in Proposition 1. Since Assumption 5 does not include independence of the noise \(\epsilon\) from \(X,\) this corresponds to the negatively correlated moments assumption of [2] being unsatisfied. Consequently, Proposition 1 is information-theoretically optimal in \(\eta\) based on [2, Corollary 4.2]. Nonetheless, the poor dimension dependence through \(B_{q}\) may still be improved. If the gradient is sub-Gaussian with constant \(K\), we would have \(B_{q}\lesssim K\sqrt{q}\) for \(q\geq 1\) (see [83] for a reference), in which case, the choice \(q=\log(1/\eta)\) recovers the optimal rate in \(\eta\sqrt{\log(1/\eta)}\) for the Gaussian case.
We now turn to showing strong concentration properties for the invariant distribution \(\pi_{\beta,p}.\) For this purpose, we restrict the optimization to a bounded and convex set \(\Theta\subset\mathbb{R}^{d}\) and replace Iteration (2) by the projected iteration
\[\theta_{t+1}=\Pi_{\Theta}\big{(}\theta_{t}-\alpha_{\theta_{t}}\beta G(\theta_ {t},\zeta_{t})\big{)}, \tag{10}\]
where \(\Pi_{\Theta}\) is the projection onto \(\Theta\). Assuming that the latter contains the optimum \(\theta^{\star}\in\Theta,\) one can check that the previous results continue to hold thanks to the inequality
\[\|\Pi_{\Theta}(\theta)-\theta^{\star}\|=\|\Pi_{\Theta}(\theta)-\Pi_{\Theta}( \theta^{\star})\|\leq\|\theta-\theta^{\star}\|,\]
which results from the convexity of \(\Theta.\) The restriction of the optimization to a bounded set allows us to uniformly bound the clipping threshold \(\tau_{\theta},\) which is indispensable for the following result.
**Proposition 2**.: _In the setting of Theorem 1, consider projected QC-SGD (10) and let \(\overline{\tau}=\sup_{\theta\in\Theta}\tau_{\theta},D=\mathrm{diam}(\Theta)\) the diameter of \(\Theta\) and \(\overline{B}_{q}=A_{q}D+B_{q}.\)_
* _Consider the non-corrupted case_ \(\eta=0\) _and set the quantile_ \(p\) _such that_ \(p\geq 1-(\beta\mu)^{\frac{q}{2(q-1)}}.\) _Then, for_ \(\theta\sim\pi_{\beta,p},\) _the variable_ \(\|\theta-\theta^{\star}\|\) _is sub-Gaussian in the sense of Definition_ 1 _with constant_ \[K=4\sqrt{\frac{2\beta(\overline{B}_{q}^{2}+\overline{\tau}^{2})}{p\mu}}.\]
* _Consider the corrupted case_ \(\eta>0,\) _and set the quantile_ \(p\in[\eta,1-\eta]\) _such that Inequality (_7_) holds. Then, for_ \(\theta\sim\pi_{\beta,p},\) _the variable_ \(\|\theta-\theta^{\star}\|\) _is sub-exponential in the sense of Definition_ 1 _with constant_ \[K=\frac{7\overline{\tau}+(1-p)^{1-1/q}\overline{B}_{q}}{p\mu}.\]
The proof can be found in Appendix C.5. The strong concentration properties given by Proposition 2 for the invariant distribution appear to be new. Still, the previous result remains asymptotic in nature. High confidence deviation bounds for an iterate \(\theta_{t}\) can be derived by leveraging the convergence in Total Variation distance given by Theorem 1 leading to the following result.
**Corollary 1**.: _In the setting of Proposition 2, in the absence of corruption \(\eta=0,\) after \(T\) iterations, for \(\delta>0,\) we have_
\[\mathbb{P}\bigg{(}\big{\|}\theta_{T}-\theta^{\star}\big{\|}>4\sqrt{\overline{ B}_{q}^{2}+\overline{\tau}^{2}}\sqrt{\frac{2\beta\log(e/\delta)}{p\mu}} \bigg{)}\leq\delta+\rho^{T}M\big{(}1+\|\theta_{0}-\theta^{\star}\|^{2}\big{)}.\]
Choosing a smaller step size \(\beta\) in Corollary 1 improves the deviation bound. However, this comes at the cost of weaker confidence because of slower convergence due to a greater \(\rho\). See Appendix B for a discussion including a possible compromise. Corollary 1 may be compared to the results of [29, 80, 77, 65] which correspond to \(\beta\approx 1/T\) and have a similar dependence on the dimension through the gradient variance. Although their approach is also based on gradient clipping, they use different thresholds and proof methods. In the presence of corruption, the invariant distribution is not sub-Gaussian. This can be seen by considering the following toy Markov chain:
\[X_{t+1}=\begin{cases}\alpha X_{t}+\xi&\text{w.p.}\quad 1-\eta\\ X_{t}+\tau&\text{w.p.}\quad\eta\end{cases}\]
where \(\alpha<1,\tau>0\) are constants and \(\xi\) is a positive random noise. Using similar methods to the proof of Theorem 1, one can show that \((X_{t})_{t\geq 0}\) converges (for any initial \(X_{0}\)) to an invariant distribution whose moments can be shown to grow linearly, indicating a sub-exponential distribution and excluding a sub-Gaussian one. We provide additional details for the underlying argument in Appendix C.6. For the corrupted case, the sub-exponential property stated in Proposition 2 holds with a constant \(K\) of order \(\overline{\tau}/\mu\), which is not satisfactory and leaves little room for improvement due to the inevitable bias introduced by corruption. Therefore, we propose the following procedure in order to obtain a high confidence estimate, similarly to Corollary 1.
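A quick simulation of this toy chain illustrates the claim; the parameter values and the exponential noise below are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
a, tau, eta = 0.9, 1.0, 0.05        # contraction factor, corruption jump, corruption rate
T, burn_in = 200_000, 10_000

x, samples = 0.0, []
for t in range(T):
    if rng.random() < eta:          # corrupted step: deterministic upward jump by tau
        x = x + tau
    else:                           # contracting step with a positive noise xi
        x = a * x + rng.exponential(0.1)
    if t >= burn_in:
        samples.append(x)

samples = np.array(samples)
# For a sub-Gaussian limit, E[X^k]^(1/k) would grow like sqrt(k); roughly linear
# growth in k is the signature of sub-exponential (heavier) tails.
for k in (1, 2, 4, 8, 16):
    print(k, np.mean(samples ** k) ** (1.0 / k))
```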
```
Input: Step size \(\beta>0\), quantile index \(p\in(0,1)\), initial parameter \(\theta_{0}\in\Theta\), horizon \(T\) and number of concurrent iterates \(N\geq 1\).
Optimize multiple parameters \(\theta_{t}^{(1)},\ldots,\theta_{t}^{(N)}\) starting from a common \(\theta_{0}=\theta_{0}^{(n)}\) for \(n\in\llbracket N\rrbracket=:\{1,\ldots,N\}\) and \(T\) steps \(t=0,\ldots,T\) using the following cycling iteration:
\[\theta_{t+1}^{(n)}\!=\!\begin{cases}\theta_{t}^{(n)}\!-\!\alpha_{\theta_{t}^{(n)}}\beta G\!\left(\theta_{t}^{(n)},\zeta_{t}\right)&\text{ if }t\!\equiv\!n\!-\!1\bmod N,\\ \theta_{t}^{(n)}&\text{ otherwise.}\end{cases}\] (11)
Compute \(r_{ij}=\left\|\theta_{T}^{(i)}-\theta_{T}^{(j)}\right\|\) for \(i,j\in\llbracket N\rrbracket\).
For \(j\in\llbracket N\rrbracket\), let \(r^{(j)}\in\mathbb{R}_{+}^{N}\) be the vector \(r_{j,:}:=[r_{j,1},\ldots,r_{j,N}]\) sorted in non-decreasing order.
Compute the aggregated estimator as \(\widehat{\theta}=\theta_{T}^{(\widehat{i})}\) with \(\widehat{i}=\operatorname*{argmin}_{i\in\llbracket N\rrbracket}r_{N/2}^{(i)}\).
return \(\widehat{\theta}\)
```
**Algorithm 1** Aggregation of cycling iterates
Algorithm 1 uses ideas from [35] (see also [59, 44]) and combines a collection of _weak_ estimators (only satisfying \(L_{2}\) bounds) into a strong one with sub-exponential deviation. The aggregated estimator \(\widehat{\theta}\) satisfies the high probability bound given in the next result.
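The selection step of Algorithm 1 is straightforward to code. The sketch below is illustrative only (NumPy, 0-based indexing): it takes the \(N\) final iterates and returns the one whose \(N/2\)-th smallest distance to the others is minimal; the cycling updates themselves are just QC-SGD steps applied to one copy at a time.

```python
import numpy as np

def aggregate_iterates(thetas):
    """Aggregation step of Algorithm 1.

    thetas : array of shape (N, d) holding the final iterates theta_T^{(1..N)}.
    Returns the iterate minimizing the N/2-th order statistic of its distances
    to all iterates (a median-of-distances selection rule).
    """
    N = thetas.shape[0]
    diffs = thetas[:, None, :] - thetas[None, :, :]      # pairwise differences
    r = np.linalg.norm(diffs, axis=-1)                   # r_ij = ||theta^(i) - theta^(j)||
    r_sorted = np.sort(r, axis=1)                        # each row in non-decreasing order
    scores = r_sorted[:, N // 2]                         # N/2-th order statistic per row
    return thetas[int(np.argmin(scores))]
```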
**Corollary 2**.: _Assume the same as in Theorem 1 and Proposition 1. Consider \(\widehat{\theta}\) given by Algorithm 1, with the assumption that the gradient sample sets used for each \(\left(\theta_{T}^{(n)}\right)_{n\in\llbracket N\rrbracket}\) in Equation (11) are independent. For \(\delta>0,\) if \(N\geq 16\log(1/\delta)\) and \(T\) satisfies_
\[T\geq N\log(15M(1+\|\theta_{0}-\theta^{\star}\|^{2}))/\log(1/\rho),\]
_then, with probability at least \(1-\delta,\) we have_
\[\left\|\widehat{\theta}-\theta^{\star}\right\|\leq\frac{27\eta^{1-\frac{1}{q}}\overline{B}_{q}}{\kappa}. \tag{12}\]
We obtain a high confidence version of the bound in expectation previously stated in Proposition 1. As argued before, the above bound depends optimally on \(\eta\). Similar bounds to (12) are obtained for \(q=2\) in [26] for streaming mean estimation, linear and logistic regression. Their results enjoy better dimension dependence but are less general than ours. In addition, the implementation of the associated algorithm is not straightforward whereas our method is quite easy to use (see Section 5).
## 4 Smooth Objectives
In this section, we drop Assumption 2 and consider the optimization of possibly non-convex objectives. Consequently, the existence of a unique optimum \(\theta^{\star}\) and the quadratic growth of the objective are no longer guaranteed. This motivates us to use a uniform version of Assumption 5 with \(A_{q}=0\) since the gradient is no longer assumed coercive and its deviation moments can be taken as bounded. In this context, we obtain the following weaker (compared to Theorem 1) ergodicity result for QC-SGD.
**Theorem 2** (Ergodicity).: _Let Assumptions 1, 3, 4 and 5 hold with \(A_{q}=0\) (uniformly bounded moments) and positive objective \(\mathcal{L}\). Let \((\theta_{t})_{t\geq 0}\) be the Markov chain generated by QC-SGD with step size \(\beta\) and quantile \(p\in[\eta,1-\eta].\) Assume that \(p\) and \(\beta\) are such that \(3p(1-\eta)/4>L\beta+\eta\) and that the subset of \(\mathbb{R}^{d}\) given by_
\[\left\{\theta:\frac{1}{2}\big{\|}\nabla\mathcal{L}(\theta)\big{\|}^{2}\leq \frac{B_{q}^{2}\big{(}(1\!-\!p)^{-\frac{2}{q}}(L\beta\!+\!2\eta^{2})\!+\!2\eta ^{2-\frac{2}{q}}\big{)}}{p(1\!-\!\eta)(3p(1\!-\!\eta)/4\!-\!L\beta\!-\!\eta)} \right\} \tag{13}\]
_is bounded. Then, for any initial \(\theta_{0}\in\mathbb{R}^{d},\) there exists \(M<+\infty\) such that after \(T\) iterations_
\[\big{\|}\delta_{\theta_{0}}P_{\beta,p}^{T}-\pi_{\beta,p}\big{\|}_{\rm TV}\leq \frac{M}{T}, \tag{14}\]
_where \(\pi_{\beta,p}\) is a unique invariant measure and where \(\delta_{\theta_{0}}\) is the Dirac measure located at \(\theta_{0}.\)_
The proof is given in Appendix C.9 and uses ergodicity results from [58, Chapter 13]. Theorem 2 provides convergence conditions for an SGD Markov chain on a smooth objective in a robust setting. We are unaware of prior results of this kind in the literature. Condition (13) requires that the true gradient exceeds the estimation error at least outside of a bounded set. If this does not hold, the gradient would be dominated by the estimation error, leaving no hope for the iteration to converge. Observe that, for no corruption (\(\eta=0\)), the condition is always fulfilled for some \(\beta\) and \(p\). Note also that without strong convexity (Assumption 2), convergence occurs at a slower sublinear rate which is consistent with the optimization rate expected for a smooth objective (see [13, Theorem 3.3]).
As previously, we complement Theorem 1 with a characterization of the invariant distribution.
**Proposition 3**.: _Under the conditions of Theorem 2, assume that the choice \(p=1-\eta\) is such that the set (13) is bounded. For step size \(\beta\leq\eta^{2}/L,\) the stationary measure \(\theta\sim\pi_{\beta,1-\eta}\) satisfies_
\[\mathbb{E}\big{\|}\nabla\mathcal{L}(\theta)\big{\|}^{2}\leq\frac{5\eta^{2- \frac{2}{q}}B_{q}^{2}}{p(1-\eta)\big{(}3p(1-\eta)/4-L\beta-\eta\big{)}}. \tag{15}\]
The statement of Proposition 3 is clearly less informative than Propositions 1 and 2 since it only pertains to the gradient rather than, for example, the excess risk. This is due to the weaker assumptions that do not allow to relate these quantities. Still, the purpose remains to find a critical point and is achieved up to \(\mathcal{O}(\eta^{1-1/q})\) precision according to this result. Due to corruption, the estimation error on the gradient cannot be reduced beyond \(\Omega(\eta^{1-1/q})\)[70, 34, 24]. Therefore, one may draw a parallel with a corrupted mean estimation task, in which case, the previous rate is, in fact, information-theoretically optimal.
## 5 Implementation and Numerical Experiments
The use of the generally unknown quantile \(Q_{p}(\|\widetilde{G}(\theta_{t},\zeta_{t})\|)\) in QC-SGD constitutes the main obstacle to its implementation. For strongly convex objectives, one may use a proxy such as \(a\|\theta_{t}-\theta_{\mathrm{ref}}\|+b\) with positive \(a,b\) and \(\theta_{\mathrm{ref}}\in\mathbb{R}^{d}\) an approximation of \(\theta^{\star}\) serving as reference point. This choice is consistent with Assumptions 1 and 5, see Lemma 2 in Appendix C.
```
Input: Step size \(\beta>0\), quantile index \(p\in(0,1)\), initial parameter \(\theta_{0}\in\mathbb{R}^{d}\), \(\tau_{\mathrm{unif}}>0\), buffer \(B\) of size \(S\) and horizon \(T\).
Fill \(B\) with \(S-1\) values equal to \(\tau_{\mathrm{unif}}\).
for \(t=0\dots T-1\) do
    Draw a sample \(G(\theta_{t},\zeta_{t})\) and add \(\|G(\theta_{t},\zeta_{t})\|\) to \(B\).
    \(\widehat{Q}_{p}\leftarrow\lfloor pS\rfloor\) rank element of \(B\).
    \(\theta_{t+1}\leftarrow\theta_{t}-\beta\,\mathrm{clip}(G(\theta_{t},\zeta_{t}),\widehat{Q}_{p})\)
    Delete the oldest value in \(B\).
end for
return \(\theta_{T}\)
```
**Algorithm 2** Rolling QC-SGD
In the non-strongly convex case, a constant threshold can be used since the gradient is a priori uniformly bounded, implying the same for the quantiles of its deviations. In practice, we propose a simpler and more direct approach: we use a rolling quantile procedure, described in Algorithm 2. The latter stores the values \((\|G(\theta_{t-j},\zeta_{t-j})\|)_{1\leq j\leq S}\) in a buffer of size \(S\in\mathbb{N}^{*}\) and replaces \(Q_{p}(\|\widetilde{G}(\theta_{t},\zeta_{t})\|)\) in QC-SGD by an estimate \(\widehat{Q}_{p}\) which is the \(\lfloor pS\rfloor\)-th order statistic in the buffer. Note that only the norms of previous gradients are stored in the buffer, limiting the memory overhead to \(\mathcal{O}(S)\). The computational cost of \(\widehat{Q}_{p}\) can also be kept to \(\mathcal{O}(S)\) per iteration thanks to a bookkeeping procedure (see Appendix A).
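A direct (unoptimized) rendering of Algorithm 2 is sketched below. It recomputes the order statistic with `np.partition` instead of the \(\mathcal{O}(S)\) bookkeeping of Appendix A, and the heavy-tailed toy oracle at the end is purely illustrative.

```python
import numpy as np
from collections import deque

def rolling_qc_sgd(grad_oracle, theta0, beta, p, S, T, tau_unif=1.0, seed=0):
    """Sketch of Algorithm 2 (Rolling QC-SGD)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    buf = deque([tau_unif] * (S - 1), maxlen=S)      # buffer of past gradient norms
    k = max(int(np.floor(p * S)) - 1, 0)             # 0-based rank of the p-quantile
    for t in range(T):
        g = grad_oracle(theta, rng)
        g_norm = np.linalg.norm(g)
        buf.append(g_norm)                           # oldest entry evicted automatically
        q_hat = np.partition(np.asarray(buf), k)[k]  # rolling quantile estimate
        alpha = min(1.0, q_hat / (g_norm + 1e-12))
        theta = theta - beta * alpha * g             # clipped update
    return theta

# Toy usage: mean estimation, L(theta) = 0.5 * ||theta - mu||^2, with heavy-tailed
# gradient noise (Student-t with 2 degrees of freedom, an illustrative choice).
mu = np.ones(8)
oracle = lambda th, rng: (th - mu) + rng.standard_t(df=2, size=8)
print(rolling_qc_sgd(oracle, np.zeros(8), beta=0.05, p=0.9, S=200, T=20_000))
```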
We implement this procedure for a few tasks and compare its performance with relevant baselines. We do not include a comparison with [26] whose procedure has no implementation we are aware of and is difficult to use in practice due to its dependence on a number of unknown constants. All our experiments consider an infinite horizon, dimension \(d=128\), and a constant step size for all methods.
Figure 1: Evolution of \(\|\theta_{t}-\theta^{\star}\|\) (\(y\)-axis) against iteration \(t\) (\(x\)-axis) for the expectation estimation task, averaged over 100 runs at different corruption levels \(\eta\) (band widths correspond to the standard deviation of the 100 runs). For \(\eta=0.04\), the evolution on a single run is also displayed. We observe good performance for RQC-SGD for increasing \(\eta\) while CMOM and GMOM are more sensitive.
Expectation estimation.We estimate the expectation of a random vector \(X\) by minimizing the objective \(\mathcal{L}(\theta)=\frac{1}{2}\|\theta-\theta^{\star}\|^{2}\) with \(\theta^{\star}=\mathbb{E}[X]\) using a stream of both corrupted and heavy-tailed samples, see Appendix A for details. We run RQC-SGD (Algorithm 2) and compare it to an online version of geometric and coordinate-wise Median-Of-Means (GMOM and CMOM) [14, 15] which use block sample means to minimize an \(L_{1}\) objective (see Appendix A). Although these estimators are a priori not robust to \(\eta\)-corruption, we ensure that their estimates are meaningful by limiting \(\eta\) to \(4\%\) and using blocks of \(10\) samples. Thus, blocks are corrupted with probability \(<1/2\) so that the majority contains only true samples. Figure 1 displays the evolution of \(\|\theta_{t}-\theta^{\star}\|\) for each method averaged over 100 runs for increasing \(\eta\) and constant step size. We also display a single run for \(\eta=0.04\). We observe that RQC-SGD is only weakly affected by the increasing corruption whereas the performance of GMOM and CMOM quickly degrades with \(\eta\), leading to unstable estimates.
Linear regression.We consider least-squares linear regression and compare RQC-SGD with Huber's estimator [38] and clipped SGD (designated as CCl(\(\lambda\))) with three clipping levels \(\lambda\sigma_{\max}\sqrt{d}\) for \(\lambda\in\{0.8,1.0,1.2\}\), where \(\sigma_{\max}\) is a fixed data scaling factor. These thresholds provide a rough estimate of the gradient norm. We generate covariates \(X\) and labels \(Y\) that are both heavy-tailed and corrupted. Corruption in the data stream is generated according to Assumption 3 with outliers represented either by aberrant values or _fake_ samples \(Y=X^{\top}\theta_{\mathrm{fake}}+\epsilon\) using a false parameter \(\theta_{\mathrm{fake}}\); see Appendix A for further details on data generation and fine-tuning of the Huber parameter. All methods are run with constant step size and averaged results over \(100\) runs are displayed in Figure 2 (top row).
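As a rough illustration of this setup (the exact distributions and scalings are specified in Appendix A; the ones below are placeholder choices), a corrupted least-squares gradient stream in the spirit of Assumption 3 can be sketched as follows.

```python
import numpy as np

def make_ls_gradient_stream(theta_star, theta_fake, eta, seed=0):
    """Corrupted least-squares gradient oracle: with probability eta the sample
    is an outlier (here, a 'fake' sample generated with theta_fake); otherwise it
    is a genuine heavy-tailed sample following Y = X^T theta_star + eps."""
    d = theta_star.size
    rng = np.random.default_rng(seed)
    def grad(theta):
        if rng.random() < eta:                           # corrupted sample
            x = 10.0 * rng.standard_normal(d)            # aberrant covariate scale
            y = x @ theta_fake + rng.standard_normal()
        else:                                            # genuine heavy-tailed sample
            x = rng.standard_t(df=3, size=d)
            y = x @ theta_star + rng.standard_t(df=3)
        return x * (x @ theta - y)                       # gradient of 0.5*(x^T theta - y)^2
    return grad
```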
As anticipated, Huber's loss function is not robust to corrupted covariates. In contrast, using gradient clipping allows convergence to meaningful estimates. Although this holds true for a constant threshold, Figure 2 shows it may considerably slow the convergence if started away from the optimum. In addition, the clipping level also affects the final estimation precision and requires tuning. Both of the previous issues are well addressed by RQC-SGD whose adaptive clipping level allows fast progress of the optimization and accurate convergence towards a small neighborhood of the optimum.
Logistic regression.Finally, we test the same methods on logistic regression. Huber's baseline is represented by the modified Huber loss (also known as quadratic SVM [89]). We generate data similarly to the previous task except for the labels which follow \(Y\sim\mathrm{Bernoulli}(\sigma(X^{\top}\theta^{\star}))\) with \(\sigma\) the sigmoid function. Corrupted labels are either uninformative, flipped or obtained with a fake \(\theta_{\mathrm{fake}}\) (see details in Appendix A). Results are displayed on the bottom row of Figure 2.
As previously, Huber's estimator performs poorly with corruption. However, constant clipping appears to be better suited when the gradient is bounded, so that the optimization is less affected by its underestimation. We observe, nonetheless, that a higher clipping level may lead to poor convergence properties, even at a low corruption rate. Note also that the constant levels we use are based on prior knowledge about the data distribution and would have to be fine tuned in practice. Meanwhile, the latter issue is well addressed by quantile clipping. Finally, notice that no algorithm truly approaches the true solution for this task. This reflects the difficulty of improving upon Proposition 3 which only states convergence to a neighborhood where the objective gradient is comparable to the estimation error in magnitude.
## 6 Conclusion
We introduced a new clipping strategy for SGD and proved that it defines a stochastic optimization procedure which is robust to both heavy tails and outliers in the data stream. We also provided an efficient rolling quantile procedure to implement it and demonstrated its performance through
numerical experiments on synthetic data. Future research directions include improving the dimension dependence in our bounds, possibly by using sample rejection rules or by considering stochastic mirror descent [63, 4] clipped with respect to a non Euclidean norm. This may also procure robustness to higher corruption rates. Another interesting research track is the precise quantification of the geometric convergence speed of the Markov chain generated by constant step size SGD on a strongly convex objective.
#### Acknowledgements
This research is supported by the Agence Nationale de la Recherche as part of the "Investissements d'avenir" program (reference ANR-19-P3IA-0001; PRAIRIE 3IA Institute).
|
2301.13746 | Measures of Spin Ordering in the Potts Model with a Generalized External
Magnetic Field | We formulate measures of spin ordering in the $q$-state ferromagnetic Potts
model in a generalized external magnetic field that favors or disfavors spin
values in a subset $I_s = \{1,...,s\}$ of the total set of $q$ values. The
results are contrasted with the corresponding measures of spin ordering in the
case of a conventional external magnetic field that favors or disfavors a
single spin value out of the total set of $q$ values. Some illustrative
calculations are included. | Shu-Chiuan Chang, Robert Shrock | 2023-01-31T16:30:04Z | http://arxiv.org/abs/2301.13746v1 | # Measures of Spin Ordering in the Potts Model with a Generalized External Magnetic Field
###### Abstract
We formulate measures of spin ordering in the \(q\)-state ferromagnetic Potts model in a generalized external magnetic field that favors or disfavors spin values in a subset \(I_{s}=\{1,...,s\}\) of the total set of \(q\) values. The results are contrasted with the corresponding measures of spin ordering in the case of a conventional external magnetic field that favors or disfavors a single spin value out of the total set of \(q\) values. Some illustrative calculations are included.
## I Introduction
The \(q\)-state Potts model [1] has long been of interest as a classical spin model in which each spin can take on any of \(q\) values in the interval \(I_{q}=\{1,2,...,q\}\), with a Kronecker delta function spin-spin interaction between spins on adjacent sites [2; 3]. In contrast to the \(q=2\) case, which is equivalent to the Ising model, for \(q\geq 3\), there are several different ways that one can incorporate the effect of a symmetry-breaking (uniform) external magnetic field. The conventional way is to define this field as favoring one particular spin value out of the
\(q\) possible values in the set \(I_{q}\), e.g., [4]. In [5; 6; 7; 8] we defined and studied properties of the \(q\)-state Potts model with a generalized external magnetic field that favors or disfavors a subset consisting of more than just one value in \(I_{q}\). By convention, with no loss of generality, we take this subset to consist of the first \(s\) values, denoted as the interval \(I_{s}=\{1,...,s\}\). The orthogonal subset in \(I_{q}\) is denoted \(I_{s}^{\perp}=\{s+1,...,q\}\). In the case that we considered, the value of the magnetic field is a constant, consistent with its property as being applied externally. More general models with magnetic-like variables whose field values depend on the vertices have also been discussed [9; 10; 11], but we will not need this generality here.
In the present paper we continue the study of the \(q\)-state Potts model in this generalized uniform external magnetic field. We discuss measures of spin ordering in the presence of the external field and formulate an order parameter for this model. The results are contrasted with the corresponding measures of spin ordering in the case of a conventional external magnetic field that favors or disfavors a single spin value in \(I_{q}\).
## II Definition and basic properties of the Potts model in a generalized magnetic field
In this section we review the definition and basic properties of the model that we study. We will consider the Potts model on a graph \(G(V,E)\) defined by its set \(V\) of vertices (sites) and its set \(E\) of edges (bonds). For many physical applications, one usually takes \(G\) to be a regular \(d\)-dimensional lattice, but we retain the general formalism of graph theory here for later use. In thermal equilibrium at temperature \(T\), the partition function for the \(q\)-state Potts model on the graph \(G\) in a generalized magnetic field is given by
\[Z=\sum_{\{\sigma_{i}\}}e^{-\beta{\cal H}}\, \tag{2.1}\]
with the Hamiltonian
\[{\cal H}=-J\sum_{e_{ij}}\delta_{\sigma_{i},\sigma_{j}}-\sum_{p=1}^{q}H_{p}\sum_{\ell}\delta_{\sigma_{\ell},p}\, \tag{2.2}\]
where \(i,\ j,\ \ell\) label vertices of \(G\); \(\sigma_{i}\) are classical spin variables on these vertices, taking values in the set \(I_{q}=\{1,...,q\}\); \(\beta=(k_{B}T)^{-1}\); \(e_{ij}\) is the edge (bond) joining vertices \(i\) and \(j\); \(J\) is the spin-spin interaction constant; and
\[H_{p}=\left\{\begin{array}{ll}H&\mbox{if $p\in I_{s}$}\\ 0&\mbox{if $p\in I_{s}^{\perp}$}\end{array}\right.. \tag{2.3}\]
Unless otherwise stated, we restrict our discussion to the ferromagnetic (\(J>0\)) version of the model, since the antiferromagnetic model in a (uniform) external field entails complications due to competing interactions and frustration. If \(H>0\), the external field favorably weights spin values in the interval \(I_{s}\), while if \(H<0\), this field favorably weights spin values in the orthogonal interval \(I_{s}^{\perp}\). This model thus generalizes a conventional magnetic field, which would favor or disfavor one particular spin value. The zero-field Potts model Hamiltonian \({\cal H}\) and partition function \(Z\) are invariant under the global transformation in which \(\sigma_{i}\to g\sigma_{i}\ \ \forall\ \ i\in V\), with \(g\in{\cal S}_{q}\), where \({\cal S}_{q}\) is the symmetric (= permutation) group on \(q\) objects. In the presence of the generalized external field defined in Eq. (2.3), this symmetry group of \({\cal H}\) and \(Z\) is reduced from \(S_{q}\) at \(H=0\) to the tensor product
\[{\cal S}_{s}\otimes{\cal S}_{q-s}. \tag{2.4}\]
This simplifies to the conventional situation in which the external magnetic field favors or disfavors only a single spin value if \(s=1\) or \(s=q-1\), in which case the right-hand side of Eq. (2.4) is \({\cal S}_{q-1}\).
We use the notation
\[K=\beta J\,\quad h=\beta H\,\quad y=e^{K}\,\quad v=y-1\,\quad w=e^{h}. \tag{2.5}\]
The physical ranges of \(v\) are \(v\geq 0\) for the Potts ferromagnet, and \(-1\leq v\leq 0\) for the Potts antiferromagnet. For fixed \(J\) and \(H\), as \(T\to\infty\), \(v\to 0\) and \(w\to 1\), while for \(T\to 0\) (with our ferromagnetic choice \(J>0\)), \(v\to\infty\); and \(w\to\infty\) if \(H>0\) while \(w\to 0\) if \(H<0\). Recall that for \(q=2\), the equivalence with the Ising model with standard Hamiltonian (denoted with \(Is\))
\[{\cal H}_{Is}=-J_{Is}\sum_{e_{ij}}\sigma_{i}^{(Is)}\sigma_{j}^{(Is)}-H_{Is} \sum_{i}\sigma_{i}^{(Is)}\, \tag{2.6}\]
where \(\sigma_{i}^{(Is)}=\pm 1\) makes use of the relations \(J=2J_{Is}\) and \(H=2H_{Is}\).
One can express the Potts model partition function on a graph \(G\) in a form that does not make any explicit reference to the spins \(\sigma_{i}\) or the summation over spin configurations in (2.1), but instead is expressed in a purely graph-theoretic manner, as a sum of terms arising from the spanning subgraphs \(G^{\prime}\subseteq G\), where \(G^{\prime}=(V,E^{\prime})\) with \(E^{\prime}\subseteq E\). In zero field, this was done in [12], with the result
\[Z(G,q,v)=\sum_{G^{\prime}\subseteq G}v^{e(G^{\prime})}\ q^{k(G^{\prime})}. \tag{2.7}\]
For the model with a conventional external magnetic field that favors or disfavors a single spin value in the set \(I_{q}\), a spanning-subgraph formula for the partition function was given in
[4]. For the model with a generalized magnetic field that favors or disfavors a larger set \(I_{s}\) consisting of two or more spin values in the set \(I_{q}\) and has a value \(H_{i,p}=H_{p}\) that is the same for all vertices \(i\in V\), a spanning-subgraph formula for the partition function was presented in Ref. [5] (see also [6]) and is as follows. Given a graph \(G=(V,E)\), the numbers of vertices, edges, and connected components of \(G\) are denoted, respectively, by \(n(G)\equiv n\), \(e(G)\), and \(k(G)\). The purely graph-theoretic expression of the partition function of the Potts model in a generalized magnetic field in this case is [5]
\[Z(G,q,s,v,w)=\sum_{G^{\prime}\subseteq G}v^{e(G^{\prime})}\ \prod_{i=1}^{k(G^{ \prime})}\left(q-s+sw^{n(G^{\prime}_{i})}\right)\,, \tag{2.8}\]
where \(G^{\prime}_{i}\), \(1\leq i\leq k(G^{\prime})\) denotes one of the \(k(G^{\prime})\) connected components in a spanning subgraph \(G^{\prime}\) of \(G\). The formula (2.8) shows that \(Z\) is a polynomial in the variables \(q\), \(s\), \(v\), and \(w\), hence our notation \(Z(G,q,s,v,w)\). For the case where the magnetic field favors (or disfavors) only a single spin value, i.e., for the case \(s=1\), the formula (2.8) reduces to the spanning subgraph formula for \(Z\) given in [4] (see also [13]). Parenthetically, we mention further generalizations that are different from the one we study here. First, one can let the spin-spin exchange constants \(J\) be edge-dependent, denoted as \(J_{ij}\) on the edge \(e_{ij}\) joining vertices \(i\) and \(j\). Second, one can let the value of the magnetic-field-type variable be different for different vertices \(\ell\in V\), so \(H_{p}\) is replaced by \(H_{\ell,p}\). With these generalizations, a spanning-subgraph formula for the partition function was given in [10] and studied further in [11].
Focusing on the term \(w^{n(G^{\prime}_{i})}\) in (2.8) and letting \(\ell=n(G^{\prime}_{i})\) for compact notation, one can use the factorization relation
\[w^{\ell}-1=(w-1)\sum_{j=0}^{\ell-1}w^{j} \tag{2.9}\]
to deduce that the variable \(s\) enters in \(Z(G,q,s,v,w)\), only in the combination
\[t=s(w-1). \tag{2.10}\]
Hence, the special case of zero external field, \(H=0\), i.e., \(w=1\), is equivalent to the formal value \(s=0\) (outside the interval \(I_{s}\)).
Several relevant identities were derived in [5, 6], including
\[Z(G,q,s,v,1)=Z(G,q,v)\, \tag{2.11}\]
\[Z(G,q,s,v,0)=Z(G,q-s,v)\, \tag{2.12}\]
\[Z(G,q,q,v,w)=w^{n}\,Z(G,q,v)\, \tag{2.13}\]
\[Z(G,q,s,v,w)=w^{n}Z(G,q,q-s,v,w^{-1}). \tag{2.14}\]
The identity (2.14) establishes a relation between the model with \(H>0\) and hence \(w>1\), and the model with \(H<0\) and hence \(0\leq w<1\). Given this identity, one may, with no loss of generality, restrict to \(H\geq 0\), i.e., \(w\geq 1\), and we will do this below, unless otherwise indicated.
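As a concrete (if brute-force) illustration, the spanning-subgraph formula (2.8) and the identities (2.11)-(2.14) can be checked numerically on a very small graph. The sketch below enumerates all edge subsets of a triangle, so it is only feasible for tiny graphs, and the parameter values are arbitrary; function names are ours, not the paper's.

```python
import itertools
import numpy as np

def component_sizes(n, edges):
    """Sizes of the connected components of the spanning subgraph (V, edges),
    with V = {0, ..., n-1} (isolated vertices count as components of size 1)."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in edges:
        parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return list(sizes.values())

def Z_field(n, E, q, s, v, w):
    """Z(G,q,s,v,w) from the spanning-subgraph expansion (2.8)."""
    total = 0.0
    for r in range(len(E) + 1):
        for Ep in itertools.combinations(E, r):
            term = v ** len(Ep)
            for size in component_sizes(n, Ep):
                term *= q - s + s * w ** size
            total += term
    return total

def Z_zero_field(n, E, q, v):
    """Zero-field Z(G,q,v) from the expansion (2.7)."""
    return sum(v ** len(Ep) * q ** len(component_sizes(n, Ep))
               for r in range(len(E) + 1)
               for Ep in itertools.combinations(E, r))

# Triangle graph and arbitrary parameter values.
n, E = 3, [(0, 1), (1, 2), (0, 2)]
q, s, v, w = 4.0, 2.0, 1.5, 2.0
assert np.isclose(Z_field(n, E, q, s, v, 1.0), Z_zero_field(n, E, q, v))          # (2.11)
assert np.isclose(Z_field(n, E, q, s, v, 0.0), Z_zero_field(n, E, q - s, v))      # (2.12)
assert np.isclose(Z_field(n, E, q, q, v, w), w ** n * Z_zero_field(n, E, q, v))   # (2.13)
assert np.isclose(Z_field(n, E, q, s, v, w),
                  w ** n * Z_field(n, E, q, q - s, v, 1.0 / w))                   # (2.14)
```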
In the limit \(n(G)\to\infty\), the reduced, dimensionless free energy per vertex is
\[f(\{G\},q,s,v,w)=\lim_{n(G)\to\infty}\frac{1}{n(G)}\ln[Z(G,q,s,v,w)]\, \tag{2.15}\]
where the symbol \(\{G\}\) denotes the formal \(n\to\infty\) limit of a given family of graphs, such as a regular lattice with some specified boundary conditions. The actual Gibbs free energy per site is \(F(T,H)=-k_{B}Tf(T,H)\). For technical simplicity, unless otherwise indicated, we will restrict to the ferromagnetic case \(J>0\) here; in [5, 6, 7, 8] we have also discussed the antiferromagnetic case. The zero-temperature limit of the antiferromagnetic version defines a weighted-set chromatic polynomial that counts the number of assignments from \(q\) colors to the vertices of \(G\) subject to the condition that no two adjacent vertices have the same color, with preferred (dispreferred) weighting given to colors in \(I_{s}\) for \(H>0\) (\(H<0\), respectively). Here and below, in order to avoid cumbersome notation, we will use the same symbol \(Z\) with different sets of arguments to refer to the full model, \(Z(G,q,s,v,w)\), and the zero-field special case, \(Z(G,q,v)\).
The partition function of the zero-field Potts model is equivalent to an important function in mathematical graph theory, namely the Tutte (also called Tutte-Whitney) polynomial \(T(G,x,y)\)[14]. This is defined as
\[T(G,x,y)=\sum_{G^{\prime}\subseteq G}(x-1)^{k(G^{\prime})-k(G)}(y-1)^{c(G^{ \prime})}\, \tag{2.16}\]
where \(c(G^{\prime})\) denotes the number of linearly independent cycles on \(G^{\prime}\). Note that \(c(G^{\prime})=e(G^{\prime})+k(G^{\prime})-n(G^{\prime})=e(G^{\prime})+k(G^{ \prime})-n(G)\). The equivalence relation is
\[Z(G,q,v)=(x-1)^{k(G)}(y-1)^{n(G)}T(G,x,y)\, \tag{2.17}\]
with
\[x=1+\frac{q}{v}\,\quad y=v+1\, \tag{2.18}\]
so that \(q=(x-1)(y-1)\). Reviews of the Tutte polynomial and generalizations include [9, 10, 11], [15]-[18].
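In the same brute-force spirit (again only for very small graphs, with arbitrary parameter values), the equivalence (2.17) between the zero-field partition function and the Tutte polynomial can be verified directly from the expansions (2.7) and (2.16); the sketch below is illustrative only.

```python
import itertools

def num_components(n, edges):
    """Number of connected components k of the spanning subgraph (V, edges)."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

def tutte(n, E, x, y):
    """Tutte polynomial T(G,x,y) evaluated via the subgraph expansion (2.16)."""
    kG = num_components(n, E)
    total = 0.0
    for r in range(len(E) + 1):
        for Ep in itertools.combinations(E, r):
            k = num_components(n, Ep)
            c = len(Ep) + k - n                  # independent cycles c(G')
            total += (x - 1) ** (k - kG) * (y - 1) ** c
    return total

def Z_zero_field(n, E, q, v):
    """Zero-field Z(G,q,v) from the expansion (2.7)."""
    return sum(v ** len(Ep) * q ** num_components(n, Ep)
               for r in range(len(E) + 1)
               for Ep in itertools.combinations(E, r))

n, E = 3, [(0, 1), (1, 2), (0, 2)]              # triangle
q, v = 3.0, 2.0
x, y = 1.0 + q / v, v + 1.0                     # Eq. (2.18)
lhs = Z_zero_field(n, E, q, v)
rhs = (x - 1) ** num_components(n, E) * (y - 1) ** n * tutte(n, E, x, y)
print(lhs, rhs)                                 # the two values agree, per (2.17)
```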
III Magnetic order parameter in the Ising, O(\(N\)), and Potts model with a conventional magnetic field
### Ising and O(\(N\)) Models
In the Ising model (2.6), the magnetization per site is given by
\[M(H)=-\frac{\partial F}{\partial H}=\frac{\partial f}{\partial h}\, \tag{3.1}\]
and the spontaneous magnetization is \(M\equiv\lim_{H\to 0}M(H)\). This \(M\) is (i) identically zero in the high-temperature phase where the theory is invariant under the global \(\mathbb{Z}_{2}\approx S_{2}\) symmetry and (ii) for a regular lattice of dimensionality above the lower critical dimensionality \(d_{\ell}=1\), \(M\) is nonzero in the low-temperature phase where there is spontaneous breaking of this global \(\mathbb{Z}_{2}\) symmetry, increasing from \(0\) to a maximum of \(1\) as \(T\) decreases from the critical temperature \(T_{c}\) to \(T=0\). Alternatively, in the ferromagnetic Ising model (2.6) on a regular lattice, the (square of the) magnetization can be calculated in the absence of an external field as the limit
\[M^{2}=\lim_{r\rightarrow\infty}\langle\sigma_{i}^{(Is)}\sigma_{j}^{(Is)}\rangle\, \tag{3.2}\]
where \(\langle\sigma_{i}^{(Is)}\sigma_{j}^{(Is)}\rangle\) is the two-spin correlation function and \(r\) denotes the Euclidean distance between the lattice sites \(i\) and \(j\) on the lattice. For \(T>T_{c}\), this correlation function \(\langle\sigma_{i}^{(Is)}\sigma_{j}^{(Is)}\rangle\to 0\) as \(r\rightarrow\infty\), so that \(M=0\). Regarding the Ising model as the \(N=1\) special case of an O(\(N\)) spin model, similar comments hold. Thus, consider the partition function
\[Z_{\text{O}(N)}=\int\prod_{i}d\Omega_{i}e^{-\beta\mathcal{H}_{\text{O}(N)}}\, \tag{3.3}\]
with
\[\mathcal{H}_{\text{O}(N)}=-J\sum_{\langle ij\rangle}\vec{S}_{i}\cdot\vec{S}_{j}-\vec{H}\cdot\sum_{i}\vec{S}_{i}\, \tag{3.4}\]
where \(\vec{S}_{i}\) is an \(N\)-component unit-normalized classical spin at site \(i\) on a given lattice and \(d\Omega_{i}\) denotes the O(\(N\)) integration measure. For zero external field, this model has a global O(\(N\)) invariance. The presence of an external magnetic field \(\vec{H}\) explicitly breaks the O(\(N\)) symmetry down to O(\(N-1\)). For general \(\vec{H}\), one has, for the thermal average of \(\vec{M}=\vec{M}(\vec{H})\),
\[\vec{M}(\vec{H})=-\frac{\partial F}{\partial\vec{H}}\, \tag{3.5}\]
and the relation
\[|\vec{M}|^{2}=\lim_{r\rightarrow\infty}\langle\vec{S}_{i}\cdot\vec{S}_{j}\rangle. \tag{3.6}\]
As usual, the spontaneous magnetization for the Ising and O(\(N\)) models is defined as \(M_{0}=\lim_{H\to 0}M(H)\) and \(\vec{M}_{0}=\lim_{|\vec{H}|\to 0}\vec{M}(\vec{H})\), respectively.
### Measures of Spin Ordering in Potts Model with Conventional Magnetic Field
The situation is different in the Potts model, even with a conventional external magnetic field that favors only one spin value. In our formalism, this means that \(I_{s}\) consists of the single value \(s=1\). Before proceeding to derive our new results, we review this situation for this conventional case. For this purpose, it is useful to analyze the properties of the spin-spin correlation function. It will be sufficient here and below to assume that the graph \(G\) is a regular \(d\)-dimensional lattice. Let us denote as \(P_{aa}(i,j)\) the probability (in the thermodynamic limit, in thermal equilibrium at temperature \(T\)) that the spins \(\sigma_{i}\) and \(\sigma_{j}\) at the sites \(i\) and \(j\) in the lattice have the value \(a\in I_{q}\). At \(T=\infty\), all spin configurations occur with equal probability, so the probability that \(\sigma_{i}\) has a particular value \(a\) is just \(1/q\), and similarly with \(\sigma_{j}\), so \(P_{aa}(i,j)=1/q^{2}\) at \(T=\infty\). To define a correlation function with the usual property that in the high-temperature phase, as the distance \(r\) between the spins goes to infinity, they should be completely uncorrelated, one must therefore subtract this \(1/q^{2}\) term. That is, in the Potts model, one defines the spin-spin correlation function as (e.g., [2])
\[\Gamma_{aa}(i,j)=P_{aa}(i,j)-\frac{1}{q^{2}}. \tag{3.7}\]
Thus, by construction, \(\Gamma_{aa}(i,j)=0\) at \(T=\infty\). At \(T=0\), in the ferromagnetic Potts model, all of the spins take on the same value in the set \(I_{q}\). Let us say that an infinitesimally small external field has been applied to favor the value \(a\in I_{q}\), so then \(P_{aa}(i,j)=1/q\) and hence, under these conditions,
\[\Gamma_{aa}(i,j)=\frac{1}{q}-\frac{1}{q^{2}}=\frac{q-1}{q^{2}}\quad\mbox{at $T=0$}. \tag{3.8}\]
In the ferromagnetic Potts model with a conventional external (uniform) magnetic field favoring a single spin model, the magnetic order parameter, \({\cal M}\), normalized so that it is unity at \(T=0\), is then related to this spin-spin correlation function \(\Gamma_{aa}(i,j)\) according to
\[{\cal M}=\left(\frac{q^{2}}{q-1}\right)\,\lim_{r\to\infty}\Gamma_{aa}(i,j). \tag{3.9}\]
Although the quantity \(-\partial F/\partial H=\partial f/\partial h\) yields one measure of magnetic ordering, it is not the order parameter itself, in contrast to the situation with both the Ising and O(\(N\)) spin models. Instead, as is evident from its definition, this partial derivative is equal to the
fraction of the total number of sites in the (thermodynamic limit of the) lattice with spins taking one particular value out of the set of \(q\) values. We denote this as
\[M=-\frac{\partial F}{\partial H}=\frac{\partial f}{\partial h}=w\frac{\partial f }{\partial w}. \tag{3.10}\]
At \(T=\infty\), since all spin values are weighted equally, it follows that the fraction of the spins taking on any particular value is \(1/q\), i.e.,
\[M=\frac{1}{q}\quad\mbox{at $T=\infty$}. \tag{3.11}\]
In the opposite limit of zero temperature, given that \(J>0\), the spin-spin interaction forces all of the spins to have the same value. There is then a dichotomy in the behavior of the system, depending on whether \(H\) is positive or negative. If \(H>0\), then the external field forces this spin value to be the single value in \(I_{s}\) favored by this field, so
\[M=1\quad\mbox{at $T=0$ if $H>0$}\, \tag{3.12}\]
If, on the other hand, \(H<0\), then the single spin value that all the spins are forced to have by the spin-spin interaction lies in the orthogonal complement, which in this case is the set \(I_{s}^{\perp}=\{2,...,q\}\), so the fraction in \(I_{s}\) is zero. Hence,
\[M=0\quad\mbox{at $T=0$ if $H<0$}. \tag{3.13}\]
The zero-field values of \(M\) at a given temperature for the respective cases \(H>0\) and \(H<0\) are \(M_{0^{+}}=\lim_{H\to 0^{+}}M\) and \(M_{0^{-}}=\lim_{H\to 0^{-}}M\).
Because the zero-field value of \(M\) does not vanish in the high-temperature, \(S_{q}\)-symmetric phase, it cannot be the order parameter of the model, but instead is an auxiliary measure of the magnetic ordering per site. In the literature, studies focused on the case \(H>0\) in formulating an appropriate order parameter. In this case, to define an order parameter, one subtracts the \(T=\infty\) value of \(M\) from the value for general \(T\) and normalizes the result so that the order parameter saturates at unity at \(T=0\). This yields a measure of spin ordering that we denote as \({\cal M}\):
\[{\cal M}=\frac{M-M_{T=\infty}}{M_{T=0}-M_{T=\infty}}=\frac{M-\frac{1}{q}}{1- \frac{1}{q}}=\frac{qM-1}{q-1}. \tag{3.14}\]
The zero-field value of this order parameter, i.e., the spontaneous magnetization, is then
\[{\cal M}_{0}=\lim_{H\to 0}{\cal M}. \tag{3.15}\]
This construction was given for the specific case \(q=3\) in [19] and for general \(q\) in [20] (see also [2], where \({\cal M}_{0}\) was denoted as \(m_{0}\)). Series expansions [19] and Monte-Carlo simulations [20] for the two-dimensional Potts model were consistent with the behavior expected of an order parameter, namely \({\cal M}_{0}=0\) in the high-temperature \(S_{q}\)-symmetric phase and \({\cal M}>0\) in the low-temperature phase with spontaneous symmetry breaking of the global \(S_{q}\) symmetry to \(S_{q-1}\).
A parenthetical remark is in order concerning the trivial case \(q=1\), where the spins are all frozen to have the same value and hence are nondynamical. In this \(q=1\) case, the partition function is equal to the prefactor \(w^{n}\) times the zero-field result, \(w^{n}Z(G,q,v)\). As a consequence, for any temperature, \(M=1\), so \({\cal M}\) has an indeterminate form \(0/0\). Thus, in using the formula (3.14), one restricts to the range \(q\geq 2\).
## IV Measures of spin ordering in the Potts model with generalized magnetic field
In this section we present our new results on measures of spin ordering in the Potts model with a generalized magnetic field, including, in particular, the order parameter for this model. In the limit \(T\to\infty\), \(P_{aa}(i,j)=1/q^{2}\), independent of \(s\). This is a consequence of the fact that the Boltzmann weight \(e^{-\beta{\cal H}}\) in the expression for \(Z\) reduces to 1 for \(\beta=0\), and so the spins are completely random. However, the auxiliary measure of spin ordering, \(M\), behaves differently in the Potts model with a generalized versus conventional external magnetic field. From the basic definition, calculating \(M\) from Eq. (3.10) and then letting \(T\to\infty\), we find the following general behavior:
\[M=\frac{s}{q}\ \ \ \mbox{at $T=\infty$ and any finite $H$ }\ . \tag{4.1}\]
In the opposite limit, \(T\to 0\), the value of \(M\) again depends on the sign of \(H\). If \(H>0\), then (given that \(J>0\)), the spin-spin interaction forces all spins to have the same value, and the presence of the external field forces this value to lie in the set \(I_{s}\), so
\[\lim_{T\to 0}M=1\ \mbox{for $H>0$}. \tag{4.2}\]
In this \(T\to 0\) limit (again, given that \(J>0\)), if \(H<0\), then the spin-spin interaction forces all spins to have the same value and this value lies in the orthogonal complement \(I_{s}^{\perp}\), so the fraction of spins in \(I_{s}\) is 0:
\[\lim_{T\to 0}M=0\ \mbox{for $H<0$}. \tag{4.3}\]
Finally, we record the behavior of \(M\) in the limits \(H\to\pm\infty\) at fixed finite nonzero temperature. In terms of the Boltzmann weights, these two limits are \(w\to\infty\) and \(w\to 0\) with \(v\) finite. As \(H\to\infty\) in this limit, all of the spins must take on values in \(I_{s}\), so
\[\lim_{H\to\infty}M=1\ \text{for any finite nonzero}\ T. \tag{4.4}\]
If \(H\to-\infty\), then all spins must take on values in \(I_{s}^{\perp}\), so the fraction in \(I_{s}\) is zero for any temperature including \(T=0\):
\[\lim_{H\to-\infty}M=0\quad\text{for any finite nonzero}\ T. \tag{4.5}\]
In order to obtain the zero-field value of \(M\) at a given temperature, as in the case of a conventional magnetic field, one would calculate \(Z(G,q,s,v,w)\) on a given lattice graph \(G\), take the thermodynamic limit, then calculate \(M\) and take the limit \(H\to 0^{+}\) or \(H\to 0^{-}\).
We now construct a magnetic order parameter \(\mathcal{M}\) for the Potts model in a generalized magnetic field. As noted above, given the identity (2.14), we can, without loss of generality, restrict to \(H>0\) and we shall do so henceforth. We obtain
\[\mathcal{M}=\frac{M-M_{T=\infty}}{M_{T=0}-M_{T=\infty}}=\frac{M-\frac{s}{q}}{1 -\frac{s}{q}}=\frac{qM-s}{q-s}. \tag{4.6}\]
The spontaneous magnetization is then
\[\mathcal{M}_{0}=\lim_{H\to 0^{+}}\mathcal{M}. \tag{4.7}\]
A word is in order concerning the apparent pole at \(s=q\). If \(s=q\), then the presence of the external field simply adds a constant term \(-Hn\) to \(\mathcal{H}\), or equivalently, the partition function is the product of the factor \(w^{n}\) and the zero-field \(Z\), as specified in the identity (2.13), so that
\[M=1\quad\text{if}\ s=q\, \tag{4.8}\]
independent of temperature. Hence, just as was the case with the expression (3.14) for a conventional magnetic field favoring just one spin, the expression (4.6) takes the indeterminate form \(0/0\) in this case. Thus, in using (4.6), we restrict \(s\) to the interval \(1\leq s\leq q-1\).
## V Some explicit examples
Some explicit examples illustrate the use of (4.6) for the order parameter. Although a Peierls-type argument shows that there is no spontaneous symmetry breaking of the
symmetry (4) on (the \(n\to\infty\) limit of a) one-dimensional lattice or quasi-one-dimensional lattice strip, these types of lattices are, nevertheless, useful to illustrate some features of Eq. (4.6).
### 1D Lattice
As a first example, we use the exact expression for \(Z(G,q,s,v,w)\) on a one-dimensional lattice derived in [6]. This yields the reduced dimensionless free energy per site (in the thermodynamic limit, in the notation of [6])
\[f(1D,q,s,v,w)=\ln(\lambda_{Z,1,0,1})\, \tag{5.1}\]
where
\[\lambda_{Z,1,0,1}=\frac{1}{2}\Big{(}A+\sqrt{R}\ \Big{)}\, \tag{5.2}\]
with
\[A=q+s(w-1)+v(w+1) \tag{5.3}\]
and
\[R=A^{2}-4v(q+v)w. \tag{5.4}\]
(As expected, in the thermodynamic limit, this result applies independent of the boundary conditions.) The resultant auxiliary measure of spin ordering, \(M\), is
\[M=\frac{w}{\sqrt{R}}\left[s+v-\frac{2v(q+v)}{A+\sqrt{R}}\,\right]\,. \tag{5.5}\]
It is straightforward to confirm that Eq. (5.5) satisfies the general relations (4.1)-(4.5). As \(T\to\infty\), i.e., \(\beta\to 0\), the variables \(v\) and \(w\) (for finite \(J\) and \(H\)) approach the limits \(v\to 0\) and \(w\to 1\), i.e., \(K\to 0\) and \(h\to 0\). In this limit, we calculate a two-variable series expansion of \(M\) in \(K\) and \(h\) and find
\[M = \frac{s}{q}\bigg{[}1+\Big{(}1-\frac{s}{q}\Big{)}h\bigg{\{}1+\frac {2}{q}K+\frac{1}{2}\Big{(}1-\frac{2s}{q}\Big{)}h+\frac{1}{q}K^{2}+\frac{3}{q} \Big{(}1-\frac{2s}{q}\Big{)}Kh \tag{5.6}\] \[+ \Big{(}\frac{1}{6}-\frac{s}{q}\Big{(}1-\frac{s}{q}\Big{)}\Big{)} h^{2}+O(K^{3},K^{2}h,Kh^{2},h^{3})\bigg{\}}\bigg{]}\quad\mbox{as $T\to\infty$}\.\]
Setting \(\beta=0\), one sees that this expansion satisfies the identity (4.1). Substituting Eq. (5.6) into our general expression for the order parameter, we obtain
\[{\cal M}\,=\,\frac{sh}{q}\bigg{[}1+\frac{2}{q}K+\frac{1}{2}\Big{(}1-\frac{2s} {q}\Big{)}h+\frac{1}{q}K^{2}+\frac{3}{q}\Big{(}1-\frac{2s}{q}\Big{)}Kh+\Big{(} \frac{1}{6}-\frac{s}{q}\Big{(}1-\frac{s}{q}\Big{)}\Big{)}h^{2}\]
\[\left.+\,\,O(K^{3},K^{2}h,Kh^{2},h^{3})\right\}\,\right]\ \ \ \mbox{as}\ T\to\infty\, \tag{5.7}\]
where the notation \(O(K^{3},K^{2}h,Kh^{2},h^{3})\) refers to terms of order \(K^{3}\), \(K^{2}h\), \(Kh^{2}\), or \(h^{3}\) inside the curly brackets. The proportionality of \({\cal M}\) to \((s/q)h=(s/q)\beta H\) as \(\beta\to 0\) is the expression of the Curie-Weiss relation for the induced magnetization for this model.
Given Eq. (4.6) connecting \(M\) and \({\cal M}\), the susceptibilities defined via \(M\) and \({\cal M}\) are simply related to each other. Defining \(\chi_{M}=\partial M/\partial H\) and \(\chi_{\cal M}=\partial{\cal M}/\partial H\), we have
\[\chi_{M}=\left(1-\frac{s}{q}\right)\chi_{\cal M}. \tag{5.8}\]
From Eq. (5.7), it follows that the two-variable high-temperature Taylor series expansion of \(\chi_{\cal M}\) in powers of \(K\) and \(h\) is given by
\[\beta^{-1}\chi_{\cal M} = \frac{s}{q}\biggl{[}1+\frac{2}{q}K+\Bigl{(}1-\frac{2s}{q}\Bigr{)} h+\frac{1}{q}K^{2}+\frac{6}{q}\Bigl{(}1-\frac{2s}{q}\Bigr{)}Kh+\Bigl{(} \frac{1}{2}-\frac{3s}{q}\Bigl{(}1-\frac{s}{q}\Bigr{)}\Bigr{)}h^{2} \tag{5.9}\] \[+ O(K^{3},K^{2}h,Kh^{2},h^{3})\biggr{\}}\,\biggr{]}\ \ \ \mbox{as}\ T\to\infty\.\]
For \(H\to\infty\) at finite \(T\) (equivalently, \(w\to\infty\) with finite \(v\)), we calculate the Taylor series expansion
\[M=1-\frac{s(q-s)}{(s+v)^{2}w}+\frac{s(q-s)[s(q-s)-2v(q+v)]}{(s+v)^{4}w^{2}}+O \Bigl{(}\frac{1}{w^{3}}\Bigr{)} \tag{5.10}\]
and hence
\[{\cal M}=1-\frac{sq}{(s+v)^{2}w}+\frac{sq[s(q-s)-2v(q+v)]}{(s+v)^{4}w^{2}}+O \Bigl{(}\frac{1}{w^{3}}\Bigr{)}. \tag{5.11}\]
To show these results numerically for a typical case, we take the illustrative values \(q=5\) and \(v=2\). In Figs. 1 and 2 we plot \({\cal M}\) for this 1D lattice as a function of \(w\) in the intervals \(1\leq w\leq 8\) and \(8\leq w\leq 40\). Fixing the value of \(v\) corresponds most simply to fixing the values of \(J\) and \(T\), so that the variation in \(w\) then amounts to a variation in \(H\) at fixed \(T\). The results show that, as expected, \({\cal M}\) increases monotonically with increasing \(w\) and thus \(H\), for fixed \(T\). For small \(h\), i.e., \(w-1\to 0^{+}\), the values of \({\cal M}\) satisfy the relation \({\cal M}=(s/q)h\), in accord with (5.7) and hence are larger for larger \(s\). However, as is evident from the series expansion (5.11) and from Fig. 2, this monotonicity is not preserved in \({\cal M}\) for this 1D lattice at large \(w\).
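These curves follow directly from Eqs. (5.2)-(5.5) together with Eq. (4.6); the following minimal Python sketch (ours, for illustration only, with function and variable names that do not come from any released code) reproduces the \({\cal M}(w)\) behavior shown in Figs. 1 and 2 for \(q=5\), \(v=2\):

```python
import numpy as np

def order_parameter_1d(q, s, v, w):
    """M from Eq. (5.5) and the order parameter calM from Eq. (4.6)
    for the infinite 1D Potts chain in a generalized field."""
    A = q + s * (w - 1.0) + v * (w + 1.0)          # Eq. (5.3)
    R = A ** 2 - 4.0 * v * (q + v) * w             # Eq. (5.4)
    sqrtR = np.sqrt(R)
    M = (w / sqrtR) * (s + v - 2.0 * v * (q + v) / (A + sqrtR))  # Eq. (5.5)
    calM = (q * M - s) / (q - s)                   # Eq. (4.6), valid for s < q
    return M, calM

# illustrative values of Figs. 1 and 2: q = 5, v = 2, varying w (i.e. H at fixed T)
w = np.linspace(1.0, 40.0, 400)
curves = {s: order_parameter_1d(5, s, 2.0, w)[1] for s in (1, 2, 3, 4)}
```

At \(w=1\) (i.e. \(h=0\)) the sketch returns \(M=s/q\) and \({\cal M}=0\), as required by Eq. (4.1) and the normalization in Eq. (4.6).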
### \(L_{y}=2\) Lattice Strips
In [8] we calculated \(Z(G,q,s,v,w)\) for the width \(L_{y}\) strips of the square and triangular lattice. This work generalized our previous calculations of \(Z(G,q,v)\) in zero external field
on these strips [21; 22]. In the infinite-length limit (independent of longitudinal boundary conditions), the reduced free energy is given, respectively, by \(f_{sq,L_{y}=2}=(1/2)\ln\lambda_{sq,L_{y}=2}\) and \(f_{tri,L_{y}=2}=(1/2)\ln\lambda_{tri,L_{y}=2}\), where \(\lambda_{sq,L_{y}=2}\) and \(\lambda_{tri,L_{y}=2}\) are roots of respective degree-5 and degree-6 algebraic equations. Hence, it is not possible to calculate the derivatives \(M=w\partial f/\partial w\) analytically to give explicit expressions for \(M\) and \({\cal M}\) for these infinite-length lattice strips. However, using numerical differentiation, it is still possible to obtain values for these quantities, given input values for \(v\), \(q\), and \(s\). Using this method and again taking the illustrative values \(q=5\) and \(v=2\), we show plots of \({\cal M}\) for the infinite-length strips of the square and triangular lattices with width \(L_{y}=2\) in Figs. 3-6. As was the case with the 1D lattice, for small \(h\), the values of \({\cal M}\) satisfy the relation \({\cal M}=(s/q)h\), and thus are larger for larger \(s\), but this monotonicity relation does not apply for large \(w\).
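Since \(\lambda_{sq,L_{y}=2}\) and \(\lambda_{tri,L_{y}=2}\) are only available as roots of degree-5 and degree-6 equations, \(M=w\,\partial f/\partial w\) must be evaluated numerically. A minimal central-difference sketch (illustrative Python, assuming a user-supplied callable `f(w)` that returns the reduced free energy per site at fixed \(q\), \(s\), \(v\)) is:

```python
import numpy as np

def M_from_free_energy(f, w, eps=1.0e-5):
    """Numerical estimate of M = w * df/dw by a central finite difference.
    `f(w)` returns the reduced free energy per site, e.g. (1/2)*ln(lambda)
    with lambda the dominant root of the degree-5 or degree-6 equation."""
    return w * (f(w + eps) - f(w - eps)) / (2.0 * eps)

def order_parameter(M, q, s):
    """Order parameter of Eq. (4.6)."""
    return (q * M - s) / (q - s)
```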
## VI Thermodynamic properties and critical behavior
As discussed in Sect. II above, in the presence of the generalized external magnetic field defined in Eq. (3), the symmetry group of \({\cal H}\) and \(Z\) is reduced from \({\cal S}_{q}\) at \(H=0\) to the tensor product in Eq. (4), and this further simplifies to \({\cal S}_{q-1}\) if \(s=1\), in which case the external field favors or disfavors only a single spin value. From the identity (14), the case \(s=q-1\) is effectively equivalent to the conventional case \(s=1\). However, if \(s\) is in the interval
\[2\leq s\leq q-2\, \tag{16}\]
then the general model of Eqs. (2) and (3) exhibits properties that are interestingly different from those of a \(q\)-state Potts model in a conventional magnetic field. In this section we will consider both signs of \(H\) and \(J\). With a conventional magnetic field, at a given temperature \(T\), if \(H\gg|J|\), the interaction with the external field dominates over the spin-spin interaction, and if \(h=\beta H\) is sufficiently large, the spins are frozen to the single favored value. In contrast, in the model with a generalized magnetic field, if \(s\) lies in the interval (16), and if \(|H|\gg|J|\), this effectively reduces the model to (i) an \(s\)-state Potts model if \(H>0\), or (ii) a \((q-s)\)-state Potts model if \(H<0\). For given values of \(q\) and \(s\), taking the thermodynamic limit of a given regular lattice, there are, in general, four types of possible models, depending on the sign of \(H\) and the sign of \(J\). A discussion of these models, including the types of critical behavior, where present, in the case of the square lattice was given in (Section 4 of) [7], with details for the illustrative case \(q=5\) and \(s=2\). We generalize this here to \(q\geq 5\). For \(H=0\), the ferromagnetic version of the model has a first-order phase transition, with spontaneous breaking of the \({\cal S}_{q}\) symmetry, at \(K_{c}=\ln(1+\sqrt{q})\), while the
antiferromagnetic version has no finite-temperature phase transition and is disordered even at \(T=0\)[2; 3]. For \(H>0\) and \(H\gg|J|\), the theory reduces effectively to a two-state Potts model, i.e., an Ising model. Owing to the bipartite property of the square lattice, there is an elementary mapping that relates the ferromagnetic and antiferromagnetic versions of the model, and, as is well known, both have a second-order phase transition, with spontaneous symmetry breaking of the \({\cal S}_{2}\approx{\mathbb{Z}}_{2}\) symmetry, at \(|K_{c}|=\ln(1+\sqrt{2})\simeq 0.881\) (where \(K=\beta J\)), with thermal and magnetic critical exponents \(y_{t}=1\), \(y_{h}=15/8\), described by the rational conformal field theory (RCFT) with central charge \(c=1/2\). For \(H<0\) and \(|H|\gg|J|\), the theory effectively reduces to a \((q-2)\)-state Potts model. In the ferromagnetic case, \(J>0\), if (a) \(q=5\), then the resultant 3-state Potts ferromagnet has a well-understood second-order phase transition, with spontaneous symmetry breaking of the \({\cal S}_{3}\) symmetry, at \(K_{c}=\ln(1+\sqrt{3})\simeq 1.01\), with thermal and magnetic critical exponents \(y_{t}=6/5\), \(y_{h}=28/15\), described by a RCFT with central charge \(c=4/5\); (b) if \(q=6\), then the resultant 4-state Potts ferromagnet also has a second-order phase transition with thermal and magnetic critical exponents \(y_{t}=3/2\), \(y_{h}=15/8\), described by a RCFT with central charge \(c=1\)[2; 23; 24]; and (c) if \(q\geq 7\), then the resultant Potts ferromagnet has a first-order transition [2; 3]. In the antiferromagnetic case, \(J<0\), if \(q=5\), the resultant 3-state Potts antiferromagnet has no finite-temperature phase transition but is critical at \(T=0\) (without frustration), with nonzero ground-state entropy per site \(S/k_{B}=(3/2)\ln(4/3)\simeq 0.432\)[2; 25]. If \(q\geq 6\), then the resultant \((q-2)\)-state Potts antiferromagnet (on the square lattice) does not have any symmetry-breaking phase transition at any finite temperature and is disordered also at \(T=0\). Similar discussions can be given for other lattices.
## VII Conclusions
In this paper we have discussed measures of spin ordering in the \(q\)-state Potts model in a generalized external magnetic field that favors or disfavors spin values in a subset \(I_{s}=\{1,...,s\}\) of the total set of \(q\) values. In particular, we have constructed an order parameter \({\cal M}\) (given in Eq. (4.6)) and have presented an illustrative evaluation of it, together with relevant series expansions, for the (thermodynamic limit of the) one-dimensional lattice, as well as quantitative plots of \({\cal M}\) for this 1D lattice and for strips of the square and triangular lattices.
###### Acknowledgements.
The research of S.-C.C. was supported in part by the Taiwan Ministry of Science and Technology grant MOST 111-2115-M-006-012-MY2. The research of R.S. was supported in part by the U.S. National Science Foundation Grant NSF-PHY-22-100533.
|
2309.13164 | Optimal data compression for Lyman-$α$ forest cosmology | The Lyman-$\alpha$ (Ly$\alpha$) three-dimensional correlation functions have
been widely used to perform cosmological inference using the baryon acoustic
oscillation (BAO) scale. While the traditional inference approach employs a
data vector with several thousand data points, we apply near-maximal score
compression down to tens of compressed data elements. We show that carefully
constructed additional data beyond those linked to each inferred model
parameter are required to preserve meaningful goodness-of-fit tests that guard
against unknown systematics, and to avoid information loss due to non-linear
parameter dependencies. We demonstrate, on suites of realistic mocks and DR16
data from the Extended Baryon Oscillation Spectroscopic Survey, that our
compression approach is lossless and unbiased, yielding a posterior that is
indistinguishable from that of the traditional analysis. As an early
application, we investigate the impact of a covariance matrix estimated from a
limited number of mocks, which is only well-conditioned in compressed space. | Francesca Gerardi, Andrei Cuceu, Benjamin Joachimi, Seshadri Nadathur, Andreu Font-Ribera | 2023-09-22T20:06:38Z | http://arxiv.org/abs/2309.13164v2 | # Optimal data compression for Lyman-\(\alpha\) forest cosmology
###### Abstract
The Lyman-\(\alpha\) (Ly\(\alpha\)) three-dimensional correlation functions have been widely used to perform cosmological inference using the baryon acoustic oscillation (BAO) scale. While the traditional inference approach employs a data vector with several thousand data points, we apply near-maximal score compression down to tens of compressed data elements. We show that carefully constructed additional data beyond those linked to each inferred model parameter are required to preserve meaningful goodness-of-fit tests that guard against unknown systematics, and to avoid information loss due to non-linear parameter dependencies. We demonstrate, on suites of realistic mocks and DR16 data from the Extended Baryon Oscillation Spectroscopic Survey, that our compression framework is lossless and unbiased, yielding a posterior that is indistinguishable from that of the traditional analysis. As a showcase, we investigate the impact of a covariance matrix estimated from a limited number of mocks, which is only well-conditioned in compressed space.
keywords: cosmological parameters - large-scale structure of universe - methods: data analysis
## 1 Introduction
In recent decades, the Lyman-\(\alpha\) (Ly\(\alpha\)) forest gained popularity as a probe of the distribution of matter at redshifts \(z>2\). The forest consists of a sequence of absorption lines in high-redshift quasar (QSO) spectra, caused by neutral hydrogen placed along the line-of-sight, and hence it is a tracer of the intergalactic medium (IGM). Therefore, it contains cosmological information, and in particular Lyman-\(\alpha\) clustering shows the distinct baryon acoustic oscillations (BAO) feature. This feature was first detected in the Ly\(\alpha\) auto-correlation function using the Baryon Oscillation Spectroscopic Survey (BOSS) DR9 data (Busca et al., 2013; Slosar et al., 2013; Kirkby et al., 2013), and subsequently extracted from the Ly\(\alpha\) cross-correlation with QSOs using DR11 data (Font-Ribera et al., 2014).
The Ly\(\alpha\) forest auto-correlation and its cross-correlation with quasars have been widely used to place constraints on the cosmological model (e.g. Aubourg et al., 2015; Alam et al., 2017; Cuceu et al., 2019; Alam et al., 2021; Cuceu et al., 2023). These two correlation functions are typically computed on a 2D grid in comoving coordinates along and across the line-of-sight, resulting in high dimensional data vectors, usually 2500 long for the auto-correlation and 5000 for the cross-correlation. However, standard BOSS and eBOSS (du Mas des Bourboux et al., 2020; hereafter dMdB20) Ly\(\alpha\) forest analyses have so far focused on extracting cosmological information from the BAO peak, which is well localized to a smaller subset of bins. This means that the vector can be reduced to a smaller dimensionality, encoding the information we wish to capture. Hence, in this context, applying a data compression scheme could be useful to optimize the inference. In addition, the accuracy of the parameter estimates is tightly linked to the covariance matrix of the data vector, under the assumption of a Gaussian likelihood. As the true covariance \(\mathbf{\Sigma}\) of the correlation function is inaccessible, standard analyses usually estimate it either from a large set of mocks or analytically from models of the covariance matrix (Kitaura et al., 2016; Wadekar et al., 2020). In Ly\(\alpha\) analyses, producing mocks can be a highly computationally-expensive process; therefore, only a limited number is available, 100 in the case of dMdB20. However, if the number of samples is significantly lower than the number of data points, the estimate of the covariance is singular and has no inverse (Hartlap et al., 2007; Dodelson & Schneider, 2013; Taylor & Joachimi, 2014; Sellentin & Heavens, 2015; Percival et al., 2021).
In the eBOSS DR16 analysis, the covariance matrix \(\mathbf{\Sigma}\) is computed via the sub-sampling method, which, given some dataset, consists of computing the covariance of correlation functions obtained in individual subsamples of the sky. Despite being larger (\(\sim 800\)) than the number of mocks (100), the number of subsamples is still lower than the number of data points (2500-5000); hence, the covariance matrix must be tested. Alternatively, in the same analysis, the authors computed a Gaussian covariance matrix using the Wick approximation (Delubac et al., 2015) and used it to benchmark the covariance computed from the sub-sampling method. The accuracy of the covariance matrix would increase by alleviating the mismatch between the number of bins and the number of mocks. This can be done by applying a data compression algorithm and evaluating the (compressed) data covariance matrix in a new space characterized by a lower dimensionality. In particular, given the available set of a hundred mocks, we reduce each of them to a set of compressed data vectors and compute a newly defined mock sample covariance, which is a good estimator of the true covariance, given that the length of the compressed data vector is now much smaller than the number of mocks. Then, a comparison between the covariance matrix of the data, mapped into the compressed space, and the mock sample covariance, obtained from the compressed vector, can clarify whether there has been an underestimation or overestimation of the contours in the standard analyses. Moreover, we are interested in obtaining a more sensitive goodness of fit test. The length of Ly\(\alpha\) correlation data vectors is of the order of \(\mathcal{O}(10^{3})\), which could easily hide any bad fit in a subset of the data. By reducing the dimensionality of the data vector through compression, we wish to obtain a test that would highlight when a few important points are off.
Driven by these optimization problems, we perform the inference analysis on realistic Ly\(\alpha\)\(\times\)Ly\(\alpha\) auto- and Ly\(\alpha\)\(\times\)QSO cross-correlation functions in a data compression framework. The compression algorithm we use is _score compression_(Alsing and Wandelt, 2018), under the hypothesis of a Gaussian likelihood. By construction, the dimensionality of the compressed data vector will be equal to the number of parameters we wish to keep information of, namely \(\mathcal{O}(10)\).
The paper is structured as follows. We start in Sect. 2 by outlining the method, explaining the computation of the covariance matrix, and introducing the modelling and the basic idea behind score compression. We proceed in Sect. 3 by testing the compression algorithm against loss of information, comparing the inferred posterior distribution for our sampled parameters in the traditional and compressed frameworks. In Sect. 4, we compare the constraining power of the original estimated covariance matrix against the mock-to-mock covariance. We then perform goodness of fit tests in the compressed framework in Sect. 5. Throughout the analysis a tight prior on the BAO parameters is imposed to overcome the problem of the non-linear relation between these and their corresponding summary statistics components. We relax the prior constraint, and hence made the analysis more generalizable, by extending the framework as described in Sect. 6. An application of our new framework to eBOSS DR16 data is presented in Sect. 7. Conclusions are drawn in Sect. 8.
Making sure the analysis is both optimized and reliable is key to interpret the vast amount of Ly\(\alpha\) forest data which will become available from the Dark Energy Spectroscopic Instrument (DESI).
## 2 Method
Generically referring to the Ly\(\alpha\) auto- and cross-correlations as the data vectors, the goal of this work is to study data compression in the context of Ly\(\alpha\) forest 3D analyses. In particular, this means compressing the data down to a set of summary statistics \(\mathbf{t}\), which will encode into a shorter vector the information we are interested in. As we have just seen, this also benefits the computation of the covariance matrix. The new 'compressed' framework is tested against the traditional analysis while performing parameter inference. To evaluate posterior distributions we use the nested sampler PolyChord (Handley et al., 2015a,b).
We start in Sect. 2.1 by introducing the mocks used in this analysis, with a focus on the computation of the covariance matrix. We then describe the modelling of the Ly\(\alpha\)\(\times\)Ly\(\alpha\) and the cross Ly\(\alpha\)\(\times\)QSO power spectra in Sect. 2.2, as implemented in vega1(Cuceu et al., 2023), and the set of randomly generated _Monte Carlo realizations_ of the correlation function in Sect. 2.3. In Sect. 2.4 we finally outline the compression method used, namely _score compression_.
Footnote 1: [https://github.com/andreicuceu/vega](https://github.com/andreicuceu/vega)
### Synthetic data vector and covariance
In this work we use a set of 100 realistic Ly\(\alpha\) mocks, with and without contaminants, which were produced for the Ly\(\alpha\) eBOSS DR16 analysis (du Mas des Bourboux et al., 2020). The synthetic Ly\(\alpha\) transmitted fluxes are produced using the CoLoRe (Ramirez-Perez et al., 2022) and Ly\(\alpha\)CoLoRe (Farr et al., 2020) packages, from the same cosmology for all the mocks. Synthetic quasar spectra are then generated given some astrophysical and instrumental prescriptions, and contaminants are added if requested. Then the mocks run through the same analysis pipeline (picca2) as the real data, resulting in measured auto- and cross-correlation functions (dMdB20). These are derived from computing the correlation function in each HEALPix3(Gorski et al., 2005) pixel -- about 880 pixels (subsamples) for the eBOSS footprint (NSIDE=16) -- and evaluating the mean and covariance over the full set of pixels of the mock, to be then assigned to the entire survey. In this way, for every \(i\)-th mock, there will be a measurement of both the correlation function and the covariance matrix \(\mathbf{C}_{i}\), which will be only an estimate of the true covariance \(\mathbf{\Sigma}\) as mentioned above. In each subsample, the correlation has a size of either 2500 (\(\xi_{\text{auto}}\)) or 5000 (\(\xi_{\text{cross}}\)) bins, hence the number of subsamples (880 pixels) is significantly lower than the number of data points (2500 or 5000). This means that the covariance should be singular; however, off-diagonal elements of the correlation matrix are smoothed to make it positive definite (dMdB20).
Footnote 3: [https://healpix.sourceforge.io](https://healpix.sourceforge.io)
Finally, given the same hundred mocks, it is possible to define a _stack_ of them. In particular, the correlation function for the _stack_ of mocks is obtained by collecting all the subsamples (for all the hundred mocks), and computing the mean and covariance of the correlation functions computed in each of them, effectively reducing the noise. We will refer to the contaminated auto- and cross- mock correlations of the _stack_ as _stacked correlations_.
In this analysis, we use the same scale cuts as in eBOSS DR16 (du Mas des Bourboux et al., 2020), assuming \(r_{\text{min}}=10\,h^{-1}\)Mpc, up to \(r_{\text{max}}=180\,h^{-1}\)Mpc. The effective redshift of the correlation functions is \(z_{\text{eff}}=2.3\).
### Modelling and parameter space
To model the Ly\(\alpha\) correlation functions we follow Eq. (27) of du Mas des Bourboux et al. (2020), while applying the same prescriptions as in Gerardi et al. (2022). Given a certain cosmological model and a corresponding isotropic linear matter power spectrum \(P(k,z)\), the Ly\(\alpha\) auto and Ly\(\alpha\)-QSO cross power spectra are computed as
\[P_{\rm Ly\alpha}(k,\mu_{k},z)=b_{\rm Ly\alpha}^{2}\left(1+\beta_{\rm Ly\alpha}\mu_{k}^{2}\right)^{2}F_{\rm nl,Ly\alpha}^{2}(k,\mu_{k})P(k,z)\ ; \tag{1}\]
\[P_{\times}(k,\mu_{k},z)=b_{\rm Ly\alpha}\left(1+\beta_{\rm Ly\alpha}\mu_{k}^{2}\right)\times b_{\rm QSO}\left(1+\beta_{\rm QSO}\mu_{k}^{2}\right)F_{\rm nl,QSO}(k_{\parallel})P(k,z)\, \tag{2}\]
where \(\mu_{k}=k_{\parallel}/k\), with \(k\) and \(k_{\parallel}\) the wave vector modulus and its line-of-sight component, respectively. On one hand, the Ly\(\alpha\)\(\times\)Ly\(\alpha\) power spectrum in Eq. (1) depends on the Ly\(\alpha\) forest linear bias \(b_{\rm Ly\alpha}\)
and RSD parameter \(\beta_{\rm Ly\alpha}=\frac{b_{\eta,{\rm Ly}\alpha}f(z)}{b_{\rm Ly\alpha}}\), where \(b_{\eta,{\rm Ly}\alpha}\) is an extra unknown bias, the velocity divergence bias, and \(f(z)\) the logarithmic growth rate. The \(F_{\rm nl,Ly\alpha}\) term accounts for non-linear corrections (Arinyo-i Prats et al., 2015). On the other hand, the quasar parameters that contribute to the Ly\(\alpha\times\)QSO power spectrum in Eq. (2) are the quasar linear bias \(b_{\rm QSO}\) and the redshift-space distortions (RSD) term \(\beta_{\rm QSO}=f(z)/b_{\rm QSO}\). Finally, we model non-linear effects of quasars and redshift errors following du Mas des Bourboux et al. (2020), using a Lorentzian function
\[F_{\rm nl,QSO}(k_{\parallel})=\left[1+\left(k_{\parallel}\sigma_{\nu}\right) ^{2}\right]^{-1/2}\, \tag{3}\]
where \(\sigma_{\nu}\) is the velocity dispersion.
The power spectra in Eqs. (1-2) only account for Ly\(\alpha\) flux and in reality this is also contaminated by absorption lines due to heavy elements, generally referred to as metals, and high column density (HCD) systems (Bautista et al., 2017; Font-Ribera et al., 2012). Let us first focus on the modelling of the HCDs. Font-Ribera et al. (2012) showed their broadening effect along the line-of-sight can be modeled at the level of new effective Ly\(\alpha\) bias and RSD parameters
\[b^{{}^{\prime}}_{\rm Ly\alpha}=b_{\rm Ly\alpha}+b_{\rm HCD}F_{\rm HCD}(k_{ \parallel})\, \tag{4}\]
\[b^{{}^{\prime}}_{\rm Ly\alpha}\beta^{{}^{\prime}}_{\rm Ly\alpha}=b_{\rm Ly\alpha}\beta_{\rm Ly\alpha}+b_{\rm HCD}\beta_{\rm HCD}F_{\rm HCD}(k_{\parallel})\, \tag{5}\]
with \(b_{\rm HCD}\) and \(\beta_{\rm HCD}\) being the linear bias and RSD parameters. \(F_{\rm HCD}(k_{\parallel})\) is a function of the line-of-sight wavenumber, and it is modeled following dMdB20. On the other hand, metals contribute to the final auto- and cross-correlation functions as per
\[\xi^{{}^{\prime}}_{\rm auto}=\xi_{\rm Ly\alpha\times Ly\alpha}+\sum_{m}\xi_{ \rm Ly\alpha\times m}+\sum_{m_{1},m_{2}}\xi_{m_{1}\times m_{2}}\, \tag{6}\]
\[\xi^{{}^{\prime}}_{\rm cross}=\xi_{\rm Ly\alpha\times QSO}+\sum_{m}\xi_{ \rm QSO\times m}\, \tag{7}\]
where \(m\) generically refer to a metal and the sums are performed over all possible metals considered. The modelling of the cross-correlation of a metal with other metals (\(\xi_{m_{1}\times m_{2}}\)) and with Ly\(\alpha\) (\(\xi_{\rm Ly\alpha\times m}\)) and QSO (\(\xi_{\rm QSO\times m}\)) follows the modelling of the auto- and cross-correlations of the Ly\(\alpha\), and each metal line has a linear bias \(b_{m}\) and RSD parameter \(\beta_{m}=b_{\eta,m}f(z)/b_{m}\). Following dMdB20, we fix all \(\beta_{m}=0.5\), and sample the metal biases.
Based on this modelling, we use the code vega to compute the two-dimensional correlation functions \(\xi\). This same code computes both the BAO feature parameters \(\{\alpha_{\parallel},\alpha_{\perp}\}\), which shift the peak along and across the line-of-sight, and the Gaussian smoothing (Farr et al., 2020), which accounts for the low resolution of the mocks and is parameterized by the \(\{\sigma_{\parallel},\sigma_{\perp}\}\) smoothing parameters.
At the inference level, the set of sampled parameters is \(\boldsymbol{p_{\rm S}}=\{\alpha_{\parallel},\alpha_{\perp},b_{\rm Ly\alpha},\beta_{\rm Ly\alpha},b_{\rm QSO},\beta_{\rm QSO},\sigma_{v},\sigma_{\parallel},\sigma_{\perp}\}\), which is extended to include also \(\{b_{\eta,{\rm m}},b_{\rm HCD},\beta_{\rm HCD}\}\) when also fitting for contaminants. In this notation, \(b_{\eta,{\rm m}}\) is the velocity divergence bias for the metal m -- here we consider SiII(1260), SiII(1193), SiIII(1207) and SiII(1190).
For all these parameters we choose uniform priors, which are listed in Tab. 1. The only exception is given by \(\beta_{\rm HCD}\), for which, following the previous eBOSS DR16 analysis, we impose an informative Gaussian prior.
### Monte Carlo realizations
We here introduce a different kind of simulated data, which we will later use, defined as _Monte Carlo realizations_. They are correlation functions obtained by adding noise on top of the model, as defined in Sect. 2.2. The noise is drawn using the covariance matrix of one of the hundred mock correlations introduced above. What this means is that we can imagine every data point to be generated from a normal distribution \(\mathcal{N}(\boldsymbol{\xi},\boldsymbol{C})\), where \(\boldsymbol{\xi}\) is the model correlation function and \(\boldsymbol{C}\) is given by the covariance of the first realistic mock. Using Monte Carlo simulations comes with two advantages. First, it is possible to generate as many realizations as needed to build any statistics. Secondly, we have full control over the model, and hence clear knowledge of the underlying physics.
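A minimal sketch of how such realizations can be drawn (illustrative Python with placeholder names; not part of the picca or vega codes) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_realizations(xi_model, cov, n_draws):
    """Draw correlation-function realizations d ~ N(xi_model, cov), where
    xi_model is the model correlation function (e.g. 2500 bins for the
    auto-correlation) and cov is the covariance taken from one of the
    realistic mocks."""
    return rng.multivariate_normal(mean=xi_model, cov=cov, size=n_draws)

# e.g. the 300 auto-correlation realizations used in Sect. 5:
# xi_mc = monte_carlo_realizations(xi_auto_model, cov_mock_1, 300)
```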
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
 & & & \multicolumn{2}{c}{Testing the framework (_stacked_)} & \multicolumn{2}{c}{Testing the covariance (single mock)} \\ \cline{4-7}
Parameter & Fiducial & Prior & Traditional & Compression & Original covariance & Mock-to-mock covariance \\ \hline
\(\alpha_{\parallel}\) & 1.00 & \(\mathcal{U}(0.88,1.14)\) & \(1.000\pm 0.002\) & \(1.000\pm 0.002\) & \(1.003\pm 0.019\) & \(1.004\pm 0.021\) \\
\(\alpha_{\perp}\) & 1.01 & \(\mathcal{U}(0.88,1.14)\) & \(1.004\pm 0.003\) & \(1.004\pm 0.003\) & \(1.002\pm 0.027\) & \(1.004\pm 0.036\) \\
\(b_{\rm Ly\alpha}\) & \(-0.14\) & \(\mathcal{U}(-2,0)\) & \(-0.135\pm 0.001\) & \(-0.135\pm 0.001\) & \(-0.125\pm 0.004\) & \(-0.124\pm 0.006\) \\
\(\beta_{\rm Ly\alpha}\) & 1.41 & \(\mathcal{U}(0,5)\) & \(1.47\pm 0.01\) & \(1.47\pm 0.01\) & \(1.67^{+0.07}_{-0.08}\) & \(1.68\pm 0.11\) \\
\(b_{\rm QSO}\) & 3.81 & \(\mathcal{U}(0,10)\) & \(3.80\pm 0.01\) & \(3.80\pm 0.01\) & \(3.82\pm 0.08\) & \(3.81\pm 0.08\) \\
\(\beta_{\rm QSO}\) & 0.25 & \(\mathcal{U}(0,5)\) & \(0.25\pm 0.01\) & \(0.25\pm 0.01\) & \(0.27\pm 0.04\) & \(0.27\pm 0.04\) \\
\(\sigma_{\rm v}\,({\rm Mpc}/h)\) & 2.87 & \(\mathcal{U}(0,15)\) & \(2.82\pm 0.04\) & \(2.82\pm 0.04\) & \(3.22^{+0.32}_{-0.28}\) & \(3.21\pm 0.30\) \\
\(\sigma_{\parallel,\rm sm}\) & 2.05 & \(\mathcal{U}(0,10)\) & \(2.08\pm 0.01\) & \(2.08\pm 0.01\) & \(2.10\pm 0.09\) & \(2.10\pm 0.10\) \\
\(\sigma_{\perp,\rm sm}\) & 2.35 & \(\mathcal{U}(0,10)\) & \(2.33\pm 0.01\) & \(2.33\pm 0.01\) & \(2.23\pm 0.11\) & \(2.21\pm 0.13\) \\ \hline
\(b_{\rm HCD}[\times 10^{-2}]\) & \(-1.70\) & \(\mathcal{U}(-20,0)\) & \(-2.12\pm 0.08\) & \(-2.13\pm 0.07\) & \(-2.98\pm 0.54\) & \(-3.07\pm 0.74\) \\
\(\beta_{\rm HCD}\) & 1.57 & \(N(0.5,0.09)\) & \(0.86\pm 0.06\) & \(0.86\pm 0.06\) & \(0.50\pm 0.09\) & \(0.50\pm 0.09\) \\
\(b_{\eta,{\rm SiII}(1260)}[\times 10^{-3}]\) & \(-0.58\) & \(\mathcal{U}(-50,50)\) & \(-0.59\pm 0.04\) & \(-0.59\pm 0.04\) & \(-0.83\pm 0.33\) & \(-0.86\pm 0.40\) \\
\(b_{\eta,{\rm SiII}(1193)}[\times 10^{-3}]\) & \(-1.12\) & \(\mathcal{U}(-50,50)\) & \(-1.09\pm 0.03\) & \(-1.09\pm 0.03\) & \(-0.83\pm 0.27\) & \(-0.83\pm 0.30\) \\
\hline \hline
\end{tabular}
\end{table}
### Score compression
To reduce the dimensionality of our datasets we use score compression (Alsing and Wandelt, 2018). Given a known form for the log-likelihood function \(\mathcal{L}\), this method corresponds to linear transformations of the data, based on the idea of compressing them down to the score function \(\mathbf{s}=\nabla\mathbf{L}_{*}\). The components of the compressed vector are the derivatives of the log-likelihood function, evaluated at some fiducial set of parameters \(\mathbf{\theta}_{*}\), with respect to the parameters of interest \(\mathbf{\theta}\). Under the assumptions that the likelihood function is Gaussian and the covariance \(\mathbf{C}\) does not depend on data, from the data \(\mathbf{d}\) the compressed data vector is obtained as
\[\mathbf{t}=\nabla\mathbf{\mu}_{*}^{T}\mathbf{C}^{-1}(\mathbf{d}-\mathbf{\mu}_{*})\, \tag{8}\]
where \(\mathbf{\mu}_{*}\) is the fiducial model. In our case the model corresponds to the correlation function \(\mathbf{\xi}\), described earlier in Sect. 2.2. The corresponding likelihood distribution in compressed space will be then given by
\[P(\mathbf{t}|\mathbf{\theta})=\frac{1}{(2\pi)^{\frac{n}{2}}|\mathbf{F}|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}[\mathbf{t}-\mathbf{\mu}_{\mathbf{t}}(\mathbf{\theta})]^{T}\mathbf{F}^{-1}[\mathbf{t}-\mathbf{\mu}_{\mathbf{t}}(\mathbf{\theta})]\right]\, \tag{9}\]
where \(n\) is the length of \(\mathbf{t}\), \(\mathbf{\mu}_{\mathbf{t}}(\mathbf{\theta})\) is the compressed model \(\mathbf{\mu}\) evaluated at \(\mathbf{\theta}\), namely \(\mathbf{\mu}_{\mathbf{t}}(\mathbf{\theta})=\nabla\mathbf{\mu}_{*}^{T}\mathbf{C}^{-1}[\mathbf{\mu}(\bm {\theta})-\mathbf{\mu}_{*}]\), and
\[\mathbf{F}=[\nabla\mathbf{\mu}_{*}]^{T}\mathbf{C}^{-1}[\nabla^{T}\mathbf{\mu}_{*}] \tag{10}\]
is the Fisher matrix.
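Eqs. (8) and (10) amount to a few matrix products; a minimal numpy sketch (with illustrative names only, not code from the actual pipeline) is given below. The compressed model \(\mathbf{\mu}_{\mathbf{t}}(\mathbf{\theta})\) follows from the same operation with \(\mathbf{\mu}(\mathbf{\theta})\) in place of \(\mathbf{d}\).

```python
import numpy as np

def score_compress(d, mu_fid, grad_mu_fid, cov):
    """Score compression of Eq. (8) and Fisher matrix of Eq. (10).

    d           : data vector (length n_data)
    mu_fid      : fiducial model evaluated at theta_*
    grad_mu_fid : model derivatives at theta_*, shape (n_params, n_data)
    cov         : data covariance C, shape (n_data, n_data)
    """
    cinv_resid = np.linalg.solve(cov, d - mu_fid)     # C^{-1} (d - mu_*)
    cinv_grad = np.linalg.solve(cov, grad_mu_fid.T)   # C^{-1} grad^T
    t = grad_mu_fid @ cinv_resid                      # Eq. (8)
    F = grad_mu_fid @ cinv_grad                       # Eq. (10)
    return t, F
```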
When considering both the auto- and cross-correlations, some parameters will be in common; for this reason, there is the need to build a joint summary statistic. If we define independently the Ly\(\alpha\) auto- and cross- data vectors, characterized by the covariances \(\mathbf{C}_{\text{auto}}\) and \(\mathbf{C}_{\text{cross}}\) respectively, and given they do not correlate with each other, in the joint analysis the full covariance matrix will be given by
\[\mathbf{C}=\begin{pmatrix}\mathbf{C}_{\text{auto}}&0\\ 0&\mathbf{C}_{\text{cross}}\end{pmatrix}. \tag{11}\]
Then the resulting summary statistics vector and Fisher matrix will be respectively obtained as \(\mathbf{t}=\mathbf{t}_{\text{auto}}+\mathbf{t}_{\text{cross}}\) and \(\mathbf{F}=\mathbf{F}_{\text{auto}}+\mathbf{F}_{\text{cross}}\).
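Continuing the sketch above (all array names are placeholders), the joint summary follows directly from the block-diagonal structure of Eq. (11):

```python
# Joint Lya auto- + cross-correlation summary: with the block-diagonal
# covariance of Eq. (11), the compressed vectors and Fisher matrices add.
t_auto, F_auto = score_compress(d_auto, mu_auto_fid, grad_auto_fid, cov_auto)
t_cross, F_cross = score_compress(d_cross, mu_cross_fid, grad_cross_fid, cov_cross)
t_joint = t_auto + t_cross
F_joint = F_auto + F_cross
```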
This compression method is dependent on the choice of the fiducial set of parameters \(\mathbf{\theta}_{*}\), which might not be known _a priori_. However, Alsing and Wandelt (2018) suggest iterating over the _Fisher scoring method_ for maximum-likelihood estimation
\[\mathbf{\theta}_{k+1}=\mathbf{\theta}_{k}+\mathbf{F}_{k}^{-1}\nabla\mathbf{L}_{k}\, \tag{12}\]
until convergence of the full set of parameters. How this is done in our particular case is described at the beginning of Sect. 3. An important note is that this iterative procedure does not take into account the parameter priors, which means it can potentially lead to unusual values for those parameters which are not well constrained by the data.
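A minimal sketch of this iteration (illustrative Python; in practice we additionally take, at each step, the median of the updated parameters over the hundred mocks, as described in Sect. 3) is:

```python
import numpy as np

def fisher_scoring(theta0, grad_loglike, fisher, n_iter=10):
    """Iterate the Fisher-scoring update of Eq. (12),
    theta_{k+1} = theta_k + F_k^{-1} grad L_k, starting from theta0.
    `grad_loglike(theta)` and `fisher(theta)` are user-supplied callables."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        theta = theta + np.linalg.solve(fisher(theta), grad_loglike(theta))
    return theta
```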
In computing the score compression components over the parameters \(\{\alpha_{\parallel},\alpha_{\perp}\}\), we realized that their relation with their corresponding summary statistics components, namely \(\{\mathbf{t}_{\alpha_{\parallel}},\mathbf{t}_{\alpha_{\perp}}\}\), was not monotonic, as per Fig. 1. This is problematic as this means the posterior can have more than one peak (Graff et al., 2011) if we sample over the full [0.01, 1.99] interval. We overcome this complexity by imposing a tighter prior on \(\{\alpha_{\parallel},\alpha_{\perp}\}\) at the sampling step. This prior is designed to allow for \(\alpha_{\parallel}\) values in between the minimum and maximum of \(\mathbf{t}_{\alpha_{\parallel}}(\alpha_{\parallel})\). The same prior is imposed on \(\alpha_{\perp}\). This tightening does not affect the inference when performed on the correlation function of the _stacked_ mock, in which case posteriors are well within this prior, but it proves to be quite important when evaluating the posteriors on the individual mocks. For this reason, we make sure we provide example results for those mocks whose contours are within the prior range.
Later, in Sect. 6 we will see how we can remove the tight prior constraint by evaluating the summary statistics components associated to \(\{\alpha_{\parallel},\alpha_{\perp}\}\) at multiple fiducial values of the BAO parameters, effectively enlarging the compressed vector.
## 3 Compression performance
In this Section we apply the score compression algorithm, outlined in Sect 2.4, to Ly\(\alpha\) auto- and cross-correlations measured from contaminated mocks. The pipeline starts by choosing a fiducial set of parameters for computing the score compressed vector, as per Eq. (8). The fiducial is obtained by iterating over Eq. (12), with \(\mathbf{\theta}_{0}\) given by the best fit of the _stacked_ correlation functions. Given this initial guess, we then iterated assigning to \(\mathbf{\theta}_{k+1}\) the median of the \(\mathbf{\theta}\) values over the hundred mocks at the \(k\)-th step.
The likelihood is assumed to be Gaussian, which has a major impact on the final form of the compressed vector, given that the latter is computed as the gradient of the log-likelihood. We make a consistency check by running the Henze-Zirkler test (Henze and Zirkler, 1990) for multivariate normality in the parameters space for the hundred mocks at the end of the iterative process. Intuitively, this test measures the distance between the measured and target (multivariate) distribution, and it was shown to perform well in high-dimensional problems. The assumption of Gaussianity is also inherited in the compressed space. In general, when mapping in a compressed space, this property might not be preserved, but given that score compression is a linear transformation, that is the case. However, for consistency, we also run the Henze-Zirkler test in the compressed space and found that the summary statistics, computed at the last step of the iteration, follows a multivariate normal distribution. Hence, the Gaussianity assumption is justified.
Provided the fiducial model and the Gaussianity checks, we first test the compression method on the _stack_ of the mocks, with results presented in this Section, and later, in Sect. 4, we compute the covariance matrix for the summary statistics over the set of hundred mocks and compare it to the Fisher matrix as defined in Eq. (10). It is important to keep in mind that, when referring to the Fisher matrix,
Figure 1: This plot shows the behaviour of the summary component \(\mathbf{t}_{\alpha_{\parallel}}\) as a function of \(\alpha_{\parallel}\), which is the parameter it is related to as per Eq. (8), against the value of \(\mathbf{t}_{\alpha_{\parallel}}\) evaluated using \(\alpha_{\parallel}=1.00\) (see Tab. 1), denoted as ‘data’. The remainder of the parameters are set to the fiducial values listed in Tab 1. This figure highlights a non-monotonic relationship between the two parameters, which would lead to multiple peaks in the posterior if a tight prior is not imposed.
we are simply referring to the mapping of the data covariance matrix \(\mathbf{C}\) into the compressed space.
To test the score compression algorithm against the traditional approach, for simplicity, we employ both the contaminated auto- and cross- _stacked correlations_, which are almost noise-free. This choice is motivated by the fact that we imposed a tight prior on the \(\{\alpha_{\parallel},\alpha_{\perp}\}\) parameters to overcome the challenges coming from the non-monotonic relationship between these parameters and their corresponding summary statistics components (see Fig. 1). Thus, experimenting over less noisy mock data facilitates running the test in a case where it is guaranteed that the posteriors will not hit the priors.
For both the traditional (uncompressed data) and the compressed frameworks we run the PolyChord sampler for the auto- and cross- _stacked correlations_, while sampling the full set of 15 model parameters \(\{\alpha_{\parallel},\alpha_{\perp},b_{\rm Ly\alpha},\beta_{\rm Ly\alpha},b_{\rm QSO},\beta_{\rm QSO},\sigma_{\rm v},\sigma_{\parallel},\sigma_{\perp},b_{\eta,{\rm SiII}(1260)},b_{\eta,{\rm SiII}(1193)},b_{\eta,{\rm SiIII}(1207)},b_{\eta,{\rm SiII}(1190)},b_{\rm HCD},\beta_{\rm HCD}\}\), and results are presented in Fig. 2. The two methods agree well with each other, leading to almost identical results. The numerical values of the peaks and \(1\sigma\) confidence intervals of the 1d marginals are presented in Tab. 1 as part of the 'Testing the framework (_stacked_)' set of columns.
Figure 2: Triangle plots of the parameters of interest for the _stack_ of correlation functions computed from a set of 100 mocks. Results are split, for presentation purposes only, into the set of standard parameters \(\{\alpha_{\parallel},\alpha_{\perp},b_{\rm Ly\alpha},\beta_{\rm Ly\alpha},b_{\rm QSO},\sigma_{\rm v},\sigma_{\parallel},\sigma_{\perp}\}\) (lower left panel) and contaminants parameters \(\{b_{\eta,{\rm SiII}(1260)},b_{\eta,{\rm SiII}(1193)},b_{\eta,{\rm SiIII}(1207)},b_{\eta,{\rm SiII}(1190)},b_{\rm HCD}\}\) (upper right panel). The green contours refer to the results obtained performing the inference using the full uncompressed data vector, which we denote as ‘Traditional analysis’, while the blue dashed refer to the compressed analysis results, denoted as ‘Score compression analysis’.
From the table, it can be noticed that in some cases the fiducial parameters used to compute the compression are not within the \(3\sigma\) confidence interval. Despite the fiducial being a first guess, and not necessarily accurate, the contours of the two methods agree well with each other.
We just demonstrated that the score compression inference pipeline leads to the same results as the standard approach. This shows the linearity of the parameters in the model to a good approximation. However, it is important to bear in mind that, in this case, this only holds locally around the fiducial, because of the non-linearity of the components that relate to \(\alpha_{\parallel}\) and \(\alpha_{\perp}\), on which we imposed a tight prior.
## 4 Testing the covariance matrix
An interesting application of the compression algorithm consists of evaluating the accuracy of the covariance matrix \(\mathbf{C}\) by comparing it to the mock-to-mock covariance \(\mathbf{C_{t}}\), which is the covariance matrix of the summary statistics vectors of the set of hundred mocks. We now showcase this application using a single mock.
We recall that the computation of the standard data covariance happens in a setup where the length of the data vector is larger than the number of samples, which is sub-optimal. The covariance should be singular; however, the off-diagonal elements of the correlation matrix are smoothed to make it positive definite (du Mas des Bourboux et al., 2020). Reducing the dimensionality of the data vector via score compression allows us to compute a new covariance matrix \(\mathbf{C_{t}}\), which has a dimensionality significantly lower than the number of samples used to compute it, given that the new data vector will be \(\sim\mathcal{O}(10)\) long. The fact that now the number of mock samples is larger than the number of compressed data points, means that we are now in a framework where the estimated \(\mathbf{C_{t}}\) is in principle a better estimator of the true covariance \(\mathbf{\Sigma}\) in compressed space than \(\mathbf{F}\), which is obtained by mapping the covariance \(\mathbf{C}\) into this space.
We now repeat the same experiment as in Sect. 3 over a single mock and evaluate the posterior using PolyChord for the full set of parameters \(\{\alpha_{\parallel},\alpha_{\perp},b_{\rm Ly\alpha},\beta_{\rm Ly\alpha},b_{\rm QSO},\beta_{\rm QSO},\sigma_{\rm v},\sigma_{\parallel},\sigma_{\perp},b_{\eta,{\rm SiII}(1260)},b_{\eta,{\rm SiII}(1193)},b_{\eta,{\rm SiIII}(1207)},b_{\eta,{\rm SiII}(1190)},b_{\rm HCD},\beta_{\rm HCD}\}\). This is done either using the original covariance matrix \(\mathbf{C}\) (mapped into the compressed space, i.e. the Fisher matrix) in the likelihood in Eq. (9), or instead substituting it with the mock-to-mock covariance \(\mathbf{C_{t}}\), such that the new likelihood is of the form
\[P(\mathbf{t}|\mathbf{\theta})=\frac{1}{(2\pi)^{\frac{n}{2}}|\mathbf{C_{t}}|^{\frac{1}{2}} }\exp\left[-\frac{1}{2}[\mathbf{t}-\mathbf{\mu_{t}}(\mathbf{\theta})]^{T}\mathbf{C_{t}}^{-1}[ \mathbf{t}-\mathbf{\mu_{t}}(\mathbf{\theta})]\right]\ . \tag{13}\]
The inverse of the mock-to-mock covariance in Eq. (13) is corrected by the Hartlap factor (Hartlap et al., 2007; see also Percival et al., 2021) with respect to the covariance directly computed from mocks \(\mathbf{C_{t}}^{\mathrm{mocks}}\), such that \(\mathbf{C_{t}}^{-1}=(h\mathbf{C_{t}}^{\mathrm{mocks}})^{-1}\). The Hartlap factor \(h\) is given by
\[h=\frac{n_{s}-1}{n_{s}-n_{d}-2}\, \tag{14}\]
where \(n_{s}\) and \(n_{d}\) are the number of mocks and the length of the compressed summary, respectively. In our test case, given that \(n_{s}=100\) and \(n_{d}=15\) -- equal to the number of sampled parameters -- the Hartlap factor \(h\) has a value of \(\sim 1.2\). The larger the set of mocks, the closer to unity, and hence the more negligible, the correction factor. Once again, the choice of the tight prior on both \(\{\alpha_{\parallel},\alpha_{\perp}\}\) affected the choice of the set of mocks used to run this second experiment. However, the goal of this second experiment is to provide an example case of testing the accuracy of the subsampling estimation of the covariance matrix. If the method is demonstrated to work effectively over some subset of mocks, it is expected that this will also be the case for the others.
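For reference, a minimal sketch of this correction (illustrative Python with placeholder names; not the released analysis code) is:

```python
import numpy as np

def hartlap_corrected_precision(mock_summaries):
    """Mock-to-mock covariance of the compressed summaries and the
    Hartlap-corrected precision matrix C_t^{-1} = (h C_t^mocks)^{-1},
    with h = (n_s - 1) / (n_s - n_d - 2) as in Eq. (14).
    `mock_summaries` has shape (n_s, n_d), e.g. (100, 15)."""
    n_s, n_d = mock_summaries.shape
    c_mocks = np.cov(mock_summaries, rowvar=False)
    h = (n_s - 1.0) / (n_s - n_d - 2.0)   # ~1.2 for n_s = 100, n_d = 15
    return np.linalg.inv(h * c_mocks)
```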
The results for the BAO parameters \(\{\alpha_{\parallel},\alpha_{\perp}\}\) and the Ly\(\alpha\) parameters \(\{b_{\mathrm{Ly}\alpha},\beta_{\mathrm{Ly}\alpha}\}\) are shown in Fig. 3, while the full set is presented in Sect. A and listed in Tab. 1 ('Testing the covariance (single mock)' columns). In this test case, using the mock-to-mock covariance results in enlargements of the posteriors for the BAO parameters: while using the original covariance matrix provides \(\{\alpha_{\parallel}=1.003\pm 0.019,\alpha_{\perp}=1.002\pm 0.027\}\), the mock-to-mock covariance results in \(\{\alpha_{\parallel}=1.004\pm 0.021,\alpha_{\perp}=1.004\pm 0.036\}\). On the other hand, the Ly\(\alpha\) linear bias and RSD parameter absolute errors increase by 50% each, with final relative errors of about \(5-6\%\). The uncertainties of the vast majority of the other parameters agree remarkably well.
We end this discussion on covariance matrix estimation by noting that the test presented here is meant as a showcase of the usefulness of compressing Ly\(\alpha\) forest correlation functions. However, proper testing of the Ly\(\alpha\) forest covariance matrices would require a more comprehensive analysis using a larger sample of mocks4, and comparison with other estimation methods (see e.g., du Mas des Bourboux et al., 2020).
Figure 3: Triangle plots of the BAO parameters of interest \(\{\alpha_{\parallel},\alpha_{\perp}\}\) and the Ly\(\alpha\) parameters \(\{b_{\mathrm{Ly}\alpha},\beta_{\mathrm{Ly}\alpha}\}\) for one set of the Ly\(\alpha\) auto- and cross- mock correlations. The blue filled contours refer to the results obtained performing the inference using the original covariance matrix \(\mathbf{C}\) (mapped into the compressed space) in the likelihood function, and hence are denoted as ‘Original covariance’, while the red dashed refer to the case in which the mock-to-mock covariance matrix is used, denoted as ‘Mock-to-mock covariance’.
## 5 Goodness of fit test
In this section, we make a step forward with respect to the original aim of the work, by considering goodness of fit tests. For Ly\(\alpha\) correlation functions, the length of the data vector can go from 2500, considering only the auto-, to 7500 if considering also the cross-correlation. In a context where only \(\sim\mathcal{O}(10)\) parameters are sampled, any bad fit for noisy data can be hard to detect. Reducing the dimensionality of the data via score compression, we investigate whether it would be easier for any bad fit to be spotted. Hence, given the results presented in Sect. 3, we test the robustness of the method against unmodelled effects in the correlation functions, via the \(\chi^{2}\) statistics.
To this end we test the goodness of fit on contaminated data when metals are not modelled. For simplicity, here we restrict to the Ly\(\alpha\) auto-correlation alone and without considering contamination from HCD. The sampled parameters will only be \(\{\alpha_{\parallel},\alpha_{\perp},b_{\rm Ly\alpha},\beta_{\rm Ly\alpha},\sigma_{\parallel},\sigma_{\perp}\}\). Tests are run by constructing the \(\chi^{2}\) distributions over a set of 300 Monte Carlo realizations of the auto-correlation, introduced in Sect. 2.3: for each realization we run a minimizer and evaluate the \(\chi^{2}\) at the best fit.
We considered two main Monte Carlo populations: with and without metal contamination. The difference between the two is shown in the wedge plot of Fig. 4, which is built by averaging over the values of the correlation function in the 'wedge' of the space \(\{r_{\parallel},r_{\perp}\}\) identified by values of \(|\mu|=|r_{\parallel}/r|\) between 0.95 and 1.0. To generate them we used the best fit values of \(\{\alpha_{\parallel},\alpha_{\perp},b_{\rm Ly\alpha},\beta_{\rm Ly\alpha},\sigma_{\parallel},\sigma_{\perp},b_{\eta,{\rm SiII}(1260)},b_{\eta,{\rm SiII}(1193)},b_{\eta,{\rm SiIII}(1207)},b_{\eta,{\rm SiII}(1190)}\}\) for the contaminated _stacked_ Ly\(\alpha\) mock auto-correlation, where depending on the population (contaminated or uncontaminated) the metals' parameters were either included or not.
### Maximal compression
For both the contaminated and uncontaminated mock data, we apply a compression down to the same number of sampled parameters without including contamination in the modelling, with the summary statistics thus given by \(\mathbf{t}_{\rm max}=\{t_{\alpha_{\parallel}},t_{\alpha_{\perp}},t_{b_{\rm Ly\alpha}},t_{\beta_{\rm Ly\alpha}},t_{\sigma_{\parallel}},t_{\sigma_{\perp}}\}\). This is defined as _maximal compression_. In what follows we are interested in learning about the \(\chi^{2}\) distribution for the two Monte Carlo populations.
We found that for both contaminated and uncontaminated data, the \(\chi^{2}\) distributions are similar, with values of the order of \(\mathcal{O}(10^{-10}-10^{-3})\) (left panel of Fig. 5). However, comparing the fits to the contaminated and uncontaminated data, the best-fit values of some parameters are systematically shifted. The distributions of the best-fit values for \(b_{\rm Ly\alpha}\) and \(\beta_{\rm Ly\alpha}\) are shown in the right panels of Fig. 5: for the fits to contaminated data, 80% and 90% of the best-fit values for these two parameters, respectively, lie below the true value.
The \(\chi^{2}\) values remain very small for the fits to contaminated data, which indicates that in the compressed space, the model without contaminants still has enough flexibility to perfectly fit the data: the system has zero degrees of freedom, given that we are sampling six parameters, and the compressed data vector has six components. Instead of the mismatch between the model without contaminants and the contaminated data being visible in the form of large \(\chi^{2}\) values, it is manifest through a systematic shift in the recovered parameter values from the truth, which in a realistic data fitting scenario could not be detected. This is linked to the fact that we are very close to a linear model scenario, meaning that in the compressed space the model still has enough flexibility to fit the data. This motivated a deeper testing of the framework, extending it to extra degrees of freedom as follows.
### Non-maximal compression
Given the problem highlighted in the _maximal_ framework, we tested the pipeline in a _non-maximal compression_ case, where the extra degrees of freedom are given by the metals contaminating the data. Namely, the _maximal_ summary statistics is now extended to include \(\mathbf{t}_{\rm extra}=\{t_{b_{\eta,{\rm SiII}(1260)}},t_{b_{\eta,{\rm SiII}(1193)}},t_{b_{\eta,{\rm SiIII}(1207)}},t_{b_{\eta,{\rm SiII}(1190)}}\}\). Still, metals will not be included in the likelihood modelling. This means that if the quantities of reference here are the compressed data vector
\[\mathbf{t}=\nabla\mathbf{\mu}_{\star}^{T}\mathbf{C}^{-1}(\mathbf{d}-\mathbf{\mu}_{\star})\, \tag{15}\]
the compressed model
\[\mathbf{\mu}_{\mathbf{t}}(\mathbf{\theta})=\nabla\mathbf{\mu}_{\star}^{T}\mathbf{C}^{-1}(\mathbf{\mu}(\mathbf{\theta})-\mathbf{\mu}_{\star})\, \tag{16}\]
and they enter the \(\chi^{2}\) as per
\[\chi^{2}(\mathbf{\theta})=[\mathbf{t}-\mathbf{\mu}_{\mathbf{t}}(\mathbf{\theta})]^{T}\mathbf{F}^{-1}[\mathbf{t}-\mathbf{\mu}_{\mathbf{t}}(\mathbf{\theta})]\, \tag{17}\]
the fiducial model \(\mathbf{\mu}_{\star}\) and its gradient will now include contaminants, whereas \(\mathbf{\mu}(\mathbf{\theta})\) will not, and \(\mathbf{d}\) will either be contaminated or uncontaminated data depending on the population used to build the \(\chi^{2}\) statistics. Now \(\mathbf{t}=\{t_{\mathrm{max}},t_{\mathrm{extra}}\}\). The length of the compressed data vector is ten, where the first six components refer to the sampled parameters, with a remainder of four components, which are fixed and constitute our extra degrees of freedom. Under the approximation that the mean of a \(\chi^{2}\) distribution indicates the number of degrees of freedom of the problem, we would expect that mean to be at least equal to the number of extra degrees of freedom we added. In our case, we expect that for the uncontaminated case, for which we know the modelling is good, the mean will be close to 4 (four metals). We
Figure 4: This wedge plot, for \(|\mu|=|r_{\parallel}/r|\) between 0.95 and 1.0, shows the effect of adding metals (in orange) to the correlation model \(\mathbf{\xi}\) without metals (in blue) along the line-of-sight. For simplicity in the \(\chi^{2}\) analysis we do not include contamination coming from HCD, so these features are only the effects of metal lines. Also, in this example, in order to better visualize the difference between the two, we have been generating noise from the covariance matrix of the _stacked_ auto-correlation mock.
want to test whether in this case a bad fit to the contaminated data is apparent as a mean \(\chi^{2}\) significantly larger than 4.
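For concreteness, a minimal sketch of how Eqs. (15)-(17) can be evaluated in this non-maximal setup (illustrative Python, not the actual pipeline; `model(theta)` would be the sampled model without contaminants, while `mu_fid` and `grad_fid` are computed at the contaminated fiducial and may contain extra rows) is:

```python
import numpy as np

def chi2_compressed(theta, d, mu_fid, grad_fid, cov, model):
    """Chi^2 of Eq. (17) in compressed space.  `grad_fid` may contain
    rows beyond the sampled parameters (non-maximal compression)."""
    cinv = np.linalg.inv(cov)
    t = grad_fid @ cinv @ (d - mu_fid)                 # Eq. (15)
    mu_t = grad_fid @ cinv @ (model(theta) - mu_fid)   # Eq. (16)
    F = grad_fid @ cinv @ grad_fid.T
    r = t - mu_t
    return float(r @ np.linalg.solve(F, r))            # Eq. (17)
```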
The \(\chi^{2}\) histograms are shown in the left panel of Fig. 6: the mean values for the uncontaminated and contaminated cases are respectively 3.89 and 67.51. Considering a \(\chi^{2}\) with number of degrees of freedom equal to 4, the p-values for the two means are respectively 0.4 and \(10^{-14}\): the bad fit in the contaminated case has emerged.
We further experimented over the addition of metals and we considered adding a single extra degree of freedom at a time, associated to either one of the following metals: SiII(1260) and SiII(1190). The resulting \(\chi^{2}\) histograms are shown in the middle and right panels of Fig. 6, respectively. These two metal lines were chosen because of how differently they affect the data: while the SiII(1260) contamination happens around the BAO scale along the line-of-sight, the SiII(1190) contributes to the peak at \(\sim 60\,\mathrm{Mpc}/h\). We run exactly the same experiment and find that the addition of \(t_{b_{\eta,\mathrm{SiII}(1190)}}\) does bring out the bad fit, while the other does not. Specifically, the two \(\chi^{2}\) distributions when the extra degree of freedom is given by \(b_{\eta,\mathrm{SiII}(1260)}\) have a mean of \(\sim 1\), again equal to the number of degrees of freedom, but they cannot be distinguished. The p-values for both distributions, assuming one degree of freedom, are both above a threshold of 0.01. Both distributions are indicative of an acceptable fit. On the contrary, adding the extra compressed component related to SiII(1190) results in having a mean \(\chi^{2}\) of 1.01 in the uncontaminated case and 10.04 in the contaminated one, with corresponding p-values of 0.3 and \(10^{-3}\) if we consider a target \(\chi^{2}\) distribution of one degree of freedom. This perhaps indicates that in order to capture a bad fit, adding extra degrees of freedom is not enough: these extra degrees of freedom must be informative about features not captured by the core set of parameters. The SiII(1260) affects the model at scales of the correlation function which are on top of the BAO peak, which we model, whereas SiII(1190) effectively adds information on a feature which is completely unmodelled.
In light of this, a possible solution is to add some extra degrees of freedom to the _maximal_ compression vector, designed to be orthogonal to the already known components in the compressed space. This would allow the extra flexibility, which is not captured in the model, to reveal a bad fit within the compressed framework. This is an interesting problem which is left for future work. However, a similar solution has already been implemented in the context of MOPED (Heavens et al., 2020), specifically to allow new physics to be discovered.
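As a rough illustration of this idea, the sketch below orthogonalises candidate extra compression vectors against the existing score-compression directions. It assumes Gaussian data with covariance \(\mathbf{C}\), so that zero covariance between compressed components corresponds to orthogonality of the gradient vectors under the \(\mathbf{C}^{-1}\) inner product; the function and variable names are illustrative and not taken from the analysis code.

```python
# Hypothetical sketch of the orthogonalisation idea (left for future work in
# the text): sequential Gram-Schmidt of extra gradient vectors against the
# core score-compression directions, using the C^{-1} inner product so that
# the resulting compressed components are uncorrelated with the core ones.
import numpy as np

def orthogonal_extra_directions(grads_core, grads_extra, C_inv, tol=1e-12):
    ortho, extra_orth = [], []
    for idx, g in enumerate(list(grads_core) + list(grads_extra)):
        g = np.array(g, dtype=float)
        for b in ortho:
            g = g - (b @ C_inv @ g) * b        # remove already-known modes
        norm = np.sqrt(g @ C_inv @ g)
        if norm > tol:                          # keep only genuinely new modes
            g = g / norm
            ortho.append(g)
            if idx >= len(grads_core):
                extra_orth.append(g)
    return extra_orth
```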
Not modelling the SiII(1260) line in the uncompressed traditional framework does not result in any bad fit, which makes this an example of systematics hidden in the large original data vector. At the same time, the fact that the SiII(1260) test in the compressed framework fails to show a bad fit at the level of the \(\chi^{2}\) is quite problematic, given that this metal line is one of the primary contaminants to guard against in BAO measurements, as it affects the peak's scale. The worry is then that, despite constructing an extended framework, there is a chance that some systematics hiding in the signal could be missed.
Figure 5: \(\chi^{2}\) histograms (left panel) for the _maximal_ compression and corresponding best fit values histograms for the Ly\(\alpha\) parameters (right panels), where blue refers to the uncontaminated case and orange to contaminated. In the _maximal_ compression setup \(t=t_{\mathrm{max}}=\{t_{\alpha_{\parallel}},t_{\alpha_{\perp}},t_{b_{\eta,\mathrm{Ly}\alpha}},t_{\beta_{\mathrm{Ly}\alpha}},t_{\sigma_{\parallel}},t_{\sigma_{\perp}}\}\). The black dashed lines in the two panels on the right correspond to the true values used to generate the Monte Carlo realisations.
Figure 6: Normalized \(\chi^{2}\) histograms for the three _non-maximal compression_ cases presented in Sect. 5.2: starting from the left, all four metals, SiII(1260) and SiII(1190) were used to build extra degrees of freedom. In blue, the histograms and \(\chi^{2}\) distributions for the uncontaminated data; in orange, for the contaminated data. The corresponding \(\chi^{2}\) distributions (dashed lines) are generated assuming as number of degrees of freedom the mean of the histogram distributions. The first set of histograms, which relates to all four extra degrees of freedom, presents a strong shift between the orange and the blue distributions: their corresponding means are 3.89 and 67.51. In the SiII(1260) case, both distributions have a mean of \(\sim 1.1\), while in the SiII(1190) case, the mean for the uncontaminated case is 1.01, against 10.04 in the contaminated case.
This effectively means that, in order to apply data compression, the underlying physics must already be well understood.
## 6 Robustness to parameter non-linearities
Each component of the score-compressed data vector relates to a specific model parameter, as per Eq. (8), via the gradient. Throughout the analysis, the BAO parameters proved to be a source of non-linearities in relation to their summary statistics components (see Fig. 1), sometimes resulting in a multi-peaked posterior distribution. With the intent of mitigating this effect, we were forced to impose a tight prior on both \(\{\alpha_{\parallel},\alpha_{\perp}\}\), which reduces the generalizability of the approach.
Based on the work of Protopapas et al. (2005), we explore extensions to the algorithm by considering an ensemble of fiducial values of the BAO parameters to compute the score-compressed vector components related to \(\{\alpha_{\parallel},\alpha_{\perp}\}\). For any extra set of BAO parameters \(\{\alpha_{\parallel}^{\text{extra}},\alpha_{\perp}^{\text{extra}}\}\), we introduce two extra summary statistics components:
\[\epsilon^{\text{extra}}_{\alpha_{\parallel}} =\nabla_{\alpha_{\parallel}}\boldsymbol{\mu}^{\mathsf{T}}_{\text{extra}} \boldsymbol{C}^{-1}(\boldsymbol{d}-\boldsymbol{\mu}_{\text{extra}})\;, \tag{18}\] \[\epsilon^{\text{extra}}_{\alpha_{\perp}} =\nabla_{\alpha_{\perp}}\boldsymbol{\mu}^{\mathsf{T}}_{\text{extra}} \boldsymbol{C}^{-1}(\boldsymbol{d}-\boldsymbol{\mu}_{\text{extra}})\;, \tag{19}\]
where \(\boldsymbol{\mu}_{\text{extra}}\) is the model evaluated at \(\{\alpha_{\parallel}^{\text{extra}},\alpha_{\perp}^{\text{extra}}\}\), keeping the previously defined fiducial values for the other parameters. We test this extension on the same mock that was used to test the subsampling covariance matrix in Sect. 4, and results are presented in Fig. 7, imposing a physically motivated uniform prior \([0.65,1.35]\) for both \(\alpha_{\parallel}\) and \(\alpha_{\perp}\). The ensemble of extra fiducials is given by the set \(\{\{\alpha_{\parallel}=0.8,\alpha_{\perp}=1.2\}\), \(\{\alpha_{\parallel}=1.2,\alpha_{\perp}=0.8\}\), \(\{\alpha_{\parallel}=1.3,\alpha_{\perp}=0.7\}\), \(\{\alpha_{\parallel}=0.9,\alpha_{\perp}=1.1\}\}\), in addition to the original \(\{\alpha_{\parallel}=1.00,\alpha_{\perp}=1.01\}\) (see Tab. 1). From Fig. 7 it can be seen that the constraining power on the BAO parameters from the traditional and compressed methods matches. The same holds for the other parameters, not shown here.
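A minimal sketch of how Eqs. (18)-(19) extend to an ensemble of fiducial BAO values is given below. Here `model(ap, at)` is a stand-in for the correlation-function model with all other parameters held at their fiducial values, and the finite-difference gradients are only illustrative; in practice the gradients come from the fitting code.

```python
# Sketch of Eqs. (18)-(19): for every extra fiducial pair of BAO parameters,
# two additional score-compressed components are appended, each built from
# the model gradient at that fiducial. `model(ap, at)` is a placeholder for
# the correlation-function model evaluated with all other parameters fixed.
import numpy as np

def extra_components(d, C_inv, model, extra_fiducials, eps=1e-4):
    """Append two compressed components per extra (alpha_par, alpha_perp)."""
    t_extra = []
    for ap, at in extra_fiducials:
        mu = model(ap, at)
        # finite-difference gradients with respect to the two BAO parameters
        grad_ap = (model(ap + eps, at) - mu) / eps
        grad_at = (model(ap, at + eps) - mu) / eps
        resid = C_inv @ (d - mu)
        t_extra.append(grad_ap @ resid)   # Eq. (18)
        t_extra.append(grad_at @ resid)   # Eq. (19)
    return np.array(t_extra)

# ensemble used in the text, in addition to the original fiducial
ensemble = [(0.8, 1.2), (1.2, 0.8), (1.3, 0.7), (0.9, 1.1)]
```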
We tested the generalizability of this extension by progressively adding extra points to the ensemble, with reasonable spread, and found that with three to four extra fiducial sets of BAO parameters the algorithm effectively removes the secondary posterior peaks and increases the accuracy of the measurement. Hence, assuming multiple fiducial values for the BAO parameters, which previously required a tight prior, allows us to relax the prior constraints.
## 7 Application to real data
The score compression framework has so far been tested on realistic mocks, hence it is straightforward to apply the same algorithm to real eBOSS DR16 Ly\(\alpha\) data, for which we refer to du Mas des Bourboux et al. (2020). The set of nuisance parameters is now extended to also include the contamination from carbon absorbers, the systematic quasar redshift error \(\Delta r_{\parallel}\), the quasar radiation strength \(\xi_{0}^{\text{TP}}\), and the sky-subtraction parameters \(A_{\text{sky,Ly}\alpha}\) and \(\sigma_{\text{sky,Ly}\alpha}\). The results presented in Sect. 6 motivate a direct test of the whole extended framework, which gets rid of the tight prior, on the real data. The ensemble of BAO parameter fiducial values is given by \(\{\alpha_{\parallel}=1.05,\alpha_{\perp}=0.96\}\) (the best fit values obtained through the traditional analysis), together with \(\{\alpha_{\parallel}=0.8,\alpha_{\perp}=1.2\}\), \(\{\alpha_{\parallel}=1.2,\alpha_{\perp}=0.8\}\), \(\{\alpha_{\parallel}=1.3,\alpha_{\perp}=0.7\}\), and \(\{\alpha_{\parallel}=0.9,\alpha_{\perp}=1.1\}\), which were found to be effective in Sect. 6. The fiducial values of the other parameters are set to the best fit found with the standard uncompressed analysis. In Fig. 8, we present the agreement of the extended framework with the traditional approach at the level of \(\{\alpha_{\parallel},\alpha_{\perp},b_{\mathrm{Ly}\alpha},\beta_{\mathrm{Ly}\alpha},\Delta r_{\parallel},\beta_{\mathrm{QSO}},\sigma_{v}\}\). The nuisance parameters are also found to be in excellent agreement.
## 8 Conclusions
Standard analyses of the Lyman-\(\alpha\) (Ly\(\alpha\)) forest correlation functions focus on a well localized region, which corresponds to the baryon acoustic oscillations (BAO) peak. However, these correlation functions usually have dimensions of 2500 or 5000, which means the cosmological signal is extracted from a small subset of bins. This means that reducing the dimensionality of the data vector, while retaining the information we care about, could be a step forward in optimizing the analysis. At the same time, as extensively explained in Sect. 2, the covariance matrix \(\mathbf{C}\) used for Ly\(\alpha\) correlations analyses is estimated via sub-sampling. However, the dimensionality of the correlation functions is much larger than the number of data samples used to estimate the covariance. Reducing the dimensionality of the data vector to \(O(10)\) allows for a reliable estimate of the covariance matrix. Given these premises, the goal of this work is to apply and explore a data compression algorithm for realistic Ly\(\alpha\) auto- and cross-correlation functions.
We reduced the dimensionality of the data vector to a set of summary statistics \(\mathbf{t}\) using score compression. We assume a Gaussian likelihood, test for its validity, and show that this assumption is preserved in the compressed space as well, as the compression is a linear transformation. In the compressed space the covariance can be either given by the mapped traditional covariance or by a covariance estimated directly in such a space.
We tested the compressed framework against the traditional approach at the posterior level, when using the original covariance \(\mathbf{C}\), and found that the two agree and that no bias is introduced. We
Figure 7: Triangle plots of the BAO parameters of interest \(\{\alpha_{\parallel},\alpha_{\perp}\}\) for one set of the Ly\(\alpha\) auto- and cross- mock correlations, with relaxed priors. The green contours refer to the results obtained performing the inference using the full uncompressed data vector, which we denote as ‘Traditional analysis’, while the blue dashed refer to the compressed analysis results, denoted as ‘Score compression analysis’. The framework of the latter is extended here to the assumption of multiple fiducial values for \(\{\alpha_{\parallel},\alpha_{\perp}\}\) when performing the compression, namely \(\{\alpha_{\parallel}=1.00,\alpha_{\perp}=1.01\}\), \(\{\alpha_{\parallel}=0.8,\alpha_{\perp}=1.2\}\), \(\{\alpha_{\parallel}=1.2,\alpha_{\perp}=0.8\}\), \(\{\alpha_{\parallel}=1.3,\alpha_{\perp}=0.7\}\), and \(\{\alpha_{\parallel}=0.9,\alpha_{\perp}=1.1\}\).
then showcased a test example of covariance matrix evaluation in the compressed space, which is a key benefit of the approach, enabling a comparison to the covariance matrix obtained in the traditional sub-optimal framework. Because of the non-linear relationship between the BAO parameters and their summary statistics components, throughout the analysis we adopted a tight prior on \(\{\alpha_{\parallel},\alpha_{\perp}\}\). Later in the analysis, with the aim of increasing the generalizability of the approach while relaxing the prior constraint, we successfully tested extensions to the framework by assuming an ensemble of fiducial values for these problematic parameters.
We then further examined the compressed framework by testing the inference against unmodelled effects, and we found that if any information about the unmodelled features in the correlation function is not captured by the compressed data vector \(\mathbf{t}\), this can potentially lead to biases which do not emerge at the level of the \(\chi^{2}\) goodness-of-fit test. Hence, we advise against performing goodness-of-fit tests in compressed space, unless the compressed vector is extended to include extra degrees of freedom, analogous to what is done in Heavens et al. (2020). Extending the framework in this sense is left for future work.
We applied our extended compression framework to DR16 data from the Extended Baryon Oscillation Spectroscopic Survey and demonstrated that the posterior constraints are accurately recovered without loss of information. A step change in constraining power, and thus accuracy requirements, is expected for forthcoming Ly\(\alpha\) cosmology analyses by the on-going DESI experiment (see e.g., Gordon et al., 2023), which will observe up to 1 million high-redshift quasars with \(z>2\). Optimal data compression as proposed in this work will facilitate these analyses through inference that is complementary to the traditional approach and through additional consistency and validation tests.
## Acknowledgements
We thank Alan Heavens and Niall Jeffrey for helpful discussions. This work was partially enabled by funding from the UCL Cosmoparticle Initiative. AC acknowledges support provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51526.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. BJ acknowledges support by STFC Consolidated Grant ST/V000780/1. SN acknowledges support from an STFC Ernest Rutherford Fellowship, grant reference ST/T005009/2. AFR acknowledges support by the program Ramon y Cajal (RYC-2018-025210) of the Spanish Ministry of Science and Innovation and from the European Union's Horizon Europe research and innovation programme (COSMO-LYA, grant agreement 101044612). IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. For the purpose of open access, the authors have applied a creative commons attribution (CC BY) licence to any author-accepted manuscript version arising.
## Data Availability
The code is publicly available at the 'compression' branch of [https://github.com/andreicuceu/vega.git](https://github.com/andreicuceu/vega.git). The data underlying this article will be shared on reasonable request to the corresponding author.
|
2309.13602 | 6G Positioning and Sensing Through the Lens of Sustainability,
Inclusiveness, and Trustworthiness | 6G promises a paradigm shift in which positioning and sensing are inherently
integrated, enhancing not only the communication performance but also enabling
location- and context-aware services. Historically, positioning and sensing
have been viewed through the lens of cost and performance trade-offs, implying
an escalated demand for resources, such as radio, physical, and computational
resources, for improved performance. However, 6G goes beyond this traditional
perspective to encompass a set of broader values, namely sustainability,
inclusiveness, and trustworthiness. From a joint industrial/academic
perspective, this paper aims to shed light on these important value indicators
and their relationship with the conventional key performance indicators in the
context of positioning and sensing. | Henk Wymeersch, Hui Chen, Hao Guo, Musa Furkan Keskin, Bahare M. Khorsandi, Mohammad H. Moghaddam, Alejandro Ramirez, Kim Schindhelm, Athanasios Stavridis, Tommy Svensson, Vijaya Yajnanarayana | 2023-09-24T10:31:43Z | http://arxiv.org/abs/2309.13602v2 | # 6G Positioning and Sensing Through the Lens of Sustainability, Inclusiveness, and Trustworthiness
###### Abstract
6G promises a paradigm shift in which positioning and sensing are inherently integrated, enhancing not only the communication performance but also enabling location- and context-aware services. Historically, positioning and sensing have been viewed through the lens of cost and performance trade-offs, implying an escalated demand for resources, such as radio, physical, and computational resources, for improved performance. However, 6G goes beyond this traditional perspective to encompass a set of broader values, namely sustainability, inclusiveness, and trustworthiness. This paper aims to: (i) shed light on these important value indicators and their relationship with the conventional key performance indicators, and (ii) unveil the dual nature of 6G in relation to these key value indicators (i.e., ensuring operation according to the values and enabling services that affect the values).
6G, Positioning, Sensing, Performance, Values.
## I Introduction and Motivation
Integrated sensing and communication (ISAC) is expected to be a major differentiator of 6G when compared to previous generations [1]. The promises of ISAC include pervasive situational awareness by monostatic, bistatic, and multi-static _radar-like sensing_, complemented with extremely accurate _position and orientation estimation of devices_. These promises are delivered thanks to a variety of technological advances, including mmWave and sub-THz spectrum, reconfigurable intelligent surfaces (RISs), artificial intelligence (AI), radio frequency (RF) hardware, etc. In turn, ISAC will enable new applications with unprecedented demands in terms of the key performance indicators (KPIs) (e.g., accuracy, latency, coverage), such as extended reality, digital twinning, and collaborative robotics [2].
The timing of 6G happens to be well-aligned with the Agenda 2030 for Sustainable Development by the United Nations (UN). Under this agenda, 17 interlinked sustainable development goals (SDGs) have been defined, which serve as a “shared blueprint for peace and prosperity for people and the planet, now and into the future” [3]. 6G can use these SDGs to identify the critical sustainability areas in which it can play an important role. Notably, the European Hexa-X 6G Flagship project has pinpointed specific SDGs where 6G can significantly contribute, through the establishment of infrastructures that promote remote work, privacy-focused designs, eco-design of products, and a holistic approach towards societal, economic, and environmental sustainability.
The conventional KPIs focus on functionality and measurable requirements. To extend this view towards a more comprehensive approach, key value indicators (KVIs) have been introduced to complement the KPIs and are able to better capture the spirit of the SDGs [4]. The KVIs have been defined in three categories: _sustainability_, _inclusiveness_, and _trustworthiness_. Hence, the 6G system should itself meet each of these KVIs, not only during the lifecycle of its components, but also by enabling services and applications that can, in turn, improve the KVIs. This vision of 6G positioning and sensing is visualized in Fig. 1.
In this paper, we aim to describe and structure the KVIs for 6G in the context of ISAC (i.e., positioning and sensing), reveal their synergies and conflicts, and propose ways to quantify them (thus effectively turning them into new KPIs). Each KVI will also be discussed in detail, shedding light on the dual role of 6G. By integrating these KVIs into the architectural design of 6G ISAC, we anticipate not only novel research avenues but also the facilitation of achieving the SDGs.
Fig. 1: The vision of 6G positioning and sensing. 6G use cases are conventionally mapped to positioning and sensing KPIs, which drive the 6G design. To support a subset of the UN’s SDGs (highlighted: #8 (Decent work and economic growth), #9 (Industry, innovation, and infrastructure), #11 (Sustainable cities and communities), #12 (Responsible consumption and production), #13 (Climate action)), three KVIs (sustainability, inclusiveness, and trustworthiness) are introduced, which should be integrated with the KPIs.
## II Performance and Value Indicators
In this section, we elaborate on the 6G use cases and the corresponding KPIs. Then, we detail the three KVIs and explain their relation to the KPIs.
### _6G Use Cases and KPIs_
For the efficient realization of ISAC that can unleash the true potential of applications and higher-layer services, the 6G architecture necessitates the incorporation of open and well-defined interfaces (such as those providing raw or processed measurements) to enable easy data access and flow. This will catalyze the 6G ecosystem, synergizing sensing, positioning, and services designed for optimization and analytics. As illustrated in Fig. 2, the typical 6G use cases can be clustered according to the verticals: healthcare, automotive, industry, and extended reality. According to the use cases, the corresponding positioning requirements are expected to be tighter than the ones for the existing 5G standard [2]. Moreover, new sensing requirements must be introduced in alignment with the specific use cases. Positioning and sensing information can also be used internally by the 6G system to enhance and optimize communication functionality, for example using position information to optimize proactive resource allocation.
Definition of positioning and sensing requirements in 6G is done through the lens of KPIs, which can be divided into low-level and high-level KPIs. The low-level KPIs relate to the performance and limitations of the underlying radio resources and algorithms for positioning and sensing. For example, in positioning and radar-based sensing, accuracy and resolution of the delay, Doppler, and angle measurements are the most important KPIs. In contrast, the high-level KPIs relate to the performance and assessment of the system as a whole, focusing on the quantities of interest. The most representative examples of high-level KPIs are positioning accuracy, availability, and update rate.
The KPIs manifest considerable variations across and within each use case cluster shown in Fig. 2. For example, in healthcare, the collection of biomedical samples in remote rural areas using drones requires 3D position accuracy of around \(0.1\,\mathrm{m}\) and an update rate of around \(1\,\mathrm{Hz}\), while remote surgery requires very high positioning accuracy (approximately \(1\,\mathrm{mm}\) in space and \(1\,\mathrm{deg}\) in orientation) and a high update rate. In the extended reality cluster, specifications for human-machine interface cases like gesture recognition entail additional low-level KPIs, such as velocity, range, and angle resolution. Meanwhile, applications like augmented reality impose supplementary high-level KPIs, encompassing 3D orientation accuracy. Industrial use cases for digital twins impose requirements on the unambiguous range, range resolution, and Doppler resolution to ensure sensing and positioning coverage within a factory. In the automotive space, KPIs such as range and velocity resolution together with unambiguous range and velocity play a critical role in use cases such as collision avoidance and platooning.
A comprehensive exploration of the use cases, KPIs, and gap analysis in the context of 6G positioning and sensing are beyond the scope of this paper, but an in-depth discussion is available for interested readers in [2].
### _The 6G KVIs Explained_
The inception of value-based considerations in 6G, though initially introduced in [4], has philosophical roots that can be traced to broader social awareness and responsibility [5]. The concept of _value_, as delineated by [4], encompasses "intangible yet essential human and societal imperatives, including growth, sustainability, trustworthiness, and inclusion." The operationalization of these values within the 6G framework necessitates the formulation and integration of associated criteria, in the design, functionality, and decommissioning of the system. While [4] refrained from explicit definitions of these values and the KVIs, we regard them as analogous. Our objective is to provide useful definitions for each KVI that are comprehensive yet specific, a task that will be further expounded in the context of 6G positioning and sensing in the next section.
_Sustainability1_: bifurcates into environmental and economic domains. Economic sustainability pertains to practices that support long-term economic growth, balancing organizational and societal needs without undermining social, environmental, and cultural facets [6]. Environmental sustainability was already highlighted in the 4G era [7], where life cycle analyses indicate that a holistic approach must incorporate considerations of manufacturing, operational energy consumption, recycling practices, and end-of-life treatment. However, with 6G these considerations must be considered already in the design and standardization phase.
Footnote 1: Given that all SDGs are by definition related to sustainability, a more narrow definition is proposed.
_Inclusiveness_: is multifaceted and aims to foster increased participation and mitigate digital divides, promoting an equitable technological landscape. Inclusiveness encompasses accessibility to 6G technologies, education, and facilitation in their usage, as well as assisting vulnerable demographics,
Fig. 2: 6G use cases and KPIs. Positioning and sensing require a well-defined process and suitable interfaces to (i) create an ecosystem and (ii) support the use case clusters (corresponding to the verticals of healthcare, automotive, industry, and extended reality). The high-level (green) and low-level (blue) KPIs vary widely depending on the specific use case, with qualitative shapes shown for 5G (dashed black) and 6G (red).
such as the elderly or infants, and those marginalized due to geography, gender, culture, health, or education.
_Trustworthiness:_ encompasses security (defense against deliberate attacks), robustness (mitigation of unintentional faults, including environmental disturbances, human errors, and system malfunctions), and privacy (unauthorized leakage of sensitive information, whether deliberate or inadvertent) [8]. Notably, the anticipated pervasive utilization of AI in 6G introduces unprecedented challenges and considerations in the realm of trustworthiness, necessitating innovative approaches.
### _Relations Between KPIs and KVIs_
The evolution of positioning and sensing paradigms in wireless communication networks has primarily emphasized satisfying the KPIs tailored for specific applications. However, responsible deployment of 6G should transcend technical performance, aligning with global values of sustainability, inclusiveness, and trustworthiness. An intricate relation exists between the traditional KPIs and new KVIs, underlined by a multifaceted interplay of trade-offs and synergies, as visually depicted in Fig. 3. This relationship will be further elaborated below, incorporating the measurement methodologies for KVIs and exposing the challenges emanating from potential knock-on effects. The latter signifies that enhancement in one KVI may result in unintended repercussions in another KVI, further influencing subsequent KPIs.
#### II-C1 Trade-off between KPIs and KVIs
Achieving a particular KPI might necessitate a compromise on a corresponding KVI. Pursuing heightened accuracy might demand extensive infrastructure deployment or resource consumption, undermining sustainability. Consequential impacts may manifest in reduced trustworthiness (owing to a less diversified technology ecosystem) and diminished inclusiveness (resulting from unaffordable services for specific demographics). Conversely, elevating a KVI may cause conflicts with KPIs. The construction of a trustworthy system, albeit fostering secure services and long-term reliability, might entail additional resources or complex algorithms. This, in turn, might introduce latencies or degrade performance within the given resource constraints, affecting the associated KPIs.
#### II-C2 Synergy between KPIs and KVIs
Certain scenarios reveal mutual support between KPIs and KVIs. Accurate position and map information can improve energy efficiency via so-called channel knowledge maps. Enhancements in positioning and sensing, coupled with broadened service reach, can promote user inclusiveness. This may, in turn, catalyze commercialization and privacy through distributed processing, thereby enabling accurate cooperative positioning. Trustworthiness and sustainability are valued intrinsically by users, thereby amplifying inclusiveness through wider adoption. By carefully exploiting these synergies, future networks can be designed to concurrently optimize both KPIs and KVIs, ensuring both performance objectives and broader societal benefits are achieved. A salient instance of this synergy manifests in hardware impairment exploitation, where attributes of cost-efficient hardware (contributing to sustainability and inclusiveness) can be harnessed to enhance KPIs, such as sensing accuracy and unambiguous range [9].
#### II-C3 Quantification of KVIs
While KPIs can offer quantifiable metrics for evaluating positioning and sensing performance in 6G networks, quantifying KVIs poses a formidable challenge as they often encompass essential societal values that lack a rigorous mapping to tangible metrics. New performance metrics of KVIs need to be defined, which may involve weighing a set of KPIs based on evolving trends in a specific 6G use case or application [4]. Below are examples of how we can transform KVIs into actionable KPIs, beyond those shown in Fig. 2:
* **Sustainability KPIs:** Obvious KPIs include _energy efficiency_, which is relatively well-defined for communication, but not for positioning and sensing, as well as _capital_ (e.g., deployment) and _operational expenses_ (e.g., power consumption of components or systems).
* **Inclusiveness KPIs:** Possible KPIs include _coverage_ that can be provided within the legacy KPI (e.g., accuracy and latency) requirements, _cost_ of the device or service for the end-user, _accuracy_ of new human-machine interfaces (e.g., via gesture recognition).
* **Trustworthiness KPIs:** The broad nature of trustworthiness requires metrics like _position integrity_ to ensure robustness against faults, and security evaluation through the _probability of undetected attacks and the subsequent impact_. Privacy considerations may invoke measures such as _differential privacy_ and _mutual information_ metrics.
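As an illustration of how such a value can be made measurable, the short sketch below computes a Gaussian protection level from a reported position-error standard deviation and checks it against an alert limit. Both the numbers and the metric itself are illustrative assumptions, not standardized 6G quantities.

```python
# Illustrative sketch (not a standardized 6G metric): turning "position
# integrity" into a measurable KPI. A protection level (PL) is derived from
# the reported position-error standard deviation for a target integrity
# risk; the service counts as available when PL stays below the alert limit.
from scipy.stats import norm

def protection_level(sigma_m, integrity_risk):
    """Gaussian-error protection level for a per-axis std dev (metres)."""
    k = norm.isf(integrity_risk / 2.0)   # two-sided tail multiplier
    return k * sigma_m

ALERT_LIMIT_M = 0.5                      # illustrative use-case requirement
pl = protection_level(sigma_m=0.08, integrity_risk=1e-7)
print(f"PL = {pl:.2f} m, available: {pl <= ALERT_LIMIT_M}")
```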
Despite these formalized attempts at quantification, the perceived performance remains inherently subjective, influenced by various factors, including not only the specific use case, but also background, culture, location, regulations, and stakeholder structures.
## III The Dual Role of Positioning and Sensing from a KVI Perspective
In this section, we go deeper into each of the KVIs, in order to provide specific examples of how they relate to positioning
Fig. 3: Synergies (green) and trade-offs (red) among KPIs and KVIs, including higher-order effects. KPIs must be augmented to quantify the KVIs in 6G design, when possible.
and sensing in 6G. For each KVI, a dual view is taken: (i) how the network can operate in a way that is aligned with each value, and (ii) how positioning and sensing, conceptualized as a service, can improve the KVIs, which can be interpreted as a higher-order effect.
### _Sustainability_
Sustainability is arguably among the major concerns in 6G systems, guiding the entire lifecycle design.
#### III-A1 Sustainable Positioning and Sensing
We consider three dimensions of sustainable design: radio resource optimization, infrastructure optimization, and the level of integration of positioning and sensing within 6G communication.
* _Radio resources:_ 6G calls for sustainable designs that are optimized to reach the target KPIs, rather than over-optimizing the indicators themselves. Optimization of radio resources considers the KPIs as objective, with an implicit consideration in terms of sustainability, as the allocated resources should be as small as possible. However, conservative designs based on over-provisioning should be avoided in favor of flexible and adaptive resource allocation schemes, such that energy and resource consumption can be minimized, while still (exactly) meeting the instantaneous target KPIs. Complementary to the radio resources, sleep/idle modes should be activated whenever possible to conserve energy.
* _Infrastructure:_ Positioning and sensing generally require a more extensive infrastructure deployment than communication. Such an extension is provided in 6G through two emerging technologies: distributed multiple-input multiple-output (D-MIMO) systems and RISs. In D-MIMO deployments, user equipments (UEs) are surrounded by a large number of energy-efficient base stations (BSs), providing not only outstanding performance in communication but also in positioning and sensing. RISs are a class of low-energy equipment that can replace/complement location anchors (e.g., BSs) and manipulate the wireless environment [10], resulting in better propagation channels, especially in the presence of blockages. Similar to the radio resources, the infrastructure should be optimized, for instance, the deployment, the manufacturing, and the replacement possibility of D-MIMO and RIS systems, to improve sustainability under long-term target KPI requirements.
* _Level of integration:_ One of the key features of 6G is to use resources and infrastructures for both positioning/sensing and communications, thereby inherently improving sustainability. The integration of positioning/sensing and communications can span different levels, from sites, spectrum, and infrastructure, to waveforms and time/frequency resources, as shown in Fig. 4. While progressive integration improves sustainability, there are unavoidable trade-offs in terms of performance. Hence, stringent KPI requirements may not be suitable for the tightest possible integration.
#### III-A2 Positioning and Sensing for Sustainability
Positioning and sensing, through their ability to understand and digitize the physical world, provide a unique tool to enhance sustainability. First of all, by harnessing positioning and sensing information, data communication sustainability can be improved (e.g., context-aided communication with proactive resource allocation, beam alignment, and blockage avoidance) [11]. In addition to the more sustainable operation of communication, the ability to sense and localize has broader sustainability implications, such as earth monitoring (e.g., the ability to monitor pollution and weather). Recalling the verticals from Fig. 2, sustainability benefits in healthcare include the reduction in CO2 emissions thanks to remote surgery and drone deliveries. In the automotive sector, traffic coordination and platooning can be used to minimize fuel/battery consumption. In the industry vertical, digital twins (e.g., twins for manufacturing and autonomous supply chains, twins for sustainable food production, or twins in the context of immersive smart cities) can track the position of assets or humans to optimize processes, save material, and reduce waste or energy per produced item. Finally, in the realm of extended reality, the ability to collaborate virtually can lead to enormous CO2 savings, due to reduced ground and air travel.
### _Inclusiveness_
In the pursuit of global digital equity, 6G should ensure accessibility to all humans, irrespective of gender, age, ability, and geographical location [12]. An integral part of this vision is to make the technology affordable, scalable, and ubiquitous. As such, positioning and sensing are the core aspects of this inclusive objective.
#### III-B1 Inclusive Positioning and Sensing
Positioning and sensing, embedded in the network architecture, can be facilitated by network deployment across all geographical terrains. This is feasible through a combination of several developments: the reuse of communication resources and infrastructure for multi-purpose functionality, ubiquitous connectivity, and cooperative networks.
* _Multi-purpose functionality:_ The infrastructure for providing communication and network services will be repurposed for positioning and sensing functions. This
Fig. 4: Different levels of integration between communication, positioning, and sensing functionalities. Tighter integration is more sustainable but may come at a penalty in terms of performance (e.g., reduced accuracy or increased latency).
dual-purpose application obviates the need for additional hardware and does not necessitate any alterations to the existing communication signals or protocols. A proof-of-concept for this dual-purpose application is illustrated in Fig. 5, where communication signals are used to track a person; a simplified geometric sketch of how such bearing and bistatic-range measurements can be fused is given at the end of this subsection.
* _Ubiquitous connectivity_: Connectivity is the prerequisite to providing communication services, which is the main goal of 6G networks. For example, the incorporation of non-terrestrial networks (NTNs) will significantly extend the coverage of 6G networks to remote or difficult-to-reach areas, ensuring that geographical barriers do not limit access to vital communication or sensing services. Similarly, RISs also enhance and enable accurate and efficient positioning and sensing in various scenarios, largely extending the coverage of services [13]. Consequently, ubiquitous connectivity-enabled positioning is poised to significantly augment the inclusiveness of the 6G network by enabling uninterrupted connectivity regardless of the users' proximity to the traditional network infrastructure.
* _Cooperative networks_: Sidelink supports direct communication between devices, bypassing the centralized network infrastructure. This capability can facilitate the creation of localized communication networks, extending connectivity and service availability in scenarios where conventional network coverage may be absent or limited, such as in rural, remote, or disaster-struck areas. Such a cooperative approach makes positioning and sensing tasks to be completed in a distributed manner, largely extending the coverage and reducing the cost of the provided services.
These three aspects underscore how 6G technology will be instrumental in breaking down existing barriers in network access and functionality, demonstrating a firm commitment to creating a truly inclusive, global digital ecosystem.
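To make the measurement fusion behind the Fig. 5 proof-of-concept concrete, the simplified 2-D sketch below intersects the bearing ray measured at the transmitter with the ellipse defined by the bistatic range sum. It assumes known, static TX/RX positions and noiseless measurements, so it is a geometric illustration rather than the actual processing chain used in the demonstration.

```python
# Simplified 2-D geometry behind the Fig. 5 proof-of-concept: beam sweeping
# yields the target bearing seen from the transmitter, while the bistatic
# delay yields the range sum |TX-target| + |target-RX| (an ellipse with TX
# and RX as foci). Intersecting the bearing ray with this ellipse gives the
# target position.
import numpy as np

def locate_target(p_tx, p_rx, bearing_rad, bistatic_range):
    p_tx, p_rx = np.asarray(p_tx, float), np.asarray(p_rx, float)
    u = np.array([np.cos(bearing_rad), np.sin(bearing_rad)])  # ray from TX
    b = p_rx - p_tx
    # Solve d + |d*u - b| = r_b for the distance d along the bearing ray.
    d = (bistatic_range**2 - b @ b) / (2.0 * (bistatic_range - u @ b))
    return p_tx + d * u

p_tx, p_rx, target = [0.0, 0.0], [10.0, 0.0], np.array([4.0, 3.0])
r_b = np.linalg.norm(target) + np.linalg.norm(target - p_rx)  # simulated sum
print(locate_target(p_tx, p_rx, np.arctan2(3.0, 4.0), r_b))   # ~[4. 3.]
```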
#### III-B2 Positioning and Sensing for Inclusiveness
Inclusiveness in 6G networks is not only a macro-level objective but also addresses the accessibility challenges encountered at the micro-level of individual human-machine interactions. Positioning and sensing can play a crucial role in this context. On the one hand, advancements in sensing technology will enable systems that can interpret and respond to gestures, which benefits individuals who face challenges in traditional interaction modalities. Such a transformation can redefine the nature of human-machine interaction, making it more inclusive and accessible. On the other hand, intelligent monitoring, especially in critical societal domains such as elderly care, patient supervision, and infant care, emerges as a domain where sensing can be a game-changer. Such integrative applications promise to redefine caregiving, providing options characterized by precision, real-time feedback, and remote monitoring. These developments serve to enhance the quality of life for these demographic segments, underlining 6G's commitment to be genuinely inclusive and beneficial to all of society. Referring back to the proof-of-concept demonstration from Fig. 5, a person can be tracked in a cluttered environment with the aid of communication signals and infrastructure, negating the need for additional equipment or invasive monitoring techniques.
### _Trustworthiness_
Ensuring the robustness, security, and privacy of 6G positioning and sensing must be a priority in the design of the overall 6G system, given the safety-critical nature of the verticals highlighted in Fig. 2. This section outlines challenges and approaches related to the trustworthiness of positioning and sensing in 6G.
#### III-C1 Trustworthy Positioning and Sensing
We deconstruct trustworthiness into its constituent elements, such as robustness, security, and privacy, before discussing the influence of AI on them separately.
* _Robustness:_ Robust positioning and sensing are primarily based on diversity, relying on a large set of measurements from independent technologies, observations, or dimensions, to provide redundancy for detecting and eliminating faults. This approach is common in global navigation satellite system (GNSS), where for instance, aviation applications demand protection levels with a high degree of certainty, even in the presence of faults. The 6G system itself can provide inherent redundancy, via diverse measurements (e.g., not only time-difference-of-arrival (TDoA), but also angle-of-arrival (AoA), angle-of-departure (AoD), carrier phase, and perhaps Doppler), diverse location references (e.g., using many access points in D-MIMO), and multi-sensor fusion (e.g., relying on a combination of 6G sensing with vision). When combined with integrity monitoring, 6G can provide high performance with guaranteed robustness [14].
Fig. 5: Proof-of-concept for joint communication and sensing, showing how existing communication infrastructure and signals can be repurposed for sensing, in support of sustainability and inclusiveness. The hardware comprises Sivers Semiconductors EVK06002 as transmitter (TX) and receiver (RX), each with 1x16 arrays. A standard 5G waveform with 120 kHz subcarrier spacing, 800 MHz bandwidth, 69 GHz carrier frequency, and 64 QAM modulation is employed. Besides the data transmission (top middle), beam sweeping (top left) provides a bearing measurement of the passive target. Bistatic time measurements provide a sensing ellipse to further improve the target position estimate (bottom left).
* _Security:_ Vulnerabilities exist in classical positioning technologies (e.g., GNSS and ultra-wide band (UWB)), where attackers can perform jamming (blinding the receiver, leading to service interruption), meaconing (re-transmission of legitimate signals), or spoofing (transmission of false signals) [15]. Spoofing can be mitigated by cryptographic countermeasures, while jamming can be mitigated by directional nulling at the receiver. Attacks on radar sensing include jamming, altering electromagnetic properties, deception, masking, and imitation. Adaptive waveform design and frequency hopping help correct target range or velocity errors. Extrapolating these concepts to 6G, it is clear that each measurement type (delay, angle, Doppler), each piece of hardware (BS, RIS, UE), and each waveform have potential security weaknesses that can compromise positioning and sensing. An example of a positioning and sensing attack in a 6G context is shown in Fig. 6, where an attacker manipulates the TX beamforming, which leads to perceived high-power paths at the RX with modified AoD (with limited knowledge at the attacker) or AoA (with complete knowledge at the attacker).
* _Privacy:_ Privacy protection in the area of location tracking of humans is already crucial for 5G and comes even more into focus with 6G’s higher positioning accuracy, its opportunities for cross-platform fusion of tracking information, and its exposure framework for internal and external use (see Fig. 2). Position information that can be easily used for behavioral profiling must be secured from unauthorized access on all levels (including physical-layer security). Moreover, not only humans but also objects and assets can be tracked. In corporate environments, where asset tracking is used to monitor and optimize processes, this process information becomes worth protecting as well. Technological protection includes solutions like active cloaking, reminiscent of techniques in electronic warfare.
In the context of the trustworthiness of 6G, the advent of AI possesses the potential to instigate novel attacks, exploiting latent system vulnerabilities. Conversely, AI can fortify system security and privacy by innovating newly learned protocols or waveforms. However, the opaque nature of AI mechanisms demands rigorous and transparent scrutiny to ensure stakeholders are well-informed of the associated risks, especially for safety and mission-critical tasks. Explainable and model-based AI can help address this concern from a technical perspective.
#### III-C2 Positioning and Sensing for Trustworthiness
The ability to localize users and objects with a high degree of accuracy can support applications that rely on trustworthiness. First of all, in terms of robustness, 6G will act as an additional sensor, complementing and verifying existing sensors (e.g., camera, GPS, lidar, radar, inertial measurement unit). This will benefit all safety-critical services (including the corresponding communication), where incorrect location information may lead to harm. Secondly, security functions can be based on accurate location information, while biometric 6G sensing data can be employed in access control or payment services. Given the built-in encryption and security frameworks, 6G data is poised to receive greater trust than other sensory inputs, driving the emergence of novel applications. Lastly, surveillance and crowd control applications are envisioned to benefit immensely from the sensory data facilitated by 6G.
### _Impact of 6G Enablers on the KVIs_
To conclude this section, we offer an analytical examination of various technological enablers pertinent to 6G positioning and sensing, referencing insights from [2]. These enablers include RIS, NTN, sidelink, AI, D-MIMO, and sub-THz signals. Until now, our discussion has largely highlighted the advantageous aspects of these enablers. However, as shown in Fig. 7, it is crucial to recognize that each enabler also bears inherent challenges and costs in relation to each KVI. Some of these associated costs emanate from higher-order effects, underscoring the intricate and multifaceted nature of 6G system design. It is paramount that 6G design and implementation consider not just the direct benefits (KPIs) but also potential drawbacks (KVIs), optimally mapping the latter to quantifiable KPIs while concurrently navigating these higher-order effects.
## IV Outlook
The evolution of precise positioning and sensing for 6G ISAC presents a set of challenges and opportunities. As this paper has underscored, the next generation of digital communication is not merely about advancing the traditional KPIs, but also about forging a digital ecosystem that is sustainable, inclusive, and trustworthy, in line with the UN’s SDGs. We have shown that these values should be related to KVIs, which in turn can be mapped to new KPIs. Both synergies and trade-offs will occur, and higher-order effects should be considered. For each of the KVIs, this paper has revealed the intricate nature of 6G positioning and sensing, both to make positioning
Fig. 6: A 6G ISAC attack example, where a transmitter modifies its beamforming vector to fool an analog/hybrid receiver into believing there are additional (strong) paths at arbitrary AoD or AoA, shifted \(\pi/4\) in each domain with respect to the line-of-sight (LoS) path at AoD and AoA of \(0\) radians.
and sensing coalesce with the KVIs, and to provide services that enhance the KVIs.
As we stand on the cusp of the 6G era, it has become clear that the adoption of a holistic approach is imperative. As researchers, developers, and stakeholders, our task is not only to innovate, but also to ensure that the digital future is sustainable, inclusive, and trustworthy.
## Acknowledgments
This work was supported, in part, by the European Commission through the H2020 project Hexa-X (Grant Agreement no. 101015956). The authors are grateful to Hamed Farhadi (Ericsson) for his comments on the manuscript.
|
2309.06569 | Promises of Deep Kernel Learning for Control Synthesis | Deep Kernel Learning (DKL) combines the representational power of neural
networks with the uncertainty quantification of Gaussian Processes. Hence, it
is potentially a promising tool to learn and control complex dynamical systems.
In this work, we develop a scalable abstraction-based framework that enables
the use of DKL for control synthesis of stochastic dynamical systems against
complex specifications. Specifically, we consider temporal logic specifications
and create an end-to-end framework that uses DKL to learn an unknown system
from data and formally abstracts the DKL model into an Interval Markov Decision
Process (IMDP) to perform control synthesis with correctness guarantees.
Furthermore, we identify a deep architecture that enables accurate learning and
efficient abstraction computation. The effectiveness of our approach is
illustrated on various benchmarks, including a 5-D nonlinear stochastic system,
showing how control synthesis with DKL can substantially outperform
state-of-the-art competitive methods. | Robert Reed, Luca Laurenti, Morteza Lahijanian | 2023-09-12T20:04:16Z | http://arxiv.org/abs/2309.06569v2 | # Promises of Deep Kernel Learning for Control Synthesis
###### Abstract
Deep Kernel Learning (DKL) combines the representational power of neural networks with the uncertainty quantification of Gaussian Processes. Hence, it is potentially a promising tool to learn and control complex dynamical systems. In this work, we develop a scalable abstraction-based framework that enables the use of DKL for control synthesis of stochastic dynamical systems against complex specifications. Specifically, we consider temporal logic specifications and create an end-to-end framework that uses DKL to learn an unknown system from data and formally abstracts the DKL model into an Interval Markov Decision Process (IMDP) to perform control synthesis with correctness guarantees. Furthermore, we identify a deep architecture that enables accurate learning and efficient abstraction computation. The effectiveness of our approach is illustrated on various benchmarks, including a 5-D nonlinear stochastic system, showing how control synthesis with DKL can substantially outperform state-of-the-art competitive methods.
## I Introduction
Data-driven control synthesis is emerging as an important research topic in recent years [1, 2, 3, 4, 5]. This is due to three main reasons: (i) increased complexity of modern systems, which often include black-box components, (ii) availability of data in large scale, and (iii) increased capability of machine learning (ML) techniques. There are however several challenges in data-driven approaches for control systems, especially in _safety-critical_ applications where robustness guarantees are vital. Such guarantees are conditioned on quantification of the learning error and its propagation through the control synthesis procedure. While there exist ML techniques that supply information about the error [6], they are often empirical (statistical) and lack necessary mathematical rigor. Those methods that do provide formal error analysis [7] suffer from scalability [8, 9]. This work focuses on these challenges and aims to provide a scalable data-driven control synthesis framework with robustness guarantees.
Formal synthesis is a rigorous approach to providing guarantees on the performance of control systems against complex properties [10, 11]. In this approach, specifications are expressed in a formal language such as _linear temporal logic_ (LTL) over _finite_ behaviors (LTLf) [12] and the system progression is abstracted into a finite model called an _abstraction_. Then, automated model-checking-like algorithms are used on the abstraction to synthesize a controller. To ensure correctness, the abstraction must have a _simulation_ relation with the system, which is often achieved by including all the uncertainties, e.g., errors due to discretization, stochasticity, and learning, in the abstraction. A popular model that allows that is Interval Markov Decision Process (IMDP) [13], which is shown to also enable scalability to high dimensional systems [14]. A key aspect in constructing a scalable IMDP abstraction is an accurate representation of the system evolution with tight uncertainty bounds. That, however, is difficult to achieve in a data-driven setting.
A widely-used method for accurate representation of the latent control system from data is _Gaussian Process_ (GP) regression [7, 8, 15]. Its power lies in rigorous uncertainty quantification, which comes at the expense of cubic computational complexity in the size of data. That makes GPs ideal for formal control synthesis, but they suffer in high dimensional spaces, where a massive amount of data is required to obtain small uncertainty. For high-dimensional systems, _neural networks_ (NNs) are successfully used to learn the dynamics, called _NN dynamic models_ (NNDMs) [16], with control synthesis methods [17, 18]. However, quantification of the learning error of NNDMs in a formal manner remains an open problem in spite of recent attempts to use confidence-based approaches [6], which cannot be propagated through the synthesis procedure.
In this work, we bridge the gap by introducing a scalable synthesis framework that harnesses the representational power of NNs and uncertainty quantification ability of GPs. Specifically, we employ _deep kernel learning_ (DKL) [19, 20], which uses NNs as informed priors for GPs while maintaining an analytical posterior, to efficiently construct (accurate) IMDP abstractions. To ensure the correctness of the abstraction, we leverage recent techniques for linear relaxations of NNs [21] and provide bounds on the mean and variance of the GP. Critically, we show that the optimization problems that bound the probabilities in the IMDP construction reduce to evaluations of a finite set of points on an analytical function, resulting in computational efficiency. Then, we employ existing tools [11] to synthesize a strategy on the IMDP that maximizes the probability of satisfying a given LTLf specification and is robust against the learning error. We prove that this strategy can be mapped to the underlying latent system with correctness guarantees. We illustrate the efficacy of our framework on various benchmarks, which show control synthesis with DKL substantially outperforms state-of-the-art methods. We also identify an architecture for DKL that results in high accuracy and efficiency in abstraction construction, promoting further scalability.
In summary, the contributions are: (i) a scalable data-driven framework for control synthesis with complex specifications and hard guarantees, (ii) an efficient finite abstraction technique for DKL models with correctness guarantees, (iii)
a DKL architecture design for fast and accurate abstraction, and (iv) illustration of the efficacy and scalability of the framework via benchmarking against state-of-the-art methods on a set of rich case studies with complex nonlinear systems up to 5 dimensions via deep architectures up to 3 hidden-layers and 100s of neurons.
## II Problem Formulation
Consider the following discrete-time stochastic system:
\[\mathbf{x}(k+1)=f(\mathbf{x}(k),\mathbf{u}(k))+\mathbf{v}(k), \tag{1}\]
where \(\mathbf{x}(k)\in\mathbb{R}^{n}\), \(\mathbf{u}(k)\in U\), \(U=\{a_{1},\ldots,a_{|U|}\}\) is a finite set of actions or control laws, \(\mathbf{v}(k)\in\mathbb{R}^{n}\) is a Gaussian random variable \(\mathbf{v}(k)\sim\mathcal{N}(0,\mathcal{V})\) with zero mean and covariance \(\mathcal{V}\in\mathbb{R}^{n\times n}\), and \(f:\mathbb{R}^{n}\times U\rightarrow\mathbb{R}^{n}\) is an _unknown_, possibly non-linear, function. Without loss of generality, we assume covariance \(\mathcal{V}\) is diagonal1. Intuitively, System (1) represents a switched stochastic systems with additive noise and unknown dynamics.
Footnote 1: There always exists a linear transformation, namely Mahalanobis transformation, that enables diagonalization of the covariance matrix.
We define a _finite trajectory_ of length \(N\in\mathbb{N}\) of System (1) as \(\omega_{\mathbf{x}}^{N}=x_{0}\xrightarrow{\mathbf{u}_{0}}x_{1}\xrightarrow{ \mathbf{u}_{1}}\cdots\xrightarrow{\mathbf{u}_{N-1}}x_{N}\), where each \(x_{k}\in\mathbb{R}^{n}\) is a sample from System (1). We denote the \(i\)-th element of \(\omega_{\mathbf{x}}^{N}\) by \(\omega_{\mathbf{x}}^{N}(i)\) and the set of all finite trajectories by \(X^{*}\). A _control strategy_ \(\pi:X^{*}\to U\) is a function that chooses the next action \(u\in U\) given a finite trajectory. Under \(\pi\) and initial condition \(x_{0}\in\mathbb{R}^{n}\), System (1) defines a unique probability measure \(P^{x_{0}}\) over \(X^{*}\) [22].
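To make the setup concrete, the sketch below rolls out a short trajectory of a system of the form (1) and records the one-step transition samples that make up \(D\). Since \(f\) is unknown in our setting, the dynamics function used here is a made-up stand-in, and the simple memoryless strategy is only for illustration.

```python
# Minimal sketch of how trajectories of System (1) are generated and how the
# dataset D of one-step transitions is collected. The dynamics f are unknown
# in the paper; the function below is only a stand-in so the rollout runs.
import numpy as np

rng = np.random.default_rng(0)
V = np.diag([0.01, 0.01])                     # diagonal noise covariance
ACTIONS = {"a1": np.array([0.5, 0.0]), "a2": np.array([0.0, 0.5])}

def f(x, u):                                  # placeholder for the unknown f
    return x + ACTIONS[u] + 0.05 * np.sin(x[::-1])

def step(x, u):
    return f(x, u) + rng.multivariate_normal(np.zeros(2), V)

# roll out a finite trajectory under a simple (memoryless) strategy pi
x, D = np.zeros(2), []
for k in range(10):
    u = "a1" if x[0] <= x[1] else "a2"        # pi chooses the next action
    x_next = step(x, u)
    D.append((x.copy(), u, x_next.copy()))    # (x_i, u_i, x_i^+) sample
    x = x_next
```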
We impose a standard smoothness (well-behaved) assumption on \(f\). Namely, we assume \(f\) is a sample from a Gaussian process (GP)2 (see Sec. III-A for details). Since \(f\) is unknown, we aim to reason about System (1) solely from a set of input-output data. Specifically, we assume \(D=\{(x_{i},u_{i},x_{i}^{+})\}_{i=0}^{m}\) is a set of identically and independently distributed (i.i.d.) data, where \(x_{i}^{+}\) is a sample of a one time-step evolution of System (1) from \(x_{i}\in\mathbb{R}^{n}\) under action \(u_{i}\in U\).
Footnote 2: Note that the restrictions that this assumption poses on \(f\) depends on the choice of the covariance (kernel) function for the GP, and there exist universal kernels, such as the squared exponential, that allow for a GP to approximate _any_ continuous \(f\) arbitrarily well.
We are interested in the temporal properties of \(\mathbf{x}\) in a compact set \(X\subset\mathbb{R}^{n}\) w.r.t. a set of regions \(R=\{r_{1},\ldots,r_{l}\}\), where \(r_{i}\subseteq X\). To this end, we define a set of atomic proposition \(\Pi=\{\mathrm{p}_{1},\ldots,\mathrm{p}_{l}\}\), where \(\mathrm{p}_{i}\) is true iff \(\mathbf{x}\in r_{i}\). Let \(L:X\to 2^{\Pi}\) be a labeling function that assigns to each state the set of atomic propositions that are true at that state. Then, the observation trace of trajectory \(\omega_{\mathbf{x}}^{N}\) is \(\rho=\rho_{0}\rho_{1}\ldots\rho_{N}\), where \(\rho_{i}=L(\omega_{\mathbf{x}}^{N}(i))\) for all \(0\leq i\leq N\).
To express the temporal properties of System (1), we use LTLf [12], which has the same syntax as LTL but its interpretations are over finite behaviors (traces).
**Definition 1** (LTLf).: _Given a set of atomic propositions \(\Pi\), an LTLf formula is defined recursively as_
\[\varphi=\mathrm{p}\mid\neg\varphi\mid\varphi\wedge\varphi\mid\bigcirc\varphi \mid\varphi\mathcal{U}\varphi\mid\mathcal{F}\varphi\mid\mathcal{G}\varphi\]
_where \(\mathrm{p}\in\Pi\), and \(\bigcirc\), \(\mathcal{U}\), \(\mathcal{F}\), and \(\mathcal{G}\) are the "next", "until", "eventually", and "globally" temporal operators, respectively._
The semantics of LTLf are defined over finite traces [12]. We say trajectory \(\omega_{\mathbf{x}}\in X^{*}\) satisfies formula \(\varphi\), denoted by \(\omega_{\mathbf{x}}\models\varphi\), if a prefix of its observation trace satisfies \(\varphi\).
**Problem 1** (Control Synthesis).: _Given a dataset \(D=\{(x_{i},u_{i},x_{i}^{+})\}_{i=1}^{m}\) of i.i.d. samples of System (1), compact set \(X\), and LTLf formula \(\varphi\), find control strategy \(\pi^{*}\) that maximizes the probability of satisfying \(\varphi\) without exiting \(X\), i.e., for every \(x_{0}\in X\),_
\[\pi^{*}=\arg\max_{\pi}P^{x_{0}}(\omega_{\mathbf{x}}\models\varphi\mid D,\pi) \tag{2}\]
There are three main challenges in Problem 1: (i) the dynamics of System (1) are unknown, can be nonlinear, and its evolution is stochastic, (ii) guarantees are required for the underlying system to satisfy complex specifications, and (iii) scalability to higher dimensions is necessary, which is an additional challenge that we impose. In our approach, we show that challenges (i) and (iii) can be successfully addressed by utilizing the power of DKL to approximate \(f\) at the low level. For challenges (ii) and (iii), we draw inspirations from formal methods literature and construct a discrete abstraction of the dynamics as an IMDP. With an IMDP and an LTLf specification, we can use off-the-shelf tools for synthesizing provably correct strategies.
## III Modelling Dynamical Systems using Deep Kernel Learning
To describe how we employ DKL to learn \(f\) in System (1), we first need to introduce GPs. Then, we present DKL within the GP framework.
### _Gaussian Process Models_
A Gaussian process (GP) is a collection of random variables, such that any finite collection of those random variables are jointly Gaussian [15]. Because of the favorable analytical properties of Gaussian distributions, GPs are widely employed to learn unknown functions, such as \(f\) in System (1), from observations of the system [7, 23]. In particular, given a prior GP, \(\textit{GP}(\mu,k_{\gamma})\), where \(\mu:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is the mean function and \(k_{\gamma}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a positive semi-definite covariance function (or kernel) with hyperparameters \(\gamma\), the assumption is that for each \(a\in U\) and for each \(j\in\{1,\ldots,n\}\), \(f^{(j)}(\cdot,a)\), the \(j\)-th component of \(f(\cdot,a)\), is a sample from \(\textit{GP}(\mu,k_{\gamma})\). Then, given dataset \(D=\{(x_{i},u_{i},x_{i}^{+})\}_{i=0}^{m}\) of samples of System (1), which we partition in \(|U|\) subsets \(D_{a}=\{(x,u,x^{+})\in D\mid u=a\}\), we obtain that, at every point \(x^{*}\in\mathbb{R}^{n}\), the posterior predictive distribution of \(f^{(j)}(x^{*},a)\) given \(D\) is still Gaussian with mean and variance:
\[\mathbb{E}(f^{(j)}(x^{*},a)\mid D)=\\ \mu(x^{*})+K_{x^{*},\mathcal{X}}(K_{\mathcal{X},\mathcal{X}}+ \sigma^{2}I)^{-1}Y, \tag{3}\]
\[\textit{cov}(f^{(j)}(x^{*},a)\mid D)=\\ K_{x^{*},x^{*}}-K_{x^{*},\mathcal{X}}(K_{\mathcal{X}, \mathcal{X}}+\sigma^{2}I)^{-1}K_{\mathcal{X},x^{*}}, \tag{4}\]
where \(\mathcal{X}=(x_{1},\ldots,x_{|D_{a}|})\), \(Y=(x_{1}^{(j)+},\ldots,x_{|D_{a}|}^{(j)+})\), and \(K_{\mathcal{X},\mathcal{X}}\in\mathbb{R}^{|D_{a}|\times|D_{a}|}\) is a matrix whose \(i\)-th row and \(l\)-th column is \(k_{\gamma}(x_{i},x_{l})\).
A widely-used kernel function is the squared exponential:
\[k_{\gamma_{se}}(x,x^{\prime})=\sigma_{s}\exp\left(\frac{-\|x-x^{\prime}\|^{2}}{2l^{2}}\right) \tag{5}\]
with the set of hyper-parameters \(\gamma_{se}=\{\sigma_{s},l\}\), where \(\sigma_{s}\) and \(l\) are the output scale and length scale, respectively. These hyper-parameters are generally learned by minimizing the negative marginal log-likelihood of the data [15].
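To make the regression step concrete, the following minimal numpy sketch (not the implementation used in this work) evaluates the posterior mean and variance of Eqs. (3)-(4) for one output dimension with the squared-exponential kernel (5), assuming a zero prior mean and illustrative hyper-parameter and noise values.

```python
# Minimal GP posterior sketch for one output dimension, Eqs. (3)-(5),
# assuming a zero prior mean; hyper-parameters and data are illustrative.
import numpy as np

def k_se(X1, X2, sigma_s=1.0, length=1.0):
    # Squared-exponential kernel matrix between two sets of points, Eq. (5).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma_s * np.exp(-d2 / (2.0 * length ** 2))

def gp_posterior(X, Y, Xstar, noise_var=0.01, **hyp):
    # Posterior mean and variance of f^(j)(., a) given data (X, Y) for a
    # fixed action a, following Eqs. (3)-(4).
    Kxx = k_se(X, X, **hyp) + noise_var * np.eye(len(X))
    Ksx = k_se(Xstar, X, **hyp)
    Kss = k_se(Xstar, Xstar, **hyp)
    L = np.linalg.cholesky(Kxx)                      # stable solve
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))
    mean = Ksx @ alpha
    V = np.linalg.solve(L, Ksx.T)
    var = np.diag(Kss) - np.sum(V ** 2, axis=0)
    return mean, var

# Toy usage: one output dimension of a noisy 2D vector field.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(100, 2))
Y = np.sin(X[:, 0] + X[:, 1]) + rng.normal(0.0, 0.1, size=100)
print(gp_posterior(X, Y, np.array([[0.5, -0.5]])))
```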
### _Deep Kernel Learning_
The squared exponential kernel, like most commonly-employed kernels for GP regression [15], only depends on few hyper-parameters. This limits the flexibility of GPs in learning complex representations of data [24], often resulting in predictions with large uncertainty (variance). One can reduce this uncertainty with more data, but that leads to computational intractability since the time complexity of GP regression is \(\mathcal{O}(|D|^{3})\)[15]. DKL aims to address this issue by considering a kernel that is composed with a NN. The underlying idea is that a fully connected NN \(g_{a}^{w}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{s}\), parameterized by weights and biases vector \(w\) over action \(a\), is employed to map the input into an \(s\)-dimensional feature space, where GP regression is performed. Specifically, starting with a base kernel \(k_{\gamma}\) (in this paper we always assume \(k_{\gamma}=k_{\gamma_{se}}\)), we define a deep kernel as
\[k_{dkl}(x,x^{\prime})=k_{\gamma}(g_{a}^{w}(x),g_{a}^{w}(x^{\prime})). \tag{6}\]
Then, with \(k_{dkl}\), predictions still use GP's mean and covariance equations in (3)-(4), but the number of hyper-parameters (i.e., \(\gamma\) and \(w\)) is drastically increased. This significantly improves the flexibility and representational power of GPs.
The learning of the parameters in \(\gamma\) and \(w\) can be achieved by either minimizing the negative marginal log-likelihood or considering a fully Bayesian approach [20]. Furthermore, the NN portion of DKL models can be pre-trained and its parameters fixed. This minimizes the number of parameters being optimized through the marginal log-likelihood and mitigates the possibility of DKL over-fitting the data [20].
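As a concrete illustration of Eq. (6), the sketch below composes a small fixed ReLU feature map, standing in for a pre-trained \(g_{a}^{w}\) (the weights here are random placeholders, not trained values), with the squared-exponential base kernel.

```python
# Minimal sketch of the deep kernel in Eq. (6): a fixed (pre-trained) ReLU
# feature map composed with the squared-exponential base kernel.
# The weight values are placeholders standing in for a trained network.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 2)), np.zeros(16)   # hidden layer, n = 2 inputs
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)    # s = 4 feature dimensions

def g(x):
    # NN feature map R^n -> R^s (weights assumed fixed after pre-training).
    h = np.maximum(W1 @ x + b1, 0.0)              # ReLU hidden layer
    return W2 @ h + b2

def k_dkl(x1, x2, sigma_s=1.0, length=1.0):
    # Deep kernel: base SE kernel evaluated on the NN features, Eq. (6).
    d2 = np.sum((g(x1) - g(x2)) ** 2)
    return sigma_s * np.exp(-d2 / (2.0 * length ** 2))

print(k_dkl(np.array([0.1, -0.3]), np.array([0.2, 0.0])))
```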
DKL combines the flexibility of deep NN with the principled uncertainty quantification of GPs. Such a combination is particularly important for problems that require learning complex non-linear dynamics with robustness analysis. The former often necessitates large amount of data and the latter reasoning about uncertainty. The power of DKL is illustrated in Figure 1, where we consider learning a 2D vector field \(f(\mathbf{x})=(\sin\left(x_{1}+x_{2}\right),\cos\left(x_{1}-x_{2}\right))^{T}\) with noise distribution \(\mathcal{N}(0,0.01)\) by using both a standard GP and DKL. We can observe that the \(k_{dkl}\) is able to learn the oscillatory behavior of the data, while the GP with \(k_{\gamma_{se}}\) only correlates nearby points. As a consequence, with the same amount of data, the predictions of DKL are more accurate and less uncertain compared to the ones of the GP. Particularly, in our synthesis framework, which relies on abstraction-based techniques, the lower uncertainty associated with DKL can lead to less conservative abstractions and probabilistic guarantees.
**Remark 1**.: _We note that care must be taken for both architecture design and training of the NN portions of DKL. If the prior is trained poorly, the kernel may underestimate uncertainty, resulting in an inaccurate model. Also, architecture of the NN can have a significant role for both computational tractability and accuracy of the abstraction. In Sec. VII, we compare a few architectures and discuss an appropriate form for control synthesis. As for training technique, our empirical results show that stochastic mini-batching [20] is highly effective._
## IV IMDP Abstraction
DKL allows one to predict the one-step evolution of System (1) from a given \(x\in X\) and \(u\in U\). To analyze LTLf properties of System (1), however, we need to reason over finite trajectories (with arbitrary lengths) of System (1) and consequently perform multi-step predictions of arbitrary length. Unfortunately, such analysis is already intractable for standard GPs even for a fixed finite horizon [25]. To address this problem we rely on finite abstractions, which in turn allow one to use existing LTLf control synthesis tools [11, 26]. Specifically, we use an Interval Markov Decision Process (IMDP) [13] as the abstraction model due to its ability to represent multiple levels of uncertainty.
**Definition 2** (IMDP).: _An interval Markov Decision Process (IMDP) is a tuple \(\mathcal{I}=(Q,A,\check{P},\hat{P},\Pi,L)\), where_
* \(Q\) _is a finite set of states,_
* \(A\) _is a finite set of actions,_
Fig. 1: Results on learning a 2D vector field with (left) DKL and (right) GP. Top: the true vector field in green, the model predictive posteriors in red, and 95% confidence intervals shaded in grey. Bottom: scaled correlation function for the first dimension at one point. 1000 samples were used to pre-train the NN. Both methods use 100 samples for predictions.
* \(\check{P}:Q\times A\times Q\rightarrow[0,1]\) _is a transition probability function that defines the lower bound of the transition probability from state_ \(q\in Q\) _to state_ \(q^{\prime}\in Q\) _under action_ \(a\in A\)_,_
* \(\hat{P}:Q\times A\times Q\rightarrow[0,1]\) _is a transition probability function that defines the upper bound of the transition probability from state_ \(q\in Q\) _to state_ \(q^{\prime}\in Q\) _under action_ \(a\in A\)_,_
* \(\Pi\) _is a set of atomic propositions, and_
* \(L:Q\to 2^{\Pi}\) _is a labeling function that assigns to each state_ \(q\in Q\) _a subset of_ \(\Pi\)_._
It holds for all \(q,q^{\prime}\in Q\) and \(a\in A(q)\) that \(\check{P}(q,a,q^{\prime})\leq\hat{P}(q,a,q^{\prime})\) and \(\sum_{q^{\prime}\in Q}\check{P}(q,a,q^{\prime})\leq 1\leq\sum_{q^{\prime}\in Q}\hat{P}(q,a,q^{\prime})\). A _finite path_ of \(\mathcal{I}\), denoted by \(\omega_{\mathcal{I}}\in Q^{*}\), is a sequence of states in \(Q\). A _strategy_ of \(\mathcal{I}\) is a function \(\pi_{\mathcal{I}}:Q^{*}\to A\) that maps \(\omega_{\mathcal{I}}\) to an action in \(A\).
### _Building the Abstraction_
#### IV-A1 States and Actions
First, we partition \(X\) into a set of convex regions \(\bar{Q}=\{q_{1},\ldots,q_{|\bar{Q}|}\}\), e.g., by using a grid. We consider an additional region \(q_{u}=\mathbb{R}^{n}\setminus X\) and call \(Q=\bar{Q}\cup\{q_{u}\}\) the set of IMDP states. We assume that the discretization \(\bar{Q}\) of \(X\) respects the regions of interest in \(R\), i.e., \(\forall r\in R\), \(\exists Q_{r}\subseteq\bar{Q}\) such that \(\cup_{q\in Q_{r}}q=r\). With an abuse of notation, we use \(q\) to denote both a state in the IMDP and its corresponding region, i.e., \(q\in Q\) and \(q\subset\mathbb{R}^{n}\). Note that for every \(x,x^{\prime}\in q\), \(L(x)=L(x^{\prime})\); accordingly, we set the IMDP labeling function as \(L(q)=L(x)\). The set of IMDP actions \(A\) is given by the set of actions \(U\) and all actions are allowed to be available at each state \(q\in Q\).
#### IV-A2 Probability Bounds
The key step to building an IMDP abstraction of System (1) is the computation of the transition probability functions \(\check{P}\) and \(\hat{P}\). Given \(q\subset\mathbb{R}^{n}\), \(a\in U\), and \(x\in X\), we define the _transition kernel_ \(T_{a}(q\mid x)\) as:
\[T_{a}(q\mid x)=\int_{q}\mathcal{N}(v\mid\mathbb{E}(f(x,a)\mid D),\] \[cov(f(x,a)\mid D)+\mathcal{V})dv \tag{7}\]
That is, \(T_{a}(q\mid x)\) is the probability that, given the data \(D\), our Gaussian prior assumption on \(f\), and an initial state \(x\), System (1) transitions to \(q\) under \(a\) in one time step. Note that \(T_{a}(q\mid x)\) is defined by marginalizing the DKL predictive distribution for \(f\) over the dynamics of System (1) and the resulting kernel is still Gaussian due to the closure of Gaussian random variables under linear combinations [15]. As we show in Theorem 2 in Sec.V, this marginalization guarantees that our abstraction accounts for the uncertainty coming from the DKL predictions.
Consequently, for \(q,q^{\prime}\in\bar{Q}\), it follows that
\[\check{P}(q,a,q^{\prime}) =\min_{x\in q}T_{a}(q^{\prime}\mid x), \tag{8}\] \[\hat{P}(q,a,q^{\prime}) =\max_{x\in q}T_{a}(q^{\prime}\mid x), \tag{9}\]
and for the unsafe region \(q_{u}\), it holds that
\[\check{P}(q,a,q_{u}) =1-\max_{x\in q}T_{a}(X\mid x), \tag{10}\] \[\hat{P}(q,a,q_{u}) =1-\min_{x\in q}T_{a}(X\mid x). \tag{11}\]
Lastly, since reaching \(q_{u}\) violates the requirement of not leaving \(X\), we set \(q_{u}\) to be a sink state, i.e., \(\forall a\in A\), \(\check{P}(q_{u},a,q_{u})=\hat{P}(q_{u},a,q_{u})=1\).
In the remainder of this section, we show how to efficiently compute the bounds in (8)-(9). We start by noticing that local linear relaxations for NNs can be built in constant time by utilizing algorithms in [21]. That is, for NN \(g_{a}^{w}\) and region \(q\subset\mathbb{R}^{n}\), one can find matrices \(\check{A}_{q},\hat{A}_{q}\in\mathbb{R}^{n\times n}\) and vectors \(\check{b}_{q},\hat{b}_{q}\in\mathbb{R}^{n}\) such that \(\forall x\in q\) it holds that:
\[\check{A}_{q}x+\check{b}_{q}\leq g_{a}^{w}(x)\leq\hat{A}_{q}x+\hat{b}_{q}. \tag{12}\]
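For intuition, the sketch below shows one simple way to obtain an output enclosure \(Z_{q,a}\) of a ReLU network over a box \(q\): interval bound propagation. This is a coarser alternative to the linear relaxation of Eq. (12) computed with the algorithms of [21], and the weights used here are placeholders.

```python
# Illustrative sketch (not the method of [21]): enclosing the NN image of a
# box q by interval bound propagation, yielding an axis-aligned
# hyper-rectangle Z_{q,a} that contains g_a^w(x) for every x in q.
import numpy as np

def interval_affine(lo, hi, W, b):
    # Exact interval image of the affine map W x + b over the box [lo, hi].
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def ibp_box(lo, hi, layers):
    # Propagate a box through (affine, ReLU)* affine layers.
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:                # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi                              # Z_{q,a} = [lo, hi]

rng = np.random.default_rng(2)
layers = [(rng.normal(size=(16, 2)), np.zeros(16)),
          (rng.normal(size=(4, 16)), np.zeros(4))]   # placeholder weights
print(ibp_box(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), layers))
```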
We use such relaxations to propagate \(q\) through the NN of a deep kernel to produce an axis-aligned hyper-rectangle \(Z_{q,a}\) that contains the output of \(g_{a}^{w}(x)\) for every \(x\in q\). Then, given \(Z_{q,a}\), we can use existing results for GPs [27, Propositions 4 and 7] to propagate \(Z_{q,a}\) through the squared exponential function and obtain the ranges of posterior mean and variance for all \(x\in q\) by solving convex optimization problems, namely one quadratic program and three linear programs. Then, we obtain mean bounds \(\underline{M}_{q,a},\overline{M}_{q,a}\in\mathbb{R}^{n}\) and variance bounds \(\underline{\Sigma}_{q,a},\overline{\Sigma}_{q,a}\in\mathbb{R}_{\geq 0}^{n}\) such that, for every \(x\in q\) and every \(j\in\{1,\ldots,n\}\),
\[\mathbb{E}(f^{(j)}(x,a)\mid D_{a})\in[\underline{M}_{q,a}^{(j)}, \overline{M}_{q,a}^{(j)}], \tag{13}\] \[cov(f^{(j)}(x,a)\mid D_{a})\in[\underline{\Sigma}_{q,a}^{(j)}, \overline{\Sigma}_{q,a}^{(j)}]. \tag{14}\]
Using these bounds, in the following theorem, we show how to efficiently compute bounds for (8)-(9)
**Theorem 1** (Efficient Computation for Tran. Prob. Bounds).: _For \(\mu\in\mathbb{R}\) and \(\sigma\in\mathbb{R}_{\geq 0}\) and closed interval \(\theta=[\underline{\theta},\overline{\theta}]\subset\mathbb{R}\), define function_
\[h(\theta,\mu,\sigma)=\frac{1}{2}\left(\operatorname{erf}\left(\frac{\overline{\theta}-\mu}{\sqrt{2\sigma}}\right)-\operatorname{erf}\left(\frac{\underline{\theta}-\mu}{\sqrt{2\sigma}}\right)\right).\]
_Further, given region \(q\in\bar{Q}\), let \([\underline{M}_{q,a},\overline{M}_{q,a}]\) and \([\underline{\Sigma}_{q,a},\overline{\Sigma}_{q,a}]\) be the posterior mean and variance bounds of DKL as reported in (13)-(14). Additionally, for region \(q^{\prime}\in\bar{Q}\), denote its centroid by \(c_{q^{\prime}}\) and define points_
\[\underline{z} =\operatorname*{arg\,min}_{z\in[\underline{M}_{q,a},\overline{M}_{ q,a}]}\|z-c_{q^{\prime}}\|,\] \[\overline{z} =\operatorname*{arg\,max}_{z\in[\underline{M}_{q,a},\overline{M}_{ q,a}]}\|z-c_{q^{\prime}}\|.\]
_Then, denoting the closed interval obtained by projecting \(q^{\prime}\subset\mathbb{R}^{n}\) onto the \(j\)-th dimension by \(q^{\prime(j)}\subset\mathbb{R}\), it holds that_
\[\min_{x\in q}T_{a}(q^{\prime}\mid x) \geq\prod_{j=1}^{n}\min_{\tau\in\{\underline{\Sigma}_{q,a}^{(j)},\overline{\Sigma}_{q,a}^{(j)}\}}h(q^{\prime(j)},\overline{z}^{(j)},\tau+\mathcal{V}^{(j,j)}),\] \[\max_{x\in q}T_{a}(q^{\prime}\mid x) \leq\prod_{j=1}^{n}\max_{\tau\in\{\underline{\Sigma}_{q,a}^{(j)},\overline{\Sigma}_{q,a}^{(j)}\}}h(q^{\prime(j)},\underline{z}^{(j)},\tau+\mathcal{V}^{(j,j)}),\]
_where \(\mathcal{V}^{(j,j)}\) is the \(j,j\) element of the noise covariance matrix \(\mathcal{V}\)._
Proof.: In the proof, we consider the \(\min\) case; the \(\max\) case follows similarly. Note that \(h(\theta,\mu,\sigma)\) is the integral of \(\mathcal{N}(\mu,\sigma)\) over \(\theta\). Then, under the assumption of diagonal \(cov(f^{(j)}(x,a)\mid D_{a})\) and \(\mathcal{V}\), it holds that for every \(x\in q\),
\[T_{a}(q^{\prime}\mid x)=\prod_{j=1}^{n}h(q^{\prime(j)},\mathbb{E}(f^{(j)}(x,a)\mid D),\] \[cov(f^{(j)}(x,a)\mid D)+\mathcal{V}^{(j,j)})\geq\] \[\prod_{j=1}^{n}\min_{\mu\in[\underline{M}_{q,a}^{(j)},\overline{M}_{q,a}^{(j)}],\ \tau\in[\underline{\Sigma}_{q,a}^{(j)},\overline{\Sigma}_{q,a}^{(j)}]}h(q^{\prime(j)},\mu,\tau+\mathcal{V}^{(j,j)}).\]
Consequently, what is left to show is how to place mean \(\mu\) and variance \(\tau\) of a uni-dimensional Gaussian to minimize its integral over the respective dimension of \(q^{\prime}\). Each of these is minimized by first maximizing the distance of \(z^{(j)}\) from \(c_{q^{\prime}}^{(j)}\), hence \(z\) can be chosen according to \(\arg\max_{z\in[\underline{M}_{q,a}^{(j)},\overline{M}_{q,a}^{(j)}]}\|z-c_{q^{\prime}}\|\). Then, there are two cases: \(z^{(j)}\in q^{\prime(j)}\) and \(z^{(j)}\not\in q^{\prime(j)}\). In the first case, \(T\) is minimized if we minimize the probability mass in \(q^{\prime}\), which results in \(\tau=\overline{\Sigma}_{q,a}^{(j)}\). In the second case, with a similar reasoning we obtain \(\tau=\underline{\Sigma}_{q,a}^{(j)}\) or \(\tau=\overline{\Sigma}_{q,a}^{(j)}\).
Theorem 1 shows that we can compute transition bounds \(\check{P}\) and \(\hat{P}\) by simply evaluating an error function at \(4n\) points, thus guaranteeing efficient abstraction construction.
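A minimal sketch of this computation is given below. It assumes the per-dimension posterior mean and variance intervals from (13)-(14) and the box \(q^{\prime}\) are already available, and it uses illustrative numbers only; it is not the tool's implementation.

```python
# Sketch of the bound computation in Theorem 1, assuming the posterior mean
# and variance intervals [M_lo, M_hi], [S_lo, S_hi] and the box q' are given
# per dimension; all numerical values below are illustrative.
import math
import numpy as np

def h(theta_lo, theta_hi, mu, sigma):
    # Mass of N(mu, sigma) (sigma is a variance) on [theta_lo, theta_hi].
    return 0.5 * (math.erf((theta_hi - mu) / math.sqrt(2 * sigma))
                  - math.erf((theta_lo - mu) / math.sqrt(2 * sigma)))

def transition_bounds(q_lo, q_hi, M_lo, M_hi, S_lo, S_hi, noise_var):
    # Lower/upper bounds on T_a(q' | x) over all x in q (Theorem 1).
    c = (q_lo + q_hi) / 2.0                        # centroid of q'
    z_near = np.clip(c, M_lo, M_hi)                # closest admissible mean
    z_far = np.where(np.abs(M_lo - c) > np.abs(M_hi - c), M_lo, M_hi)
    p_lo, p_hi = 1.0, 1.0
    for j in range(len(c)):
        p_lo *= min(h(q_lo[j], q_hi[j], z_far[j], s + noise_var[j])
                    for s in (S_lo[j], S_hi[j]))
        p_hi *= max(h(q_lo[j], q_hi[j], z_near[j], s + noise_var[j])
                    for s in (S_lo[j], S_hi[j]))
    return p_lo, p_hi

print(transition_bounds(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                        np.array([0.2, 0.1]), np.array([0.6, 0.5]),
                        np.array([0.01, 0.01]), np.array([0.05, 0.05]),
                        np.array([0.01, 0.01])))
```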
## V Control Synthesis and Refinement
Once we obtain IMDP abstraction \(\mathcal{I}\), our goal is to synthesize a strategy \(\pi_{\mathcal{I}}\) that maximizes the probability of satisfying specification \(\varphi\) on \(\mathcal{I}\) and then map it back to System (1) to obtain control strategy \(\pi\).
Let \(\mathcal{D}(Q)\) be the set of all probability distributions over \(Q\). We define an _adversary_\(\nu_{\mathcal{I}}:Q^{*}\times A\rightarrow\mathcal{D}(Q)\) to be a function that maps a finite path \(\omega_{\mathcal{I}}\in Q^{*}\) and an action \(a\in A\) to a transition probability distribution such that, \(\forall q^{\prime}\in Q\),
\[\check{P}(\text{last}(\omega_{\mathcal{I}}),a,q^{\prime})\leq\nu_{\mathcal{I}}(\omega_{\mathcal{I}},a)(q^{\prime})\leq\hat{P}(\text{last}(\omega_{\mathcal{I}}),a,q^{\prime}),\]
where \(\text{last}(\omega_{\mathcal{I}})\) is the last state in \(\omega_{\mathcal{I}}\). Given \(\pi_{\mathcal{I}}\) and \(\nu_{\mathcal{I}}\), a probability measure \(Pr\) over paths in \(Q^{*}\) is induced [11]. Our objective can then be translated as finding an optimal \(\pi_{\mathcal{I}}^{*}\) that is robust to all uncertainties induced by abstraction, i.e.,
\[\pi_{\mathcal{I}}^{*}=\arg\max_{\pi_{\mathcal{I}}}\min_{\nu_{\mathcal{I}}}Pr( \omega_{\mathcal{I}}\models\varphi\mid\pi_{\mathcal{I}},\nu_{\mathcal{I}}, \omega_{\mathcal{I}}(0)=q) \tag{15}\]
\(\pi_{\mathcal{I}}^{*}\) can then be computed using off-the-shelf tools with a time complexity polynomial in \(|Q|\) [11].
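For intuition, the following sketch shows the kind of robust (interval) value iteration such tools perform, here for a plain reachability objective; it is a simplified stand-in for the cited synthesis tools, and the small IMDP in the demo is made up for illustration.

```python
# Simplified robust value iteration on an IMDP for reachability, assuming
# lower/upper transition arrays P_lo/P_hi of shape (|A|, |Q|, |Q|) are given.
import numpy as np

def worst_case_value(p_lo, p_hi, values):
    # Adversary puts as much mass as possible on low-value successors,
    # respecting the interval constraints and summing to one.
    order = np.argsort(values)                  # low-value states first
    p = p_lo.copy()
    budget = 1.0 - p.sum()
    for q in order:
        add = min(p_hi[q] - p_lo[q], budget)
        p[q] += add
        budget -= add
    return float(p @ values)

def synthesize(P_lo, P_hi, target, n_iter=100):
    # Pessimistic (lower-bound) reachability values and a greedy policy.
    nA, nQ, _ = P_lo.shape
    V = target.astype(float)
    for _ in range(n_iter):
        Vnew = np.array([max(worst_case_value(P_lo[a, q], P_hi[a, q], V)
                             for a in range(nA)) for q in range(nQ)])
        V = np.where(target, 1.0, Vnew)
    policy = np.array([int(np.argmax([worst_case_value(P_lo[a, q], P_hi[a, q], V)
                                      for a in range(nA)])) for q in range(nQ)])
    return V, policy

# Tiny demo: 3 states, 1 action, state 2 is the target.
P_lo = np.array([[[0.1, 0.2, 0.5], [0.0, 0.3, 0.4], [0.0, 0.0, 1.0]]])
P_hi = np.array([[[0.3, 0.4, 0.7], [0.2, 0.5, 0.7], [0.0, 0.0, 1.0]]])
print(synthesize(P_lo, P_hi, np.array([False, False, True])))
```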
We can then define \(\pi\) according to \(\pi_{\mathcal{I}}^{*}\) by using a mapping between trajectories of System (1) and paths of \(\mathcal{I}\). Let \(\mathbb{M}:X\rightarrow\bar{Q}\) be a mapping such that \(\mathbb{M}(x)=q\) for all \(x\in q\). With an abuse of notation, for a finite trajectory \(\omega_{\mathbf{x}}\in X^{*}\) with length \(N\), we define \(\mathbb{M}(\omega_{\mathbf{x}})=\mathbb{M}(\omega_{\mathbf{x}}(0))\dots \mathbb{M}(\omega_{\mathbf{x}}(N))\in Q^{*}\). Then, the control strategy of System (1) is given by:
\[\pi(\omega_{\mathbf{x}})=\pi_{\mathcal{I}}^{*}(\mathbb{M}(\omega_{\mathbf{x}})). \tag{16}\]
Furthermore, for \(\pi_{\mathcal{I}}^{*}\), we also obtain lower and upper bound probabilities of satisfaction of \(\varphi\) from every \(q\in\bar{Q}\) as
\[\tilde{p}(q)=\min_{\nu_{\mathcal{I}}}Pr(\omega_{\mathcal{I}}\models\varphi\mid\pi_{\mathcal{I}}^{*},\nu_{\mathcal{I}},\omega_{\mathcal{I}}(0)=q),\] \[\hat{p}(q)=\max_{\nu_{\mathcal{I}}}Pr(\omega_{\mathcal{I}}\models\varphi\mid\pi_{\mathcal{I}}^{*},\nu_{\mathcal{I}},\omega_{\mathcal{I}}(0)=q).\]
In the following theorem, we show that these probability bounds also hold for System (1).
**Theorem 2** (Correctness).: _For \(q\in Q\), let \(\tilde{p}(q)\) and \(\hat{p}(q)\) be the lower- and upper-bound probabilities of satisfying \(\varphi\) from \(q\). Then, it holds that_
\[P^{x_{0}}(\omega_{\mathbf{x}}\models\varphi\mid D,\pi,x_{0}\in q)\in[\tilde{p }(q),\hat{p}(q)].\]
Proof.: \(\mathbf{x}^{(j)}(k+1)=f^{(j)}(x,a)+\mathbf{v}^{(j)}(k)\) is a Gaussian process with zero mean and covariance \(k_{dkl}(x,x)+\mathcal{V}^{(j,j)}\). Consequently, for \(x_{1},\ldots,x_{l}\in D_{a}\) the joint distribution of \(f(x_{1},a)+\mathbf{v}(k),\ldots,f(x_{l},a)+\mathbf{v}(k)\) is still Gaussian. Hence, the transition kernel \(T_{a}(q\mid x)\) in (7) defines the one-step dynamics of System (1). Then, for any strategy \(\pi\), the interval between the lower and upper bound probabilities returned by the IMDP from initial region \(q\), as built in Sec. IV, contains \(P^{x_{0}}(\omega_{\mathbf{x}}\models\varphi\mid D,\pi,x_{0}\in q)\), as follows from [8, Theorem 2].
_Refinement:_ Recall that abstraction \(\mathcal{I}\) relies on a discretization of \(X\). The uncertainty induced by the discretization may result in undesirable outcomes where large sections of the space have a large gap between \(\tilde{p}\) and \(\hat{p}\). We consider a refinement strategy similar to that in [17, 28] to efficiently reduce this conservatism. In particular, to decide on which states to refine, we define a scoring function \(\beta:\bar{Q}\rightarrow\mathbb{R}_{\geq 0}\) as
\[\beta(q)=(\hat{p}(q)-\tilde{p}(q))\sum_{a\in U}\sum_{q^{\prime}\in Q}(\hat{P}(q,a,q^{\prime})-\check{P}(q,a,q^{\prime})).\]
\(\beta\) gives a higher score to states that have the most uncertainty associated with satisfying \(\varphi\) and to states with conservative outgoing transition probabilities. We refine the \(n_{\text{ref}}\) states with the highest score, and for each state we only split in half the dimension that minimizes the volume of \(Z_{q,a}\) (i.e., the conservatism induced by the NN linear relaxation).
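A small sketch of this scoring and selection step, assuming the satisfaction bounds \(\tilde{p},\hat{p}\) and the transition bounds are already available as arrays:

```python
# Sketch of the refinement score beta(q) and state selection, assuming
# p_lo/p_hi have shape (|Q|,) and P_lo/P_hi have shape (|A|, |Q|, |Q|).
import numpy as np

def refinement_scores(p_lo, p_hi, P_lo, P_hi):
    # beta(q) = (p_hi - p_lo) * sum over actions and successors of the
    # transition-probability gaps, as in the scoring function above.
    gap = (P_hi - P_lo).sum(axis=(0, 2))       # sum over actions, successors
    return (p_hi - p_lo) * gap

def states_to_refine(p_lo, p_hi, P_lo, P_hi, n_ref):
    scores = refinement_scores(p_lo, p_hi, P_lo, P_hi)
    return np.argsort(scores)[::-1][:n_ref]    # n_ref highest-scoring states
```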
## VI Case Studies
We evaluate our DKL control synthesis framework on various nonlinear systems and case studies. First, we assess the learning performance of DKL under different NN architectures against other GP-based methods. Then, we show the efficacy of our control synthesis framework in various environments and specifications.
All experiments were run on an Intel Core i7-12700K CPU at 3.60GHz with 32 GB of RAM limited to 10 threads. Our tool is available on GitHub3.
### _Setup and Training_
We consider three nonlinear systems from [3, 17] as shown in Table I. To learn their dynamics, we used seven learning models:
* GP: the squared exponential kernel trained on \(D_{a}^{\text{pred}}\).
* NN-GP: joint NN and GP model, where the NN is trained as a predictor of the dynamics on \(D_{a}\) and a GP is regressed to predict the error of the NN from truth on \(D_{a}^{\text{pred}}\).
* NN-GP\({}^{\text{L}}\)(Limited NN-GP): NN-GP except that the NN is trained only on \(D_{a}^{\text{pred}}\).
* DKL\({}^{\text{F}}\): DKL with a NN that is trained on \(D_{a}\) and the full output of the NN is provided as an input to the base kernel (see Figure 2), i.e, \(k_{dkl}(x_{1},x_{2})=k_{\gamma}(g_{a}^{w}(x_{1}),g_{a}^{w}(x_{2}))\). The base kernel parameters are trained on \(D_{a}^{\text{pred}}\).
* DKL\({}^{\text{FL}}\) (Limited DKL\({}^{\text{F}}\)): similar to DKL\({}^{\text{F}}\) except that the NN is trained only on \(D_{a}^{\text{pred}}\).
* DKL\({}^{\text{S}}\): similar to DKL\({}^{\text{F}}\) but only the corresponding output dimension of the NN is provided as an input to the base kernel (see Figure 2), i.e., \(k_{dkl}(x_{1},x_{2})=k_{\gamma}(g_{a}^{w}(x_{1})^{(j)},g_{a}^{w}(x_{2})^{(j)})\).
* DKL\({}^{\text{SL}}\) (Limited DKL\({}^{\text{S}}\)): similar to DKL\({}^{\text{S}}\) except that the NN is trained only on \(D_{a}^{\text{pred}}\).
All NNs use the ReLU activation function. We trained the NN portion of the DKL models with stochastic mini-batching as a scaled predictor of the dynamics and fixed the parameters before learning the kernel parameters via maximum log likelihood. Details on the architectures and training datasets are in Table I. The primary difference between DKL\({}^{\text{F}}\) and DKL\({}^{\text{S}}\) models is the relation between the output of the NN and the input of the kernel as illustrated in Figure 2.
### _Accuracy of Deep Kernel Learning_
We first demonstrate the advantages of DKL by comparing the predictive accuracy of the learning models above. We define the predictive mean and variance error of each model at a point \(x\) under action \(a\) as
\[\text{err}_{\mu}(x,a) =\|\mathbb{E}(f(x,a)\mid D)-f(x,a)\|_{2},\] \[\text{err}_{\sigma}(x,a) =\mathrm{trace}(cov(f(x,a)\mid D))^{\frac{1}{2}},\]
respectively. Table II shows the maximum error values over 100,000 test points for a fixed action.
In all cases, GP has the worst performance in mean error and compensates with large variance. For low dimensional systems, NN-GP performs well in mean error (err\({}_{\mu}\)) but retains a large uncertainty (err\({}_{\sigma}\)) due to poor correlation between data points in the GP. Also, as the number of dimensions increases, its predictions become drastically more uncertain. The DKL models have lower uncertainty (err\({}_{\sigma}\)) than GP and NN-GP models across the board, and among DKL models, DKL\({}^{\text{S}}\) generally has the best performance (small err\({}_{\sigma}\)). As the number of dimensions increases, the advantages of the DKL method become more significant in both mean error and variance. This is mainly due to the NN used in the prior for the kernel, which improves both mean and variance accuracy, whereas the NN in NN-GP only improves the predicted mean, not the variance, as the GP prior contains insufficient information. Among NN architectures for DKL\({}^{\text{F}}\) and DKL\({}^{\text{S}}\) models, DKL\({}^{\text{S}}\) (Figure 2, left) provides the best performance. This is because the NN captures the correlation between dimensions sufficiently well, resulting in only the corresponding NN output being required for the base kernel to predict accurately.
In terms of the effect of data on training, DKL\({}^{\text{F}}\) and DKL\({}^{\text{S}}\), where more data was used to train the NN, perform better than DKL\({}^{\text{FL}}\) and DKL\({}^{\text{SL}}\). This shows that more data leads to a more accurate NN prior for the kernel. Nevertheless, overfitting could also be a concern. As noted in Sec. III, an ill-formed prior may result in uncertainty being underestimated. We find that this is more likely to happen with low dimensional systems where a standard GP already performs sufficiently well. For example, the DKL\({}^{\text{S}}\) with the proposed NN architecture underestimates uncertainty for one of the four modes in the 2D system, i.e., only 21% of predictions contained the true dynamics within 2 standard deviations. However, by adding another layer to the NN or altering the training parameters, we can remove this artifact and maintain \(>\)95% of predictions containing truth within two standard deviations.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline System Type & Dim. & \(|U|\) & \(|D_{a}|\) & \(|D_{a}^{\text{pred}}|\) & \#L & \#N/L \\ \hline Non-linear [3, 17] & 2D & 4 & 1,000 & 100 & 2 & 64 \\ Dubin's Car [17] & 3D & 7 & 10,000 & 400 & 2 & 128 \\ 2nd-order Car [17] & 5D & 3 & 50,000 & 250 & 3 & 64 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Overview of the considered systems with dimensionality (Dim), number of actions in \(U\), number of data points collected per action \(D_{a}\), number of data points used for posterior predictions \(D_{a}^{\text{pred}}\subset D_{a}\), and number of layers (# L) and neurons per layer (# N/L) of the NNs considered.

\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{2D System} & \multicolumn{2}{c|}{3D System} & \multicolumn{2}{c}{5D System} \\ \cline{2-7} & err\({}_{\mu}\) & err\({}_{\sigma}\) & err\({}_{\mu}\) & err\({}_{\sigma}\) & err\({}_{\mu}\) & err\({}_{\sigma}\) \\ \hline GP & 0.677 & 0.3473 & 1.857 & 0.5261 & 0.7479 & 0.7825 \\ NN-GP & **0.0591** & 0.3215 & 0.1158 & 0.5209 & 0.2856 & 0.7797 \\ NN-GP\({}^{\text{L}}\) & 0.0914 & 0.3218 & 0.1297 & 0.5214 & 0.2655 & 0.7794 \\ DKL\({}^{\text{F}}\) & 0.2575 & 0.1817 & 0.2414 & 0.1643 & 0.1909 & 0.2626 \\ DKL\({}^{\text{FL}}\) & 0.2769 & 0.1854 & 0.2068 & 0.2645 & 0.2654 & 0.2590 \\ DKL\({}^{\text{S}}\) & 0.1716 & **0.1276** & **0.0856** & **0.1545** & **0.1294** & 0.1778 \\ DKL\({}^{\text{SL}}\) & 0.1960 & 0.1373 & 0.1727 & 0.1575 & 0.2324 & **0.1775** \\ \hline \hline \end{tabular}
\end{table} TABLE II: Maximum predictive mean error (err\({}_{\mu}\)) and uncertainty (err\({}_{\sigma}\)) of each model over 100,000 test points for a fixed action; best values per column in bold.
Fig. 2: Illustration of deep kernel models. Red: NN input layer, Green: hidden layers, Grey: NN output layer, Blue: base kernel. Left: DKL\({}^{\text{S}}\) provides single dimensional input to the kernel \(k\). Right: DKL\({}^{\text{F}}\) provides all outputs of the NN to each kernel.
### _Synthesis Results_
Here, we illustrate the efficacy of our control synthesis framework. Our metrics are the _empirical validation_ of guarantees, _computation time_, and the _percent volume_ of the space from which the system under the synthesized control strategy is guaranteed to satisfy the specification with probability greater than or equal to 0.95, called \(Q^{yes}\), with probability less than 0.95, called \(Q^{no}\), and the remaining states, called \(Q^{?}\).
#### VI-C1 Refinement and Computation Time
We consider the 3D Dubin's car system, with the state space representing position and orientation, and synthesize a strategy for a static overtaking scenario as shown in Figure 3. We label the stationary car as \(b\) and the goal region as \(a\). The LTLf specification \(\varphi_{1}=\mathcal{G}(\neg b)\wedge\mathcal{F}(a)\) then defines the task. The control synthesis results can be seen in Table III and a visualization is shown in Figure 3. Simulated trajectories under the strategy are shown in black lines, starting from the black dot and ending at the purple star. Note that we were unable to verify the NN-GP model due to a time out.
We see that DKL\({}^{\text{S}}\) has the best performance, leaving only 13.27% of the state space volume as undetermined (\(Q^{?}\)). This is expected, as the DKL\({}^{\text{S}}\) model has the highest accuracy, as shown in Table II. This also follows the prediction that the lower uncertainty in DKL models would result in a less conservative abstraction. We note that DKL\({}^{\text{F}}\) provides guarantees on a lower volume of space than GP, but is capable of achieving similar results in one tenth the time and outperforms the GP on the volume of \(Q^{yes}\).
We note that the computational bottleneck for GP abstractions comes from bounding the kernel outputs, and both DKL\({}^{\text{S}}\) and DKL\({}^{\text{F}}\) significantly outperform the GP model in this metric. The DKL\({}^{\text{S}}\) bounds the kernel in the shortest time due to the single-dimensional input to the kernel. The DKL\({}^{\text{F}}\) outperforms the GP in time due to the NN mapping the kernel inputs into a scaled space. This allows for all of the data points used in the kernel to provide useful information about the mean and variance, as well as producing a smaller input space to calculate the bounds, unlike the GP model. In practice, we find that scaling is more effective in larger spaces. The NN-GP model timed out during kernel bounding, taking more than 5000 minutes to bound only three of seven modes.
In each case the final abstraction consists of roughly one tenth the number of states that a uniform discretization at the finest level would produce. The tight satisfaction probabilities we see in the final abstraction for DKL highlight the efficacy of our abstraction procedure. This method is particularly effective for the DKL models. Since the NN linear relaxation holds for every \(x\) inside of a discrete region \(q\), refining region \(q\) allows for the re-use of the linear relaxation. This enables the DKL models to take a fraction of the time as the GP model during refinements, as the linear relaxation takes significantly longer time to compute than the kernel bounds in DKL models. The linear relaxation is also the primary contributor to the conservative mean and variance bounds of the kernel, hence refinements can result in a greater change in posterior bounds for the DKL models than the GP model.
To validate the accuracy of the satisfaction probabilities, we simulate the evolution of the system under the synthesized control strategy
Fig. 3: Region labeling and lower-bound satisfaction probabilities \(\tilde{p}(q)\) for experiment 1. Left and middle: the two DKL models; right: GP. Top: the initial abstraction; bottom: after two refinements. Green: \(Q^{yes}\), yellow: \(Q^{?}\), red: \(Q^{no}\).
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\# Ref.} & \multirow{2}{*}{\(|Q|\)} & \multirow{2}{*}{\(Q^{yes}\)} & \multirow{2}{*}{\(Q^{no}\)} & \multirow{2}{*}{\(Q^{?}\)} & \multicolumn{5}{c}{Time (min.)} \\ \cline{7-11} & & & & & & NN Lin. Rel. & Kernel Bounds & Trans. Prob. & Synthesis & Total \\ \hline \multirow{2}{*}{GP} & 0 & 20,482 & 17.67 & 40.86 & 41.47 & – & 4,866.46 & 5.87 & 3.53 & 4,875.86 \\ & 2 & 40,482 & 36.67 & 43.40 & 19.92 & – & 7.00 & 17.00 & 2.24 & 26.24 \\ \hline NN-GP & 0 & 20,482 & – & – & – & 477.45 & Time Out & – & – & – \\ \hline \multirow{2}{*}{DKL\({}^{\text{F}}\)} & 0 & 20,482 & 18.60 & 36.27 & 45.13 & 418.8 & 63.80 & 4.98 & 1.76 & 489.34 \\ & 2 & 40,490 & 38.06 & 39.04 & 22.90 & – & 1.42 & 16.72 & 1.92 & 20.06 \\ \hline \multirow{2}{*}{DKL\({}^{\text{S}}\)} & 0 & 20,482 & 20.05 & 36.46 & 43.49 & 418.8 & 8.80 & 4.73 & 1.31 & 433.64 \\ & 2 & 38,885 & **42.65** & **44.08** & **13.27** & – & **0.11** & **15.37** & **1.38** & **16.86** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Synthesis results for four different models of the 3D system. Reported values are percent volume of space for \(Q^{yes/no/?}\) for the initial abstraction and after two refinements. We also report the time (minutes) taken for the NN linear relaxation, kernel bounding, transition probability calculation, and synthesis. Note that the transition probability times include the time taken to write the values to a file. Note that we recalculate the transition probabilities for every state during refinement, resulting in longer times as more states are added.
1000 times from an initial region in the left half of the space, where \(0.3829\leq p(q)\leq 1.0\), and find that 100% of simulations satisfy the specification. Similarly, simulating from a region where \(0.9793\leq p(q)\leq 1.0\), we find 100% of simulations satisfy the specification. Simulating from an initial state where the maximum probability of satisfaction is 0 results in no trajectory remaining safe.
#### VI-C2 Control Synthesis with Complex Specifications
To show our framework can handle complex specifications, we use the 2D system and perform control synthesis given the same labeling considered in [3, 17] and LTLf specification \(\varphi_{2}=\mathcal{G}(\neg b)\wedge\mathcal{F}(a)\wedge\mathcal{F}(c)\). In this low dimensionality, there is little difference between the synthesis results for the 4 learning models. We show results for DKL\({}^{\text{F}}\). The abstraction consists of 861 states and was constructed in 10.17 minutes. We synthesized a control strategy under which 53.22% of the space is in \(Q^{yes}\) and 26.76% in \(Q^{?}\), which is comparable to the results in [3, 17]. Note that [17] assumes the system model is given as a NN, whereas here we use only data and achieve similar results. After two refinements, which take 2.4 minutes, of which 1.8 are synthesis, a control strategy where 73.48% of the space is in \(Q^{yes}\) and 0.07% in \(Q^{?}\) is synthesized; results are shown in Figure 4, where simulated trajectories begin at black dots and end at purple stars.
We validate the probabilities by simulating 1000 times from the initial states of the trajectories shown in Figure 4, for which \(0.9573\leq p(q)\leq 1\), \(0.9998\leq p(q)\leq 1\), and \(0.9916\leq p(q)\leq 1\), and find that 100% of simulations satisfy the specification, validating the bounds.
#### VI-C3 Scalability to Higher Dimensions
We demonstrate scalability by synthesizing a control strategy for the 5D system using the environment described in [17] and LTLf specification \(\varphi_{1}=\mathcal{G}(\neg b)\wedge\mathcal{F}(a)\). Here we only show results for DKL\({}^{\text{F}}\), as the GP model cannot scale. The initial abstraction consists of 21,171 regions, and it took 260 minutes to calculate the NN linear relaxation and less than 6 minutes to calculate the bounds on mean and variance for the base kernel. Construction of the IMDP and synthesis took 18 minutes for the first abstraction, producing a control strategy where 17.14% of the space is \(Q^{yes}\) and 66.98% is \(Q^{?}\). After two refinements, which took 60 minutes with only 12 seconds being used to calculate the kernel bounds, we synthesized a control strategy where 44.36% is \(Q^{yes}\) and 39.46% is \(Q^{?}\), producing comparable results to [17] with less restrictive assumptions on the dynamics. The abstraction for this refinement consists of 42,633 states, again having one tenth the number of states of a uniform discretization; results are shown in Figure 4.
We simulate 1000 trajectories from initial states where \(0.7024\leq p(q)\leq 1.0\) and \(0\leq p(q)\leq 0.6830\) and find 100% and 0% of simulations satisfy the specification respectively, validating the bounds.
## VII Conclusion
We introduced an abstraction framework for unknown, stochastic dynamics via DKL, which can be used to synthesize strategies with guarantees on the behavior of the system. We showed that the DKL models utilize the NN to transform kernel inputs into a space that enables more accurate predictions, easier computation of posterior bounds, and faster synthesis times. DKL, and our framework, enable data-driven verification of high-dimensional systems. We note that DKL models can utilize significantly larger data sets than GPs by optimizing the NN prior over all the data and using a subset of the data for posterior predictions. This is particularly promising for systems with millions of data points available for evaluation, as this allows for a computationally tractable form of uncertainty quantification. Our abstraction procedure relies on the system having discrete modes, but recent works have provided methods for IMDP synthesis over continuous action spaces [29]. We hope to expand our method to this domain in future work.
|
2305.19944 | Irreducibility of eventually $2$-periodic curves in the moduli space of
cubic polynomials | Consider the moduli space, $\mathcal{M}_{3},$ of cubic polynomials over
$\mathbb{C}$, with a marked critical point. Let $\mathscr{S}_{k,n}$ be the set
of all points in $\mathcal{M}_{3}$ for which the marked critical point is
strictly $(k,n)$-preperiodic. Milnor conjectured that the affine algebraic
curves $\mathscr{S}_{k,n}$ are irreducible, for all $k \geq 0, n>0$. In this
article, we show the irreducibility of eventually $2$-periodic curves, i.e.
$\mathscr{S}_{k,2},\; k\geq 0$ curves. We also note that the curves,
$\mathscr{S}_{k,2},\; k\geq 0$, exhibit a possible splitting-merging phenomenon
that has not been observed in earlier studies of $\mathscr{S}_{k,n}$ curves.
Finally, using the irreducibility of $\mathscr{S}_{k,2}$ curves, we give a new
and short proof of Galois conjugacy of unicritical points lying on
$\mathscr{S}_{k,2}$, for even natural number $k$. | Niladri Patra | 2023-05-31T15:26:56Z | http://arxiv.org/abs/2305.19944v2 | # Irreducibility of preperiodic curves in the moduli space of cubic polynomials
###### Abstract.
Consider the moduli space, \(\mathcal{M}_{3}\), of cubic polynomials over \(\mathbb{C}\), with a marked critical point. Let \(\Sigma_{k,n}\) be the set of all points for which the marked critical point is strictly \((k,n)\)-preperiodic. Milnor conjectured that \(\Sigma_{k,n}\)'s are irreducible curves in \(\mathcal{M}_{3}\), for all \(k\geq 0,n>0\). Buff, Epstein, and Koch have proved this conjecture for \(k\geq 0,n=1\). In this article, we show the irreducibility of \(\Sigma_{k,2},k\geq 0\) curves. The curves, \(\Sigma_{k,2},k\geq 0\), exhibit a splitting-merging phenomenon that does not occur for \(\Sigma_{k,1}\) curves. Furthermore, using the irreducibility of \(\Sigma_{k,2}\) curves, we prove an irreducibility result in the unicritical cubic case. Stronger versions of this result in the unicritical cubic case, have been proved by Vefa Goksel and Buff, Epstein, and Koch, but our methods are different. Finally, we show that our method does not extend directly for \(\Sigma_{k,q}\) curves, where \(q\) is an odd prime.
2010 Mathematics Subject Classification: Primary 11R09, Secondary 37P15, 37P55
## 1. Introduction
Let \(f\) be a polynomial over \(\mathbb{C}\). We denote the iteration of \(f\) with itself \(m\geq 0\) times, as \(f^{m}\), i.e. \(f^{0}=Id,\ f^{m}=f^{m-1}\circ f,\ \forall\ m\in\mathbb{N}\). For any point \(x\in\mathbb{C}\), the _forward orbit_ of \(x\) is the set, \(\{x,f(x),f^{2}(x),...,f^{m}(x),...\}=\{f^{m}(x)|m\geq 0\}\).
A point \(x\in\mathbb{C}\) is called a _periodic point_ of period \(n\) iff \(f^{n}(x)=x\). It is called _strictly n-periodic point_ iff \(n\) is the smallest positive integer for which \(f^{n}(x)=x\). A point \(x\in\mathbb{C}\) is called a \((k,n)\)-_preperiodic point_ iff \(f^{k+n}(x)=f^{k}(x)\). It is called _strictly \((k,n)\)-_preperiodic point_ iff \(f^{k+n}(x)=f^{k}(x)\) and \(f^{l+m}(x)\neq f^{l}(x)\), for any \(0\leq l\leq k,1\leq m\leq n,(k,n)\neq(l,m)\in\mathbb{Z}^{2}\).
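For illustration (this is not part of the paper's argument), the following small sketch checks strict \((k,n)\)-preperiodicity of a point directly from this definition, using exact rational arithmetic to avoid floating-point equality issues.

```python
# Illustrative check of strict (k, n)-preperiodicity under a polynomial map,
# using exact rational arithmetic.
from fractions import Fraction

def is_strictly_preperiodic(f, x, k, n):
    # True iff f^(k+n)(x) = f^k(x) and no admissible pair (l, m) with
    # 0 <= l <= k, 1 <= m <= n, (l, m) != (k, n) satisfies f^(l+m)(x) = f^l(x).
    orbit = [x]
    for _ in range(k + n):
        orbit.append(f(orbit[-1]))
    if orbit[k + n] != orbit[k]:
        return False
    for l in range(k + 1):
        for m in range(1, n + 1):
            if (l, m) != (k, n) and orbit[l + m] == orbit[l]:
                return False
    return True

# For f(z) = z^2 - 1, the critical point 0 is strictly (0, 2)-periodic:
# 0 -> -1 -> 0, so it is 2-periodic but not fixed, and not strictly (1, 2).
f = lambda z: z * z - Fraction(1)
print(is_strictly_preperiodic(f, Fraction(0), 0, 2))   # True
print(is_strictly_preperiodic(f, Fraction(0), 1, 2))   # False
```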
For a polynomial \(f\in\mathbb{C}[z]\), the roots of the derivative of \(f\) are called the _finite critical points_ of \(f\). Let us consider the set \(S_{3}\) of all cubic polynomials over \(\mathbb{C}\), with a marked (finite) critical point. Two polynomials that are affine conjugate to each other, exhibit the same dynamical behaviour. So, we consider the quotient space of \(S_{3}\), by identifying polynomials that are affine conjugate to each other, and the affine conjugation map sends the marked critical point of the first to the marked critical
point of the latter. This space, \(\mathcal{M}_{3}\), is called _the moduli space of cubic polynomials with a marked critical point_.
A polynomial in \(\mathbb{C}[z]\) is called _monic_ if its leading coefficient is one, and called _reduced_ if the sum of its roots is zero. Observe that, any polynomial is affine conjugate to a monic, reduced polynomial. Hence, \(\mathcal{M}_{3}\) can be seen as set of affine conjugacy classes of monic, reduced cubic polynomials over \(\mathbb{C}\), with a marked critical point. From [10], every monic, reduced cubic polynomial, with a marked critical point, can be written in the modified _Branner-Hubbard normal form_ as,
\[f_{a,b}(z)=z^{3}-3a^{2}z+2a^{3}+b, \tag{1.1}\]
with \(\pm a\) as its finite critical points, \(f(a)=b\) is a finite critical value and \(a\) is the marked critical point. Let \(a,b,a^{\prime},b^{\prime}\in\mathbb{C}\). Brief calculation shows that, \(f_{a,b}\) and \(f_{a^{\prime},b^{\prime}}\) are affine conjugate to each other iff either \((a,b)=(a^{\prime},b^{\prime})\) or \((a,b)=(-a^{\prime},-b^{\prime})\). Hence, the moduli space \(\mathcal{M}_{3}\) can be identified as,
\[\mathcal{M}_{3}\longleftrightarrow\mathbb{C}^{2}/\left((a,b)\sim(-a,-b) \right).\]
The space \(\mathbb{C}^{2}/\left((a,b)\sim(-a,-b)\right)\) is the image of \(\mathbb{C}^{2}\) under the affine Veronese map, \(\mathbb{C}^{2}\to\mathbb{C}^{3},(a,b)\mapsto(a^{2},ab,b^{2})\). Hence, \(\mathcal{M}_{3}=\mathbb{C}^{2}/\left((a,b)\sim(-a,-b)\right)\) is an affine variety.
Fix two integers \(k\geq 0\) and \(n>0\). Consider the set of all points \((a,b)\in\mathcal{M}_{3}\) such that the marked critical point \(a\) is strictly \((k,n)\)-preperiodic under the polynomial map \(f_{a,b}\). Zariski closure of this set in \(\mathcal{M}_{3}\) is a curve in \(\mathcal{M}_{3}\), which is denoted as \(\Sigma_{k,n}\). Milnor [10] conjectured that the \(\Sigma_{0,n},n\in\mathbb{N}\) curves are all irreducible. In general, it is conjectured that,
**Conjecture 1.1**.: _For any choice of integers \(k\geq 0\) and \(n>0\), the curve \(\Sigma_{k,n}\) is irreducible._
Buff, Epstein, and Koch [1] proved this conjecture for \(\Sigma_{k,1}\) curves. In this article, we will prove this conjecture for \(\Sigma_{k,2}\) curves, for any non-negative integer \(k\) (see Theorem 5.10). We state the theorem below.
**Theorem 1.2**.: _For any non-negative integer \(k\), the curve \(\Sigma_{k,2}\) is irreducible._
Our proof of Theorem 1.2 is of arithmetic nature and follows the approach taken for the unicritical case in [1]. We form polynomials \(h_{k,n}\in\mathbb{Z}[a,b]\) for which the corresponding curve in \(\mathcal{M}_{3}\) is \(\Sigma_{k,n}\). We show that \(h_{k,2}\) polynomials are generalised \(3\)-Eisenstein polynomials. Hence, they are irreducible over \(\mathbb{Q}\). If the polynomials \(h_{k,2},k\geq 0\) are irreducible over \(\mathbb{C}\), then we are done. But, we observe that the polynomials \(h_{k,2}\) can be reducible over \(\mathbb{C}\). If \(h_{k,2}\) is reducible for some \(k\geq 0\), then it can split into at most two factors over the field \(\mathbb{Q}[i]\). We show that both of these
factors, that lie in \(\mathbb{Q}[i][a,b]\), have a smooth \(\mathbb{Q}[i]\)-rational point. Using extension of irreducibility (Corollary 4.8), we get that both of these factors are irreducible over \(\mathbb{C}\). Moreover, we show that the irreducible curves in \(\mathbb{C}^{2}\) corresponding to these two factors merge together under the equivalence relation \((a,b)\sim(-a,-b)\), generating one irreducible curve in \(\mathcal{M}_{3}\), which is precisely the curve \(\Sigma_{k,2}\).
A polynomial over \(\mathbb{C}\) is called _unicritical_ iff all of its finite critical points are equal. Setting \(a=0\) in Equation (1.1), we get the general form of a monic, reduced, unicritical cubic polynomial,
\[f_{b}(z)=z^{3}+b. \tag{1.2}\]
Milnor [14] conjectured that the finite set of values of \(b\) for which the critical point \(0\) is strictly \((k,n)\)-preperiodic under \(f_{b}\), form one Galois orbit under the action of the absolute Galois group of \(\mathbb{Q}\). One can form a polynomial \(h_{k,n}^{uni}\in\mathbb{Z}[b]\), whose solution set is the set of all values of \(b\) for which \(0\) is strictly \((k,n)\)-preperiodic. Hence, Milnor's conjecture can be stated as, for any choice of \((k,n)\in\mathbb{Z}^{2}\) such that \(k\geq 0,n\geq 1\), the polynomials \(h_{k,n}^{uni}\) are either constant or irreducible over \(\mathbb{Q}\). These polynomials can be constant polynomials, for example, \(h_{1,n}^{uni},n\in\mathbb{N}\) are all equal to one (see Remark 7.1). Vefa Goksel [10] has shown that \(h_{k,2}^{uni}\) polynomials are irreducible over \(\mathbb{Q}\). Buff, Epstein and Koch [1] proved the same for a different form of monic, reduced, unicritical cubic polynomial, namely \(f_{c}(z)=cz^{3}+1\). We use the fact that \(h_{k,2}\) polynomials are generalised \(3\)-Eisenstein polynomials to show that \(h_{k,2}^{uni}\) polynomials are either constants or generalised \(3\)-Eisenstein polynomials, for every even integer \(k\geq 0\) (Theorem 7.2). We state the theorem below.
**Theorem 1.3**.: _For any even \(k\in\mathbb{Z}_{\geq 0}\), the polynomial \(h_{k,2}^{uni}\) is either constant or irreducible over \(\mathbb{Q}\)._
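For concreteness, the following sympy sketch, assuming the construction of \(h_{k,n}^{uni}\) analogous to the bicritical case, computes \(h_{0,2}^{uni}=b^{2}+1\); its two roots \(b=\pm i\) are the parameters for which \(0\) is strictly \(2\)-periodic, and they form a single Galois orbit.

```python
# Illustrative sympy sketch of the unicritical construction for (k, n) = (0, 2):
# the period-1 factor f_{0,1} = b is removed from f_{0,2} = f_b^2(0) - 0.
import sympy as sp

b, z = sp.symbols('b z')
f = z**3 + b                                   # unicritical form (1.2)

def orbit_of_zero(times):
    out = sp.Integer(0)
    for _ in range(times):
        out = f.subs(z, out)
    return sp.expand(out)

f_0_2 = orbit_of_zero(2) - orbit_of_zero(0)    # b^3 + b
f_0_1 = orbit_of_zero(1) - orbit_of_zero(0)    # b
h_uni_0_2 = sp.cancel(f_0_2 / f_0_1)           # b^2 + 1, irreducible over Q
print(sp.factor(h_uni_0_2))
```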
In section 2, we form the polynomials \(h_{k,2},k\geq 0\), and show that the curve in \(\mathcal{M}_{3}\) corresponding to \(h_{k,2}\) is \(\Sigma_{k,2}\). In section 3, we fix some notations to be used in the later sections. In section 4, we state some lemmas and tools to be used in the proofs of the later sections. In section 5, we prove irreducibility of \(\Sigma_{k,2}\) curves. In section 6, we show that our method does not extend directly for \((k,q)\) curves, where \(q\) is an odd prime number. Finally, in section 7, we prove irreducibility results in the unicritical cubic case.
The author would like to thank C. S. Rajan, Sagar Shrivastava and Manodeep Raha, for their valuable inputs and many discussions. The author also expresses his gratitude to Ashoka University for providing a productive workspace, where this work was done. The author is also thankful to Kerala School of Mathematics for his visit there, where he had many helpful discussions with Plawan Das, Subham Sarkar, and M. M. Radhika.
## 2. Preliminaries
In this section, we will form polynomials in \(\mathbb{Z}[a,b]\), irreducibility of which over \(\mathbb{C}\) implies irreducibility of \(\Sigma_{k,n}\) in \(\mathcal{M}_{3}\). Consider the Equation (1.1). As \(f_{a,b}\) varies over all monic, reduced, cubic polynomials over \(\mathbb{C}\), \(a\) and \(b\) vary over \(\mathbb{C}\). So, we will drop the subscript \(a,b\) from the notation \(f_{a,b}\).
Any point \((a,b)\in\mathcal{M}_{3}\) for which \(a\) is \((k,n)\)-preperiodic must satisfy the equation,
\[f^{k+n}(a)-f^{k}(a)=0,\]
where \(f\) is the polynomial,
\[f(z)=z^{3}-3a^{2}z+2a^{3}+b.\]
Observe that \(a\) is not necessarily strictly \((k,n)\)-preperiodic, for every point \((a,b)\) lying in the solution space of the polynomial \(f_{k,n}:=f^{k+n}(a)-f^{k}(a)\). In section 4, we will show that for \(0\leq l\leq k,\ 1\leq m,\ m|n\), the polynomial \(f_{l,m}\) divides \(f_{k,n}\) in \(\mathbb{Z}[a,b]\). So, we form the polynomial,
\[h_{k,n}=\frac{f_{k,n}}{\prod_{i}g_{i}^{\alpha_{i}}}, \tag{2.1}\]
where \(g_{i}\) varies over all irreducible factors of \(f_{l,m}\) in \(\mathbb{Z}[a,b]\), for all \(0\leq l\leq k,1\leq m,\ m|n,\ (k,n)\neq(l,m)\in\mathbb{Z}^{2}\), and \(\alpha_{i}\) is the highest power of \(g_{i}\) that divides \(f_{k,n}\).
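The construction (2.1) can be carried out explicitly with a computer algebra system. The sketch below (illustrative only) does so for \((k,n)=(0,2)\), where the only factor to remove is \(f_{0,1}=b-a\), which occurs with multiplicity one.

```python
# Illustrative sympy sketch of (2.1) for (k, n) = (0, 2) in the
# Branner-Hubbard normal form (1.1).
import sympy as sp

a, b, z = sp.symbols('a b z')
f = z**3 - 3*a**2*z + 2*a**3 + b               # normal form (1.1)

def orbit_of_a(times):
    # Computes f^times(a) as a polynomial in Z[a, b].
    out = a
    for _ in range(times):
        out = f.subs(z, out)
    return sp.expand(out)

f_0_2 = orbit_of_a(2) - orbit_of_a(0)          # f^2(a) - a
f_0_1 = orbit_of_a(1) - orbit_of_a(0)          # f(a) - a = b - a
h_0_2 = sp.cancel(f_0_2 / f_0_1)               # strip the period-1 factor
print(sp.factor(h_0_2))
```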
**Lemma 2.1**.: _The set of all points \((a,b)\in\mathcal{M}_{3}\) for which \(a\) is strictly \((k,n)\)-preperiodic, is a Zariski dense subset of the algebraic set of \(h_{k,n}\) in \(\mathcal{M}_{3}\)._
Proof.: Consider the set, \(S\), of all points \((a,b)\in\mathcal{M}_{3}\), for which \(a\) is strictly \((k,n)\) preperiodic. Any point in \(S\) lies in the solution space of \(f_{k,n}\) but not in the solution space of \(g_{i}\), for any \(g_{i}\) appearing in Equation (2.1). Hence, \(S\) is a subset of the algebraic set of \(h_{k,n}\) in \(\mathcal{M}_{3}\). Now, the complement of \(S\) in the algebraic set of \(h_{k,n}\) is the set of points \((a,b)\) for which \(a\) is \((k,n)\)-preperiodic but not strictly. So, they are solutions of \(g_{i}\)'s appearing in Equation (2.1). By definition of \(h_{k,n}\), \(h_{k,n}\) is coprime to \(g_{i}\) over \(\mathbb{Z}\) for every \(i\). The polynomials \(h_{k,n}\) and \(g_{i}\)'s are all monic as polynomials in \(b\) over \(\mathbb{Z}[a]\). So for every \(i\), the polynomials \(g_{i}\) and \(h_{k,n}\) are coprime over \(\mathbb{Q}\). Hence, \(h_{k,n}\) is coprime to \(g_{i}\) over \(\mathbb{C}\) too, for every \(i\). So, the complement of \(S\) in the solution space of \(h_{k,n}\) consists of finitely many points. Hence, \(S\) is Zariski dense subset of the algebraic set of \(h_{k,n}\) in \(\mathcal{M}_{3}\).
From Lemma 2.1, one directly obtains the following corollary,
**Corollary 2.2**.: _If the polynomial \(h_{k,n}\) is irreducible over \(\mathbb{C}\), then the curve \(\Sigma_{k,n}\) is irreducible._
**Remark 2.3**.: As we will see in section 5, the converse of Corollary 2.2 is not true. For example, we will see that \(h_{k,2}\) polynomials can be reducible over \(\mathbb{C}\). But for any \(k\geq 0\), \(h_{k,2}\) can have at most two irreducible factors. Moreover, the curves corresponding to each of these factors are the same in \(\mathcal{M}_{3}\), which is precisely \(\Sigma_{k,2}\).
## 3. Notations
We will use the following notations for the rest of the article. Let \(g,h\) be elements of \(\mathbb{Z}[a,b]\), the polynomial ring in variables \(a,b\) over \(\mathbb{Z}\).
* By saying \(g\) is _monic_ in \(\mathbb{Z}[a][b]\), we mean \(g\) is monic as a polynomial in \(b\) over the ring \(\mathbb{Z}[a]\).
* By \(\operatorname{Res}(g,h)\), we denote the _resultant_ of \(g\) and \(h\), both considered as polynomials in \(b\) with coefficients coming from the integral domain \(\mathbb{Z}[a]\). So, \(\operatorname{Res}(g,h)\in\mathbb{Z}[a]\).
Consider the polynomial \(f\) as defined in Equation (1.1). For any non-negative integers \(k,n\), with \(n>0\),
* \(f^{0}:=\) identity map, \(f^{n}:=f^{n-1}\circ f\), for all \(n\in\mathbb{N}\).
* \(f^{\prime}\) denote the derivative of \(f\) w.r.t \(z\).
* \(f_{k,n}=f_{k,n}(a,b):=f^{k+n}(a)-f^{k}(a)\).
* \(h_{k,n}=h_{k,n}(a,b):=f_{k,n}/\prod_{i}g_{i}^{\alpha_{i}}\), where \(g_{i}\) varies over all distinct irreducible factors of \(f_{l,m}\) over \(\mathbb{Z}\), where \(l\leq k,\ m|n,\ (l,m)\neq(k,n)\in\mathbb{Z}^{2}\), and for each \(i\), \(\alpha_{i}\) is the highest power of \(g_{i}\) that divides \(f_{k,n}\).
* \(\mathbb{C}^{2}:=\) complex affine space of dimension \(2\).
* \(\mathcal{M}_{3}:=\mathbb{C}^{2}\bigg{/}\bigg{(}(a,b)\sim(-a,-b)\bigg{)}\).
* \(\Sigma_{k,n}:=\) the Zariski closure of set of all points of \(\mathcal{M}_{3}\) for which \(a\) is strictly \((k,n)\)-preperiodic.
* \(G_{\mathbb{Q}}\) denotes the absolute Galois group of \(\mathbb{Q}\).
* \(\mathbb{F}_{3}\) is the finite field of \(3\) elements.
Let \(F\) be a number field and \(g\in F[a,b]\).
* By saying \(g\) has a smooth \(F\)-rational point, we mean that there exists a point \((a^{0},b^{0})\in F^{2}\), such that \(g(a^{0},b^{0})=0\) and \(g\) is smooth at \((a^{0},b^{0})\).
## 4. Basic lemmas and Tools
In this section, we gather a collection of lemmas and tools, that will be used in the later sections. Generalisations of many of the statements of this section have
been proved in [10]. For such statements, we omit the proof here and refer to the generalised statement in [10].
Consider the Equation (2.2) in [10]. Putting degree \(d=3\) and \(\alpha_{1}=a,\alpha_{2}=-a,\beta=b\) in Equation (2.2), we get the modified _Branner Hubbard normal form_ for monic reduced cubic polynomials,
\[f(z)=z^{3}-3a^{2}z+2a^{3}+b. \tag{4.1}\]
### Divisibility properties of \(f_{k,n}\)
**Lemma 4.1**.: _Let \(k,l,n,m\) be natural numbers such that \(l\leq k\) and \(m|n\). Then the polynomial \(f_{l,m}\) divides \(f_{k,n}\) in \(\mathbb{Z}[a,b]\)._
Proof.: In Lemma 4.1 of [10], replacing \(\hat{f}_{k,n,d},\hat{f}_{l,m,d},\mathbb{Z}_{(p)},\{\alpha_{1},\alpha_{2},..., \alpha_{d-2},\beta\}\) with \(f_{k,n},f_{l,m},\mathbb{Z},\{a,b\}\) respectively, one obtains this lemma.
**Lemma 4.2**.: _Let \(g\) be an irreducible element of \(\mathbb{Z}[a,b]\), monic as a polynomial in \(\mathbb{Z}[a][b].\) Let \(k,l,m,n\) be non-negative integers with \(m,n\) non-zero, \(l\leq k,\)\(g.c.d.(m,n)=r.\) If \(g\) divides both \(f_{k,n}\) and \(f_{l,m}\) in \(\mathbb{Z}[a,b]\), then \(g\) divides \(f_{l,r}\) in \(\mathbb{Z}[a,b]\)._
Proof.: Similarly as lemma 4.1, one obtains this lemma from Lemma 4.2 of [10].
From Lemmas 4.1 and 4.2, one directly obtains the following corollary,
**Corollary 4.3**.: _Let \(k,l,m,n\) be non-negative integers with \(m,n\) non-zero, \(l\leq k,\)\(g.c.d.(m,n)=r.\) Then, \(f_{l,r}\) divides \(g.c.d.(f_{k,n},f_{l,m})\) in \(\mathbb{Z}[a,b].\) Moreover, The radical ideals of the ideal generated by \(f_{l,r}\) and the ideal generated by \(g.c.d.(f_{k,n},f_{l,m})\) are the same. _
### A weak version of Thurston's rigidity theorem for \(\mathcal{M}_{3}\)
**Theorem 4.4**.: _Fix \(k_{1},k_{2}\in\mathbb{N}\cup\{0\}\), and \(n_{1},n_{2}\in\mathbb{N}.\) Then, the polynomials_
\[f^{k_{1}+n_{1}}(a)-f^{k_{1}}(a)\text{ and }f^{k_{2}+n_{2}}(-a)-f^{k_{2}}(-a)\]
_are coprime in \(\mathbb{C}[a,b]\)._
Proof.: In the version of Thurston's rigidity theorem stated in [10] (Theorem 4.4), replacing \(\hat{f}^{k_{1}+n_{1}}(\alpha_{1})-\hat{f}^{k_{1}}(\alpha_{1}),\hat{f}^{k_{i}+n_{i}}(\alpha_{i})-\hat{f}^{k_{i}}(\alpha_{i})\) with \(f^{k_{1}+n_{1}}(a)-f^{k_{1}}(a),f^{k_{2}+n_{2}}(-a)-f^{k_{2}}(-a)\) respectively, one obtains this theorem.
### Generalised Eisenstein Irreducibility criterion
**Theorem 4.5**.: _Let \(g,h\) be non-constant elements of \(\mathbb{Z}[a,b]\), both monic as elements of \(\mathbb{Z}[a][b]\). Let \(\text{Res}(g,h)\) denote the resultant of \(g\) and \(h\), both considered as polynomials in \(b\) over the integral domain \(\mathbb{Z}[a]\). Suppose the following conditions hold:_
_1) \(g\equiv h^{n}\ (\text{mod }3)\), for some \(n\in\mathbb{N}\);_
_2) \(h\ (\text{mod }3)\) is irreducible in \(\mathbb{F}_{3}[a,b]\);_
_3) \(\text{Res}(g,h)\not\equiv 0\ (\text{mod }3^{2\cdot deg(h)})\), where \(deg(h)\) is the degree of \(h\) as a polynomial in \(b\) over \(\mathbb{Z}[a]\)._
_Then, \(g\) is irreducible in \(\mathbb{Q}[a,b]\)._
Proof.: Replacing \(p\) and \(\mathbb{Z}[\alpha_{1},...,\alpha_{p^{\varepsilon}-2},\beta]\) with \(3\) and \(\mathbb{Z}[a,b]\) respectively in the Theorem 4.6 of [10], this theorem follows.
### Extension of irreducibility
In this subsection, we relate the irreducibility of a multivariate polynomial over a number field and over \(\mathbb{C}\). We should mention that while we prove Theorem 4.6 and Corollaries 4.7, 4.8 for polynomials in two variables, they can be directly generalised for polynomials in any number of variables.
**Theorem 4.6**.: _[_1_]_ _Let \(g\) be an element of \(\mathbb{Q}[a,b]\). Assume that \(g(0,0)=0\) and that the linear part of \(g\) is non-zero. Then,_
\(g\) _is irreducible in \(\mathbb{Q}[a,b]\iff g\) is irreducible in \(\mathbb{C}[a,b]\)._
Proof.: \(\Longleftarrow\) This part is trivial.
\(\Longrightarrow\) We will prove this by contradiction. Assume that \(g=g_{1}\cdot g_{2}\), where \(g_{1},g_{2}\in\mathbb{C}[a,b]\) and neither of them is constant. Then \(g_{1},g_{2}\) have algebraic coefficients, and one of \(g_{1},g_{2}\) must have constant term \(0\) and non-zero linear part, while the other has a non-zero constant term. Let us assume that \(g_{1}\) has zero constant term. By replacing \(g\) with \(g_{1}\) in the above argument and iterating if necessary, one obtains a factor of \(g\) which is irreducible over \(\mathbb{C}\), has zero constant term and non-zero linear part. Hence, without loss of generality, we may assume that \(g_{1}\) is irreducible over \(\mathbb{C}\). Also, multiplying by a constant, we may take the constant term of \(g_{2}\) to be \(1\). So we have a factorization of \(g\) in \(\mathbb{C}[a,b]\), \(g=g_{1}g_{2}\), such that \(g_{1}\) is irreducible in \(\mathbb{C}[a,b]\), has constant term \(0\), and the linear part of \(g_{1}\) is the same as the linear part of \(g\). Now, consider the absolute Galois group of \(\mathbb{Q}\), denoted \(G_{\mathbb{Q}}\). For any \(\sigma\in G_{\mathbb{Q}}\), as \(g_{1}\) is irreducible over \(\mathbb{C}\), \(\sigma(g_{1})\) is also irreducible over \(\mathbb{C}\). As \(g\) is defined over \(\mathbb{Q}\), either \(\sigma(g_{1})\) is a constant multiple of \(g_{1}\), or \(g_{1}\cdot\sigma(g_{1})\) divides \(g\). But if \(g_{1}\cdot\sigma(g_{1})\) divides \(g\), then the linear part of \(g\) is zero. So \(\sigma(g_{1})=c\cdot g_{1}\), where \(c\in\mathbb{C}^{*}\). Now, the linear part of \(g_{1}\) is the same as the linear part of \(g\), hence \(c=1\). So \(\sigma(g_{1})=g_{1}\). As \(\sigma\) was chosen arbitrarily from \(G_{\mathbb{Q}}\), we get \(g_{1}\in\mathbb{Q}[a,b]\). So \(g\) is reducible over \(\mathbb{Q}\), and we arrive at a contradiction.
**Corollary 4.7**.: _[_1_]_ _Let \(g\) be an element of \(\mathbb{Q}[a,b]\). Let's assume that \(g\) has a smooth \(\mathbb{Q}\)-rational point, i.e. there exists a point \((a^{0},b^{0})\in\,\mathbb{Q}^{2}\), such that \(g(a^{0},b^{0})=0\), and \(g\) is smooth at \((a^{0},b^{0})\). Then,_
\(g\) _is irreducible in \(\,\mathbb{Q}[a,b]\iff g\) is irreducible in \(\,\mathbb{C}[a,b]\)._
Proof.: By an affine change of coordinates sending \((a^{0},b^{0})\) to \((0,0)\), from \(g\) one obtains a polynomial \(g^{\prime}\in\mathbb{Q}[a,b]\) such that the constant term of \(g^{\prime}\) is zero and \(g^{\prime}\) has non-zero linear part. Also, \(g\) is irreducible over \(\mathbb{C}\) (or over \(\mathbb{Q}\)) \(\iff g^{\prime}\) is irreducible over \(\mathbb{C}\) (or over \(\mathbb{Q}\)). Now, applying Theorem 4.6 to \(g^{\prime}\), one obtains the corollary.
**Corollary 4.8**.: _Let \(F\) be a number field, i.e. a finite extension of \(\mathbb{Q}\). Let \(g\) be an element of \(F[a,b]\). Assume that \(g\) has a smooth \(F\)-rational point, defined as in the previous corollary. Then,_
\(g\) _is irreducible in \(F[a,b]\iff g\) is irreducible in \(\,\mathbb{C}[a,b]\)._
Proof.: Replacing \(\mathbb{Q}\) with \(F\) in the proofs of Theorem 4.6 and Corollary 4.7, every argument there goes through verbatim, and one obtains this corollary.
### Even and odd polynomials
**Definition 4.9**.: Let \(g\) be an element of \(\mathbb{C}[a,b]\). We say that \(g\) is even iff \(g(a,b)=g(-a,-b)\), and that \(g\) is odd iff \(g(a,b)=-g(-a,-b)\).
Every non-zero polynomial \(g\in\,\mathbb{C}[a,b]\), can be written as \(g=g_{e}+g_{o}\), where \(g_{e}\in\,\mathbb{C}[a,b]\) is an even polynomial, and \(g_{o}\in\,\mathbb{C}[a,b]\) is an odd polynomial.
Let \(G_{e}\) (respectively, \(G_{o}\)) denote the set of all even (respectively, odd) polynomials in \(\mathbb{C}[a,b]\).
**Lemma 4.10**.: _The sets \(G_{e},G_{o}\) are additive subgroups of \(\,\mathbb{C}[a,b]\). The set \(G:=G_{e}\cup G_{o}\) is closed under multiplication. Also, if \(g_{1},g_{2}\in G\), and \(g_{1}=g_{2}\cdot h\), for some \(h\in\,\mathbb{C}[a,b]\), then \(h\) belongs to \(G\)._
Proof.: Only the last part of the lemma is non-trivial, and we prove it by contradiction. Let us assume that \(h\) is neither an even nor an odd polynomial. Then \(h\) admits an even-odd decomposition \(h=h_{e}+h_{o}\), where \(h_{e}\) is an even polynomial, \(h_{o}\) is an odd polynomial, and \(h_{e}\neq 0\neq h_{o}\). Now, since \(g_{2}\) is an even or odd polynomial, \(g_{1}\) admits the even-odd decomposition \(g_{1}=g_{2}\cdot h_{e}+g_{2}\cdot h_{o}\), with \(g_{2}\cdot h_{e}\neq 0\neq g_{2}\cdot h_{o}\). Hence, we arrive at a contradiction.
**Lemma 4.11**.: _For \(k\in\mathbb{N}\cup\{0\},n\in\mathbb{N}\), the polynomials \(f_{k,n}\) are odd polynomials._
Proof.: Let \(l\in\mathbb{N}\cup\{0\}\). Consider the polynomial \(f^{l}(z)\in\mathbb{Z}[z,a,b]\). Every monomial term of \(f^{l}(z)\) is of odd degree. Hence, same is true for \(f^{l}(a)\). Therefore, \(f^{l}(a)\) is an odd polynomial in \(\mathbb{Z}[a,b]\) for any \(l\in\mathbb{N}\cup\{0\}\) and so is \(f_{k,n}=f^{k+n}(a)-f^{k}(a)\), for any \(k\in\mathbb{N}\cup\{0\},n\in\mathbb{N}\).
**Corollary 4.12**.: _Let \(k\in\mathbb{N}\cup\{0\},n\in\mathbb{N}\) be arbitrary but fixed. If the polynomials \(h_{l,m}\) are irreducible over \(\mathbb{Q}\) for all \(0\leq l\leq k,\ 1\leq m,\ m|n,\ (l,m)\neq(k,n)\in\mathbb{Z}^{2}\), then the polynomial \(h_{k,n}\) is even or odd polynomial._
Proof.: If the polynomials \(h_{l,m}\) are irreducible over \(\mathbb{Q}\) for all \(0\leq l\leq k,\ 1\leq m,\ m|n,\ (l,m)\neq(k,n)\in\mathbb{Z}^{2}\), then for any such \((l,m)\) including \((k,n)\), one can write
\[f_{l,m}=h_{l,m}\cdot\prod_{\begin{subarray}{c}0\leq j\leq l,\ 1\leq r,\ r|m,\\ (j,r)\neq(l,m)\in\mathbb{Z}^{2}\end{subarray}}h_{j,r}^{a_{j,r}},\ \text{for some}\ a_{j,r}\in\mathbb{N}\]
Observe that \(h_{0,1}=f_{0,1}=b-a\) is an odd polynomial. Applying induction on both \(l\) and \(m\) such that \(0\leq l\leq k,1\leq m,m|n\), and using the last part of Lemma 4.10, one gets that \(h_{k,n}\) is even or odd polynomial.
**Lemma 4.13**.: _Let \(g\in\mathbb{C}[a,b]\) be an even or odd polynomial. Let \(h\in\mathbb{C}[a,b]\) be an irreducible polynomial, with decomposition \(h=h_{e}+h_{o},\) where \(h_{e}\) is an even polynomial and \(h_{o}\) is an odd polynomial. Let \(h^{\diamond}:=h_{e}-h_{o}=h(-a,-b)\). Then the following statements are true,_
_1) \(h^{\diamond}\) is irreducible._
_2) In \(\mathbb{C}[a,b]\), \(h\) divides \(g\iff h^{\diamond}\) divides \(g\)._
_3) If \(h\) is neither an even nor an odd polynomial, then \(h\) and \(h^{\diamond}\) are distinct (i.e., not equal up to multiplication by a constant) irreducible polynomials in \(\mathbb{C}[a,b]\)._
Proof.: Assuming \(h^{\diamond}\) is reducible, by using change of variables \((a,b)\rightarrow(-a,-b)\), one gets reducibility for \(h\). Hence, by contradiction, the first part of the lemma is proved.
Here, \(g\) being even or odd polynomial, \(g(a,b)=\pm g(-a,-b)\). Again, by using the change of variables \((a,b)\rightarrow(-a,-b)\), one obtains the second part of the lemma.
For the third part of the lemma, if \(h\) is neither an even nor an odd polynomial, then \(h_{e}\neq 0\neq h_{o}\). So, if \(h\) and \(h^{\diamond}\) were constant multiples of each other, then \(h\) would divide \(h\pm h^{\diamond}\), which are the polynomials \(2h_{e},2h_{o}\). As either \(deg(h_{e})<deg(h)\) or \(deg(h_{o})<deg(h)\), one gets a contradiction.
**Remark 4.14**.: The moduli space \(\mathcal{M}_{3}\) is the quotient space, \(\mathbb{C}^{2}/\left((a,b)\sim(-a,-b)\right)\). Let \(h\in\mathbb{C}[a,b]\). Let \(V(h),V(h^{\diamond})\) be the algebraic sets in \(\mathbb{C}^{2}\) corresponding to \(h\) and \(h^{\diamond}\), respectively. Then, \(V(h)\) and \(V(h^{\diamond})\) have the same image under the quotient map \(\mathbb{C}^{2}\to\mathcal{M}_{3}\). In other words, the algebraic sets of \(h\) and \(h^{\diamond}\) merge together in \(\mathcal{M}_{3}\). If \(V(h)\) is an irreducible curve in \(\mathbb{C}^{2}\), then the curve in \(\mathcal{M}_{3}\) corresponding to the polynomial \(h\cdot h^{\diamond}\) is an irreducible curve in \(\mathcal{M}_{3}\). We will use this fact in the next section.
## 5. Irreducibility of \(\Sigma_{k,2}\) curves
From Equation (1.1), we have
\[f(z)=z^{3}-3a^{2}z+2a^{3}+b, \tag{5.1}\]
with \(\pm a\) as finite critical points.
One observes the following factorization,
\[f(z)-f(w)=(z-w)(z^{2}+zw+w^{2}-3a^{2}). \tag{5.2}\]
We first study the polynomial \(h_{1,2}\). This will give us a glimpse into the general nature of the polynomials \(h_{k,2},\ k\in\mathbb{N}\cup\{0\}\).
**Lemma 5.1**.: _The polynomial \(h_{1,2}\) is \((b-a)^{2}+1\). It is irreducible over \(\mathbb{Q}\), but reducible and smooth over \(\mathbb{C}\). There is no \(\mathbb{Q}\)-rational point on it. Moreover, the curve \(\Sigma_{1,2}\subset\mathcal{M}_{3}\) is irreducible._
Proof.: To obtain \(h_{1,2}\), we need to factor out all irreducible factors of \(f_{0,2}\) and \(f_{1,1}\) from \(f_{1,2}\) with each irreducible factor raised to their highest power that divides \(f_{1,2}\). One computes,
\[f_{0,2}=f(b)-a=b^{3}-3a^{2}b+2a^{3}+b-a=(b-a)\left((b-a)(b+2a)+1\right), \tag{5.3}\]
\[f_{1,1}=f^{2}(a)-f(a)=f(b)-b=b^{3}-3a^{2}b+2a^{3}=(b-a)^{2}(b+2a), \tag{5.4}\]
\[f_{1,2}=f^{3}(a)-f(a)=(f^{2}(a)-a)\left((f^{2}(a))^{2}+af^{2}(a)+a^{2}-3a^{2} \right), \tag{5.5}\]
\[\frac{f_{1,2}}{f_{0,2}}=(f^{2}(a))^{2}+af^{2}(a)-2a^{2}=(f(b))^{2}+af(b)-2a^{2}\]
\[=(f(b)-a)(f(b)+2a)=f_{0,2}\;(f(b)+2a). \tag{5.6}\]
Hence, \(h_{1,2}\) divides
\[f(b)+2a=b^{3}-3a^{2}b+2a^{3}+b+2a=(b+2a)\left((b-a)^{2}+1\right).\]
As \(b+2a\) is a factor of \(f_{1,1}\) and \((b-a)^{2}+1\) is irreducible over \(\mathbb{Q}\), we get
\[h_{1,2}=(b-a)^{2}+1.\]
Let us define \(l_{1}(a,b):=(b-a+i),\ l_{2}(a,b):=(b-a-i).\) Now, it directly follows that \(h_{1,2}\) is irreducible over \(\mathbb{Q},\) reducible and smooth over \(\mathbb{C},\) and has no \(\mathbb{Q}\)-rational point on it. Also, \(l_{1}(-a,-b)=-l_{2}(a,b).\) By Remark 4.14, the lines \(l_{1}\) and \(l_{2}\) merge together in \(\mathcal{M}_{3},\) making \(\Sigma_{1,2}\) an irreducible line in \(\mathcal{M}_{3}.\)
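The factorizations used in this proof are quickly reproduced with a computer algebra system. The following SymPy sketch (ours, not part of the argument) verifies Equations (5.3)-(5.6) and the behaviour of \(h_{1,2}\) over \(\mathbb{Q}\) and \(\mathbb{Q}(i)\).

```python
import sympy as sp

a, b = sp.symbols('a b')
f = lambda w: sp.expand(w**3 - 3*a**2*w + 2*a**3 + b)

def iterate(w, n):
    for _ in range(n):
        w = f(w)
    return w

f02 = sp.expand(iterate(a, 2) - a)                 # f_{0,2}
f11 = sp.expand(iterate(a, 2) - iterate(a, 1))     # f_{1,1}
f12 = sp.expand(iterate(a, 3) - iterate(a, 1))     # f_{1,2}
fb = iterate(a, 2)                                 # f(b) = f^2(a)

print(sp.factor(f02))                 # factors (b - a) and (b - a)(b + 2a) + 1, cf. Eq. (5.3)
print(sp.factor(f11))                 # (b - a)^2 (b + 2a), cf. Eq. (5.4)
print(sp.expand(f12 - f02**2*(fb + 2*a)))   # 0, i.e. f_{1,2} = f_{0,2}^2 (f(b) + 2a), cf. Eq. (5.6)
print(sp.factor(fb + 2*a))            # (b + 2a)((b - a)^2 + 1)
h12 = (b - a)**2 + 1
print(sp.factor(h12))                 # returned unchanged: irreducible over Q
print(sp.factor(h12, gaussian=True))  # splits as (b - a + i)(b - a - i) over Q(i)
```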
Next, we will show that \(h_{0,2}\) is irreducible over \(\mathbb{C}.\)
**Lemma 5.2**.: _The polynomial \(h_{0,2}\) is \((b-a)(b+2a)+1\), and it is irreducible over \(\mathbb{C}.\)_
Proof.: From Equation (5.3) and the fact that \(f_{0,1}=b-a,\) we get that \(h_{0,2}=(b-a)(b+2a)+1.\) By a change of variable, one sees that \((b-a)(b+2a)+1\) is irreducible in \(\mathbb{C}[a,b]\) iff \(xy+1\) is irreducible in \(\mathbb{C}[x,y].\) Hence, the lemma is proved.
In the last two lemmas, we have seen that \(h_{0,2}\) and \(h_{1,2}\) are irreducible over \(\mathbb{Q}.\) Next, we study the irreducibility of the polynomials \(h_{k,2}\) over \(\mathbb{Q},\) where \(k\) varies over all natural numbers greater than \(1.\) We will show that \(h_{k,2}\) is \(3\)-Eisenstein w.r.t. the polynomial \(h_{1,2}.\) For that, we need to check that the three conditions in the generalised Eisenstein irreducibility criterion (Theorem 4.5) hold. First, we check that condition \(1\) of Theorem 4.5 holds.
**Lemma 5.3**.: _For any \(k\in\mathbb{N}\), \(h_{k,2}\equiv h_{1,2}^{N_{k,2}}\)\((\text{mod }3),\) for some \(N_{k,2}\in\mathbb{N}\)._
Proof.: From Equation (5.1), we have \(f\equiv z^{3}-a^{3}+b\)\((\text{mod }3).\) Hence,
\[f_{k,2}=f^{k+2}(a)-f^{k}(a)=f^{k+1}(b)-f^{k-1}(b)\equiv\left(f_{0,2}\right)^{3^{k}}\equiv(b-a)^{3^{k}}\left((b-a)^{2}+1\right)^{3^{k}}\ (\text{mod }3).\]
Similarly, \(f_{k,1}\equiv(b-a)^{3^{k}}\)\((\text{mod }3).\) As \(h_{k,2}\) divides \(f_{k,2}/f_{k,1}\equiv\left((b-a)^{2}+1\right)^{3^{k}}\)
\((\text{mod }3)\) and the polynomial \((b-a)^{2}+1\) is irreducible modulo \(3,\) we have \(h_{k,2}\equiv((b-a)^{2}+1)^{N_{k,2}}\equiv h_{1,2}^{N_{k,2}}\)\((\text{mod }3)\) (by Lemma 5.1), for some \(N_{k,2}\in\mathbb{N}.\)
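As a machine check of the congruences used in this proof (our own sketch, not part of the argument), one can verify for \(k=2\) that \(f_{k,2}\equiv(b-a)^{3^{k}}\left((b-a)^{2}+1\right)^{3^{k}}\) and \(f_{k,1}\equiv(b-a)^{3^{k}}\) modulo \(3\).

```python
import sympy as sp

a, b = sp.symbols('a b')
f = lambda w: sp.expand(w**3 - 3*a**2*w + 2*a**3 + b)

def iterate(w, n):
    for _ in range(n):
        w = f(w)
    return w

k = 2
fk2 = sp.expand(iterate(a, k + 2) - iterate(a, k))   # f_{k,2}
fk1 = sp.expand(iterate(a, k + 1) - iterate(a, k))   # f_{k,1}

def vanishes_mod_3(expr):
    """True if every integer coefficient of expr is divisible by 3."""
    return all(c % 3 == 0 for c in sp.Poly(sp.expand(expr), a, b).coeffs())

print(vanishes_mod_3(fk2 - (b - a)**(3**k) * ((b - a)**2 + 1)**(3**k)))   # expect True
print(vanishes_mod_3(fk1 - (b - a)**(3**k)))                              # expect True
```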
Observe that it directly follows from Lemma 5.1 that condition \(2\) of the generalised Eisenstein irreducibility criterion (Theorem 4.5) holds for \(h_{1,2}.\) For condition \(3\) of Theorem 4.5, we need to study the resultant \(\text{Res}(h_{k,2},h_{1,2}),\) where \(k\in\mathbb{N},k>1.\) To do that, we require some divisibility properties of \(f_{k,2}\) and \(f_{0,k},\) which we study in Lemma 5.4.
Let \(g_{1},g_{2}\in\mathbb{C}[a,b].\) Let \(o(g_{1},g_{2}):=\alpha,\) such that \(\alpha\in\mathbb{N}\cup\{0\},(g_{2})^{\alpha}|g_{1},(g_{2})^{\alpha+1}\nmid g _{1}.\) In other words, \(\alpha\) is the highest power of \(g_{2}\) that divides \(g_{1}.\)
**Lemma 5.4**.: _For any \(k\in\mathbb{N}\), we have \(o(f_{k,2},f_{0,2})\geq 2.\) For any even \(k\in\mathbb{N}\), we have \(o(f_{0,k},f_{0,2})=o(f_{0,k},h_{0,2})=1\)._
Proof.: From Equation (5.6), we get \(f_{1,2}=(f_{0,2})^{2}\cdot(f(b)+2a).\) So, \(o(f_{1,2},f_{0,2})\geq 2.\) As \(f_{1,2}\) divides \(f_{k,2}\) for any \(k\in\mathbb{N}\), the first part of the lemma follows.
For even \(k\in\mathbb{N}\), we know that \(o(f_{0,k},h_{0,2})\geq o(f_{0,k},f_{0,2})\geq 1.\) Hence to prove both equalities of the lemma, it is enough to show that \(o(f_{0,k},h_{0,2})=1.\) As \(k\) is even, let \(k=2l,l\in\mathbb{N}\). Observe that,
\[f_{0,2l}=f^{2l}(a)-a=\sum_{i=1}^{l}\big{(}f^{2i}(a)-f^{2i-2}(a)\big{)}=\sum_{i =1}^{l}f_{2i-2,2}.\]
From the first part of this lemma, \(o(f_{2i-2,2},h_{0,2})\geq o(f_{2i-2,2},f_{0,2})\geq 2,\) for \(i\in\mathbb{N}_{>1}\). Also, \(o(f_{0,2},h_{0,2})=1.\) Hence, \(o(f_{0,k},h_{0,2})=o(f_{0,2l},h_{0,2})=1.\)
In the next lemma and the following corollary, we establish condition 3 of generalised Eisenstein irreducibility criterion (Theorem 4.5) for \(h_{k,2}\) and \(h_{1,2}\).
**Lemma 5.5**.: _Let \(l=b-a+i\in\mathbb{C}[a,b].\) Then, up to multiplication by a power of \(i\), the resultant is given by_
\[\text{Res}(h_{k,2},l)=\left\{\begin{array}{rl}3(2ai+1);&k\text{ even,}\quad k>0\\ 3a;&k\text{ odd,}\quad k>1\end{array}\right.\]
Proof.: Let \(k\in\mathbb{N}_{>1}\). We will first remove irreducible factors of \(f_{k-1,2}\) in \(\mathbb{Z}[a,b]\), from \(f_{k,2}\) with each such factor raised to the highest power that divides \(f_{k,2}\). Consider the polynomial,
\[g_{k}(a,b):=\frac{f_{k,2}}{f_{k-1,2}}=\frac{f^{k+1}(b)-f^{k-1}(b)}{f^{k}(b)-f ^{k-2}(b)}=\big{(}f^{k}(b)\big{)}^{2}+f^{k}(b)f^{k-2}(b)+\big{(}f^{k-2}(b) \big{)}^{2}-3a^{2}\]
\[\equiv 3\left(\big{(}f^{k-2}(b)\big{)}^{2}-a^{2}\right)\ \left(\text{mod}\ f_{k-1,2}=f^{k}(b)-f^{k-2}(b)\right). \tag{5.7}\]
So, any irreducible polynomial \(s_{k}\in\mathbb{Z}[a,b]\) that divides both \(g_{k}\) and \(f_{k-1,2}\), will also divide
\[3\left((f^{k-2}(b))^{2}-a^{2}\right)=3(f^{k-2}(b)-a)(f^{k-2}(b)+a)=3f_{0,k-1}( f^{k-2}(b)+a).\]
From Thurston's rigidity theorem (Theorem 4.4), we get that \(f^{k-2}(b)+a\) and \(f_{k-1,2}\) are coprime. So, \(s_{k}\) will divide \(f_{0,k-1}\) (we can remove 3, because \(f_{k-1,2}\) is monic in \(\mathbb{Z}[a][b]\), and so are its irreducible factors). As \(s_{k}\) divides both \(f_{0,k-1}\) and \(f_{k-1,2}\), by Lemma 4.2 we have that \(s_{k}\) divides \(f_{0,1}\) if \(k\) is even, and \(f_{0,2}\) if \(k\) is odd.
**Let \(k\in\mathbb{N}\) be even.** As \(f_{0,1}=b-a\), we get that \(h_{k,2}\) divides \(g_{k}/(b-a)^{i_{k}}\), where \(i_{k}\) is the highest power of \((b-a)\) that divides \(g_{k}\). Also, \(g_{k}(a,b)/(b-a)^{i_{k}}\) is coprime to \(f_{k-1,2}\).
**Let \(k\in\mathbb{N}_{>1}\) be odd.** From Equation (5.3) and Lemma 5.2, we know that, over \(\mathbb{Q}\), the irreducible factors of \(f_{0,2}\) are \(h_{0,2}\) and \((b-a)\). From Lemma 5.4 and Equation (5.7), we have \(o(g_{k},h_{0,2})=1\), for all \(k\in\mathbb{N}_{>1}\). So, \(h_{k,2}\) divides \(g_{k}(a,b)/(h_{0,2}\cdot(b-a)^{i_{k}})\), where \(i_{k}\) is the highest power of \((b-a)\) that divides \(g_{k}\). Also, \(g_{k}(a,b)/(h_{0,2}\cdot(b-a)^{i_{k}})\) is coprime to \(f_{k-1,2}\).
Let's define
\[g_{k}^{\prime}(a,b):=\left\{\begin{array}{rcl}g_{k}(a,b)/(b-a)^{i_{k}};&k& \text{even}_{>0}\\ g_{k}(a,b)/(h_{0,2}\cdot(b-a)^{i_{k}});&k&\text{odd}_{>1}\end{array}\right. \tag{5.8}\]
Next, we will factor out the irreducible factors of \(f_{k,1}\) from \(g_{k}^{\prime}(a,b)\). But the irreducible factors of \(f_{k-1,1}\) have already been factored out, since \(f_{k-1,1}\) divides \(f_{k-1,2}\). Hence, we need to consider the irreducible factors of
\[f_{k,1}/f_{k-1,1}=(f^{k}(a))^{2}+f^{k}(a)f^{k-1}(a)+(f^{k-1}(a))^{2}-3a^{2},\]
and their highest powers that divide \(g_{k}^{\prime}(a,b)\). Let's denote the product of common irreducible factors of \(f_{k,1}/f_{k-1,1}\) and \(g_{k}^{\prime}\), each raised to their highest power that divides \(g_{k}^{\prime}\), as \(t_{k}(a,b)\). Then,
\[h_{k,2}(a,b)=\frac{g_{k}^{\prime}(a,b)}{t_{k}(a,b)}. \tag{5.9}\]
Now, we can compute the resultant. We have, \(\text{Res}(h_{k,2},l)=h_{k,2}(a,a-i)\).
Putting \(b=a-i\) in \(f\), by direct computation or by using Lemma 5.1 one obtains \(f^{k}(a)=a-i\), for odd \(k\in\mathbb{N}\), and \(f^{k}(a)=-2a\), for even \(k\in\mathbb{N}\). Moreover, \(h_{0,2}(a,a-i)=-i(-i+3a)+1=-3ai\).
Observe that for any \(k\in\mathbb{N}\),
\[\left(\frac{f_{k,1}}{f_{k-1,1}}\right)(a,a-i)=(a-i)^{2}-2a(a-i)+4a^{2}-3a^{2} =-1.\]
So, \(t_{k}(a,a-i)=1\), up to multiplication by a power of \(i\) (because, any irreducible factor \(t_{k}^{\prime}(a,b)\) of \(t_{k}(a,b)\) in \(\mathbb{Z}[a,b]\) divides \(f_{k,1}/f_{k-1,1}\) in \(\mathbb{Z}[a,b]\). So \(t_{k}^{\prime}(a,a-i)\in\mathbb{Z}[i][a]\) divides \((f_{k,1}/f_{k-1,1})(a,a-i)=-1\) in \(\mathbb{Z}[i][a]\)).
From the last two paragraphs and using Equations (5.7), (5.8), (5.9), we get that up to multiplication by a power of \(i\),
**For even \(k\in\mathbb{N}\),**
\[\text{Res}(h_{k,2},l)=h_{k,2}(a,a-i)=g_{k}(a,a-i)=3(a-i)^{2}-3a^{2}=-3(2ai+1).\]
**For \(k\in\mathbb{N}_{>1}\) odd,**
\[\text{Res}(h_{k,2},l)=h_{k,2}(a,a-i)=g_{k}(a,a-i)/h_{0,2}(a,a-i)=(3(4a^{2})-3a ^{2})/-3ai=3ai.\]
Hence, the lemma is proved.
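The intermediate values in this proof can be confirmed symbolically. The sketch below (ours) reproduces the orbit of \(a\) on the line \(b=a-i\), the value \(h_{0,2}(a,a-i)=-3ai\), and the two resultant values, using the closed form of \(g_{k}\) supplied by the factorization (5.2); all names are our own.

```python
import sympy as sp

a, b = sp.symbols('a b')
f = lambda w: sp.expand(w**3 - 3*a**2*w + 2*a**3 + b)

def iterate(w, n):
    for _ in range(n):
        w = f(w)
    return w

# orbit of a on the line b = a - i: expect a - i for odd k, -2a for even k >= 2
for k in range(1, 5):
    print(k, sp.expand(iterate(a, k).subs(b, a - sp.I)))

h02 = (b - a)*(b + 2*a) + 1
h02_val = sp.expand(h02.subs(b, a - sp.I))
print(h02_val)                                    # -3*I*a

def g(k):
    """g_k = f_{k,2}/f_{k-1,2} via (5.2): z^2 + z*w + w^2 - 3a^2 with z = f^k(b), w = f^{k-2}(b)."""
    z, w = iterate(b, k), iterate(b, k - 2)
    return sp.expand(z**2 + z*w + w**2 - 3*a**2)

print(sp.factor(sp.expand(g(2).subs(b, a - sp.I))))              # even k:    -3*(2*I*a + 1)
print(sp.simplify(sp.expand(g(3).subs(b, a - sp.I)) / h02_val))  # odd k > 1:  3*I*a
```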
**Remark 5.6**.: Showing that \(\operatorname{Res}(h_{k,2},l)\) divides \(3(2ai+1)\) for \(k\) even, and \(3a\) for \(k\) odd\({}_{>1}\), is all one needs to prove the irreducibility of \(h_{k,2}\) in \(\mathbb{Q}[a,b]\). A proof of that statement would be much shorter. But as we will see later, proving the equality, more precisely, proving that \(\operatorname{Res}(h_{k,2},l)\) is not constant allows one to prove irreducibility of \(\Sigma_{k,2}\) in \(\mathcal{M}_{3}\).
**Corollary 5.7**.: _The resultant \(\operatorname{Res}(h_{k,2},h_{1,2})\not\equiv 0(\text{ modulo }81)\), for any \(k\in\mathbb{N}_{>1}\)._
Proof.: By Lemma 5.1, we have
\[\operatorname{Res}(h_{k,2},h_{1,2})=\operatorname{Res}(h_{k,2},b-a+i)\cdot \operatorname{Res}(h_{k,2},b-a-i)\]
\[=h_{k,2}(a,a-i)\cdot h_{k,2}(a,a+i)\]
Now, the complex conjugate of \(h_{k,2}(a,a+i)\) is \(h_{k,2}(a,a-i)\). Hence, from Lemma 5.5, up to multiplication by \(\pm 1\), we have for \(k\) even, \(\operatorname{Res}(h_{k,2},h_{1,2})=3(2ai+1)\cdot 3(-2ai+1)=9(4a^{2}+1)\), and for \(k\) odd\({}_{>1}\), \(\operatorname{Res}(h_{k,2},h_{1,2})=9a^{2}\). Neither of them is congruent to \(0\) (mod \(81\)). Hence, the corollary is proved.
Next, we put all the previous lemmas and corollary in this section together along with generalised Eisenstein irreducibility criterion (Theorem 4.5), to show that \(h_{k,2}\) is irreducible over \(\mathbb{Q}\), for every choice of non-negative integer \(k\).
**Theorem 5.8**.: _For each \(k\in\mathbb{Z}_{\geq 0}\), the polynomial \(h_{k,2}\) is irreducible in \(\mathbb{Q}[a,b]\)._
Proof.: From Lemmas 5.2, 5.1, we get \(h_{0,2},h_{1,2}\) are irreducible over \(\mathbb{Q}\). Let \(k\in\mathbb{N}_{>1}\). Putting \(g=h_{k,2},h=h_{1,2}\) in generalised Eisenstein irreducibility criterion (Theorem 4.5), from Lemmas 5.1, 5.3 and Corollary 5.7, we get that \(h_{k,2}\) is irreducible in \(\mathbb{Q}[a,b]\).
Next, we use the irreducibility of \(h_{k,2}\) and \(h_{k,1}\) over \(\mathbb{Q}\) to show that \(h_{k,2}\) is even for every \(k\in\mathbb{N}\cup\{0\}\). We will need the following corollary in the proof of Theorem 5.10.
**Corollary 5.9**.: _For each \(k\in\mathbb{Z}_{\geq 0}\), the polynomial \(h_{k,2}\) is an even polynomial._
Proof.: For \(k=0,1\), the corollary follows from Lemmas 5.2, 5.1. Let \(k\in\mathbb{N}_{>1}\). From Theorem 5.7 of [10], and Theorem 5.8 above, we get that \(h_{k,2},h_{k,1}\) are irreducible polynomials over \(\mathbb{Q}\) for every choice of non-negative integer \(k\). Hence, using Corollary 4.12 we get that \(h_{k,2}\) is an even or odd polynomial. From Lemmas 5.1, 5.3, we get \(h_{k,2}(0,0)\equiv h_{1,2}(0,0)^{N_{k,2}}\equiv 1\) (mod \(3\)). So, the polynomial \(h_{k,2}\) has a non-zero constant term. Hence, \(h_{k,2}\) is an even polynomial, for any \(k\geq 0\).
Now, we will show that although \(h_{k,2}\) might not be irreducible over \(\mathbb{C}\), the curves \(\Sigma_{k,2}\) are all irreducible in \(\mathcal{M}_{3}\).
**Theorem 5.10**.: _For each \(k\in\mathbb{Z}_{\geq 0}\), the curve \(\Sigma_{k,2}\) is irreducible._
Proof.: From Lemmas 5.1, 5.2, we know that \(\Sigma_{0,2},\Sigma_{1,2}\subset\mathcal{M}_{3}\) are irreducible.
Let \(k\in\mathbb{N}_{>1}\). From Lemma 5.5, we have that \(h_{k,2}\) intersects the line \(l=b-a+i\) at the point \((i/2,-i/2)\) for \(k\) even, and at the point \((0,-i)\) for \(k\) odd\({}_{>1}\). As \(\operatorname{Res}(h_{k,2},b-a+i)\) is a linear polynomial in \(a\) (Lemma 5.5), \(h_{k,2}\) is smooth at the point \((i/2,-i/2)\) for \(k\) even, and at the point \((0,-i)\) for \(k\) odd\({}_{>1}\).
Let's assume that \(h_{k,2}\) is irreducible in \(\mathbb{Q}[i][a,b]\). The polynomial \(h_{k,2}\) has a smooth \(\mathbb{Q}[i]\)-rational point. By Corollary 4.8, we have that \(h_{k,2}\) is irreducible in \(\mathbb{C}[a,b]\). Hence, the curve \(\Sigma_{k,2}\in\mathcal{M}_{3}\) is irreducible.
Next, let's assume that \(h_{k,2}\) is reducible in \(\mathbb{Q}[i][a,b]\). As \(h_{k,2}\) is irreducible in \(\mathbb{Q}[a,b]\), we get that \(h_{k,2}=t_{k,2}\cdot\bar{t}_{k,2}\), for some irreducible polynomial \(t_{k,2}\in\mathbb{Q}[i][a,b]\), and \(\bar{t}_{k,2}\) is the complex conjugate of \(t_{k,2}\).
Now, for \(k\in\mathbb{N},k\) even, \(h_{k,2}\) passes through the point \((i/2,-i/2)\). Without loss of generality, let us assume that \(t_{k,2}(i/2,-i/2)=0\). By complex conjugation, we get \(\bar{t}_{k,2}(-i/2,i/2)=0\). Also, \(\bar{t}_{k,2}(i/2,-i/2)\neq 0\), otherwise \(h_{k,2}\) would not be smooth at \((i/2,-i/2)\). Hence, \(\bar{t}_{k,2}\) is neither an even nor an odd polynomial. But \(h_{k,2}\) is an even polynomial, by Corollary 5.9. As \(\bar{t}_{k,2}\) is irreducible, from Lemma 4.13, we get that \(t_{k,2}^{\diamond}=\bar{t}_{k,2}\), i.e. \(t_{k,2}(-a,-b)=\bar{t}_{k,2}(a,b)\). So, the algebraic sets of \(t_{k,2}\) and \(\bar{t}_{k,2}\) in \(\mathbb{C}^{2}\) merge together (see Remark 4.14) under the quotient map \(\mathbb{C}^{2}\to\mathcal{M}_{3}\), and this image is the same as the algebraic set of \(h_{k,2}\) in \(\mathcal{M}_{3}\). Hence, if \(t_{k,2}\) is irreducible over \(\mathbb{C}\), then the algebraic set of \(h_{k,2}\) in \(\mathcal{M}_{3}\), which is \(\Sigma_{k,2}\), is irreducible.
So, to prove irreducibility of \(\Sigma_{k,2}\), it is enough to prove that \(t_{k,2}\) is irreducible in \(\mathbb{C}[a,b]\). Now, \(t_{k,2}\) is an irreducible polynomial in \(\mathbb{Q}[i][a,b]\). It has a smooth \(\mathbb{Q}[i]\)-rational point, namely \((i/2,-i/2)\). Hence, by Corollary 4.8, we have \(t_{k,2}\) is irreducible in \(\mathbb{C}[a,b]\). So, for even \(k\in\mathbb{N}\), \(\Sigma_{k,2}\) is irreducible in \(\mathcal{M}_{3}\).
Replacing the point \((i/2,-i/2)\) with \((0,-i)\) in the last paragraph, we get that for odd \(k>1\), \(\Sigma_{k,2}\) curves are irreducible in \(\mathcal{M}_{3}\). Hence, the theorem is proved.
## 6. On \(\Sigma_{k,q}\) curves
**Lemma 6.1**.: _For any prime \(q\in\mathbb{N}\), we have \(h_{1,q}=(f^{q}(a)+2a)/(b+2a)\). In another form, \(h_{1,q}\equiv h_{0,q}\equiv\sum_{i=0}^{q-1}(b-a)^{3^{i}-1}(\text{mod }3)\)._
Proof.: We have
\[f_{1,q}=f^{q+1}(a)-f(a)=(f^{q}(a)-a)\left((f^{q}(a))^{2}+af^{q}(a)+ a^{2}-3a^{2}\right)\] \[\qquad\qquad=(f^{q}(a)-a)^{2}(f^{q}(a)+2a)=f_{0,q}^{2}(f^{q}(a)+2a)\]
So, \(h_{1,q}\) divides \(f^{q}(a)+2a.\) As \(f^{q}(a)+2a\equiv 3a\) (mod \(f_{0,q}\)), the polynomials \(f^{q}(a)+2a\) and \(f_{0,q}\) are coprime.
As we obtain \(h_{1,q}\) by factoring out irreducible factors of \(f_{1,1}\) and \(f_{0,q}\) from \(f_{1,q}\), each raised to their highest power that divides \(f_{1,q}\), we get that \(h_{1,q}=(f^{q}(a)+2a)/h_{1,1}^{s}\), where \(s\) is the highest power of \(h_{1,1}\) that divides \(f^{q}(a)+2a\).
By Equation (5.4), we have \(h_{1,1}=b+2a.\) Putting \(b=-2a\), we see that \(f^{n}(a)=-2a,\forall n\in\mathbb{N}\). So, \(b+2a\) divides \(f^{q}(a)+2a.\) Now, we need to check if \((b+2a)^{2}\) divides \(f^{q}(a)+2a.\) As \(f\equiv z^{3}-a^{3}+b\) (mod 3), we have \(f^{q}(a)+2a\equiv f^{q}(a)-a\equiv\sum_{i=0}^{q-1}(b-a)^{3^{i}}\) (mod 3). As \((b-a)^{2}\) does not divide \(f^{q}(a)+2a\) modulo 3, we get that \((b+2a)^{2}\) does not divide \(f^{q}(a)+2a\) in \(\mathbb{Z}[a,b].\) Hence, \(h_{1,q}=(f^{q}(a)+2a)/(b+2a).\) Reducing this equation modulo 3, we obtain the other form of \(h_{1,q}\) mentioned in the lemma. To show that \(h_{1,q}\equiv h_{0,q}(\text{mod }3)\), observe that \(f_{0,q}\equiv(b-a)\sum_{i=0}^{q-1}(b-a)^{3^{i}-1}\) (mod 3). Hence, the lemma is proved.
**Lemma 6.2**.: _For each \(k\in\mathbb{N}\) and \(q\in\mathbb{N},q\) prime, if \(h_{1,q}\) is irreducible in \(\mathbb{F}_{3}[a,b]\), then \(h_{k,q}\equiv h_{1,q}^{N_{k,q}}(\text{mod }3)\), for some \(N_{k,q}\in\mathbb{N}\)._
Proof.: For \(k,q\in\mathbb{N},q\) prime. Then,
\[f_{k,q}=f^{k+q}(a)-f^{k}(a)\equiv\sum_{i=k}^{k+q-1}(b-a)^{3^{i}} \equiv(b-a)^{3^{k}}(\sum_{j=0}^{q-1}(b-a)^{3^{j}-1})^{3^{k}}(\text{mod }3)\]
As \(f_{k,1}\equiv(b-a)^{3^{k}}(\text{mod }3)\), we have that \(h_{k,q}\) divides \(\left(\sum_{j=0}^{q-1}(b-a)^{3^{j}-1}\right)^{3^{k}}\) modulo 3. As \(h_{1,q}\equiv\sum_{j=0}^{q-1}(b-a)^{3^{j}-1}(\text{mod }3)\) is irreducible modulo 3, the lemma follows.
The next lemma shows that this method of showing irreducibility of \(h_{k,q}\) in \(\mathbb{Q}[a,b]\), does not extend for any prime \(q\) other than 2.
**Lemma 6.3**.: _The polynomial \(h_{1,q}(\text{mod }3)\) is irreducible in \(\mathbb{F}_{3}[a,b]\iff q=2\)._
Proof.: By \(\tilde{h}_{1,q}\) we will denote the image of \(h_{1,q}\) under the quotient map \(\mathbb{Z}[a,b]\rightarrow\mathbb{F}_{3}[a,b].\) We already know that for \(q=2\), \(\tilde{h}_{1,2}\) is irreducible in \(\mathbb{F}_{3}[a,b].\) So, we need to show that \(\tilde{h}_{1,q}\) is reducible in \(\mathbb{F}_{3}[a,b]\), for any prime \(q>2.\) Now, \(\tilde{h}_{1,q}=\sum_{i=0}^{q-1}(b-a)^{3^{i}-1}\) is reducible in \(\mathbb{F}_{3}[a,b]\) iff \(g(x):=\sum_{i=0}^{q-1}x^{3^{i}-1}\) is reducible in \(\mathbb{F}_{3}[x].\) Now, consider the polynomial \(xg(x)=\sum_{i=0}^{q-1}x^{3^{i}}\), and the extension \(\mathbb{F}_{3^{q}}\) over \(\mathbb{F}_{3}.\) Comparing the orders of the two fields in this extension, there exist non-zero elements of trace \(0\) in \(\mathbb{F}_{3^{q}}\) for this extension. Now, any such element is a root of \(xg(x)\), as \(Gal(\mathbb{F}_{3^{q}}/\mathbb{F}_{3})\) is generated by the Frobenius element, \(x\mapsto x^{3}\). So, if \(g(x)\) is irreducible in \(\mathbb{F}_{3}[x]\), then \(deg(g(x))\) divides \([\mathbb{F}_{3^{q}}:\mathbb{F}_{3}]=q\). Now, \(deg(g(x))=3^{q-1}-1\) divides \(q\iff q=2\). Hence, the lemma is proved.
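This dichotomy is also easy to observe by machine. The sketch below (ours, illustrative only) factors \(g(x)=\sum_{i=0}^{q-1}x^{3^{i}-1}\) over \(\mathbb{F}_{3}\) for a few primes and lists the degrees of the irreducible factors; only \(q=2\) produces a single irreducible factor.

```python
import sympy as sp

x = sp.symbols('x')
for q in (2, 3, 5):
    g = sum(x**(3**i - 1) for i in range(q))
    _, factors = sp.factor_list(g, x, modulus=3)
    print(q, sorted(sp.degree(p, x) for p, _ in factors))   # a single entry only for q = 2
```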
## 7. The unicritical case
Putting \(a=0\) in Equation (1.1), we get the normal form for monic, reduced, unicritical cubic polynomial,
\[f(z)=z^{3}+b.\]
Let \(h_{k,n}^{uni}\) be the polynomial in \(\mathbb{Z}[b]\) whose roots are exactly the values of \(b\) for which \(0\) is strictly \((k,n)\)-preperiodic under \(f(z)=z^{3}+b\). Putting \(a=0\) in Equation (2.1), we get that \(h_{k,n}^{uni}\) divides \(h_{k,n}(0,b)\) in \(\mathbb{Z}[b]\). Milnor [14] conjectured that \(h_{k,n}^{uni}\) is either constant or irreducible over \(\mathbb{Q}\), for any \(k\geq 0,n\geq 1\). In this section, we will prove that \(h_{k,2}^{uni}\) is either constant or irreducible over \(\mathbb{Q}\), for any even \(k\in\mathbb{Z}_{\geq 0}\).
**Remark 7.1**.: The polynomial \(h_{k,n}^{uni}\) can be constant for some \((k,n)\in\mathbb{Z}^{2},k\geq 0,n\geq 1\). For example, \(h_{1,n}^{uni}\) is equal to one for any \(n\in\mathbb{N}\). This can be shown from the following observation: if the critical point \(0\) is \((1,n)\)-preperiodic for the polynomial \(f(z)=z^{3}+b\), then \(0\) is \(n\)-periodic too.
**Theorem 7.2**.: _For any even \(k\in\,\mathbb{Z}_{\geq 0}\), the polynomial \(h_{k,2}^{uni}\) is either constant or an irreducible polynomial over \(\mathbb{Q}\)._
Proof.: From section 5, we get that the polynomial \(h_{k,2}\) is \(3\)-Eisenstein w.r.t. the polynomial \(h_{1,2}=(b-a)^{2}+1\). From the proof of Corollary 5.7, we get that for any even \(k\in\mathbb{N}\), the resultant \(\operatorname{Res}(h_{k,2},h_{1,2})=9(4a^{2}+1)\). For any \(k\in\mathbb{N}\cup\{0\},\;\;h_{k,2}\) is monic as a polynomial in \(b\) over the integral domain \(\mathbb{Z}[a]\). Hence, the degree of \(h_{k,2}\) as a polynomial in \(b\) over \(\mathbb{Z}[a]\) is the same as the degree of \(h_{k,2}(0,b)\) as an element of \(\mathbb{Z}[b]\). By the invariance of the resultant under ring homomorphisms that preserve the degrees of the polynomials, we see that \(\operatorname{Res}(h_{k,2}(0,b),h_{1,2}(0,b))=\operatorname{Res}(h_{k,2},h_{1, 2})(0)=9\). Hence, the polynomial \(h_{k,2}(0,b)\) is \(3\)-Eisenstein w.r.t. the polynomial \(h_{1,2}(0,b)=b^{2}+1\). So for even \(k\), the polynomial \(h_{k,2}(0,b)\) is irreducible over \(\mathbb{Q}\). As \(h_{k,2}^{uni}\) divides \(h_{k,2}(0,b)\) in \(\mathbb{Q}[b]\), we get that \(h_{k,2}^{uni}\) is either constant or an irreducible polynomial in \(\mathbb{Q}[b]\).
**Remark 7.3**.: Theorem 7.2 partially proves Milnor's conjecture on the unicritical case [14]. A stronger version of this theorem has been proved in [11] (also, see [1]).
|
2309.00118 | Signatures of Majorana Zero-Modes in an isolated one-dimensional
superconductor | We examine properties of the mean-field wave function of the one-dimensional
Kitaev model supporting Majorana Zero Modes (MZMs) \emph{when restricted} to a
fixed number of particles. Such wave functions can in fact be realized as exact
ground states of interacting number-conserving Hamiltonians and amount to a
more realistic description of the finite isolated superconductors. Akin to
their mean-field parent, the fixed-number wave functions encode a single
electron spectral function at zero energy that decays exponentially away from
the edges, with a localization length that agrees with the mean-field value.
Based purely on the structure of the number-projected ground states, we
construct the fixed particle number generalization of the MZM operators. They
can be used to compute the edge tunneling conductance; however, notably the
value of the zero-bias conductance remains the same as in the mean-field case,
quantized to $2e^2/h$. We also compute the topological entanglement entropy for
the number-projected wave functions and find that it contains a `robust'
$\log(2)$ component as well as a logarithmic correction to the mean field
result, which depends on the precise partitioning used to compute it. The
presence of the logarithmic term in the entanglement entropy indicates the
absence of a spectral gap above the ground state; as one introduces
fluctuations in the number of particles, the correction vanishes smoothly. | Rohith Sajith, Kartiek Agarwal, Ivar Martin | 2023-08-31T20:15:43Z | http://arxiv.org/abs/2309.00118v1 | # Signatures of Majorana Zero-Modes in an isolated one-dimensional superconductor
###### Abstract
We examine properties of the mean-field wave function of the one-dimensional Kitaev model supporting Majorana Zero Modes (MZMs) _when restricted_ to a fixed number of particles. Such wave functions can in fact be realized as exact ground states of interacting number-conserving Hamiltonians and amount to a more realistic description of the finite isolated superconductors. Akin to their mean-field parent, the fixed-number wave functions encode a single electron spectral function at zero energy that decays exponentially away from the edges, with a localization length that agrees with the mean-field value. Based purely on the structure of the number-projected ground states, we construct the fixed particle number generalization of the MZM operators. They can be used to compute the edge tunneling conductance; however, notably the value of the zero-bias conductance remains the same as in the mean-field case, quantized to \(2e^{2}/h\). We also compute the topological entanglement entropy for the number-projected wave functions and find that it contains a 'robust' \(\log(2)\) component as well as a logarithmic correction to the mean field result, which depends on the precise partitioning used to compute it. The presence of the logarithmic term in the entanglement entropy indicates the absence of a spectral gap above the ground state; as one introduces fluctuations in the number of particles, the correction vanishes smoothly.
## I Introduction
Majorana fermions are the real solutions of the Dirac equation that act as their own antiparticles. Remarkably, in the condensed matter setting, Majorana fermions emerge as natural quasiparticles in magnetic [1, 2, 3] and superconducting systems that exhibit topological order in one [4] and two dimensions [5, 6]. While the usual Dirac, or complex, fermions can always be decomposed into pairs of Majorana fermions, it is only in certain cases, when a system has topological order, that one can realize spatially _unpaired_ Majorana fermions as quasiparticles. These states commute with the Hamiltonian and thus cost zero energy; they encode the topologically protected ground state degeneracy of the system and are referred to as Majorana zero modes (MZMs). Unlike complex fermions or Abelian anyons whose exchange only results in a phase transformation of the wave function, MZMs exhibit non-Abelian exchange statistics, whereby their exchange results in a unitary transformation on the multi-dimensional ground state manifold. This makes MZMs a valuable component of putative quantum computers operating on a quantum register of qubits encoded in the ground state degeneracy of a topological many-body system [7].
While MZMs can be built into certain interacting spin models _exactly_, they are realized in superconductors as zero-energy self-conjugate Bogoliubov quasiparticles \(\gamma\)[4, 6] satisfying \(\gamma^{2}=1\) within Bardeen-Cooper-Shrieffer (BCS) mean field theory. At the mean-field level, superconductors have a well-defined phase that is conjugate to the number of electrons; thus, the ground state has a fluctuating number of electrons. If we consider an electrically isolated piece of superconductor, such fluctuations are clearly impossible. Therefore, strictly speaking, the mean-field description cannot be correct in a finite system, and the survival of MZMs in this setting becomes a non-trivial problem.
While this could be a matter of concern even in large superconductors, it is particularly critical in thin-wire superconductors where phase fluctuations are further enhanced due to reduced spatial dimensionality and system-size effects. Given that this is precisely the setting of some topological quantum computing schemes based on manipulation of MZMs [8, 9], it is important to carefully examine the consequences of going beyond the BCS mean-field limit.
To address the concerns about possible artifacts of the BCS approximation on Majoranas, we examine the presence of MZMs in a one-dimensional superconducting chain by shifting focus from the Hamiltonian to the structure of the many-body ground state. At the mean-field level, the Kitaev p-wave superconducting chain has two MZMs in its topological phase, one at each edge of the superconductor. Instead of examining the full mean-field Kitaev ground state with a fixed phase and a fluctuating number of electrons, we consider the states obtained when \(|\Psi_{K}\rangle\) is projected onto a fixed number \(N\) of electrons, \(|N\rangle\).
It may appear that the number projection procedure on the BCS wave function is rather arbitrary and not guaranteed to give a good approximation of the many-body wave function in a superconductor with a fixed number of particles. One can show, however, that the number projection procedure gives the same result as a variational calculation of a fixed-number wave function
using a number-conserving interacting Hamiltonian [10]. In the case of the Kitaev wave function, in fact, it is possible to explicitly construct a number-conserving Hamiltonian for which \(\left|\Psi_{K}\right\rangle\) is the _exact_ ground state [11; 12]. The Hamiltonian is physically meaningful, with only short-range hopping and interactions. Since the Hamiltonian does not mix different number sectors, the fact that \(\left|\Psi_{K}\right\rangle\) is the ground state automatically implies that all projections \(\left|N\right\rangle\) are the ground states as well. This gives us an additional reason to study the properties of \(\left|N\right\rangle\) in detail.
We generally find that the number-projected version of the Kitaev wave function indeed retains some key features typically associated with MZMs. Namely, the single-electron spectral function has a zero-frequency peak near the edges of the wire, in direct analogy to the mean-field MZMs. We are also able to construct a proper generalization of Majorana operators for the fixed number case, which induces exact transitions between ground states that differ by one in the number of electrons, \(\left|N\right\rangle\leftrightarrow\left|N+1\right\rangle\). Similar to the standard mean-field Majorana operator, this operator (superficially) appears local. However, in reality, it encodes non-local correlations via a Cooper pair operator that it explicitly contains. The Cooper pair \(P^{\dagger}\) induces a transition from the state \(\left|N\right\rangle\) to the state \(\left|N+2\right\rangle.\) The form of the Majorana operators happens to match the conjecture made recently in Ref. [8].
Focusing exclusively on the many-body ground-state wave function allows us to make Hamiltonian-independent statements. In this way, our approach is complementary to the exact solutions of bulk models of topological superconductors available in some cases [13; 14; 11], bosonization analysis [15; 16; 17; 18], and DMRG [19; 20]. However, it also makes it impossible to access some important quantities such as the gap between the ground state(s) and the excited states. One can partially address this issue by studying the entanglement properties of the wave function. For the projected Kitaev wave function \(\left|N\right\rangle\), we find that the topological entanglement entropy exhibits a robust \(\log(2)\) value, identical to that observed for the mean-field wave function in the topological phase. However, it additionally contains a logarithmic correction that is dependent on the precise geometry of the partitions used to compute the topological entanglement entropy. These results suggest that such a wave function can only appear as the ground state of a _gapless_ Hamiltonian [21]. Although this does not completely preclude the presence of topologically protected zero modes [22], it may make the dynamical manipulation of putative MZMs challenging. We leave the numerical study of braiding and measurement-based computing with MZMs for future work.
## II Fixed number wave function
### Mean field model and its ground states.
Our main object of study is the number-projected ground state wave function of Kitaev's model for the mean-field \(p\)-wave superconductor [4]. To start, let us summarize the main points about the mean-field model. Its Hamiltonian is
\[H_{MF}=-\sum_{j=1}^{L-1}\{ta_{j}^{\dagger}a_{j+1}+\mu a_{j}^{\dagger}a_{j}- \Delta a_{j}a_{j+1}+h.c.\} \tag{1}\]
where \(a_{j}\), \(a_{j}^{\dagger}\), and \(n_{j}\) are fermionic annihilation, creation, and density operators for the \(j\)-th site, \(\Delta\) is the superconducting gap, \(t\) is a hopping amplitude, \(\mu\) is the chemical potential, and \(L\) is the chain length.
The topological phase persists as long as \(\left|\mu\right|<2t\) and is characterized by the appearance of Majorana modes near the system's edges when placed on open boundary conditions. For \(\Delta=t,\) and \(\mu=0,\) they are perfectly isolated on the first and the last sites of the chain, and can be expressed in terms of the physical electrons as \(\gamma_{1}=a_{1}+a_{1}^{\dagger}\) and \(\gamma_{2}=-i(a_{L}-a_{L}^{\dagger})\). These operators, as well as the corresponding complex fermion \(f=(\gamma_{1}+i\gamma_{2})/2\), have a trivial Heisenberg evolution. This implies that \(f\) is a zero-energy fermion mode. Its occupation number \(n_{f}=f^{\dagger}f=\{0,1\}\) can be used to label the single wire ground states. Explicitly, the two ground states of the Hamiltonian at the special \(t=\Delta\) point of (1) are [23]
\[\left|\Psi_{e,o}\right\rangle=\frac{1}{2^{\frac{L-1}{2}}}\sum_{n_{1}+...+n_{L }=e,o}\left(a_{1}^{\dagger}\right)^{n_{1}}\left(a_{2}^{\dagger}\right)^{n_{2} }\ldots\left(a_{L}^{\dagger}\right)^{n_{L}}\left|0\right\rangle. \tag{2}\]
Note that the sum goes over any combination \(\{n_{j}\}\) such that the total number of electrons is either odd or even, depending on the sector. Indeed, \(H_{MF}\left|\Psi_{e,o}\right\rangle=-(L-1)\left|\Psi_{e,o}\right\rangle;\) that is, odd and even states have the same energy. Moreover, \(f\left|\Psi_{e}\right\rangle=0\) and \(f^{\dagger}\left|\Psi_{e}\right\rangle=\left|\Psi_{o}\right\rangle\) - the odd and even states are the eigenstates of \(n_{f}\) with the eigenvalues \(1\) and \(0\), respectively.
### Wave function projected to fixed number of particles
The mean-field BCS wave function is a superposition of states with different numbers of electrons and thus cannot be literally correct for an isolated system. An alternative variational treatment in a fixed-number sector, however, yields the same BCS equations for the transition temperature and the gap equation [10]. As one could anticipate, the fixed-number _generalized_ BCS wave function is nothing but the mean-field BCS wave function, projected onto a fixed number of particles.
Applied to the Kitaev chain, this procedure yields
\[\left|N\right\rangle=\frac{1}{\sqrt{\binom{L}{N}}}\sum_{n_{1}+...+n_{L}=N}\left(a _{1}^{\dagger}\right)^{n_{1}}\!\left(a_{2}^{\dagger}\right)^{n_{2}}\!\ldots\! \left(a_{L}^{\dagger}\right)^{n_{L}}\left|0\right\rangle, \tag{3}\]
Note the change in the normalization factor, since \(\binom{L}{N}=\frac{L!}{N!(L-N)!}\) is the number of configurations with \(N\) electrons. Even though the wave functions \(\left|N\right\rangle\) are obtained from the mean-field wave functions \(\left|\psi_{e,o}\right\rangle\), we are interested in identifying Majorana-like features contained in \(\left|N\right\rangle\), irrespective of their mean-field origin.
The eigenstates \(\left|N\right\rangle\) could originate from a variety of Hamiltonians. A particularly nice example is a fully conserving Hamiltonian of \(N\) spinless fermions hopping on an \(L\) site one-dimensional wire with open boundary conditions constructed in Refs. [11; 12]:
\[H=-J\sum_{i=1}^{L-1}\{a_{j}^{\dagger}a_{j+1}\!+\!a_{j+1}^{\dagger}a_{j}\!-\!n _{j}\!-\!n_{j+1}\!+\!2n_{j}n_{j+1}\}. \tag{4}\]
Via a Jordan-Wigner Transformation, this Hamiltonian can be written as a spin-\(1/2\) ferromagnetic Heisenberg chain. The ground states are fully polarized states with a total spin of \(L/2\). It has the degeneracy \(L+1\) due to the arbitrary orientation of the total moment (number of distinct projections of the total moment on any given axis). The energy gap between the degenerate ground states and the first excited state corresponds to single magnon excitations and hence scales as \(L^{-2}\). The \((L+1)\)-fold ground state degeneracy of the Heisenberg model corresponds to the ground state degeneracy across \(L+1\) possible number sectors in the fermion picture.
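As a quick sanity check (ours, not taken from Refs. [11; 12]), one can confirm by exact diagonalization on a short chain that the uniform superposition of Eq. (3) is a zero-energy ground state of the Hamiltonian (4) in each number sector. Because the hopping in Eq. (4) only connects nearest neighbours, the Jordan-Wigner string drops out and the Hamiltonian can be built directly in the occupation basis; the chain length, filling, and names below are illustrative choices.

```python
import numpy as np
from itertools import combinations

L, N, J = 8, 4, 1.0
basis = list(combinations(range(L), N))            # all N-particle configurations
index = {c: i for i, c in enumerate(basis)}
H = np.zeros((len(basis), len(basis)))

for i, c in enumerate(basis):
    occ = set(c)
    for j in range(L - 1):
        nj, nj1 = int(j in occ), int(j + 1 in occ)
        H[i, i] += J * (nj + nj1 - 2 * nj * nj1)   # +J(n_j + n_{j+1} - 2 n_j n_{j+1})
        if nj != nj1:                              # hopping -J(a^+_j a_{j+1} + h.c.)
            new = tuple(sorted(occ ^ {j, j + 1}))
            H[index[new], i] += -J

evals, evecs = np.linalg.eigh(H)
psi_N = np.ones(len(basis)) / np.sqrt(len(basis))  # Eq. (3): uniform superposition
print("ground-state energy:", evals[0])            # expect ~0
print("overlap with |N>:", abs(psi_N @ evecs[:, 0]))   # expect ~1
```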
While not important for most of the present work, a number-conserving Hamiltonian \(H\) will be needed when we study a junction-type braiding protocol in future work; here we focus exclusively on the properties of \(\left|N\right\rangle.\)
A final note. In this study, we restrict ourselves to the parent mean-field wave function \(\left|\Psi_{K}\right\rangle\) constructed for \(\mu=0\) due to its simplicity. Nevertheless, we will use it to access \(\left|N\right\rangle\) states with \(N\) values that correspond to filling fractions \(p\) other than one half. Despite this simplification, the localization length of MZMs predicted by this wave function agrees remarkably well with the mean-field result even away from half-filling for a large range of \(p\).
## III Properties of projected wave function
The mean-field solution of the Kitaev Hamiltonian has a Bogoliubov quasiparticle at zero energy (i.e., at chemical potential) with probability amplitude concentrated near both ends of the chain. This quasiparticle leads to a zero-energy peak in the density of states while tunneling into the edge sites, but not into the bulk. Each of the edge modes in the mean-field treatment is associated with a Majorana zero mode.
In this section, we will see how these features manifest in the number-projected wave function. We find that the spectral function retains the zero-energy peaks near the edges and that it is possible to construct operators analogous to the Majorana operators that induce transitions between ground states with \(N\) and \(N+1\) particles, perfectly in the limit of \(L,N\rightarrow\infty\). In the process, we also construct a Cooper pair operator, which switches between states \(N\) and \(N+2\).
### Edge mode and spectral function
The hallmark of the edge Majorana modes in Kitaev wire is the appearance of the peak in spectral function at zero energy near the wire edges. In the mean-field treatment, this originates from the self-conjugate Bogoliubov quasiparticles at the edge. In the many-body setting such quasiparticles a priori may not exist, but the spectral function can be computed for any number-projected ground states \(\left|N\right\rangle\). It is defined as
\[A_{i}(\omega)=\sum_{n}|\bra{\psi_{n}}a_{i}^{\dagger}\left|N\right\rangle|^{2} \delta(\omega-E_{n}+E_{N}), \tag{5}\]
where the sum goes over all states \(\left|\psi_{n}\right\rangle\) connected to \(\left|N\right\rangle\) by a single electron creation operator. Let us examine matrix elements for the transitions between ground states; thus, we may set \(\left|\psi_{n}\right\rangle=\left|N+1\right\rangle\). As an example, suppose we try to add an electron to site \(1\). The result is
\[a_{1}^{\dagger}\left|N\right\rangle=\frac{1}{\sqrt{\binom{L}{N}}}\sum_{n_{2}+ \ldots+n_{L}=N}a_{1}^{\dagger}\!\left(a_{2}^{\dagger}\right)^{n_{2}}\!\left( a_{3}^{\dagger}\right)^{n_{3}}\!...\!\left(a_{L}^{\dagger}\right)^{n_{L}}\left|0 \right\rangle. \tag{6}\]
There are \(\binom{L-1}{N}\) terms in this sum. The overlap with the number-projected state with \(N+1\) electrons is therefore
\[\left\langle N+1\right|a_{1}^{\dagger}\left|N\right\rangle=\frac{\binom{L-1}{N }}{\sqrt{\binom{L}{N}\binom{L}{N+1}}}\rightarrow\sqrt{p(1-p)}, \tag{7}\]
the latter valid in the limit of large \(L\) and \(N\), and finite \(\frac{N}{L}\equiv p.\) By symmetry, the same result holds for the matrix element of \(a_{L}^{\dagger}.\)
We may also compute the amplitude to insert an electron at an arbitrary site \(j,\)\(\left\langle N+1\right|a_{j}^{\dagger}\left|N\right\rangle.\) Due to the anticommutation of fermion operators, we may express this as a sum over the number of fermions \(k\) that are present at the sites \(1\leq k\leq j-1\)
\[\left\langle N+1\right|a_{j}^{\dagger}\left|N\right\rangle=\sum_{k=0}^{j-1}(-1 )^{k}\frac{\binom{j-1}{k}\binom{L-j}{N-k}}{\sqrt{\binom{L}{N}\binom{L}{N+1}}} \tag{8}\]
In the limit of large system size and \(j\ll L\), the above expression simplifies as
\[\begin{split}\left\langle N+1\right|a_{j}^{\dagger}\left|N\right\rangle& =\frac{\binom{L-1}{N}}{\sqrt{\binom{L}{N}\binom{L}{N+1}}}\\ &\times\sum_{k=0}^{j-1}\binom{j-1}{k}(-1)^{k}p^{k}(1-p)^{j-1-k} \\ &=\sqrt{p(1-p)}(1-2p)^{j-1}\end{split} \tag{9}\]
The numerator in the prefactor, just as before, is the number of states where one of \(N+1\) electrons is fixed in the lattice; the terms under the sum correspond to the probabilities to have \(k\) electrons in the first \(j-1\) sites, with the sign determined by the parity of \(k\). Note that below half-filling (\(p<0.5\)) the matrix element has a positive sign, while above half-filling, it is oscillatory. Exactly at half-filling, it is only possible to create an electron on the first site without exciting outside the ground-state manifold. Ignoring the sign changes, the general expression of the inverse decay length is \(\xi_{p}^{-1}=\ln|1-2p|\). In Fig. 1, we compare the exact combinatorial evaluation of the matrix element and the limiting result of Eq. (9) for a system of total length \(L=500\). The agreement is very good near the edges of the wire.
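The comparison is easy to reproduce; a minimal sketch (ours, with an illustrative choice of \(N\), since only \(L=500\) is quoted above) evaluates the exact expression (8) and the asymptotic form (9) side by side.

```python
from math import comb, sqrt

L, N = 500, 150                 # illustrative filling p = 0.3
p = N / L

def exact(j):                   # Eq. (8), computed with exact integer arithmetic
    s = sum((-1)**k * comb(j - 1, k) * comb(L - j, N - k) for k in range(min(j, N + 1)))
    return s / sqrt(comb(L, N) * comb(L, N + 1))

def asymptotic(j):              # Eq. (9)
    return sqrt(p * (1 - p)) * (1 - 2*p)**(j - 1)

for j in (1, 2, 5, 10, 20):
    print(j, exact(j), asymptotic(j))   # the two columns should agree near the edge
```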
We can further compare the localization lengths for the number-projected wave functions with those of the mean-field ground states of the Kitaev Hamiltonian. At the special point with \(t=\Delta\), the localization length of Majorana edge modes in the topological phase is determined by \(\xi^{-1}=\ln\frac{|\mu|}{2t}\)[4]. The chemical potential \(\mu\) is related to average electron density; by diagonalizing the mean-field Hamiltonian, we find
\[p(\mu)=\int_{0}^{2\pi}\frac{dk}{4\pi}\left[1+\frac{t\cos k+\mu}{\sqrt{t^{2}+ \mu^{2}+2t\mu\cos k}}\right]. \tag{10}\]
In a finite-length chain, this value of filling \(p\) is only defined approximately due to the uncertainty in the number of particles that scales as \(\sqrt{N}\) in the mean-field ground state. The fluctuation of fillings \(\Delta p\sim L^{-0.5}\) is also finite at intermediate fillings. Thus, for large but finite system sizes, the mean-field wave function at \(p\neq 0.5\) can be quite different from the one \(p=0.5\) (chemical potential \(\mu=0\)), which we use as the parent wave function for our number-projected ground states. Despite this difference, a direct comparison of the mean-field Majorana localization length as a function of density and the decay length of the single electron matrix element between the number-projected ground states [Eq. (9)] shows an excellent match in a finite range of fillings near \(p=0.5\); see Fig. 2.
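A sketch of this comparison (ours; SciPy is assumed for the quadrature) integrates Eq. (10) numerically and compares the decay constant \(|1-2p|\) of the number-projected matrix element with the mean-field value \(|\mu|/2t\) for a few illustrative chemical potentials.

```python
import numpy as np
from scipy.integrate import quad

t = 1.0

def filling(mu):                # Eq. (10)
    integrand = lambda k: 1 + (t*np.cos(k) + mu) / np.sqrt(t**2 + mu**2 + 2*t*mu*np.cos(k))
    return quad(integrand, 0, 2*np.pi)[0] / (4*np.pi)

for mu in (0.1, 0.3, 0.6, 0.9):
    p = filling(mu)
    print(mu, p, abs(1 - 2*p), mu / (2*t))   # compare the two decay constants
```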
This agreement indicates that the single-electron matrix elements between the number-projected ground states capture the behavior of Majorana operators subject to the constraint that states must have a fixed number of electrons.
Building upon this observation, we next construct an operator \(\Gamma\) that converts between number-projected states perfectly, satisfying \(\left|N+1\right\rangle=\Gamma^{\dagger}\left|N\right\rangle.\) This is in contrast to \(a_{1}^{\dagger}\), which only gives an \(\mathcal{O}(1)\) matrix element, as demonstrated in Eq. (7). The key ingredient to construct \(\Gamma^{\dagger}\) is the Cooper pair operator \(P^{\dagger},\) which accomplishes the transition between the states \(\left|N\right\rangle\) and \(\left|N+2\right\rangle\): \(\left|N+2\right\rangle=P^{\dagger}\left|N\right\rangle\).
### Cooper pair creation operators at fixed N
We define the Cooper pair creation operator as the operator that transforms \(\left|\Psi_{N}\right\rangle\) into \(\left|\Psi_{N+2}\right\rangle.\) It is simple to see that the ansatz
\[P^{\dagger}=\sum_{i=1}^{L-1}a_{i}^{\dagger}a_{i+1}^{\dagger} \tag{11}\]
accomplishes precisely that for \(L,N\rightarrow\infty,\) since
\[\frac{\left\langle N+2\right|P^{\dagger}\left|N\right\rangle}{\sqrt{\left\langle N \right|PP^{\dagger}\left|N\right\rangle}}\to 1. \tag{12}\]
That is, the normalized state \(P^{\dagger}\left|N\right\rangle\) becomes identical to \(\left|N+2\right\rangle\). To leading order in \(L\), Eq. (12) can be written as \(\binom{L-2}{N}/\sqrt{\binom{L}{N+2}\binom{L-4}{N-2}},\) which indeed approaches \(1\) for any finite \(p\) as \(L\rightarrow\infty\).
Viewing \(\left|N\right\rangle\) as a superposition of bitstrings, it might seem surprising that \(P^{\dagger}\) creates all bitstrings with \(N+2\) particles since we only add particles on adjacent sites. However, the set of \(N+2\) particle states that are reachable from \(\left|N\right\rangle\) via \(P^{\dagger}\) are those that have at least \(1\) pair of adjacent particles somewhere within the system. At fixed filling \(p=N/L\) and \(N,L\rightarrow\infty,\) all random bit strings have such a local pair somewhere within the system with probability \(1\), meaning that \(P^{\dagger}\) indeed reaches all \(N+2\) particle states in the large system limit.
Interestingly, the Cooper pair operator of Eq. (11) is not unique. It is easy to show that
\[P_{\ell}^{\dagger}=\sum_{i=1}^{L-\ell}a_{i}^{\dagger}a_{i+\ell}^{\dagger} \tag{13}\]
also works. First of all, for \(L,N\rightarrow\infty\)
\[\left\langle N+2\right|a_{i}^{\dagger}a_{i+\ell}^{\dagger}\left|N\right\rangle =p(1-p)(1-2p)^{\ell-1}. \tag{14}\]
Note that the decay law of this anomalous correlator (14) is the same as the matrix element for a single-electron zero-energy transition between number-projected states in Eq. (9). Only for \(p=0.5\) does this matrix element vanish for \(\ell>1\). Normalizing in the same way as Eq. (12) shows that \(P_{\ell}^{\dagger}\left|N\right\rangle=\operatorname{sgn}(1-2p)^{\ell-1} \left|N+2\right\rangle\).
The redundancy in the definition of the Cooper pair operator thus implies that they all act identically in the ground state manifold of the number-projected states. This follows from Eq. (12):
\[\left\langle N\right|P\left|N+2\right\rangle\!\left\langle N+2\right|P^{ \dagger}\left|N\right\rangle=\left\langle N\right|PP^{\dagger}\left|N\right\rangle.\]
Inserting the resolution of identity on the right-hand side shows that the operator \(P^{\dagger}\) only connects one ground state to another ground state with two additional electrons.
This perfect transformation also implies that a linear superposition with arbitrary amplitudes \(\alpha_{\ell},\)\(\sum_{\ell}\alpha_{\ell}P_{\ell}\) is also a legitimate Cooper pair operator (see Ref. [12], where \(\alpha_{l}=\text{const.}\)).
In what follows, we will assume that the Cooper pair operators are normalized, such that \(\left|N+2\right\rangle=P^{\dagger}\left|N\right\rangle.\)
### Majorana Operators at fixed N
We are now ready to define the Majorana operators \(\Gamma^{\dagger}\) for the number-projected case as an operator that induces a perfect transition between ground states with \(N\) and \(N+1\) electrons:
\[\frac{\left\langle N+1\right|\Gamma^{\dagger}\left|N\right\rangle}{\sqrt{ \left\langle N\right|\Gamma\Gamma^{\dagger}\left|N\right\rangle}}\to 1. \tag{15}\]
Motivated by the mean-field analogy and the result in Eq. (9), we look for operators of the form \(\Gamma_{L}^{\dagger}=\sum_{j=1}^{L}\beta^{j-1}(a_{j}^{\dagger}+a_{j}P^{ \dagger})\) at the left edge of the wire, and analogously, \(\Gamma_{R}^{\dagger}=i\sum_{j=1}^{L}\beta^{j-1}(a_{L+1-j}^{\dagger}-a_{L+1-j} P^{\dagger})\) at the right edge of the wire. Using the fact that \(P^{\dagger}\left|N\right\rangle=\left|N+2\right\rangle\), in the limit of large \(L\) and finite \(p\), we find \(\beta=(1-2p)\), the same decay constant as in Eq. (9). The minus sign in front of the annihilation operator in \(\Gamma_{R}\) is evident for \(p=0.5\) since the parity of permutations needed to apply \(a_{L}^{\dagger}\) to \(\left|N\right\rangle\) is opposite from that needed to apply \(a_{L}\) to \(\left|N+2\right\rangle\). In both expressions, we ignore the overlap of \(\Gamma_{L}\) with \(\Gamma_{R}\), taking the \(L\rightarrow\infty\) and \(\beta^{L}\to 0\) limits. Including the normalization, we obtain
\[\Gamma_{L}^{\dagger}=4p(1-p)\sum_{j=1}^{L}(1-2p)^{j-1}(a_{j}^{\dagger}+a_{j}P^ {\dagger}). \tag{16}\]
\[\Gamma_{R}^{\dagger}=i\times 4p(1-p)\sum_{j=1}^{L}(1-2p)^{j-1}(a_{L+1-j}^{ \dagger}-a_{L+1-j}P^{\dagger}). \tag{17}\]
Note that in the limit of large \(N\) and \(L\), \(\left(\Gamma_{L}^{\dagger}\right)^{2}=\left(\Gamma_{R}^{\dagger}\right)^{2}=P^{\dagger}\). Thus, while these operators do not square to unity, as the canonical Majorana operators do, they square to the operator that induces a transition between two neighboring ground states of the same parity, which is as close to the trivial operator as is possible in the number-conserving case.
Given the form of the ground state wave functions, Eq. (6), \(\Gamma_{L}^{\dagger}\) induces transitions \(\left|N\right>\rightarrow\left|N+1\right>\), while the action of \(\Gamma_{R}^{\dagger}\) is more complex, \(\left|N\right>\to i(-1)^{N}\left|N+1\right>\). This replicates the canonical fermionic anticommutation relations \(\Gamma_{L}^{\dagger}\Gamma_{R}^{\dagger}=-\Gamma_{R}^{\dagger}\Gamma_{L}^{\dagger}\), further extending the correspondence between the \(\Gamma\) operators introduced here and the Majorana operators that appear in the mean-field treatment of the Kitaev model. Note that this by itself does not imply that the operators \(\Gamma\) are simple fermionic operators, only that they act as such within the ground state manifold. Furthermore, since these operators are constrained only with respect to their action on the ground states, they may contain arbitrary terms that act in the orthogonal subspace with no visible effect for our purposes.
The \(\Gamma_{L,R}\) operators given by Eqs. (16, 17) have an explicit dependence on the number of particles \(N\); however, this dependence is smooth, only via the filling \(p=N/L\). To make a connection with the mean-field limit, we recall that the number fluctuations are \(\mathcal{O}(N^{1/2})\), which translates into a variation of \(p\) of order \(\mathcal{O}(L^{-1/2})\) that vanishes in the large system limit. Therefore, for large systems, we can expect \(\Gamma_{L,R}\) to correspond directly to the mean-field MZM operator \(\gamma\). Indeed, \(\Gamma_{L}\) and \(\Gamma_{R}\) take a form closely similar to the traditional MZM operators within mean-field theory, with the annihilation operator part of the Bogoliubov quasiparticle "decorated" by the Cooper pair operator for the wire. To convert Eqs. (16, 17) into the mean-field expressions, it is sufficient to replace the Cooper pair creation operator with its expectation value, which is merely the superconducting order parameter. We finally note that the form of Majorana operators in Eqs. (16, 17) is precisely the one conjectured by Lin and Leggett in Ref. [8].
## IV Topological entanglement entropy
We can measure the topological robustness of the system whose ground state is a number-projected wave function by examining the topological entanglement entropy. Such a measure was first proposed for two-dimensional gapped topological systems [24] and is designed to isolate a constant long-range contribution to the entanglement entropy and can be related to the topological degeneracy of the ground state. Here we study a one-dimensional analog of this quantity [25], \(\mathcal{S}_{\text{topo}}\), defined as
\[\mathcal{S}_{\text{topo}}=\mathcal{S}_{AB}+\mathcal{S}_{BC}-\mathcal{S}_{B}- \mathcal{S}_{ABC} \tag{18}\]
where \(\mathcal{S}_{A}\) refers to, for instance, the usual von Neumann entanglement entropy of subsystem \(A\).
For the mean-field Kitaev Hamiltonian, the behavior of \(\mathcal{S}_{\text{topo}}\) has been studied in great detail, even in the presence of additional interactions; it changes abruptly from \(0\) to \(\log(2)\) as one enters the topological Kitaev phase from the trivial phase [25]. For \(\Delta=t\), the computation of the entanglement entropies can be carried out exactly as in [25]; one finds \(\mathcal{S}_{AB}=\mathcal{S}_{B}=\mathcal{S}_{ABC}=\log(2)\) while \(\mathcal{S}_{BC}=2\log(2)\), which yields \(\mathcal{S}_{\text{topo}}=\log(2)\). The \(\log(2)\) implies a topological ground state degeneracy of \(2\) in this case, as expected. Note that the exact size of regions \(A,B,C,D\) does not affect the outcome of the calculations as one should expect for a topologically robust phase; only \(\mathcal{S}_{BC}\) is different here, and this can be understood from the fact that the subsystem BC is composed of disjoint parts as shown in Fig. 3. The \(\log(2)\) topological entanglement entropy can be related to the presence of two Majorana zero modes at the edge of the system. We note, crucially, that the topological entanglement entropy is _not_ in itself a consequence of the MZMs. This is easiest to see in the case \(\Delta=t\). Turning on a boundary term in the Hamiltonian (1), \(\delta H_{MF}=b(a_{L}^{\dagger}a_{1}-a_{L}^{\dagger}a_{1}^{\dagger}+h.c.)\), converts the open boundary conditions into periodic ones. It is easy to check that the mean-field ground state wave function (2) is also an eigenstate of \(\delta H_{MF}\) with the eigenvalue \((-1)^{P}b\); that is, the state (2) remains an eigenstate regardless of the boundary conditions - even though the energy of this state depends on the strength of the boundary term and the fermion parity \(P\). Clearly, in the case of periodic (or twisted) boundary conditions, there are no edge modes. The fact that the ground state wave function remains unchanged implies that the TEE as defined in Eq. (18) is the same for open and periodic boundary conditions. We are therefore led to conclude that this definition of TEE is sensitive to the topological properties of the state even when the system has no edges (reminiscent of the appearance of edge modes in the entanglement spectrum of topological insulators [26]).
We now evaluate \(\mathcal{S}_{\text{topo}}\) for the fixed-\(N\) projected states and analyze its robustness compared to the mean-field limit. A straightforward calculation assuming the thermodynamic limit \(N,L\rightarrow\infty\), with \(p\equiv N/L,q\equiv 1-p\) (details in App. A), yields
\[\mathcal{S}_{L_{1}}=\frac{1}{2}\log\left(2\pi epq\tilde{L}\right) \tag{19}\]
where \(\tilde{L}=L_{1}L_{2}/L\) depends on the partitioning, \(L_{1}+L_{2}=L\), and \(e\) is Euler's number. For the entanglement entropy of disconnected segments, the expression is similar, with \(L_{1}\) now being the total of their lengths and with an additional \(\log 2\) contribution.
Figure 3: Partitioning of the wave function into \(4\) regions \(A,B,D,C\) to compute the topological entanglement entropy.
\[\mathcal{S}_{AB} = \frac{1}{2}\log\left(2\pi epqL_{AB}L_{CD}/L\right)\] \[\mathcal{S}_{BC} = \frac{1}{2}\log\left(2\pi epqL_{BC}L_{AD}/L\right)+\log(2)\] \[\mathcal{S}_{B} = \frac{1}{2}\log\left(2\pi epqL_{B}L_{ADC}/L\right)\] \[\mathcal{S}_{ABC} = \frac{1}{2}\log\left(2\pi epqL_{ABC}L_{D}/L\right)\] \[\mathcal{S}_{\mathrm{topo}} = \frac{1}{2}\log\left(\frac{L_{AB}L_{CD}L_{BC}L_{AD}}{L_{B}L_{ADC }L_{ABC}L_{D}}\right)+\log(2) \tag{20}\]
In the limit where \(L_{AB}\ll L_{CD}\), \(\mathcal{S}_{AB}\approx\frac{1}{2}\log L_{AB}\); the logarithmic behavior of the entanglement entropy indicates that this wave function is the ground state of a gapless Hamiltonian [21], which is indeed true for the Hamiltonian in Eq. (4). This contrasts with the result for the mean-field Kitaev wave function for which the contributions to the entanglement entropy are purely area law, as the parent Hamiltonian is gapped. However, we also see that the piece \(\mathcal{S}_{BC}\) has a robust geometry independent \(\log(2)\) contribution which ultimately comes from the fact that BC is a subsystem composed of physically disjoint parts. This \(\log(2)\) agrees with the result for the mean-field Kitaev wave function and is a signature of topological order in the projected wave function. We thus note that gaplessness does not preclude the presence of robust edge modes which may be algebraically or even exponentially localized [22]. From the practical standpoint, however, it may be significantly more challenging to reach the ground state, or to perform braiding operations with such modes without exciting above the ground state.
To study the topological entanglement entropy more generally, we numerically compute it for arbitrary filling fraction \(p=N/L\) for a particular partitioning of the system, with \(L_{A}=L_{B}=L_{C}=L_{D}\), for which \(\mathcal{S}_{\mathrm{topo}}=\log(8/3)\) at arbitrary but finite \(p\) [from Eq.(20)]. We find excellent numerical agreement with this result at \(p=1/2\) and find that this value is robust for a broad range of \(p\) around half-filling. As the filling fraction approaches the extreme values of \(p=0,1\), \(\mathcal{S}_{\mathrm{topo}}\) approaches \(0\) as expected. However, as the system size is increased, this transition appears to become sharper.
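For concreteness, substituting the equal partition \(L_{A}=L_{B}=L_{C}=L_{D}=\ell\) into Eq. (20), the filling-dependent prefactors cancel and only the geometric factor survives,

\[\mathcal{S}_{\mathrm{topo}}=\frac{1}{2}\log\left(\frac{2\ell\cdot 2\ell\cdot 2\ell\cdot 2\ell}{\ell\cdot 3\ell\cdot 3\ell\cdot\ell}\right)+\log(2)=\frac{1}{2}\log\frac{16}{9}+\log(2)=\log\frac{8}{3},\]

independent of \(\ell\) and of \(p\), which is the saturation value seen in Fig. 4.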
Finally, we note that the picture does not appear to change much when \(t\neq\Delta\neq 1\) or \(\mu\neq 0\), indicating the robustness of the result of Eq. (20). For further discussion and data to this effect, see App. B.
## V Tunneling conductance
Having constructed explicit many-body operators \(\Gamma_{L}^{\dagger}\) and \(\Gamma_{R}^{\dagger}\) that induce transitions between \(\left|N\right>\) and \(\left|N+1\right>\), we are now ready to examine whether the tunneling conductance into the edges of a wire whose ground states are given by Eq. (6) differs from the well-known mean-field result of \(2e^{2}/h\) at zero bias and temperature. [27; 28]
Without making any assumptions except that the wire has degenerate ground states \(\left|N\right>\), the coupling to a tunneling probe is described by the Hamiltonian
\[H = H_{\mathrm{lead}}+H_{\mathrm{T}}\] \[= \sum_{k}\epsilon_{k}c_{k}^{\dagger}c_{k}-\sum_{k,N}t_{k}\left|N+ 1\right>\left<N\right|c_{k}+c_{k}^{\dagger}\left|N\right>\left<N+1\right|\]
Note that the \(\sum_{N}^{\prime}\left|N+1\right>\left<N\right|\) are precisely the \(\Gamma_{L}^{\dagger}\) operator that we constructed earlier, assuming that we limit the range of values of \(N\) to \(\left[\bar{N}-M,\bar{N}+M\right]\) with \(M\ll\bar{N}\) (signified by the prime in the summation above). This is a standard assumption within the tunneling approximation - that in the course of tunneling, the system's macroscopic density does not deviate significantly from its initial value \(\bar{N}\). Here, it helps us to aggregate the tunneling terms for different \(N\) into \(\Gamma_{L}^{\dagger}\), which are defined for a fixed filling fraction \(p\), Eq. (16).
To proceed, it is convenient to transform to the conjugate phase basis, \(\left|N\right>=\int_{0}^{2\pi}\frac{d\phi}{2\pi}\,e^{iN\phi}\left|\phi\right>\). In this basis, we have \(\Gamma_{L}^{\dagger}\propto\int_{0}^{2\pi}d\phi d\phi^{\prime}e^{i\phi^{\prime}}\delta_{\phi-\phi^{\prime}}\left|\phi^{\prime}\right>\left<\phi\right|\), where \(\delta_{x}\) is the Dirac delta function with a finite width \(\sim 1/M\). To capture the fermionic character of the \(\Gamma\) operator, we also introduce an auxiliary Majorana fermion mode \(\gamma_{L}\), which plays the same role as the Klein factor in bosonization, yielding
\[\Gamma_{L}^{\dagger} = \sum_{N}\left|N+1\right>\left<N\right| \tag{22}\] \[= \gamma_{L}\int_{0}^{2\pi}\frac{d\phi d\phi^{\prime}}{4\pi^{2}}e^ {i\phi^{\prime}}\delta_{\phi-\phi^{\prime}}\left|\phi^{\prime}\right>\left< \phi\right|.\]
This allows us to rewrite the tunneling problem Eq. (V) in terms of the auxiliary Majorana operator \(\gamma_{L}\), with
Figure 4: Topological Entanglement Entropy (TEE) \(S_{topo}\) plotted for various filling fractions \(p.\) As we approach the infinite system limit, the TEE saturates to a value of \(\log(8/3)\) for all \(p\neq 0,1.\) This is in contrast to the mean-field Kitaev limit, where the TEE is \(\log(2)\) precisely in the topological phase and zero everywhere else.
the replacement of the number state representation by the phase representation [29],
\[H_{T} =\int_{0}^{2\pi}\frac{d\phi d\phi^{\prime}\,\delta_{\phi-\phi^{ \prime}}\left|\phi^{\prime}\right\rangle\langle\phi|}{4\pi^{2}}\sum_{k}t_{k}(e^ {i\phi^{\prime}}c_{k}+e^{-i\phi^{\prime}}c_{k}^{\dagger})\gamma_{L}.\]
Note that despite the similarity with the mean-field Hamiltonian for tunneling into Majorana fermions [30], here there is no assumption of the ordering of the superconducting phase - the phase variable \(\phi\) is a free parameter that has to be integrated over.
To compute the tunneling conductance, we expand the operator for the tunneling current,
\[I = i\left[\sum_{k}c_{k}^{\dagger}c_{k},H_{T}\right]\] \[= i\int_{0}^{2\pi}\frac{d\phi d\phi^{\prime}\,\delta_{\phi-\phi^{ \prime}}\left|\phi^{\prime}\right\rangle\langle\phi|}{4\pi^{2}}\sum_{k}t_{k}( e^{i\phi^{\prime}}c_{k}-e^{-i\phi^{\prime}}c_{k}^{\dagger})\gamma_{L}.\]
The expectation value of the current as a function of bias voltage defines the tunneling conductance. It can be computed using the standard methods of linear response theory [31].
The tunnel current, or any other observable, is computed as an expectation value over the initial state, which we can choose, for instance, to be a product state of \(|N\rangle\) in the superconductor and the filled Fermi sea in the lead, with the chemical potential different from the superconductor by the value of the applied voltage.
All calculations of this kind are simplified by the following observation: In any order of perturbation theory, we will encounter contraction over the phase variable. Thanks to the presence of the delta function of width \(1/M\), these contractions are trivial - select a single value of phase for all terms involved, until we reach the order \(\sim M\). Therefore, the result is equivalent to computing expectation values at fixed phase \(\phi\), and then taking the average over it. The last step selects only the terms that are independent of \(\phi\). Given our choice of the fixed-\(N\) wave functions as the number projected version of the mean-field wave function, the tunnel current into the Majorana states is guaranteed to match the mean-field result.
We thus conclude that the tunneling conductance (or any other observable computed perturbatively over the ground state) would be the same, whether it is computed relative to the number-conserving Kitaev ground-state, or the mean-field Kitaev ground state. In particular, we should expect the \(2e^{2}/h\) zero-bias tunneling conductance result to remain unchanged. An important caveat is that if the Hamiltonian is gapless, there will be generally other contributions to conductance coming from tunneling between the lead and the other low-energy states. However, unlike the Majorana contribution, the tunneling into these states will become suppressed at weak tunneling as \(t_{k}^{2}\) and thus can be filtered out.
## VI Summary and discussion
Motivated by the question of which properties of MZMs survive in isolated superconductors, we investigated the ground state of the Kitaev chain projected to fixed electron number sectors, \(|N\rangle\). Using the exact form of these wave functions at \(\mu=0,t=\Delta\), we were able to demonstrate the presence of zero-energy edge excitations by explicit computation of the spectral function at zero energy. The localization length of the edge modes obtained from these projected wave functions closely matches the mean-field theory of MZMs (at appropriate chemical potential) near half-filling but starts to deviate at filling fractions near the boundary of the topological and trivial phases of the mean-field Kitaev chain. Further, we constructed many-body Majorana operators in the number-conserving setting that transition between different fixed number states. These operators explicitly involve the Cooper pair operator and anticommute within the ground state manifold. Unlike the mean-field case, however, they square to the Cooper pair operator, instead of identity. We further showed that tunneling from a lead into these modified Majorana operators yields the same quantized zero-bias tunneling conductance as in the mean-field case.
To shed further light on the entanglement structure of the number projected wave functions \(|N\rangle\), we computed their topological entanglement entropy (TEE) and found it contains a logarithmic correction to the mean-field result of \(\log(2)\). This can be done analytically and exactly numerically for \(t=\Delta,\mu=0\). For \(t\neq\Delta,\mu\neq 0\), we computed TEE by projecting the mean-field Kitaev wave functions to fixed filling fractions in a finite system; the computed TEE agrees well with that predicted from the projected mean-field \(\mu=0,t=\Delta\) wave functions, pointing at the universality of this result. However, the saturation value that we obtained in (20) contains a geometric piece that depends on the details of partitioning into subsystems \(A,B,D,C\). This indicates that the Hamiltonian [such as in Eq. (4)] that realizes the wave functions \(|N\rangle\) as ground states is likely to be gapless. It is yet unclear how detrimental this is to the topological robustness of the MZMs and thus requires further investigation. Whether it is possible to extract solely the partition-independent contribution to the TEE from the number-projected wave functions, e.g. by placing the system on a non-open wire geometry, is also an open question.
In this work, we focused on Hamiltonian-independent properties, which could be gleaned purely from the ground state wave function. As a result, some important questions remain outside the scope of this study. In particular, it is well established that braiding the mean-field Majorana zero-modes leads to nontrivial transformations in the degenerate ground state manifold, enabling topological quantum computation. It remains to be seen whether a T-junction braid [23] within a fully number-conserving regime recovers the non-Abelian
statistics realized in the mean-field limit. The presence of non-topological low-energy modes may make braiding very challenging, both in theory and in practice. It is also worth considering whether a measurement-based approach to braiding could be more robust to the presence of low-energy excitations. We leave these questions to future work.
A related question is whether the procedure of computing the ground state of a mean-field Hamiltonian and projecting it to a fixed number can yield a state that is the ground state of a _gapped_ Hamiltonian. Such a state would enable a more robust implementation of a dynamical braiding procedure and allow for a way to extract solely the universal piece of the topological entanglement entropy.
## VII Acknowledgements
We thank Tony Leggett and Roman Lutchyn for useful discussions. This work was funded by the Materials Sciences and Engineering Division, Basic Energy Sciences, Office of Science, US DOE.
|
2309.07677 | Aligning Speakers: Evaluating and Visualizing Text-based Diarization
Using Efficient Multiple Sequence Alignment (Extended Version) | This paper presents a novel evaluation approach to text-based speaker
diarization (SD), tackling the limitations of traditional metrics that do not
account for any contextual information in text. Two new metrics are proposed,
Text-based Diarization Error Rate and Diarization F1, which perform utterance-
and word-level evaluations by aligning tokens in reference and hypothesis
transcripts. Our metrics encompass more types of errors compared to existing
ones, allowing us to make a more comprehensive analysis in SD. To align tokens,
a multiple sequence alignment algorithm is introduced that supports multiple
sequences in the reference while handling high-dimensional alignment to the
hypothesis using dynamic programming. Our work is packaged into two tools,
align4d providing an API for our alignment algorithm and TranscribeView for
visualizing and evaluating SD errors, which can greatly aid in the creation of
high-quality data, fostering the advancement of dialogue systems. | Chen Gong, Peilin Wu, Jinho D. Choi | 2023-09-14T12:43:26Z | http://arxiv.org/abs/2309.07677v1 | # Aligning Speakers: Evaluating and Visualizing Text-based Diarization
###### Abstract
This paper presents a novel evaluation approach to text-based speaker diarization (SD), tackling the limitations of traditional metrics that do not account for any contextual information in text. Two new metrics are proposed, Text-based Diarization Error Rate and Diarization F1, which perform utterance- and word-level evaluations by aligning tokens in reference and hypothesis transcripts. Our metrics encompass more types of errors compared to existing ones, allowing us to make a more comprehensive analysis in SD. To align tokens, a multiple sequence alignment algorithm is introduced that supports multiple sequences in the reference while handling high-dimensional alignment to the hypothesis using dynamic programming. Our work is packaged into two tools, align4d providing an API for our alignment algorithm and TranscribeView for visualizing and evaluating SD errors, which can greatly aid in the creation of high-quality data, fostering the advancement of dialogue systems.
## 1 Introduction
The rise of data-driven dialogue systems, such as BlenderBot (Shuster et al., 2022) and ChatGPT1, powered by large language models (Brown et al., 2020; Lewis et al., 2020; Raffel et al., 2020), has generated significant interest across various groups. Conversational AI has emerged as a central focus for numerous organizations, presenting a wealth of potential applications. Many institutes have started utilizing recordings of human-to-human dialogues collected over the years to develop dialogue models. However, the majority of these recordings were not intended for data-driven model development originally, resulting in low-quality audio with prominent background noise. This poses inevitable challenges for automatic speech recognition (ASR) systems, while the lack of dedicated channels for individual speakers necessitates the use of robust speaker diarization (SD) techniques.
Footnote 1: [https://chat.openai.com](https://chat.openai.com)
Traditionally, SD performance has been evaluated on audio segments by testing the system's ability to recognize the number of speakers in each segment. However, these segments are often uniformly split from an audio stream, disregarding speaker context. A more insightful analysis can be made by correctly aligning tokens between reference and hypothesis transcripts and directly evaluating SD performance on the transcripts, where utterances are segmented based on speaker turns. This work carefully revisits traditional ASR/SD evaluation metrics (Section 2) and compares them with our new approach to verify
Figure 1: An example illustrating speaker diarization errors introduced during automatic speech recognition.
its effectiveness (Section 5). Our contributions are:
1. An efficient multiple sequence alignment algorithm that maps tokens between reference and hypothesis transcripts (Sections 3).
2. Two metrics for evaluating the task of text-based speaker diarization (Section 4).
3. A CPython API for our alignment algorithm and a web-based visualization interface for analysis of ASR and SD errors (Section 6).
## 2 Background
This section provides a brief overview of the most commonly used evaluation metrics for audio-based SD (Section 2.1), text-based SD (Section 2.2), and ASR (Section 2.3), as well as their limitations.
### Diarization Error Rate
For a set of audio segments \(\mathcal{S}\), the SD performance is often tested using Diarization Error Rate (DER), which measures the proportion of time in an audio segment incorrectly attributed to a speaker or left unassigned (Fiscus et al., 2006a):
\[\small\texttt{DER}=\frac{\sum_{\forall s\in\mathcal{S}}(\mathit{dur}(s)\cdot( \max(N_{r}(s),N_{h}(s))-N_{c}(s)))}{\sum_{\forall s\in\mathcal{S}}\mathit{dur}(s )\cdot N_{r}(s)} \tag{1}\]
\(\mathit{dur}(s)\) is the time duration of an audio segment \(s\). \(N_{r}(s)\) and \(N_{h}(s)\) are the numbers of speakers in \(s\) given the reference (ground-truth) and hypothesis (system-generated) transcripts, respectively. \(N_{c}(s)\) is the number of correctly identified speakers in \(s\). For a more detailed analysis, DER can be decomposed into four types of diarization errors:
**Speaker Error** occurs when a segment is attributed to a wrong speaker:
\[\small\begin{split}\mathcal{T}&=\{s:\forall s\in \mathcal{S}.\,N_{h}(s)=N_{r}(s)\}\\ E_{se}&=\frac{\sum_{\forall t\in\mathcal{T}}\mathit{ dur}(t)\cdot(N_{*}(t)-N_{c}(t))}{\sum_{\forall t\in\mathcal{T}}\mathit{dur}(t) \cdot N_{r}(t)}\end{split} \tag{2}\]
**False Alarm** occurs when a non-speech segment (e.g., pause) is assigned to a speaker, or more speakers than actual ones are identified for a segment:
\[\small\begin{split}\mathcal{T}&=\{s:\forall s\in \mathcal{S}.\,N_{h}(s)>N_{r}(s)\}\\ E_{fa}&=\frac{\sum_{\forall t\in\mathcal{T}} \mathit{dur}(t)\cdot(N_{h}(t)-N_{r}(t))}{\sum_{\forall t\in\mathcal{T}} \mathit{dur}(t)\cdot N_{r}(t)}\end{split} \tag{3}\]
**Missed Speech** occurs when the system misses to recognize a segment from a speaker, resulting in a gap in the speaker's transcript:
\[\small\begin{split}\mathcal{R}&=\{s:\forall s\in \mathcal{S}.\,N_{h}(s)<N_{r}(s)\}\\ E_{ms}&=\frac{\sum_{\forall r\in\mathcal{R}} \mathit{dur}(r)\cdot(N_{r}(r)-N_{h}(r))}{\sum_{\forall r\in\mathcal{R}} \mathit{dur}(r)\cdot N_{r}(r)}\end{split} \tag{4}\]
**Overlapping Speech** occurs when multiple speakers speak at the same time and the system fails to recognize all speakers in a segment. In this case, \(N_{h}(s)<N_{r}(s)\), and thus, it is included in \(E_{ms}\).
Given this decomposition, DER can be reformulated as follows:
\[\small\texttt{DER}=E_{se}+E_{fa}+E_{ms} \tag{5}\]
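As a minimal illustration of this bookkeeping, the following Python sketch evaluates Eq. (1) from per-segment speaker counts; the segment representation is purely illustrative and not tied to any particular diarization toolkit.

```
def der(segments):
    """Eq. (1): each segment carries its duration and the reference, hypothesis,
    and correctly identified speaker counts."""
    num = sum(s["dur"] * (max(s["n_ref"], s["n_hyp"]) - s["n_correct"]) for s in segments)
    den = sum(s["dur"] * s["n_ref"] for s in segments)
    return num / den

segments = [
    {"dur": 2.0, "n_ref": 1, "n_hyp": 1, "n_correct": 1},  # correct segment
    {"dur": 1.0, "n_ref": 1, "n_hyp": 1, "n_correct": 0},  # speaker error
    {"dur": 0.5, "n_ref": 1, "n_hyp": 2, "n_correct": 1},  # false alarm
    {"dur": 0.5, "n_ref": 2, "n_hyp": 1, "n_correct": 1},  # missed / overlapping speech
]
print(der(segments))  # (0 + 1.0 + 0.5 + 0.5) / 4.5 ~= 0.44
```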
### Word-level Diarization Error Rate
Current state-of-the-art results in SD are achieved by jointly training ASR & SD (Shafey et al., 2019), leading to the need for new evaluation metrics beyond traditional audio-based metrics such as DER. Thus, Word-level Diarization Error Rate (WDER) is proposed to evaluate the SD performance of such joint systems (Park et al., 2021). Unlike DER, which focuses only on time-based errors, WDER provides a more detailed evaluation of SD performance by considering the alignment of words and speakers in the transcriptions as follows:
\[\small\texttt{WDER}=\frac{U_{s}+O_{s}}{U+O} \tag{6}\]
\(U\) is the set of substitutions, where each substitution replaces the actual word with an incorrect one, and \(O\) is the set of correctly recognized words. \(U_{s}\) and \(O_{s}\) are the subsets of words in \(U\) and \(O\) respectively, whose speaker IDs are incorrectly identified.
It is important to note that WDER only takes into account the words aligned between the reference and hypothesis transcripts, \(U\) and \(O\) in Equation 6, so it does not consider inserted and deleted words. As a result, among the four types of errors in Section 2.1, WDER only captures speaker errors; the other three types of errors, reflected in the deleted and inserted words, are not assessed by WDER.
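This restriction is easy to see in a direct implementation; the following Python sketch of Equation 6 (with an illustrative tuple layout) operates only on the aligned pairs:

```
def wder(aligned_pairs):
    """Eq. (6): aligned_pairs holds (ref_word, hyp_word, ref_speaker, hyp_speaker)
    tuples for correctly recognized and substituted words only (the sets O and U);
    insertions and deletions never appear here."""
    total = len(aligned_pairs)
    wrong = sum(1 for _, _, ref_spk, hyp_spk in aligned_pairs if ref_spk != hyp_spk)
    return wrong / total if total else 0.0
```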
### Word Error Rate
Word Error Rate (WER) is a commonly used metric for evaluating ASR performance (Klakow and Peters, 2002). It quantifies the similarity between the reference and hypothesis transcripts by counting the min-number of edit operations (insertions, deletions, and substitutions) required to transform the hypothesis into the reference and dividing it by the total number of words in the reference as follows:
\[\small\texttt{WER}=\frac{\#(\text{insertions})+\#(\text{deletions})+\#( \text{substitutions})}{\text{Total }\#\text{ of Words in Reference}} \tag{7}\]
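A standard dynamic-programming computation of Eq. (7), written here as a short Python sketch, counts the minimum number of edit operations directly (the toy sentences loosely echo the recognition error in Figure 2):

```
def wer(reference, hypothesis):
    """Eq. (7): word error rate via minimum edit distance over whitespace tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)

print(wer("we are going to Amsterdam", "we are gonna Amsterdam"))  # 2 edits / 5 words = 0.4
```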
While WER is widely adopted, it focuses solely on word-level errors and does not consider speaker information, so it cannot capture errors related to speaker identification or segmentation. Therefore, WER is inadequate for evaluating SD.
## 3 Multiple Sequence Alignment
To evaluate text-based SD (Section 4), tokens in the hypothesis transcript must be aligned with the most similar tokens in the reference transcript. In Fig. 2, the hypothesis \(X\) has 3 errors against the reference, \(Y\) and \(Z\), causing difficulties in aligning them:
1. A spelling and word recognition error; '_going_' is recognized as '_gonna_' in the hypothesis.
2. A missing word; '_uh_' is not recognized.
3. Overlapped utterances; B's utterance is spoken while A utters '_Amsterdam_', which are merged into one utterance for A'.
The first two types are ASR errors that can be handled by most pairwise alignment methods such as the Needleman-Wunsch (NW) algorithm [15]. However, the third type is an SD error involving multiple sequences, occurring when utterances by distinct speakers overlap in time. Figure 3 illustrates how the NW algorithm treats them as insertion and deletion errors, leading to incomplete alignment of those tokens:
To overcome this challenge, a new multi-sequence alignment algorithm is designed by increasing the dimension of dynamic programming, allowing us to process utterances from all sequences in parallel (Figure 4). Our algorithm shares a similar idea with the one by [10] for expanding the reference into multiple sequences based on speakers and applying multi-dimensional dynamic programming to solve the alignment problem. While their solution is based on the Levenshtein distance with the aid of a directed acyclic graph, our algorithm uses the NW algorithm for efficiency. Furthermore, we use different scoring criteria, consisting of fully match, partial match, mismatch, and gap, which we find more effective, whereas they use match, insertion, deletion, and substitution.
### Algorithm: Scoring Matrix
Our multi-sequence alignment algorithm extends the NW algorithm to handle multiple dimensions by enhancing the scoring matrix and backtracking strategy. Let \(X=[x_{1},..,x_{\ell}]\) be a sequence created by listing all tokens in the hypothesis transcript regardless of segmentation. Let \(Y_{j}=[y_{j1},..,y_{jm}]\) be a sequence created by listing tokens of Speaker \(Y_{j}\), the \(j\)'th speaker, in the reference transcript. Given \(E=[X,Y_{1},..,Y_{n}]\), the algorithm first populates the scoring matrix \(F\), a multidimensional matrix whose dimensions are determined by the input sequence lengths, where all cells are initialized to \(0\):
```
Input :\(E=\{X,Y_{1},\ldots,Y_{n}\}\) Output :The scoring matrix \(F\)
1 Create \(F\in\mathbb{R}^{(|X|+1)\times(|Y_{1}|+1)\times\cdots\times(|Y_{n}|+1)}\);
2 \(C\leftarrow[\gamma\subset\{0,1,\ldots,n\}]\setminus\varnothing\);
3 foreach \(\gamma\in C\) do
4  foreach \(\psi\in\textit{index\_perm}(\gamma,E)\) do
5   \(F_{\psi}\leftarrow\textit{score}(\psi,E,F)\);
6 return \(F\);
```
**Algorithm 1**Scoring Matrix Population
Algorithm 1 illustrates how the scoring matrix is populated, which is generalizable to any number of sequences. Once the scoring matrix is created (L1), it generates a list comprising all combinations of \(\{0,..,n\}\) except for the empty set (L2). For the common case of 2-speaker dialogues where \(n=2\), \(C\) is generated as follows:
\[[\{0\},\{1\},\{2\},\{0,1\},\{0,2\},\{1,2\},\{0,1,2\}]\]
Note that the order of subsets in \(C\) matters because the indices produced by earlier combinations must be processed before the later ones. The numbers in a combination represent the input sequences, where \(0\) represents \(X\) and \(i\) represents \(Y_{i}\) (\(\forall i>0\)). Each combination \(\gamma\) and \(E\) are passed to the _index_perm_ function that returns a list of index tuples (L3-4). The tuples are generated with indices for the corresponding sequences while the indices for the other sequences remain at \(0\). Table 1 describes the lists of index tuples for the above combinations.
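A compact way to generate these index tuples in Python is sketched below; the helper name mirrors _index_perm_ from the pseudocode, but the implementation itself is only illustrative.

```
from itertools import combinations, product

def index_perm(gamma, lengths):
    """Index tuples for combination gamma; lengths = [|X|, |Y_1|, ..., |Y_n|].
    Positions listed in gamma run from 1 to the sequence length; all others stay 0."""
    ranges = [range(1, n + 1) if k in gamma else (0,) for k, n in enumerate(lengths)]
    return list(product(*ranges))

lengths = [2, 2, 1]                                   # |X| = 2, |Y_1| = 2, |Y_2| = 1
C = [set(c) for r in range(1, 4) for c in combinations(range(3), r)]
for gamma in C:
    print(gamma, index_perm(gamma, lengths))          # sizes match the last column of Table 1
```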
Figure 4: The result by our multi-sequence alignment algorithm for the above example.
Figure 3: The result by the NW algorithm.
Figure 2: Examples of transcript errors, where the reference consists of multiple sequences.
For each index tuple \(\psi=(i,j,..,k)\) where \(i\) indicates \(x_{i}\in X\), \(j\) indicates \(y_{1j}\in Y_{1}\), and \(k\) indicates \(y_{nk}\in Y_{n}\), the _score_ function considers the scores of all cells straightly prior to \(x_{i}\) such as:
\[\{(i-1,j,..,k),(i,j-1,..,k),\ldots,(i,j,..,k-1)\}\]
or diagonally prior to \(x_{i}\) such as:
\[\{(i-1,j-1,..,k),\ldots,(i-1,j,..,k-1)\}\]
and measures the score of \(F_{\psi}\) as follows (L5):
\[F_{i,j,..,k} \leftarrow \max(\mathcal{G}(E,F,(i,j,..,k)))\] \[\mathcal{G}(E,F,\psi) \leftarrow \begin{cases}\quad F_{i-1,j,..,k}+\textit{match}(x_{i})\\ \quad F_{i,j-1,..,k}+\textit{match}(y_{1j})\\ \quad\quad\vdots\\ \quad F_{i,j,..,k-1}+\textit{match}(y_{nk})\\ \quad F_{i-1,j-1,..,k}+\textit{match}(x_{i},y_{1j})\\ \quad\quad\vdots\\ F_{i-1,j,..,k-1}+\textit{match}(x_{i},y_{nk})\end{cases} \tag{8}\]
The _match_ function returns \(-1\) if only one token from \(E\) is passed, indicating that it can be matched only with gaps that are artificially inserted to handle tokens not finding any match with ones in the other sequences. If two tokens are passed, it measures the Levenshtein Distance (\(LD\)) between them and returns the value as follows:2
Footnote 2: For our experiments, \(d=1\) is used.
\[\textit{match}(x,y)\leftarrow\begin{cases}2&\text{if }LD(x,y)=0\text{ (fully match)}\\ 1&\text{if }LD(x,y)\leq d\text{ (partial match)}\\ -1&\text{if }LD(x,y)>d\text{ (mismatch)}\end{cases}\]
Note that when two tokens are passed to the _match_ function, one of them must be \(x_{i}\), so it always compares a token in \(X\) (hypothesis) with a token in \(Y_{*}\) (reference) but never compares two tokens in \(Y_{*}\) (e.g., \(\textit{match}(y_{1j},y_{nk})\)) that are both from the reference. Moreover, the algorithm does not allow \(x_{i}\) to match with multiple tokens in \(Y_{*}\) (e.g., \(\textit{match}(x_{i},y_{1j},y_{nk})\)). Although it is possible for two speakers to say the exact same token at the same time, it is rare and considered an exception.
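The scoring rule above can be written as a small Python sketch; the gap penalty and the default \(d=1\) follow the text, while the Levenshtein helper is only illustrative.

```
def levenshtein(a, b):
    """Character-level edit distance between two tokens (single-row DP)."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

def match(x, y=None, d=1):
    """Fully match (+2), partial match (+1), mismatch (-1), or gap (-1)."""
    if y is None:          # token aligned against an artificially inserted gap
        return -1
    dist = levenshtein(x, y)
    return 2 if dist == 0 else (1 if dist <= d else -1)

print(match("colour", "color"))   # distance 1 <= d -> partial match, +1
print(match("going", "gonna"))    # distance 2 > d  -> mismatch, -1
```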
### Algorithm: Backtracking
Algorithm 2 outlines our backtracking strategy that takes the list of input sequences \(E\) and the scoring matrix \(F\) in Section 3.1, and returns the alignment matrix \(A\). It creates \(A\), where the 0'th and \(i\)'th rows will be filled with tokens in \(X\) and \(Y_{i}\) respectively or gap tokens (L1).3 Thus, the number of columns \(\rho=\max(|X|+g_{x},|Y_{i}|+g_{i}:\forall i)\), where \(g_{x}\) and \(g_{i}\) are the numbers of gap tokens inserted to find the best alignment for \(X\) and \(Y_{i}\), respectively.
Footnote 3: The value of \(\rho\) cannot be determined at this stage because the number of gap tokens needed for the alignment is unknown until the backtracking process is completed.
```
Input :\(E=\{X,Y_{1},\ldots,Y_{n}\}\), the scoring matrix \(F\). Output : The alignment matrix \(A\)
1 Create \(A\in\mathbb{R}^{|E|\times\rho}\);
2\(\psi\leftarrow(|X|,|Y_{1}|,\ldots,|Y_{n}|)\);
3while\(\psi\neq(0,0,\ldots,0)\)do
4\((\psi^{\prime},\alpha)\leftarrow\textit{argmax}(\mathcal{G}(E,F,\psi))\);
5 Append \(\alpha\) to \(A\) accordingly;
6\(\psi\leftarrow\psi^{\prime}\);
7returnA;
```
**Algorithm 2**Backtracking Strategy
The backtracking process starts from the last cell indexed by \(\psi\) (L2). It then finds a cell (L4) using the _argmax_ function, which returns the index tuple \(\psi^{\prime}\) and the token list \(\alpha\) that maximize the alignment score (\(|\alpha|=|E|\)). The \(0|i\)'th item in \(\alpha\) is either the currently visited token in \(X|Y_{i}\) or a gap token '\(-\)'. For example, among the conditions in \(\mathcal{G}(E,F,\psi)\) (Eqn. 8), suppose that \(F_{i-1,j-1,..,k}+\textit{match}(x_{i},y_{1j})\) provides the highest score. In this case, it returns \(\psi^{\prime}=(i-1,j-1,..,k)\) and \(\alpha=[x_{i},y_{1j},...,-]\). The tokens in \(\alpha\) are appended to the corresponding sequences (L5). For the above example, tokens in \(\alpha\) are appended to \(A\) as follows:
\[\begin{array}{ccccc}A_{0}&\leftarrow&A_{0}&\oplus&[x_{i}]\\ A_{1}&\leftarrow&A_{1}&\oplus&[y_{1j}]\\ &\vdots&&\\ A_{n}&\leftarrow&A_{n}&\oplus&[-]\end{array}\]
Finally, it moves to the next cell indexed by \(\psi^{\prime}\) (L6). This process continues until the algorithm reaches the first cell (L3). Figure 5 shows the backtracking performed by Algorithm 2 using the scoring matrix produced by Algorithm 1 (Table 5; Appendix A.2) for the working example. The resulting alignment matrix of this example is presented in Table 6 (A.2).
\begin{table}
\begin{tabular}{c|c|c} \hline \(\boldsymbol{\gamma}\) & \(\textit{index\_perm}(\boldsymbol{\gamma},\boldsymbol{E})\) & **Size** \\ \hline \(\{0\}\) & \([(1,0,0),..,(|X|,0,0)]\) & \(|X|\) \\ \(\{1\}\) & \([(0,1,0),..,(0,|Y_{1}|,0)]\) & \(|Y_{1}|\) \\ \(\{2\}\) & \([(0,0,1),..,(0,0,|Y_{2}|)]\) & \(|Y_{2}|\) \\ \(\{0,1\}\) & \([(1,1,0),..,(|X|,|Y_{1}|,0)]\) & \(|X|\cdot|Y_{1}|\) \\ \(\{0,2\}\) & \([(1,0,1),..,(|X|,0,|Y_{2}|)]\) & \(|X|\cdot|Y_{2}|\) \\ \(\{1,2\}\) & \([(0,1,1),..,(0,|Y_{1}|,|Y_{2}|)]\) & \(|Y_{1}|\cdot|Y_{2}|\) \\ \(\{0,1,2\}\) & \([(1,1,1),..,(|X|,|Y_{1}|,|Y_{2}|)]\) & \(|X|\cdot|Y_{1}|\cdot|Y_{2}|\) \\ \hline \end{tabular}
\end{table}
Table 1: Generating permutations of index tuples.
### Optimization
To conserve memory when aligning sequences with a large number of speakers/tokens, a segmentation method is implemented. This involves segmenting the dialogue into smaller chunks based on detecting short absolutely aligned segments as barriers, with the length of each segment set to a given minimum. The segmentation is performed at the mid-point of each barrier, and each segment is aligned separately. This approach limits the maximum memory usage.
It is worth mentioning that the number of cells in the scoring matrix that the original NW algorithm compares is the sum of all combinations (\(n^{\prime}=|E|\)), \(\sum_{i=1}^{n^{\prime}}C(n^{\prime},i)=2^{n^{\prime}}-1\). However, our algorithm matches the hypothesis tokens with only necessary tokens in the reference, as cells that do not involve the hypothesis token or involve more than two non-gap tokens are ignored (Section 3.1). This reduces the number of cells for each comparison to \(2\cdot n^{\prime}-1\), greatly reducing the decoding time consumed.
## 4 Text-based SD Evaluation
Section 2 addresses the limitations and challenges of existing metrics for evaluating SD. To overcome these issues, two metrics are proposed, Text-based Diarization Error Rate (Section 4.1) and diarization F1 (Section 4.2), which are made possible by the token alignment achieved in Section 3.
### Text-based Diarization Error Rate
The original DER quantifies the amount of time, in which an audio segment is incorrectly assigned to a speaker (Section 2.1). For text-based evaluation, the duration of the audio can be directly translated into the sequence length, i.e., the number of tokens in the sequence. This allows us to estimate several types of SD errors by examining tokens aligned to incorrect speakers or gap tokens. With these adaptations, Text-based Diarization Error Rate (TDER) can be formulated as follows:
\[\texttt{TDER}=\frac{\sum_{\forall u\in U}len(u)\cdot(\max(N_{r}(u),N_{h}(u))- N_{c}(u))}{\sum_{\forall u\in U}len(u)\cdot N_{r}(u)} \tag{9}\]
\(U=[u_{1},..,u_{q}]\) is the reference transcript where \(u_{i}\) is the \(i\)'th utterance in \(U\). \(len(u)\) is the number of tokens in \(u\). \(N_{r}(u)\) and \(N_{h}(u)\) are the numbers of speakers in \(u\) given the reference and hypothesis transcripts, respectively. \(N_{c}(u)\) is the number of correctly identified speakers in \(u\). Unlike an audio segment that can involve multiple speakers, a text utterance in the reference transcript is always spoken by one speaker, so \(N_{r}(u)=1\). Hence, TDER can be rewritten as follows:
\[\texttt{TDER}=\frac{\sum_{\forall u\in U}len(u)\cdot(\max(1,N_{h}(u))-N_{c}(u ))}{\sum_{\forall u\in U}len(u)} \tag{10}\]
TDER captures different types of SD errors:
* _Speaker errors_ (\(E_{sc}\)) are detected in the scenario when \(N_{h}(u)=1\) and \(N_{c}(u)=0\).
* When \(N_{h}(u)>1\), it indicates _false alarm_ errors (\(E_{fa}\)). False alarm errors in text-based SD mostly occur when the system incorrectly attributes parts of a single utterance to different speakers. These errors can also occur for non-speech segments if the system includes them in the transcript (e.g., "Hello, (_pause_) how are you?").
* \(N_{h}(u)=0\) implies _missed speech_ errors (\(E_{ms}\)) in which case, the system misses to transcribe those segments of the audio.
* Like DER, _overlapping speech_ errors result in \(N_{h}(u)=0\); thus, they are included in \(E_{ms}\). Note that in an audio segment containing overlapping speeches, the corresponding text transcript may include multiple utterances, while ASR systems usually transcribe only one of them, so the untranscribed utterances are considered "missed".
TDER can handle any number of sequences in the reference transcript, as well as the situation when the hypothesis contains a different number of tokens from the reference. Compared to the existing evaluation metrics such as WDER (Section 2.2) or WER (Section 2.3), TDER assesses a greater variety of error types, making it easy to perform a comprehensive analysis in text-based speaker diarization.
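The utterance-level computation of Equation 10 is straightforward once tokens are aligned; a Python sketch with an illustrative utterance representation is:

```
def tder(utterances):
    """Eq. (10): each reference utterance carries its token count, the number of
    hypothesis speakers aligned to it, and whether the true speaker is among them."""
    num = sum(u["len"] * (max(1, u["n_hyp"]) - u["n_correct"]) for u in utterances)
    den = sum(u["len"] for u in utterances)
    return num / den

utterances = [
    {"len": 8, "n_hyp": 1, "n_correct": 1},  # correct
    {"len": 5, "n_hyp": 1, "n_correct": 0},  # speaker error
    {"len": 4, "n_hyp": 2, "n_correct": 1},  # false alarm
    {"len": 3, "n_hyp": 0, "n_correct": 0},  # missed / overlapping speech
]
print(tder(utterances))  # (0 + 5 + 4 + 3) / 20 = 0.6
```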
Figure 5: The backtracking example using Algorithm 2 and the scoring matrix in Table 5 (Appendix A.2).
### Diarization F1
While TDER is a comprehensive evaluation metric, it only considers utterances in the reference and ignores tokens in the hypothesis that are not aligned to any tokens in the reference. Hence, TDER does not penalize the inclusion of additional tokens in the hypothesis that do not correspond to the audio. To address this limitation, we propose Diarization F1 (DF1), which performs token-level analysis by measuring precision and recall, i.e., how many tokens in the hypothesis and reference are correctly identified with speakers, respectively:
\[\begin{split}\text{Precision}&=\frac{|speaker\_ match(T_{r},T_{h})|}{|T_{h}|}\\ \text{Recall}&=\frac{|speaker\_match(T_{r},T_{h})|}{|T_ {r}|}\end{split} \tag{11}\]
\(T_{r}\) and \(T_{h}\) are sequences of tokens in the reference and hypothesis transcripts, respectively. The function _speaker_match\((T_{r},T_{h})\)_ returns a sequence of tokens in \(T_{r}\), say \(T_{r}^{\prime}\), such that each token \(t_{r}\in T_{r}^{\prime}\) is aligned with some token \(t_{h}\in T_{h}\), and the speaker of \(t_{r}\) by the reference is the same as the speaker identified for \(t_{h}\) by the hypothesis.
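A Python sketch of these token-level scores, assuming the hypothesis speaker labels have already been mapped to reference speakers and that each alignment column is given explicitly, is:

```
def diarization_f1(columns):
    """Eq. (11): each alignment column is (ref_token, hyp_token, ref_spk, hyp_spk);
    a None token marks a gap on that side."""
    matched = sum(1 for r, h, rs, hs in columns
                  if r is not None and h is not None and rs == hs)
    n_hyp = sum(1 for _, h, _, _ in columns if h is not None)
    n_ref = sum(1 for r, _, _, _ in columns if r is not None)
    precision = matched / n_hyp if n_hyp else 0.0
    recall = matched / n_ref if n_ref else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```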
## 5 Experiments
### Automatic Transcribers and Corpus
While there are many automatic transcribers, most of them do not perform SD [1, 2] so that only limited off-the-shelf options are available. For our experiments, we use two transcribers, Amazon Transcribe and Rev AI, which are publicly available, can perform both ASR and SD, and offer a free tier of usage,4 making them accessible options for our study.
Footnote 4: [https://aws.amazon.com/transcribe](https://aws.amazon.com/transcribe)
[https://www.rev.ai](https://www.rev.ai)
We use the CABank English CallHome Corpus [1], comprising 120 unscripted, informal telephone conversations between native English speakers that cover various topics. Their transcripts follow the CHAT (Codes for the Human Analysis of Transcripts) format, capturing several aspects of spoken language such as speaker turns, pauses, overlapping speech, and non-verbal cues.
For evaluation, we manually select 10 conversations from this corpus based on their audio quality. Each conversation lasts approximately 30 minutes, but the reference transcript only covers the first 10 minutes. Thus, each audio is cut into a 10-minute segment and transcribed by the above two systems.
### Speech Recognition Evaluation
Building on the previous work [13], we assess the quality of transcripts produced by the two transcribers in Section 5.1, and analyze the following four types of ASR errors:
* **Missing Token**: It occurs when a token present in the reference is not detected by the system, resulting in a missing token in the hypothesis.
* **Extra Token**: It occurs when the system inserts an extra token not present in the reference, resulting in an extra token in the hypothesis.
* **Substitution**: It occurs when the system replaces a token in the reference with a different token in the hypothesis.
* **Overlapping**: It occurs when two or more speakers talk at the same time so that the system cannot accurately transcribe all speakers.
Table 2 summarizes the error distributions for Amazon Transcribe (AT) and Rev AI (RA). AT exhibits a higher rate of _missing tokens_, suggesting that it skips audio segments that are unclear or difficult to recognize and thus omits the corresponding tokens. In contrast, RA has higher rates for the other three error types, implying that it tends to preserve most of the information. This is also reflected in the _overlapping_ category, where AT transcribes no overlapping tokens while RA does; such overlapping speech is more challenging to transcribe accurately and thus more susceptible to errors.
### Token Alignment Evaluation
To evaluate the robustness of our multiple sequence alignment algorithm (MSA; Section 3), we employ hypothesis transcripts by AT and RA and measure the proportions of correctly aligned tokens in the reference transcripts. The final accuracy is obtained by averaging the results from all 10 transcripts. The MSA performance is compared with the character-level (the original NW) algorithm and also a token-level alignment algorithm without multi-sequence support, which is MSA restricted to utilize only a 2-dimensional scoring matrix and linearize multiple sequences in the reference as in Figure 3.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline \hline
**Transcriber** & **MT** & **ET** & **ST** & **OL** & \(\mathbf{\sum}\) \\ \hline Amazon (AT) & 6.8 & **1.5** & **2.8** & **0.0** & **11.1** \\ Rev AI (RA) & **5.1** & 2.7 & 3.1 & 0.5 & 11.4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average percentages of the four types of errors over all tokens. MT: missing tokens, ET: extra tokens, ST: substitutions, OL: overlapping.
Table 3 demonstrates a significant improvement in the performance of MSA compared to the other two algorithms, ensuring its robustness for adaptation in text-based SD evaluation (Section 4).
### Speaker Diarization Evaluation
To evaluate text-based SD, we use the Hungarian algorithm, an efficient method for finding the optimal assignment in a cost matrix [10]. In our case, the cost matrix reflects the errors in assigning reference speakers to hypothesis speakers. It then determines the optimal assignment by minimizing the total cost based on the cost matrix.
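For illustration, the optimal speaker mapping can be obtained with SciPy's implementation of this algorithm; the cost values below are made up for the example.

```
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: e.g., the number of tokens misattributed when reference speaker i
# is paired with hypothesis speaker j.
cost = np.array([[2, 40, 35],
                 [38, 5, 30]])

ref_idx, hyp_idx = linear_sum_assignment(cost)
mapping = dict(zip(hyp_idx.tolist(), ref_idx.tolist()))
print(mapping)  # {0: 0, 1: 1}; the third hypothesis speaker stays unmapped
```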
Table 4 shows the average scores measured by our new metrics, TDER (§4.1) and DF1 (§4.2), as well as the traditional metrics, DER (§2.1), WDER (§2.2), and WER (§2.3), on the hypothesis transcripts for the 10 conversations. AT performs better than RA in SD, as shown by its lower DER and WDER, indicating that it accurately segments different speakers at the audio level. Conversely, RA performs better than AT in ASR, as evidenced by its lower WER.
The (Precision, Recall) scores for DF1 are (0.87, 0.73) and (0.88, 0.81) for AT and RA, respectively. Notably, WDER only considers aligned tokens (as explained in Section 2.2). Moreover, Section 5.2 observes AT's tendency to omit tokens during ASR, which can cause a lower recall score and a skewed WDER result. Due to this, AT has a lower DF1 score than RA, driven by its low recall score of 0.73.
While traditional SD metrics such as DER and WDER indicate an advantage for AT over RA, our new metrics, TDER and DF1, reveal RA's superiority in text-based SD performance. This highlights the strength of our new metrics, as they evaluate SD at the utterance level, providing a more accurate reflection of the diarization quality than the traditional metrics, which are based on uniformly split audio segments or word-level analysis.5
Footnote 5: More details of this analysis are provided in Appendix A.1.
## 6 Applications
We offer two tools to facilitate the adaptation of our work: **align4d**, an efficient MSA tool with a user-friendly API (Section 6.1), and **TranscribeView**, a visualization interface to analyze ASR/SD errors through multiple evaluation metrics (Section 6.2). These tools enable us to thoroughly analyze those errors and create higher-quality transcript data.
\begin{table}
\begin{tabular}{l|c c c|c c} \hline \hline
**Transcriber** & **DER** & **WDER** & **WER** & **TDER** & **DF1** \\ \hline Amazon (AT) & **0.24** & **0.15** & 0.34 & 0.53 & 0.79 \\ Rev AI (RA) & 0.26 & 0.20 & **0.29** & **0.50** & **0.84** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparing the traditional metrics (DER, WDER, WER) with our new evaluation metrics (TDER, DF1).
Figure 6: A screenshot of our visualization tool, TranscribeView. Given the reference and hypothesis transcripts, it simultaneously displays all sequences and allows us to view token alignments as well as selected ASR/SD errors.
### Align4d
Our MSA algorithm can be computationally intensive due to its creation of a high-dimensional matrix and an exhaustive search to find the global optimum through dynamic programming. To improve its efficiency, we have implemented the algorithm in C++ and compiled it as a CPython extension, which can be imported as a Python package. To enhance its adaptability, the C++ dependencies are restricted to the C++20 standard template library.
Our Python API takes reference and hypothesis sequences in JSON as input. It also allows users to strip punctuation and to parameterize the Levenshtein Distance for matching (§3.1) and the segmentation length for optimization (§3.3). Finally, it returns the alignment matrix in JSON (§3.2). Based on our testing, align4d can comfortably handle 200,000 tokens involving 5 speakers on an average laptop. It is publicly available as an open-source API: [https://github.com/anonymous/align4d](https://github.com/anonymous/align4d).
### TranscribeView
Figure 6 shows the graphical user interface of TranscribeView, which offers a comprehensive analysis of ASR and SD. This tool uses align4d (Section 6.1) to align tokens from the reference and hypothesis transcripts and presents them side-by-side for easy comparison. It also provides statistical information about the transcripts, such as the number of tokens and speakers (Figure 8 in Appendix A.1). Users can select evaluation metrics from the following: WDER (§2.2), WER (§2.3), TDER (§4.1), and DF1 (§4.2). The evaluation scores are displayed at the top of the alignment area, in which every utterance is marked by a vertical colored bar indicating the corresponding speaker. Users can also hover over tokens to see the corresponding aligned tokens. Moreover, SD errors are indicated by red underlines.
TranscribeView is a web-based application built using the Streamlit framework with custom HTML elements such that it can be accessed using any web browser. It is publicly available as an open-source software: [https://github.com/anonymous/TranscribeView](https://github.com/anonymous/TranscribeView).
## 7 Conclusion
This paper presents a novel approach to evaluating text-based speaker diarization by introducing two metrics, Text-based Diarization Error Rate (TDER) and Diarization F1 (DF1), along with an enhanced algorithm for aligning transcripts with multiple sequences. Our multiple sequence alignment (MSA) algorithm enables accurate token-to-token mapping between reference and hypothesis transcripts. Our web-based tool, TranscribeView, provides a comprehensive platform that allows researchers to visualize and evaluate errors in speech recognition as well as speaker diarization.
While this work provides valuable contributions, it also recognizes a few limitations. The robustness of our alignment algorithm and the effectiveness of our proposed evaluation metrics can be further verified by annotating more transcripts, which is labor-intensive. The increased computational complexity from the enhanced MSA algorithm may also limit its applicability. Future work aims to improve the efficiency and effectiveness of TranscribeView and align4d to handle a wider range of research.
Figure 7: A screenshot of the alignment area in TranscribeView. Each vertical colored bar represents the alignment between speakers (e.g., \(spk_{0}\) is aligned to \(C\)), while greyed-out speaker labels indicate unmapped speakers (e.g., \(spk_{1}\) is not mapped to any speakers in the reference). Users can hover over tokens to view their corresponding aligned token (highlighted in yellow). Diarization errors are indicated by red underlines. |
2309.04356 | Duality Arguments in the Analysis of a Viscoelastic Contact Problem | We consider a mathematical model which describes the quasistatic frictionless
contact of a viscoelastic body with a rigid-plastic foundation. We describe the
mechanical assumptions, list the hypotheses on the data and provide three
different variational formulations of the model in which the unknowns are the
displacement field, the stress field and the strain field, respectively. These
formulations have a different structure. Nevertheless, we prove that they are
pairwise dual of each other. Then, we deduce the unique weak solvability of the
contact problem as well as the Lipschitz continuity of its weak solution with
respect to the data. The proofs are based on recent results on
history-dependent variational inequalities and inclusions. Finally, we present
numerical simulations in the study of the contact problem, together with the
corresponding mechanical interpretations. | Piotr Bartman, Anna Ochal, Mircea Sofonea | 2023-09-08T14:28:05Z | http://arxiv.org/abs/2309.04356v1 | # Duality Arguments in the Analysis of a Viscoelastic Contact Problem
###### Abstract
We consider a mathematical model which describes the quasistatic frictionless contact of a viscoelastic body with a rigid-plastic foundation. We describe the mechanical assumptions, list the hypotheses on the data and provide three different variational formulations of the model in which the unknowns are the displacement field, the stress field and the strain field, respectively. These formulations have a different structure. Nevertheless, we prove that they are pairwise dual of each other. Then, we deduce the unique weak solvability of the contact problem as well as the Lipschitz continuity of its weak solution with respect to the data. The proofs are based on recent results on history-dependent variational inequalities and inclusions. Finally, we present numerical simulations in the study of the contact problem, together with the corresponding mechanical interpretations.
**AMS Subject Classification :** 74M15, 74M10, 47J22, 49J40, 49J21, 34G25.
**Key words :** viscoelastic material, frictionless contact, history-dependent variational inequality, history-dependent inclusion, weak solution, numerical simulations.
Dedicated to Professor Zhenhai Liu on the occasion of his 65th birthday.
## 1 Introduction
Contact phenomena between deformable bodies arise in industry and everyday life. They are modeled by strongly nonlinear boundary value problems which usually do not have classical solutions. Therefore, their study is made by using a variational approach, that consists to replace the strong formulation of the problem by a weak or variational formulation, which is more convenient for mathematical analysis and numerical simulations.
The weak formulations of contact problems vary from problem to problem, from author to author and even from paper to paper. They lead to challenging nonlinear problems which, in general, are expressed in terms of either variational and hemivariational inequalities or inclusions, including differential inclusions. Comprehensive references in the theory of variational inequalities are [3, 4] and, more recently, [9]. There, various existence and uniqueness results are presented, obtained by using different functional arguments. Hemivariational inequalities are inequality problems governed by a locally Lipschitz continuous function. Their analysis is carried out by using arguments of pseudomonotonicity for multivalued operators combined with the properties of the generalized directional derivative and the subdifferential in the sense of Clarke. Basic references in the field are [18, 22]. Finally, for the theory of differential inclusions, we mention the book [14] and the survey paper [29]. The book [14] deals with the theory of semilinear differential inclusions in infinite dimensional spaces, in a setting in which neither convexity of the map nor compactness of the multi-operators is assumed. There, arguments of degree theory are used for solving operator inclusions, fixed point problems and optimization problems. The theory is applied to the investigation of semilinear differential inclusions in Banach spaces. In the survey paper [29] the authors discuss applications of differential and operator inclusions to some optimization and optimal control problems, including an optimal feedback control problem for a mathematical model of the motion of weakly concentrated water polymer solutions.
For most of the problems which describe the contact of a viscoelastic material, the variational formulation is given in the form of a variational inequality with time-independent unilateral constraints in which the unknown is the displacement field. References on this topic include [5, 7, 8, 10, 15, 20, 24]. Nevertheless, for several problems it is more convenient to consider the stress field as the main unknown and, therefore, to obtain a variational formulation in terms of the stress field. Such a formulation is usually in the form of a variational inequality too, but it has a different structure since in this case the unilateral constraints are time-dependent. References in the field are [16, 23, 24], for instance. Besides the displacement and the stress fields, the strain field can be successfully used to study various contact problems, as proved in recent works. Choosing the strain field as the main unknown leads to a variational formulation which is in the form of a history-dependent inclusion or a sweeping process. References in the field are [1, 2, 17, 26], for instance.
The aim of the current paper is twofold. The first one is to provide three different variational formulations for a viscoelastic contact problem (in which the unknowns are the displacement, the stress and the strain field, respectively) and to prove their equivalence as well as their unique solvability. Our proofs show that the corresponding variational formulations are pairwise dual to each other (in the sense introduced in [11, 23]), which constitutes the first trait of novelty of our work. Our second aim is to introduce a numerical approximation scheme for the problem (based on the variational formulation in displacements) and to provide numerical simulations together with the corresponding mechanical interpretations. This represents the second novelty of the current paper.
The rest of the manuscript is organized as follows. In Section 2 we present some notation and preliminary material which is needed in the next sections. This concerns the properties of the function spaces we use, a result on history-dependent operators and some abstract results for history-dependent variational inequalities and inclusions. In Section 3 we introduce the viscoelastic model of contact and we provide a description of the equations and boundary conditions. Then, we list the hypotheses on the data. In Section 4 we consider three variational formulations of the problem and prove that these formulations are pairwise dual of each other. Then, in Section 5 we state and prove existence and uniqueness results, which allow us to define the concept of a weak solution to the contact model. Finally, we end this paper with Section 6 in which we present a numerical scheme for the displacement variational formulation, together with some numerical simulations and the corresponding mechanical interpretations.
## 2 Notation and preliminaries
The preliminary material we present in this section concerns basic notation, an existence and uniqueness result for a class of time-dependent inclusions, and some properties of the function spaces used in Contact Mechanics. Everywhere in this section \(X\) represents a real Hilbert space endowed with an inner product \((\cdot,\cdot)_{X}\) and its associated norm \(\|\cdot\|_{X}\), and \(2^{X}\) denotes the family of all subsets of \(X\).
**Basic notation.** We use the notation \(N_{K}\) for the outward normal cone of a nonempty closed convex subset \(K\subset X\). It is well known that \(N_{K}\colon X\to 2^{X}\) and, for any \(u\), \(f\in X\), we have
\[f\in N_{K}(u)\iff u\in K,\quad(f,v-u)_{X}\leq 0\quad\text{for all}\;\;v\in K. \tag{2.1}\]
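For orientation, we recall a simple one-dimensional example (given here only for illustration): if \(X=\mathbb{R}\) and \(K=[0,\infty)\), then characterization (2.1) yields

\[N_{K}(u)=\begin{cases}(-\infty,0]&\text{if}\ \ u=0,\\ \{0\}&\text{if}\ \ u>0,\end{cases}\]

while \(N_{K}(u)=\emptyset\) whenever \(u\notin K\).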
We also recall that a convex function \(\varphi\colon X\to\mathbb{R}\) is said to be subdifferentiable (in the sense of the convex analysis) if for any \(u\in X\) there exists an element \(\xi\in X\) such that
\[\varphi(v)-\varphi(u)\geq(\xi,v-u)_{X}\quad\text{ for all}\;\;\;v\in X.\]
Consider now an interval of time \([0,T]\) with \(T>0\). We denote by \(C([0,T];X)\) the space of continuous functions defined on \([0,T]\) with values in \(X\). Then, it is well known that \(C([0,T];X)\) is a Banach space equipped with the norm
\[\|v\|_{C([0,T];X)}=\max_{t\in[0,T]}\,\|v(t)\|_{X}. \tag{2.2}\]
For an operator \(\mathcal{S}\colon C([0,T];X)\to C([0,T];X)\) and a function \(u\in C([0,T];X)\) we use the shorthand notation \(\mathcal{S}u(t)\) to represent the value of the function \(\mathcal{S}u\) at the point \(t\in[0,T]\), that is, \(\mathcal{S}u(t):=(\mathcal{S}u)(t)\). Moreover, if \(A\colon X\to X\), then \(A+\mathcal{S}\) will represent a shorthand notation for the operator which maps any function \(u\in C([0,T];X)\) to the function \(t\mapsto Au(t)+\mathcal{S}u(t)\in C([0,T];X)\).
**Definition 2.1**.: _An operator \(\mathcal{S}\colon C([0,T];X)\to C([0,T];X)\) is said to be a history-dependent operator if there exists \(L>0\) such that_
\[\|\mathcal{S}u(t)-\mathcal{S}v(t)\|_{X}\leq L\int_{0}^{t}\|u(s)-v(s)\|_{X}\, ds\quad\ \forall\,u,\,v\in C([0,T];X),\ t\in[0,T].\]
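A standard example (recalled here only for illustration) is the Volterra operator

\[\mathcal{S}u(t)=\int_{0}^{t}u(s)\,ds\qquad\forall\,u\in C([0,T];X),\ t\in[0,T],\]

which satisfies Definition 2.1 with \(L=1\), since \(\|\mathcal{S}u(t)-\mathcal{S}v(t)\|_{X}\leq\int_{0}^{t}\|u(s)-v(s)\|_{X}\,ds\). The operators \(\mathcal{S}\) and \(\widetilde{\mathcal{S}}\) introduced in Section 4 below are of a similar integral type.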
History-dependent operators arise in Functional Analysis, Solid Mechanics and Contact Mechanics, as well. General properties, examples and mechanical interpretations can be found in [24]. An important property of history-dependent operators which will be useful in this paper is the following.
**Theorem 2.2**.: _Let \(\widetilde{A}\colon X\to X\) be a linear continuous operator such that_
\[(\widetilde{A}u,u)_{X}\geq m\|u\|_{X}^{2}\ \ \forall\,u\in X\]
_with some \(m>0\) and consider a history-dependent operator \(\widetilde{\mathcal{S}}\colon C([0,T];X)\to C([0,T];X)\). Then the operator \(\widetilde{A}+\widetilde{\mathcal{S}}\colon C([0,T];X)\to C([0,T];X)\) is invertible and its inverse is of the form \(\widetilde{A}^{-1}+\widetilde{\mathcal{R}}\colon C([0,T];X)\to C([0,T];X)\), where \(\widetilde{A}^{-1}:X\to X\) represents the inverse of the operator \(\widetilde{A}\) and \(\widetilde{\mathcal{R}}\colon C([0,T];X)\to C([0,T];X)\) is a history-dependent operator._
A proof of Theorem 2.2 can be found in [25, p. 55], based on results on nonlinear implicit equations in Banach spaces.
**History-dependent variational inequalities and inclusions.** Consider a set \(K\), the operators \(A\), \(\mathcal{S}\), a function \(f\) and a set-valued mapping \(\Sigma\), which satisfy the following conditions.
\((\mathcal{K})\quad\)\(K\subset X\) is a nonempty closed convex subset.
\((\mathcal{A})\quad\)\(A:X\to X\) is a strongly monotone and Lipschitz continuous operator.
\((\mathcal{S})\quad\)\(\mathcal{S}\colon C([0,T];X)\to C([0,T];X)\) is a history-dependent operator.
\((j)\quad\)\(j\colon X\to\mathbb{R}\) is a convex lower semicontinuous function.
\((f)\quad\)\(f\in C([0,T];X)\).
\((\Sigma)\quad\)\(\Sigma\colon[0,T]\to 2^{X}\) and there exist a nonempty closed convex set \(\Sigma_{0}\subset X\) and a function \(g\in C([0,T];X)\) such that \(\Sigma(t)=\Sigma_{0}+g(t)\) for all \(t\in[0,T]\).
We have the following existence and uniqueness results.
**Theorem 2.3**.: _Assume \((\mathcal{K})\), \((\mathcal{A})\), \((\mathcal{S})\), \((j)\) and \((f)\). Then, there exists a unique function \(u\in C([0,T];X)\) such that for all \(t\in[0,T]\) the following inequality holds:_
\[u(t)\in K,\qquad(Au(t),v-u(t))_{X}+(\mathcal{S}u(t),v-u(t))_{X}+j(v)-j(u(t))\geq(f(t),v-u(t))_{X}\quad\forall\,v\in K. \tag{2.3}\]
**Theorem 2.4**.: _Assume \((\mathcal{A})\), \((\mathcal{S})\) and \((\Sigma)\). Then, there exists a unique function \(u\in C([0,T];X)\) such that for all \(t\in[0,T]\) the following inclusion holds:_
\[-u(t)\in N_{\Sigma(t)}\big{(}Au(t)+\mathcal{S}u(t)\big{)}.\]
Theorem 2.3 represents a direct consequence of a result proved in [24, Ch.3] while Theorem 2.4 is a direct consequence of a result proved in [23, Ch.6]. Their proofs are based on arguments of convex analysis, monotone operators and a fixed point result for history-dependent operators.
**Function spaces.** For the contact problem we consider in this paper we introduce some specific notation we shall need in the following sections. First, \(\mathbb{S}^{d}\) stands for the space of second order symmetric tensors on \(\mathbb{R}^{d}\) with \(d\in\{2,3\}\). Moreover, " \(\cdot\) " and \(\|\cdot\|\) represent the inner product and the Euclidean norm on the spaces \(\mathbb{R}^{d}\) and \(\mathbb{S}^{d}\), respectively. In addition, \(\Omega\subset\mathbb{R}^{d}\) is a bounded domain with a Lipschitz continuous boundary \(\Gamma\). The outward unit normal at \(\Gamma\) will be denoted by \(\boldsymbol{\nu}\), and \(\Gamma_{1}\) is a measurable part of \(\Gamma\) with positive measure.
We use the standard notation for the Lebesgue and Sobolev spaces associated to \(\Omega\) and \(\Gamma\). Typical examples are the spaces \(L^{2}(\Omega)^{d}\), \(L^{2}(\Gamma)^{d}\) and \(H^{1}(\Omega)^{d}\) equipped with their canonical Hilbertian structure. For an element \(\boldsymbol{v}\in H^{1}(\Omega)^{d}\) we still write \(\boldsymbol{v}\) for the trace \(\gamma\boldsymbol{v}\in L^{2}(\Gamma)^{d}\) and \(v_{\nu}\), \(\boldsymbol{v}_{\tau}\) for the normal and tangential traces on the boundary, i.e., \(v_{\nu}=\boldsymbol{v}\cdot\boldsymbol{\nu}\) and \(\boldsymbol{v}_{\tau}=\boldsymbol{v}-v_{\nu}\boldsymbol{\nu}\). Moreover, \(\boldsymbol{\varepsilon}(\boldsymbol{v})\) denotes the symmetric part of the gradient of \(\boldsymbol{v}\), i.e.,
\[\boldsymbol{\varepsilon}(\boldsymbol{v})=\frac{1}{2}\big{(}\nabla\boldsymbol {v}+\nabla^{T}\boldsymbol{v}\big{)}.\]
In addition, for a regular tensor-valued field \(\boldsymbol{\sigma}\colon\Omega\to\mathbb{S}^{d}\) we shall use \(\sigma_{\nu}\) and \(\boldsymbol{\sigma}_{\tau}\) for the normal and tangential components of the stress vector \(\boldsymbol{\sigma}\boldsymbol{\nu}\) on \(\Gamma\), i.e., \(\sigma_{\nu}=\boldsymbol{\sigma}\boldsymbol{\nu}\cdot\boldsymbol{\nu}\) and \(\boldsymbol{\sigma}_{\tau}=\boldsymbol{\sigma}\boldsymbol{\nu}-\sigma_{\nu} \boldsymbol{\nu}\).
Next, for the displacement field we need the space \(V\) and for the stress and strain fields we need the space \(Q\), defined as follows:
\[V=\{\,\boldsymbol{v}\in H^{1}(\Omega)^{d}:\ \boldsymbol{v}= \boldsymbol{0}\ \ \mbox{on}\ \ \Gamma_{1}\,\},\] \[Q=\{\,\boldsymbol{\sigma}=(\sigma_{ij}):\ \sigma_{ij}=\sigma_{ji} \in L^{2}(\Omega)\quad\forall\,i,\,j=1,\ldots,d\,\}.\]
The spaces \(V\) and \(Q\) are real Hilbert spaces endowed with the inner products
\[(\boldsymbol{u},\boldsymbol{v})_{V}=\int_{\Omega}\boldsymbol{ \varepsilon}(\boldsymbol{u})\cdot\boldsymbol{\varepsilon}(\boldsymbol{v})\, dx,\qquad(\boldsymbol{\sigma},\boldsymbol{\tau})_{Q}=\int_{\Omega}\boldsymbol{ \sigma}\cdot\boldsymbol{\tau}\,dx. \tag{2.4}\]
The associated norms on these spaces will be denoted by \(\|\cdot\|_{V}\) and \(\|\cdot\|_{Q}\), respectively. Recall that the completeness of the space \((V,\|\cdot\|_{V})\) follows from the assumption \(meas\,(\Gamma_{1})>0\), which allows the use of Korn's inequality. Note also that, by the definition of the inner product in the spaces \(V\) and \(Q\), we have
\[\|\boldsymbol{v}\|_{V}=\|\boldsymbol{\varepsilon}(\boldsymbol{v})\|_{Q}\quad \mbox{ for all}\ \ \boldsymbol{v}\in V \tag{2.5}\]
and, using the Sobolev theorem, we deduce that
\[\|\boldsymbol{v}\|_{L^{2}(\Gamma)^{d}}\leq c_{0}\,\|\boldsymbol{v}\|_{V}\quad \mbox{for all}\ \ \boldsymbol{v}\in V. \tag{2.6}\]
Here, \(c_{0}\) is a positive constant which depends on \(\Omega\) and \(\Gamma_{1}\).
We also use notation \(\mathbf{Q}_{\infty}\) for the space of fourth order tensor fields defined by
\[\mathbf{Q}_{\infty}=\{\,\mathcal{C}=(c_{ijkl})\ :\ c_{ijkl}=c_{jikl}=c_{ klij}\in L^{\infty}(\Omega)\quad\forall\,i,\,j,\,k,\,l=1,\ldots,d\,\},\]
equipped with the norm
\[\|\mathcal{C}\|_{\mathbf{Q}_{\infty}}=\max_{1\leq i,j,k,l\leq d}\|c_{ijkl}\|_ {L^{\infty}(\Omega)}.\]
We end this section with the following result we shall use in the rest of the paper.
**Lemma 2.5**.: _There exists a linear continuous operator \(G\colon Q\to V\) such that for any \(\boldsymbol{\omega}\in Q\) and \(\boldsymbol{u}\in V\) the following implication holds:_
\[\boldsymbol{\omega}=\boldsymbol{\varepsilon}(\boldsymbol{u})\quad\implies \quad\boldsymbol{u}=G\boldsymbol{\omega}.\]
The proof of Lemma 2.5 is obtained by standard orthogonality arguments used in various books and surveys and, therefore, we skip it. Such arguments have been used in [27], for instance, in the study of the Navier-Stokes equations.
## 3 The viscoelastic contact model
We now describe the mathematical model of contact we consider in this paper. The physical setting is the following: a viscoelastic body occupies, in its reference configuration, a bounded domain \(\Omega\subset\mathbb{R}^{d}\) (\(d\in\{2,3\}\)), with regular boundary \(\partial\Omega=\Gamma\). We
assume that \(\Gamma\) is decomposed into three parts \(\overline{\Gamma}_{1}\), \(\overline{\Gamma}_{2}\) and \(\overline{\Gamma}_{3}\), with \(\Gamma_{1}\), \(\Gamma_{2}\) and \(\Gamma_{3}\) being relatively open and mutually disjoint and, moreover, the \((d-1)\)-dimensional measure of \(\Gamma_{1}\), denoted by \(meas\left(\Gamma_{1}\right)\), is positive. The body is fixed on the part \(\Gamma_{1}\) of its boundary, is acted upon by body forces in \(\Omega\) and by surface tractions on \(\Gamma_{2}\), and is in contact with an obstacle on \(\Gamma_{3}\), the so-called foundation. As a result, its mechanical state evolves. To describe its evolution we denote by \([0,T]\) the time interval of interest, where \(T>0\). Moreover, we use \(\boldsymbol{x}\) to denote a typical point in \(\Omega\cup\Gamma\) and, for simplicity, we sometimes skip the dependence of various functions on the spatial variable \(\boldsymbol{x}\). Then, the viscoelastic contact model we consider is as follows.
**Problem \(\mathcal{P}\)**.: _Find a displacement field \(\boldsymbol{u}\colon\Omega\times[0,T]\to\mathbb{R}^{d}\), a stress field \(\boldsymbol{\sigma}\colon\Omega\times[0,T]\to\mathbb{S}^{d}\) and a strain field \(\boldsymbol{\omega}\colon\Omega\times[0,T]\to\mathbb{S}^{d}\) such that for any \(t\in[0,T]\) the following hold:_
\[\boldsymbol{\sigma}(t)=\mathcal{A}\boldsymbol{\omega}(t)+\int_{0}^{t}\mathcal{B}(t-s)\boldsymbol{\omega}(s)\,ds\qquad\text{in}\quad\Omega, \tag{3.1}\]
\[\boldsymbol{\omega}(t)=\boldsymbol{\varepsilon}(\boldsymbol{u}(t))\qquad\text{in}\quad\Omega, \tag{3.2}\]
\[\text{Div}\,\boldsymbol{\sigma}(t)+\boldsymbol{f}_{0}(t)=\boldsymbol{0}\qquad\text{in}\quad\Omega, \tag{3.3}\]
\[\boldsymbol{u}(t)=\boldsymbol{0}\qquad\text{on}\quad\Gamma_{1}, \tag{3.4}\]
\[\boldsymbol{\sigma}(t)\boldsymbol{\nu}=\boldsymbol{f}_{2}(t)\qquad\text{on}\quad\Gamma_{2}, \tag{3.5}\]
\[\left.\begin{array}{ll}\sigma_{\nu}(t)=0&\text{if}\ \ u_{\nu}(t)<0\\ -F\leq\sigma_{\nu}(t)\leq 0&\text{if}\ \ u_{\nu}(t)=0\\ \sigma_{\nu}(t)=-F&\text{if}\ \ u_{\nu}(t)>0\end{array}\right\}\qquad\text{on}\quad\Gamma_{3}, \tag{3.6}\]
\[\boldsymbol{\sigma}_{\tau}(t)=\boldsymbol{0}\qquad\text{on}\quad\Gamma_{3}. \tag{3.7}\]
A short description of the equations and boundary conditions in Problem \(\mathcal{P}\) is as follows. First, equality (3.1) is the viscoelastic constitutive law with long memory in which \(\mathcal{A}\) and \(\mathcal{B}\) are the elasticity and the relaxation tensors, respectively. It was considered in many books, including [6, 7, 21]. In particular, existence and uniqueness results for displacement-tractions boundary value problems involving such a constitutive law have been considered in [7]. Equality (3.2) represents the definition of the strain tensor. Next, equation (3.3) is the equilibrium equation in which \(\boldsymbol{f}_{0}\) denotes the time-dependent density of body forces. We use this equation here since we assume that the mechanical process is quasistatic and, therefore, we neglect the inertial term in the equation of motion. The boundary condition (3.4) is the displacement condition and models the setting when the body is held fixed on the part \(\Gamma_{1}\) of its boundary. Condition (3.5) is the traction boundary condition in which \(\boldsymbol{f}_{2}\) represents the density of surface tractions which act on \(\Gamma_{2}\), assumed to be time-dependent. Condition (3.6) describes the contact with a rigid-plastic foundation. It shows that when there is
separation (i.e., when \(u_{\nu}(t)<0\)) then the reaction of the foundation vanishes (since \(\sigma_{\nu}(t)=0\)); moreover, it shows that penetration arises only if the normal stress reaches the value \(F\), which is interpreted as the yield limit of the foundation. More details and mechanical interpretations of this condition and similar interface laws can be found in [23, p. 280] and [24, 25], for instance. Finally, condition (3.7) shows that the shear on the contact surface vanishes during the process. We use this condition here since we assume that the contact is frictionless. The case of a frictional contact problem can be considered and treated by using similar arguments, too. Nevertheless, its analysis is more difficult since in the frictional case the function \(j\) and the set \(\Sigma\) we introduce below depend on the solution itself.
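As an elementary illustration (this reformulation is recorded here only for the reader's convenience), condition (3.6) can be written equivalently as the subdifferential inclusion

\[-\sigma_{\nu}(t)\in\partial p\big{(}u_{\nu}(t)\big{)}\quad\text{on}\ \ \Gamma_{3},\qquad p(r)=Fr^{+},\qquad\partial p(r)=\begin{cases}\{0\}&\text{if}\ r<0,\\ [0,F]&\text{if}\ r=0,\\ \{F\}&\text{if}\ r>0,\end{cases}\]

where \(\partial\) denotes the subdifferential in the sense of convex analysis. The convex function \(r\mapsto Fr^{+}\) is precisely the integrand which generates the function \(j\) used in the variational formulations below.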
We end our comments on the model (3.1)-(3.7) with the remark that when the memory term in (3.1) vanishes (i.e., when \({\cal B}\equiv{\bf 0}\)), Problem \({\cal P}\) reduces to a time-dependent elastic contact problem. A comparison between the solution of this elastic problem and the solution of the original Problem \({\cal P}\) has been made in [23, p. 301-302], under specific assumptions. For the example presented there, it was proved that the memory term does not affect the stress field but, in contrast, it affects the strain and the displacement fields. Moreover, the solution of the elastic contact problem can be obtained from the solution of the viscoelastic contact problem, in the limit as the relaxation tensor converges to zero. We also mention that a comparison of the numerical solutions for the elastic and viscoelastic problems will be made in Section 6 (see Figure 2).
In the study of Problem \({\cal P}\) we assume that the elasticity and the relaxation tensors satisfy the following conditions.
\[\left\{\begin{array}{l}\mbox{(a) }{\cal A}=(a_{ijkl})\in{\bf Q}_{\infty}.\\ \mbox{(b) There exists }m_{{\cal A}}>0\ \ \mbox{such that}\\ \qquad{\cal A}(\boldsymbol{x})\boldsymbol{\tau}\cdot\boldsymbol{\tau}\geq m_{{\cal A}}\|\boldsymbol{\tau}\|^{2}\quad\forall\,\boldsymbol{\tau}\in\mathbb{S}^{d},\ \mbox{a.e.}\ \boldsymbol{x}\in\Omega.\end{array}\right. \tag{3.8}\]
\[{\cal B}\in C([0,T];{\bf Q}_{\infty}). \tag{3.9}\]
Moreover, the density of applied forces and the yield limit of the foundation have the regularity
\[\mathbf{f}_{0}\in C([0,T];L^{2}(\Omega)^{d}). \tag{3.10}\] \[\mathbf{f}_{2}\in C([0,T];L^{2}(\Gamma_{2})^{d}).\] (3.11) \[F\in L^{2}(\Gamma_{3}),\quad F(\mathbf{x})\geq 0\quad \mbox{a.e.}\ \ \mathbf{x}\in\Gamma_{3}. \tag{3.12}\]
We shall keep assumptions (3.8)-(3.12) in the next three sections, even if we do not mention it explicitly. Our main aim there is to provide the variational analysis of Problem \({\cal P}\), including existence, uniqueness and convergence results.
## 4 Variational formulations
In order to deduce the variational formulations for Problem \(\mathcal{P}\) we introduce the operators \(A\colon V\to V\), \(\widetilde{A}\colon Q\to Q\), \(\mathcal{S}\colon C([0,T];V)\to C([0,T];V)\) and \(\widetilde{\mathcal{S}}\colon C([0,T];Q)\to C([0,T];Q)\) defined by equalities
\[(A\boldsymbol{u},\boldsymbol{v})_{V}=\int_{\Omega}\mathcal{A} \boldsymbol{\varepsilon}(\boldsymbol{u})\cdot\boldsymbol{\varepsilon}( \boldsymbol{v})\,dx\qquad\forall\,\boldsymbol{u},\,\boldsymbol{v}\in V, \tag{4.1}\] \[(\widetilde{A}\boldsymbol{\omega},\boldsymbol{\tau})_{Q}=\int_{ \Omega}\mathcal{A}\boldsymbol{\omega}\cdot\boldsymbol{\tau}\,dx\qquad \forall\,\boldsymbol{\omega},\,\boldsymbol{\tau}\in Q,\] (4.2) \[(\mathcal{S}\boldsymbol{u}(t),\boldsymbol{v})_{V}=\int_{\Omega} \int_{0}^{t}\mathcal{B}(t-s)\boldsymbol{\varepsilon}(\boldsymbol{u}(s))\,ds \cdot\boldsymbol{\varepsilon}(\boldsymbol{v})\,dx\] (4.3) \[\qquad\qquad\forall\,\boldsymbol{u}\in C([0,T];V),\,\boldsymbol{v }\in V,\ t\in[0,T],\] \[(\widetilde{\mathcal{S}}\boldsymbol{\omega}(t),\boldsymbol{\tau} )_{Q}=\int_{\Omega}\int_{0}^{t}\mathcal{B}(t-s)\boldsymbol{\omega}(s)\,ds \cdot\boldsymbol{\tau}\,dx\] (4.4) \[\qquad\qquad\forall\,\boldsymbol{\omega}\in C([0,T];Q),\, \boldsymbol{\tau}\in Q,\ t\in[0,T].\]
Using the assumptions on the elasticity tensor \(\mathcal{A}\), it is easy to see that \(A\colon V\to V\) and \(\widetilde{A}\colon Q\to Q\) are linear continuous symmetric and coercive operators. Moreover, it is easy to see that \(\mathcal{S}\colon C([0,T];V)\to C([0,T];V)\) and \(\widetilde{\mathcal{S}}\colon C([0,T];Q)\to C([0,T];Q)\) are history-dependent operators. This allows us to use Theorem 2.2 on the space \(X=Q\). Below in this section we denote by \(\widetilde{A}^{-1}\colon Q\to Q\) the inverse of the operator \(\widetilde{A}\) and we use \(\widetilde{\mathcal{R}}\colon C([0,T];Q)\to C([0,T];Q)\) for the corresponding history-dependent operator.
Next, we consider the functions \(j\colon V\to\mathbb{R}\), \(\boldsymbol{f}\colon[0,T]\to V\) and the set \(\Sigma(t)\) defined by equalities
\[j(\boldsymbol{v})=\int_{\Gamma_{3}}Fv_{\nu}^{+}\,da\qquad\forall\, \boldsymbol{v}\in V, \tag{4.5}\] \[(\boldsymbol{f}(t),\boldsymbol{v})_{V}=\int_{\Omega}\boldsymbol {f}_{0}(t)\cdot\boldsymbol{v}\,dx+\int_{\Gamma_{2}}\boldsymbol{f}_{2}(t)\cdot \boldsymbol{v}\,da\qquad\forall\,\boldsymbol{v}\in V,\ t\in[0,T]\] (4.6) \[\Sigma(t)=\{\,\boldsymbol{\tau}\in Q\,:\,(\boldsymbol{\tau}, \boldsymbol{\varepsilon}(\boldsymbol{v}))_{Q}+j(\boldsymbol{v})\geq( \boldsymbol{f}(t),\boldsymbol{v})_{V}\ \ \forall\,\boldsymbol{v}\in V\,\}\ \forall\,t\in[0,T]. \tag{4.7}\]
Note that in (4.5) and below, we use notation \(r^{+}\) for the positive part of \(r\in\mathbb{R}\), that is, \(r^{+}=\max\,\{r,0\}\). Therefore, \(j\) is a positively homogeneous function, i.e., \(j(\lambda\boldsymbol{v})=\lambda j(\boldsymbol{v})\) for each \(\lambda>0\) and \(\boldsymbol{v}\in V\).
Assume now that \((\boldsymbol{u},\boldsymbol{\sigma},\boldsymbol{\omega})\) are sufficiently regular functions which satisfy Problem \(\mathcal{P}\). We use (3.4), (3.1) and (3.2) to see that
\[\boldsymbol{u}(t)\in V,\qquad\boldsymbol{\sigma}(t)\in Q,\qquad\boldsymbol{ \omega}(t)\in Q\qquad\forall\,t\in[0,T]. \tag{4.8}\]
Let \(\mathbf{v}\in V\) and \(t\in[0,T]\). Then, using standard arguments based on integration by parts we deduce that
\[\int_{\Omega}\,\mathbf{\sigma}(t)\cdot(\mathbf{\varepsilon}(\mathbf{v})-\mathbf{ \varepsilon}(\mathbf{u}(t)))\,dx+\int_{\Gamma_{3}}Fv_{\nu}^{+}\,da-\int_{\Gamma_{3} }Fu_{\nu}^{+}(t)\,da\] \[\qquad\qquad\geq\int_{\Omega}\mathbf{f}_{0}(t)\cdot(\mathbf{v}-\mathbf{u}(t)) \,dx+\int_{\Gamma_{2}}\mathbf{f}_{2}(t)\cdot(\mathbf{v}-\mathbf{u}(t))\,da.\]
Next, we use notation (4.5) and (4.6) to deduce that
\[(\mathbf{\sigma}(t),\mathbf{\varepsilon}(\mathbf{v})-\mathbf{\varepsilon}(\mathbf{u}(t)))_{Q}+j( \mathbf{v})-j(\mathbf{u}(t))\geq(\mathbf{f}(t),\mathbf{v}-\mathbf{u}(t))_{V}. \tag{4.9}\]
We now use the constitutive law (3.1) and notation (4.1), (4.3) to see that
\[(\mathbf{\sigma}(t),\mathbf{\varepsilon}(\mathbf{v})-\mathbf{\varepsilon}(\mathbf{u}(t)))_{Q}=(A \mathbf{u}(t),\mathbf{v}-\mathbf{u}(t))_{V}+(\mathcal{S}\mathbf{u}(t),\mathbf{v}-\mathbf{u}(t))_{V}. \tag{4.10}\]
Therefore, substituting (4.10) in (4.9) and using (4.8) we deduce the following variational formulation of the contact Problem \(\mathcal{P}\) in terms of displacement.
**Problem \(\mathcal{P}_{1}^{V}\)**.: _Find a displacement field \(\mathbf{u}\in C([0,T];V)\) such that for all \(t\in[0,T]\) the following inequality holds:_
\[(A\mathbf{u}(t),\mathbf{v}-\mathbf{u}(t))_{V}+(\mathcal{S}\mathbf{u}(t),\mathbf{v}- \mathbf{u}(t))_{V}+j(\mathbf{v})-j(\mathbf{u}(t)) \tag{4.11}\] \[\qquad\qquad\geq(\mathbf{f}(t),\mathbf{v}-\mathbf{u}(t))_{V}\qquad\forall\, \mathbf{v}\in V.\]
We now consider the following two variational formulations of Problem \(\mathcal{P}\), in terms of the stress and strain field, respectively.
**Problem \(\mathcal{P}_{2}^{V}\)**.: _Find a stress field \(\mathbf{\sigma}\in C([0,T];Q)\) such that for all \(t\in[0,T]\) the following inequality holds:_
\[\mathbf{\sigma}(t)\in\Sigma(t),\quad(\widetilde{A}^{-1}\mathbf{\sigma}(t),\mathbf{\tau}- \mathbf{\sigma}(t))_{Q}+(\widetilde{\mathcal{R}}\mathbf{\sigma}(t),\mathbf{\tau}-\mathbf{ \sigma}(t))_{Q}\geq 0\quad\forall\,\mathbf{\tau}\in\Sigma(t). \tag{4.12}\]
**Problem \(\mathcal{P}_{3}^{V}\)**.: _Find a strain field \(\mathbf{\omega}\in C([0,T];Q)\) such that for all \(t\in[0,T]\) the following inclusion holds:_
\[-\mathbf{\omega}(t)\in N_{\Sigma(t)}(\widetilde{A}\mathbf{\omega}(t)+\widetilde{ \mathcal{S}}\mathbf{\omega}(t)). \tag{4.13}\]
Note that inequality (4.12) and inclusion (4.13) can be derived directly from the statement of the contact Problem \(\mathcal{P}\). Nevertheless, to avoid repetitions we do not provide this derivation, and we restrict ourselves to mention that Problems \(\mathcal{P}_{2}^{V}\) and \(\mathcal{P}_{3}^{V}\) are fully justified by the following results.
**Proposition 4.1**.: _Let \(\mathbf{u}\) be a solution of Problem \(\mathcal{P}_{1}^{V}\) and let \(\mathbf{\sigma}\colon[0,T]\to Q\) be the function defined by equality_
\[\mathbf{\sigma}(t)=\widetilde{A}\mathbf{\varepsilon}(\mathbf{u}(t))+\widetilde{\mathcal{S} }\mathbf{\varepsilon}(\mathbf{u}(t))\quad\forall\,t\in[0,T]. \tag{4.14}\]
_Then \(\mathbf{\sigma}\) is a solution of Problem \(\mathcal{P}_{2}^{V}\)._
Proof.: The regularity \(\boldsymbol{\sigma}\in C([0,T];Q)\) is obvious. Moreover, using (4.14) and Theorem 2.2 we deduce that
\[\boldsymbol{\varepsilon}(\boldsymbol{u}(t))=\widetilde{A}^{-1}\boldsymbol{ \sigma}(t)+\widetilde{\mathcal{R}}\boldsymbol{\sigma}(t)\quad\forall\,t\in[0, T]. \tag{4.15}\]
Let \(\boldsymbol{v}\in V\) and \(t\in[0,T]\). We use definitions (4.1)-(4.4) and (4.14) to see that
\[(A\boldsymbol{u}(t),\boldsymbol{v}-\boldsymbol{u}(t))_{V}+(\mathcal{S} \boldsymbol{u}(t),\boldsymbol{v}-\boldsymbol{u}(t))_{V}=(\boldsymbol{\sigma} (t),\boldsymbol{\varepsilon}(\boldsymbol{v})-\boldsymbol{\varepsilon}( \boldsymbol{u}(t)))_{Q}\]
and, therefore, (4.11) implies that
\[(\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}(\boldsymbol{v})- \boldsymbol{\varepsilon}(\boldsymbol{u}(t)))_{Q}+j(\boldsymbol{v})-j( \boldsymbol{u}(t))\geq(\boldsymbol{f}(t),\boldsymbol{v}-\boldsymbol{u}(t))_ {V}\qquad\forall\,\boldsymbol{v}\in V. \tag{4.16}\]
We now test in (4.16) with \(\boldsymbol{v}=2\boldsymbol{u}(t)\) and \(\boldsymbol{v}=\boldsymbol{0}_{V}\) to see that
\[(\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}(\boldsymbol{u}(t)))_{Q}+j( \boldsymbol{u}(t))=(\boldsymbol{f}(t),\boldsymbol{u}(t))_{V}. \tag{4.17}\]
Therefore, using (4.16) and (4.17) we find that
\[(\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}(\boldsymbol{v}))_{Q}+j( \boldsymbol{v})\geq(\boldsymbol{f}(t),\boldsymbol{v})_{V}.\]
This inequality combined with definition (4.7) implies that
\[\boldsymbol{\sigma}(t)\in\Sigma(t).\]
To proceed, we use (4.7), (4.8) and (4.17) to see that
\[(\boldsymbol{\tau}-\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}( \boldsymbol{u}(t)))_{Q}\geq 0\qquad\forall\,\boldsymbol{\tau}\in\Sigma(t)\]
and, using (4.15) we find that
\[(\boldsymbol{\tau}-\boldsymbol{\sigma}(t),\widetilde{A}^{-1}\boldsymbol{ \sigma}(t)+\widetilde{\mathcal{R}}\boldsymbol{\sigma}(t))_{Q}\geq 0\qquad \forall\,\boldsymbol{\tau}\in\Sigma(t).\]
This shows that \(\boldsymbol{\sigma}\) is a solution to Problem \(\mathcal{P}^{V}_{2}\), which concludes the proof.
**Proposition 4.2**.: _Let \(\boldsymbol{\sigma}\) be a solution of Problem \(\mathcal{P}^{V}_{2}\) and let \(\boldsymbol{\omega}\colon[0,T]\to Q\) be the function defined by equality_
\[\boldsymbol{\omega}(t)=\widetilde{A}^{-1}\boldsymbol{\sigma}(t)+\widetilde{ \mathcal{R}}\boldsymbol{\sigma}(t)\quad\forall\,t\in[0,T]. \tag{4.18}\]
_Then \(\boldsymbol{\omega}\) is a solution of Problem \(\mathcal{P}^{V}_{3}\)._
Proof.: The regularity \(\boldsymbol{\omega}\in C([0,T];Q)\) is obvious. Moreover, using (4.18) and Theorem 2.2 we find that
\[\boldsymbol{\sigma}(t)=\widetilde{A}\boldsymbol{\omega}(t)+\widetilde{ \mathcal{S}}\boldsymbol{\omega}(t)\quad\forall\,t\in[0,T]. \tag{4.19}\]
Let \(t\in[0,T]\). Then, using (4.12), (4.18) and (4.19) we obtain that
\[\widetilde{A}\boldsymbol{\omega}(t)+\widetilde{\mathcal{S}}\boldsymbol{\omega }(t)\in\Sigma(t),\quad(\widetilde{A}\boldsymbol{\omega}(t)+\widetilde{ \mathcal{S}}\boldsymbol{\omega}(t)-\boldsymbol{\tau},\boldsymbol{\omega}(t))_{ Q}\leq 0\quad\forall\,\boldsymbol{\tau}\in\Sigma(t). \tag{4.20}\]
Then, with notation \(N_{\Sigma(t)}\) for the outward normal cone of the set \(\Sigma(t)\subset Q\), equivalence (2.1) and inequality (4.20) imply that
\[-\boldsymbol{\omega}(t)\in N_{\Sigma(t)}(\widetilde{A}\boldsymbol{\omega}(t) +\widetilde{\mathcal{S}}\boldsymbol{\omega}(t)).\]
We conclude from here that \(\boldsymbol{\omega}\) is a solution to Problem \(\mathcal{P}^{V}_{3}\), which ends the proof.
**Proposition 4.3**.: _Let \(\boldsymbol{\omega}\) be a solution of Problem \(\mathcal{P}^{V}_{3}\). Then there exists a function \(\boldsymbol{u}\colon[0,T]\to V\) such that_
\[\boldsymbol{\omega}(t)=\boldsymbol{\varepsilon}(\boldsymbol{u}(t))\quad\forall \,t\in[0,T]. \tag{4.21}\]
_Moreover, \(\boldsymbol{u}\) is a solution of Problem \(\mathcal{P}^{V}_{1}\)._
Proof.: Let \(\boldsymbol{\sigma}\colon[0,T]\to Q\) be the function defined by (4.19) and let \(t\in[0,T]\) be fixed. Then, using (4.13) and equivalence (2.1) we deduce that
\[\boldsymbol{\sigma}(t)\in\Sigma(t),\qquad(\boldsymbol{\tau}-\boldsymbol{ \sigma}(t),\boldsymbol{\omega}(t))_{Q}\geq 0\qquad\forall\,\boldsymbol{ \tau}\in\Sigma(t). \tag{4.22}\]
Next, consider an element \(\boldsymbol{z}\in Q\) such that
\[(\boldsymbol{z},\boldsymbol{\varepsilon}(\boldsymbol{v}))_{Q}=0\qquad \forall\boldsymbol{v}\in V. \tag{4.23}\]
Then, the definition (4.7) implies that \(\boldsymbol{\sigma}(t)\pm\boldsymbol{z}\in\Sigma(t)\) and, testing with \(\boldsymbol{\tau}=\boldsymbol{\sigma}(t)\pm\boldsymbol{z}\) in (4.22) we find that
\[(\boldsymbol{\omega}(t),\boldsymbol{z})_{Q}=0. \tag{4.24}\]
Equalities (4.23) and (4.24) show that \(\boldsymbol{\omega}(t)\in(\boldsymbol{\varepsilon}(V)^{\perp})^{\perp}\) where \(M^{\perp}\) denotes the orthogonal of the set \(M\) in the Hilbertian structure of the space \(Q\). Now, since \(\boldsymbol{\varepsilon}(V)\) is a closed subspace of \(Q\) we have \((\boldsymbol{\varepsilon}(V)^{\perp})^{\perp}=\boldsymbol{\varepsilon}(V)\). We conclude from here that \(\boldsymbol{\omega}(t)\in\boldsymbol{\varepsilon}(V)\) which shows that there exists an element \(\boldsymbol{u}(t)\in V\) such that (4.21) holds.
It is easy to see that the function \(t\mapsto\boldsymbol{u}(t):[0,T]\to V\) defined above is continuous, i.e., \(\boldsymbol{u}\in C([0,T];V)\). We now prove that \(\boldsymbol{u}\) satisfies inequality (4.11). To this end, let \(t\in[0,T]\). We combine (4.21) and (4.22) to see that
\[(\boldsymbol{\tau}-\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}( \boldsymbol{u}(t)))_{Q}\geq 0\qquad\forall\,\boldsymbol{\tau}\in\Sigma(t). \tag{4.25}\]
Recall now that the function \(j\colon V\to\mathbb{R}\) is subdifferentiable on \(V\), which allows us to consider an element \(\boldsymbol{\xi}(t)\in V\) such that
\[j(\boldsymbol{v})-j(\boldsymbol{u}(t))\geq(\boldsymbol{\xi}(t),\boldsymbol{v }-\boldsymbol{u}(t))_{V}\qquad\forall\,\boldsymbol{v}\in V. \tag{4.26}\]
Let \(\boldsymbol{\tau}_{0}(t)=\boldsymbol{\varepsilon}(\boldsymbol{f}(t)- \boldsymbol{\xi}(t))\in Q\). Then using (2.4) and (4.26) it is easy to see that
\[(\boldsymbol{\tau}_{0}(t),\boldsymbol{\varepsilon}(\boldsymbol{v})- \boldsymbol{\varepsilon}(\boldsymbol{u}(t)))_{Q}+j(\boldsymbol{v})-j( \boldsymbol{u}(t))\geq(\boldsymbol{f}(t),\boldsymbol{v}-\boldsymbol{u}(t))_{ V}\quad\forall\,\boldsymbol{v}\in V. \tag{4.27}\]
We now test in (4.27) with \(\boldsymbol{v}=2\boldsymbol{u}(t)\) and \(\boldsymbol{v}=\boldsymbol{0}_{V}\) to see that
\[(\boldsymbol{\tau}_{0}(t),\boldsymbol{\varepsilon}(\boldsymbol{u}(t)))_{Q}+ j(\boldsymbol{u}(t))=(\boldsymbol{f}(t),\boldsymbol{u}(t))_{V}. \tag{4.28}\]
Therefore, using (4.27) and (4.28) we find that
\[(\boldsymbol{\tau}_{0}(t),\boldsymbol{\varepsilon}(\boldsymbol{v}))_{Q}+j( \boldsymbol{v})\geq(\boldsymbol{f}(t),\boldsymbol{v})_{V}\qquad\forall\, \boldsymbol{v}\in V\]
which implies that \(\boldsymbol{\tau}_{0}(t)\in\Sigma(t)\). This regularity allows us to test with \(\boldsymbol{\tau}=\boldsymbol{\tau}_{0}(t)\) in (4.25) in order to see that
\[(\boldsymbol{\tau}_{0}(t),\boldsymbol{\varepsilon}(\boldsymbol{u}(t)))_{Q}+j( \boldsymbol{u}(t))\geq(\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}( \boldsymbol{u}(t)))_{Q}+j(\boldsymbol{u}(t))\]
and, using (4.28), we deduce that
\[(\boldsymbol{f}(t),\boldsymbol{u}(t))_{V}\geq(\boldsymbol{\sigma}(t), \boldsymbol{\varepsilon}(\boldsymbol{u}(t)))_{Q}+j(\boldsymbol{u}(t)). \tag{4.29}\]
On the other hand, since \(\boldsymbol{\sigma}(t)\in\Sigma(t)\) we find that
\[(\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}(\boldsymbol{v}))_{Q}+j( \boldsymbol{v})\geq(\boldsymbol{f}(t),\boldsymbol{v})_{V}\quad\forall\, \boldsymbol{v}\in V \tag{4.30}\]
which, in particular, implies that
\[(\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}(\boldsymbol{u}(t)))_{Q}+j(\boldsymbol{u}(t))\geq(\boldsymbol{f}(t),\boldsymbol{u}(t))_{V}. \tag{4.31}\]
We now combine inequalities (4.29) and (4.31) to obtain that
\[(\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}(\boldsymbol{u}(t)))_{Q}+j(\boldsymbol{u}(t))=(\boldsymbol{f}(t),\boldsymbol{u}(t))_{V}, \tag{4.32}\]
then we use (4.30) and (4.32) to find that (4.9) holds, for each \(\boldsymbol{v}\in V\). Moreover, we observe that (4.19), (4.21) and definitions (4.1) - (4.4) of the operators \(A\), \(\widetilde{A}\), \(\mathcal{S}\) and \(\widetilde{\mathcal{S}}\), respectively, imply that
\[(\boldsymbol{\sigma}(t),\boldsymbol{\varepsilon}(\boldsymbol{v})-\boldsymbol {\varepsilon}(\boldsymbol{u}(t)))_{Q}=(A\boldsymbol{u}(t)+\mathcal{S} \boldsymbol{u}(t),\boldsymbol{v}-\boldsymbol{u}(t))_{V}.\]
We substitute this equality in (4.9) and deduce that (4.11) holds. This implies that \(\boldsymbol{u}\) is a solution of Problem \(\mathcal{P}^{V}_{1}\) and concludes the proof.
We now end this section with two remarks concerning the variational problems \(\mathcal{P}^{V}_{1}\), \(\mathcal{P}^{V}_{2}\) and \(\mathcal{P}^{V}_{3}\). The first one (Remark 1 below) is mathematical in nature; the second one (Remark 2) is mechanical in nature.
**Remark 1**.: _We now follow [11, 23] to recall the following definition: two abstract Problems \(\mathcal{P}\) and \(\mathcal{Q}\) defined on the normed spaces \(X\) and \(Y\), respectively, are said to be dual of each other if there exists an operator \(D\colon X\to Y\) such that:_
* \(D\) _is bijective;_
* _Both_ \(D\colon X\to\ Y\) _and its inverse_ \(D^{-1}\colon Y\to X\) _are continuous;_
* \(u\in X\) _is a solution of Problem_ \(\mathcal{P}\) _if and only if_ \(\sigma:=Du\in Y\) _is a solution of Problem_ \(\mathcal{Q}\)_._
_Then, it follows from Propositions_ 4.1-4.3 _that Problems_ \(\mathcal{P}^{V}_{1}\)_,_ \(\mathcal{P}^{V}_{2}\) _and_ \(\mathcal{P}^{V}_{3}\) _are pairwise dual of each other._
**Remark 2**.: _The arguments presented at the beginning of this section show that if the triple (\(\mathbf{u},\mathbf{\sigma},\mathbf{\omega}\)) represents a solution to Problem \(\mathcal{P}\), then \(\mathbf{u}\) is a solution of Problem \(\mathcal{P}^{V}_{1}\). This allows us to consider Problem \(\mathcal{P}^{V}_{1}\) as a (first) variational formulation of the contact Problem \(\mathcal{P}\). Moreover, notations (4.2) and (4.4) show that equality (4.14) is equivalent with the constitutive law_
\[\mathbf{\sigma}(t)=\mathcal{A}\mathbf{\varepsilon}(\mathbf{u}(t))+\int_{0}^{t}\mathcal{B} (t-s)\mathbf{\varepsilon}(\mathbf{u}(s))\,ds\quad\forall\,t\in[0,T] \tag{4.33}\]
_which, obviously, represents a consequence of equalities (3.1) and (3.2). Therefore, Proposition 4.1 shows that if \(\mathbf{u}\) is a solution of Problem \(\mathcal{P}^{V}_{1}\) and \(\mathbf{\sigma}\) is defined by the constitutive law (4.33), then \(\mathbf{\sigma}\) is a solution of Problem \(\mathcal{P}^{V}_{2}\). This allows us to consider Problem \(\mathcal{P}^{V}_{2}\) as a (second) variational formulation of the contact Problem \(\mathcal{P}\). Next, Theorem 2.2 and equality (4.21) imply that notation (4.18) is equivalent with the constitutive law (4.14) which, in turn, is equivalent with the constitutive law (4.33), as proved above. Therefore, Proposition 4.2 shows that if \(\mathbf{\sigma}\) is a solution of Problem \(\mathcal{P}^{V}_{2}\) and \(\mathbf{\omega}\) is defined by (4.18), then \(\mathbf{\omega}\) is a solution of Problem \(\mathcal{P}^{V}_{3}\). This allows us to consider Problem \(\mathcal{P}^{V}_{3}\) as a (third) variational formulation of the contact Problem \(\mathcal{P}\). The arguments above provide the legitimacy of the weak formulations \(\mathcal{P}^{V}_{1}\), \(\mathcal{P}^{V}_{2}\) and \(\mathcal{P}^{V}_{3}\). These formulations are expressed in terms of different unknowns and have a different structure. Nevertheless, each one can be considered as a variational formulation of the original contact problem \(\mathcal{P}\)._
## 5 Weak solvability
In this section we turn to the solvability of the variational Problems \(\mathcal{P}^{V}_{1}\), \(\mathcal{P}^{V}_{2}\) and \(\mathcal{P}^{V}_{3}\). Our main result on this matter is the following.
**Theorem 5.1**.: _Assume \((3.8)\)-\((3.12)\). Then, Problems \(\mathcal{P}^{V}_{1}\), \(\mathcal{P}^{V}_{2}\) and \(\mathcal{P}^{V}_{3}\) have a unique solution. Moreover, the solution depends Lipschitz continuously on the data \((F,\mathbf{f})\in L^{2}(\Gamma_{3})\times C([0,T];V)\)._
Proof.: Using assumption (3.8) it is easy to see that the operator \(A\colon V\to V\) defined by (4.1) is a strongly monotone Lipschitz continuous operator. Moreover, recall that the operator \(\mathcal{S}\colon C([0,T];V)\to C([0,T];V)\) given by (4.3) is a history-dependent operator. In addition, assumption (3.12) guarantees that the function \(j\colon V\to\mathbb{R}\) defined by (4.5) is a continuous sublinear functional and, therefore, it is convex and lower semicontinuous. Finally, the regularities (3.10), (3.11) imply that \(\mathbf{f}\in C([0,T];V)\). Therefore, we are in a position to apply Theorem 2.3 with \(K=X=V\). In this way we prove the existence of a unique solution to Problem \(\mathcal{P}^{V}_{1}\).
Next, using Propositions 4.1 and 4.2 we deduce the solvability of Problems \(\mathcal{P}^{V}_{2}\) and \(\mathcal{P}^{V}_{3}\), respectively. To prove their uniqueness, we proceed as follows. Assume that
\(\boldsymbol{\omega}\) and \(\widetilde{\boldsymbol{\omega}}\) represent two solutions of Problem \(\mathcal{P}^{V}_{3}\). Then Proposition 4.3 shows that there exist two functions \(\boldsymbol{u}\), \(\widetilde{\boldsymbol{u}}\colon[0,T]\to V\) such that
\[\boldsymbol{\omega}(t)=\boldsymbol{\varepsilon}(\boldsymbol{u}(t)),\quad \widetilde{\boldsymbol{\omega}}(t)=\boldsymbol{\varepsilon}(\widetilde{ \boldsymbol{u}}(t))\quad\forall\,t\in[0,T]. \tag{5.1}\]
Moreover, \(\boldsymbol{u}\) and \(\widetilde{\boldsymbol{u}}\) are solutions of Problem \(\mathcal{P}^{V}_{1}\). Now, using the uniqueness of the solution of Problem \(\mathcal{P}^{V}_{1}\) we deduce that \(\boldsymbol{u}=\widetilde{\boldsymbol{u}}\) and, therefore (5.1) implies that \(\boldsymbol{\omega}=\widetilde{\boldsymbol{\omega}}\). This proves the uniqueness of the solution of Problem \(\mathcal{P}^{V}_{3}\).
Similarly, assume that \(\boldsymbol{\sigma}\) and \(\widetilde{\boldsymbol{\sigma}}\) represent two solutions of Problem \(\mathcal{P}^{V}_{2}\). Then Proposition 4.2 combined with the uniqueness of the solution of Problem \(\mathcal{P}^{V}_{3}\) show that
\[\widetilde{A}^{-1}\boldsymbol{\sigma}(t)+\widetilde{\mathcal{R}}\boldsymbol{ \sigma}(t)=\widetilde{A}^{-1}\widetilde{\boldsymbol{\sigma}}(t)+\widetilde{ \mathcal{R}}\widetilde{\boldsymbol{\sigma}}(t)\qquad\forall\,t\in[0,T].\]
Then the invertibility of the operator \(\widetilde{A}^{-1}+\widetilde{\mathcal{R}}\colon C([0,T];Q)\to C([0,T];Q)\), guaranteed by Theorem 2.2, implies that \(\boldsymbol{\sigma}=\widetilde{\boldsymbol{\sigma}}\). This proves the uniqueness of the solution of Problem \(\mathcal{P}^{V}_{2}\).
Assume now that \((F_{1},\boldsymbol{f}^{1}),\,(F_{2},\boldsymbol{f}^{2})\in L^{2}(\Gamma_{3}) \times C([0,T];V)\) and denote by \(\boldsymbol{u}_{i}\in C([0,T];V)\) the solution of inequality (4.11) for \(F=F_{i}\) and \(\boldsymbol{f}=\boldsymbol{f}^{i},\,i=1,2\). Then, for any \(t\in[0,T]\) and \(\boldsymbol{v}\in V\) we have
\[(A\boldsymbol{u}_{1}(t), \boldsymbol{v}-\boldsymbol{u}_{1}(t))_{V}+(\mathcal{S}\boldsymbol {u}_{1}(t),\boldsymbol{v}-\boldsymbol{u}_{1}(t))_{V} \tag{5.2}\] \[+\int_{\Gamma_{3}}F_{1}\,v_{\nu}^{+}\,da-\int_{\Gamma_{3}}F_{1}\, u_{1\nu}^{+}(t)\,da\geq(\boldsymbol{f}^{1}(t),\boldsymbol{v}-\boldsymbol{u}_{1}(t))_{ V},\]
\[(A\boldsymbol{u}_{2}(t), \boldsymbol{v}-\boldsymbol{u}_{2}(t))_{V}+(\mathcal{S}\boldsymbol {u}_{2}(t),\boldsymbol{v}-\boldsymbol{u}_{2}(t))_{V} \tag{5.3}\] \[+\int_{\Gamma_{3}}F_{2}\,v_{\nu}^{+}\,da-\int_{\Gamma_{3}}F_{2}\, u_{2\nu}^{+}(t)\,da\geq(\boldsymbol{f}^{2}(t),\boldsymbol{v}-\boldsymbol{u}_{2}(t))_{ V}.\]
We take \(\boldsymbol{v}=\boldsymbol{u}_{2}(t)\) in (5.2), \(\boldsymbol{v}=\boldsymbol{u}_{1}(t)\) in (5.3), then we add the resulting inequalities to find that
\[(A\boldsymbol{u}_{1}(t)-A\boldsymbol{u}_{2}(t),\boldsymbol{u}_{1 }(t)-\boldsymbol{u}_{2}(t))_{V}\leq(\mathcal{S}\boldsymbol{u}_{1}(t)- \mathcal{S}\boldsymbol{u}_{2}(t),\boldsymbol{u}_{2}(t)-\boldsymbol{u}_{1}(t))_ {V}\] \[+\int_{\Gamma_{3}}(F_{1}-F_{2})(u_{2\nu}^{+}(t)-u_{1\nu}^{+}(t))\, da+(\boldsymbol{f}^{1}(t)-\boldsymbol{f}^{2}(t),\boldsymbol{u}_{1}(t)-\boldsymbol{u}_{2}(t ))_{V}.\]
Next, the strong monotonicity of \(A\) and the trace inequality (2.6) yield
\[m_{\mathcal{A}}\|\boldsymbol{u}_{1}(t)-\boldsymbol{u}_{2}(t)\|_ {V}^{2}\leq\|\mathcal{S}\boldsymbol{u}_{1}(t)-\mathcal{S}\boldsymbol{u}_{2}(t )\|_{V}\|\boldsymbol{u}_{1}(t)-\boldsymbol{u}_{2}(t)\|_{V}\] \[+c_{0}\|F_{1}-F_{2}\|_{L^{2}(\Gamma_{3})}\|\boldsymbol{u}_{1}(t)- \boldsymbol{u}_{2}(t)\|_{V}+\|\boldsymbol{f}^{1}(t)-\boldsymbol{f}^{2}(t)\|_{ V}\|\boldsymbol{u}_{1}(t)-\boldsymbol{u}_{2}(t)\|_{V}.\]
The previous inequality implies that
\[m_{\mathcal{A}}\|\boldsymbol{u}_{1}(t)-\boldsymbol{u}_{2}(t)\|_ {V}\leq\|\mathcal{S}\boldsymbol{u}_{1}(t)-\mathcal{S}\boldsymbol{u}_{2}(t)\|_{V}\] \[+c_{0}\|F_{1}-F_{2}\|_{L^{2}(\Gamma_{3})}+\|\boldsymbol{f}^{1}(t) -\boldsymbol{f}^{2}(t)\|_{V}.\]
We now use definition (4.3) and assumption (3.9) to see that there exists \(c_{1}>0\) such that
\[m_{\mathcal{A}}\|\mathbf{u}_{1}(t)-\mathbf{u}_{2}(t)\|_{V}\leq c_{1}\int_{0 }^{t}\|\mathbf{u}_{1}(s)-\mathbf{u}_{2}(s)\|_{V}\,ds\] \[+c_{0}\|F_{1}-F_{2}\|_{L^{2}(\Gamma_{3})}+\|\mathbf{f}^{1}(t)-\mathbf{f}^{ 2}(t)\|_{V}.\]
Next, by using a Gronwall argument and definition (2.2) we see that there exists \(C>0\) which does not depend on \(F_{i}\) and \(\mathbf{f}^{i},\,i=1,2\), such that
\[\|\mathbf{u}_{1}-\mathbf{u}_{2}\|_{C([0,T];V)}\leq C\big{(}\|F_{1}-F_{2}\|_{L^{2}( \Gamma_{3})}+\|\mathbf{f}^{1}-\mathbf{f}^{2}\|_{C([0,T];V)}\big{)}\]
which shows that the solution \(\mathbf{u}\in C([0,T];V)\) depends Lipschitz continuously on the data \((F,\mathbf{f})\in L^{2}(\Gamma_{3})\times C([0,T];V)\). The Lipschitz continuity of the solutions \(\mathbf{\sigma}\) and \(\mathbf{\omega}\) follows now from equalities (4.14) and (4.18) and the properties of the operators \(A\), \(\mathcal{S}\), \(\widetilde{A}^{-1}\) and \(\widetilde{\mathcal{R}}\).
**Remark 3**.: _Note that the unique solvability of Problem \(\mathcal{P}^{V}_{2}\) can be obtained directly. A sketch of the proof is as follows. First, note that_
\[\Sigma(t)=\Sigma_{0}+\mathbf{\varepsilon}(\mathbf{f}(t))\quad\forall\,t\in[0,T],\]
_where \(\Sigma_{0}\) is the time-independent nonempty closed convex subset of \(Q\) defined by_
\[\Sigma_{0}=\{\,\mathbf{\tau}\in Q\,:\,(\mathbf{\tau},\mathbf{\varepsilon}(\mathbf{v}))_{Q}+j( \mathbf{v})\geq 0\ \ \forall\,\mathbf{v}\in V\,\}.\]
_Then, using the change of unknown given by \(\mathbf{\sigma}=\overline{\mathbf{\sigma}}+\mathbf{\varepsilon}(\mathbf{f})\), we find that Problem \(\mathcal{P}^{V}_{2}\) is equivalent with a history-dependent variational inequality of the form (2.3) on the space \(X=Q\), associated to the convex \(K=\Sigma_{0}\), in which the unknown is the auxiliary stress field \(\overline{\mathbf{\sigma}}\). Theorem 2.3 guarantees the unique solvability of this inequality which, in turn, provides the unique solvability of Problem \(\mathcal{P}^{V}_{2}\)._
**Remark 4**.: _The unique solvability of Problem \(\mathcal{P}^{V}_{3}\) can be obtained directly, by using Theorem 2.4. Indeed, it is easy to check that assumptions \((\mathcal{A})\), \((\mathcal{S})\) and \((\Sigma)\) are satisfied for the inclusion (4.13) with \(X=Q\), operators \(\widetilde{A}^{-1}\), \(\widetilde{R}\) and the function \(g=\mathbf{\varepsilon}(\mathbf{f})\)._
We end this section with the remark that (4.11), (4.12) represent history-dependent inequalities and (4.13) is a history-dependent inclusion. Despite the fact that these problems have a different structure, each of them can be interpreted as a variational formulation of the contact Problem \(\mathcal{P}\). We conclude from here that the variational formulation of contact models is not unique and can lead to different mathematical problems which, in fact, are dual of each other. Moreover, any of the displacement, the stress or the strain fields can be considered as the main unknown of the corresponding contact model, provided that an appropriate variational formulation is used.
We refer to a triple \((\mathbf{u},\mathbf{\sigma},\mathbf{\omega})\) such that \(\mathbf{u}\) is a solution of Problem \(\mathcal{P}^{V}_{1}\), \(\mathbf{\sigma}\) is a solution of Problem \(\mathcal{P}^{V}_{2}\) and \(\mathbf{\omega}\) is a solution of Problem \(\mathcal{P}^{V}_{3}\), as a weak solution to the contact Problem \(\mathcal{P}\). We note that Theorem 5.1 provides the unique weak solvability of Problem \(\mathcal{P}\) as well as the Lipschitz continuous dependence of the weak solution with respect to the data \(\mathbf{f}\) and \(F\).
Moreover, using standard arguments and Remark 2 it can be proved that if any of the solutions to Problems \(\mathcal{P}^{V}_{1}\), \(\mathcal{P}^{V}_{2}\) or \(\mathcal{P}^{V}_{3}\) is smooth enough, then the weak solution satisfies the equations and boundary conditions (3.1)-(3.7) in the strong sense, i.e., at each point \(\mathbf{x}\in\Omega\) and at any time moment \(t\in[0,T]\).
## 6 Numerical approximation
In this section, we present numerical simulations for the contact Problem \(\mathcal{P}\) by using its variational formulation given in Problem \(\mathcal{P}^{V}_{1}\). Throughout the rest of this paper, we assume that (3.8)-(3.12) hold, even if we do not mention it explicitly. By virtue of Theorem 5.1, we conclude that Problem \(\mathcal{P}^{V}_{1}\) has a unique solution \(\mathbf{u}\in C([0,T];V)\). We start by introducing a fully-discrete scheme to approximate the solution of Problem \(\mathcal{P}^{V}_{1}\).
Let \(V^{h}\) be a finite-dimensional subspace of \(V\), where \(h\) is a positive real number which denotes the spatial discretization step. Throughout this section, we assume that \(V^{h}\) is the space of piecewise affine continuous functions, given by
\[V^{h}=\{w^{h}\in C(\bar{\Omega};\mathbb{R}^{d})\;\mid\;w^{h}|_{\widetilde{T}} \in[\mathbb{P}_{1}(\widetilde{T})]^{d}\;\;\;\text{for all}\;\widetilde{T}\in \mathcal{T}^{h}\}\subset V.\]
Here, \(\mathcal{T}^{h}\) is a family of finite element partitions of \(\Omega\) and \(\mathbb{P}_{1}(\widetilde{T})\) denotes the space of affine functions on \(\widetilde{T}\). We divide the time interval \([0,T]\) into \(N\) equal pieces of length \(k=\frac{T}{N}\) and we denote \(t_{n}=k\,n\), for all \(n=1,2,\ldots,N\). Moreover, for a continuous function \(g=g(t)\) we use the short-hand notation \(g_{n}=g(t_{n})\), for all \(n=1,2,\ldots,N\).
We now can introduce the following discrete version of Problem \(\mathcal{P}^{V}_{1}\).
**Problem \(\mathcal{P}^{Vh}_{1}\)**. _Find a displacement \(\mathbf{u}^{hk}=\{\mathbf{u}^{hk}_{i}\}_{i=1}^{N}\subset V^{h}\) such that the inequality below holds:_
\[(A\mathbf{u}^{hk}_{i},\mathbf{v}^{h}-\mathbf{u}^{hk}_{i})_{V}+((\mathcal{S} \mathbf{u}^{hk})_{i}\,,\mathbf{v}^{h}-\mathbf{u}^{hk}_{i})_{V}+j(\mathbf{v}^{h})-j(\mathbf{u}^{hk}_ {i}) \tag{6.1}\] \[\qquad\qquad\geq(\mathbf{f}_{i},\mathbf{v}^{h}-\mathbf{u}^{hk}_{i})_{V}\qquad \forall\,\mathbf{v}^{h}\in V^{h},\;i=1,\ldots,N.\]
The unique solvability of the discrete Problem \(\mathcal{P}^{Vh}_{1}\) can be easily proved by using arguments similar to those used in the proof of Theorem 5.1.
To present the numerical solution of problem (6.1), we utilize the mechanical example based on the two-dimensional physical setting shown in Figure 1, which represents the cross-section of a three-dimensional viscoelastic body. We denote this cross-section by \(\Omega\subset\mathbb{R}^{2}\). The body is clamped on the part \(\Gamma_{1}=[0\,\mathrm{m},1\,\mathrm{m}]\times\{0\,\mathrm{m}\}\) and, therefore, the displacement field vanishes there. On the part \(\Gamma_{3}=[4\,\mathrm{m},5\,\mathrm{m}]\times\{0\,\mathrm{m}\}\) the body is in potential frictionless contact with a rigid-plastic penetrable foundation with the yield limit \(F\). Moreover, it is acted upon from the top by a vertical force of density \(\boldsymbol{f}_{2}\). Therefore, denoting by \(\Gamma_{2}\) the remaining part of the boundary of \(\Omega\) and using notation \(\mathcal{C}^{u}((x_{0},y_{0}),\,r)\) for the upper semicircle of radius \(r\) (\(y\geq y_{0}\)) centred at the point \((x_{0},y_{0})\), for any \(\boldsymbol{x}=(x,y)\in\Gamma_{2}\) and \(t\in[0,T]\) we have
\[\boldsymbol{f}_{2}((x,y),t)=\left\{\begin{array}{ll}(0,f_{2\,y}(t))\,\mathrm{ N}\,\mathrm{m}^{-2},&\mbox{if $(x,y)\in\mathcal{C}^{u}((2.5\,\mathrm{m},2.0\,\mathrm{m}),\,2.5\, \mathrm{m})$},\\ (0,0)\,\mathrm{N}\,\mathrm{m}^{-2},&\mbox{otherwise}.\end{array}\right.\]
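As a quick illustration (this helper is not part of the authors' software; the interpretation of \(\mathcal{C}^{u}\) as the boundary arc and the use of a small geometric tolerance are our own assumptions), the traction density above can be evaluated as follows, with \(f_{2\,y}(t)=10\sin t\) taken from the parameter list given later in this section.

```python
import numpy as np

# Hypothetical helper evaluating the traction density f_2 defined above.
# Membership in the upper semicircle C^u((2.5, 2.0), 2.5) is tested up to a
# small tolerance, since the arc is part of the boundary Gamma_2.
def f2(x, y, t, tol=1e-9):
    on_upper_arc = abs((x - 2.5) ** 2 + (y - 2.0) ** 2 - 2.5 ** 2) <= tol and y >= 2.0
    return np.array([0.0, 10.0 * np.sin(t)]) if on_upper_arc else np.zeros(2)

print(f2(2.5, 4.5, 1.0))   # a point on top of the arc: nonzero vertical traction
print(f2(0.0, 0.0, 1.0))   # a clamped corner point, not on the arc: zero traction
```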
For simplicity of the analysis, we neglect the body forces and, therefore, we assume that \(\boldsymbol{f}_{0}=\boldsymbol{0}\). Moreover, we assume that the body behaves linearly and that the components of the elasticity and relaxation tensors are given by
\[(\mathcal{A}\boldsymbol{\omega})_{ij} = \frac{E\kappa}{(1+\kappa)(1-2\kappa)}(\omega_{11}+\omega_{22}) \delta_{ij}\,+\,\frac{E}{1+\kappa}\omega_{ij} \tag{6.2}\] \[\forall\,\boldsymbol{\omega}=(\omega_{ij})\in\mathbb{S}^{2},\ i, j=1,2,\] \[(\mathcal{B}(t)\,\boldsymbol{\omega})_{ij} = b\,\omega_{ij}\qquad\forall\,\boldsymbol{\omega}=(\omega_{ij}) \in\mathbb{S}^{2},\ i,j=1,2,\quad t\in[0,T]. \tag{6.3}\]
Figure 1: Reference configuration of the body.

Here and below \(\delta_{ij}\) is the Kronecker delta, \(b\) is a relaxation parameter, and \(E\) and \(\kappa\) are Young's modulus and Poisson's ratio of the body material, respectively. For the simulations we present below, we use the following input parameters:
\[E =10^{4}\,\mathrm{N}\,\mathrm{m}^{-2},\quad\kappa=0.4,\] \[F =10,\quad b=10^{4}\,\mathrm{N}\,\mathrm{m}^{-2}\mathrm{s}^{-1},\] \[f_{2\,y}(t) =10\sin t,\quad t\in[0,T].\]
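For concreteness, the following minimal sketch (plain NumPy; it is not the authors' implementation) evaluates the constitutive operators (6.2) and (6.3) on a single symmetric strain tensor with the parameter values listed above.

```python
import numpy as np

E, kappa, b = 1.0e4, 0.4, 1.0e4   # parameter values listed above

def apply_A(omega):
    """Elasticity tensor (6.2): (A w)_ij = c1*(w_11 + w_22)*delta_ij + c2*w_ij."""
    c1 = E * kappa / ((1.0 + kappa) * (1.0 - 2.0 * kappa))
    c2 = E / (1.0 + kappa)
    return c1 * np.trace(omega) * np.eye(2) + c2 * omega

def apply_B(omega):
    """Relaxation tensor (6.3): B(t) w = b * w, constant in time."""
    return b * omega

omega = np.array([[1.0e-3, 2.0e-4],
                  [2.0e-4, -5.0e-4]])   # an arbitrary symmetric strain
print(apply_A(omega))   # elastic part of the stress
print(apply_B(omega))   # integrand of the memory term in (3.1)
```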
In order to obtain a numerical solution, a spatial discretization with variable mesh size is used, with a maximum size of \(0.275\,\mathrm{m}\) inside the domain \(\Omega\) and not exceeding \(0.06\,\mathrm{m}\) for elements lying directly on \(\Gamma_{1}\) and \(\Gamma_{3}\). This gives rise to a spatial domain discretized into 822 elements, including 20 contact elements, the total number of degrees of freedom being 1644.
To find a solution of the discrete variational inequality (6.1) with linear elasticity and relaxation tensors defined as in (6.2) and (6.3), respectively, we use an optimization-based method described in detail in [12]. We approximate the integral term by the right rectangle formula in each subinterval \([t_{i},t_{i+1}]\) of \([0,T]\). We use the following approximation of the time integral operator in (4.11):
\[\int_{0}^{t_{n}}\mathcal{B}(t_{n}-s)\boldsymbol{\varepsilon}(\boldsymbol{u}(s) )\,ds\approx k\sum_{j=1}^{n}\mathcal{B}(t_{n}-t_{j})\boldsymbol{\varepsilon}( \boldsymbol{u}_{j}).\]
Therefore, using (4.3) we have
\[((\mathcal{S}\boldsymbol{u})_{i},\boldsymbol{v})_{V} =\left(k\sum_{j=1}^{i}\mathcal{B}(t_{i}-t_{j})\boldsymbol{ \varepsilon}(\boldsymbol{u}_{j}),\,\boldsymbol{\varepsilon}(\boldsymbol{v}) \right)_{Q} \tag{6.4}\] \[=((\mathcal{S}\boldsymbol{u})_{i-1},\boldsymbol{v})_{V}+(k \mathcal{B}(0)\boldsymbol{u}_{i},\boldsymbol{v})_{V}=((\mathcal{S}\boldsymbol {u})_{i-1},\boldsymbol{v})_{V}+(kb\boldsymbol{u}_{i},\boldsymbol{v})_{V}\]
for all \(\boldsymbol{u},\boldsymbol{v}\in V^{h}\) and \(i=1,\ldots,n\). Note that in the \(i\)-th time step the values of \(\boldsymbol{u}_{0},\ldots,\boldsymbol{u}_{i-1}\) are known and, therefore, \((\mathcal{S}\boldsymbol{u})_{i-1}\) is known too. Thus, for every time step \(i\), we use (6.4) in order to introduce the cost functional \(\mathcal{L}_{i}\colon V\to\mathbb{R}\) given by
\[\mathcal{L}_{i}(\boldsymbol{w}^{h})=\frac{1}{2}(A\boldsymbol{w}^{h}+kb \boldsymbol{w}^{h},\boldsymbol{w}^{h})_{V}+j(\boldsymbol{w}^{h})+((\mathcal{S} \boldsymbol{u}^{hk})_{i-1}-\boldsymbol{f}_{i}\,,\boldsymbol{w}^{h})_{V}\]
for all \(\boldsymbol{w}^{h}\in V^{h}\), which is a convex functional. We are now in a position to find a sequence of minimizers of functionals \(\mathcal{L}_{i}\), i.e., solve the following optimization problem.
**Problem \(\mathcal{P}^{Oh}_{1}\)**.: _Find \(\boldsymbol{u}^{hk}=\{\boldsymbol{u}^{hk}_{i}\}_{i=1}^{N}\subset V^{h}\) such that_
\[0\in \partial\mathcal{L}_{i}(\boldsymbol{u}^{hk}_{i})\qquad\forall\ i=1,\ldots,N.\]
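To fix ideas, we include a minimal Python sketch of this time-stepping procedure. It is not the authors' implementation: the matrices and functions below (`A_mat`, `K`, `loads`, `Jfun`) are hypothetical placeholders standing for the outcome of a finite element assembly step, and a generic derivative-free solver stands in for the optimization method of [12]. The memory term is accumulated recursively as in (6.4), which is possible here because the relaxation tensor (6.3) is constant in time.

```python
import numpy as np
from scipy.optimize import minimize

def solve_quasistatic(A_mat, K, loads, Jfun, k, b):
    """Sketch of Problem P_1^{Oh}: one convex minimization per time step.

    A_mat : matrix realizing (A w, v)_V on the nodal degrees of freedom
    K     : matrix realizing the V inner product (w, v)_V
    loads : list of assembled load vectors, loads[i] ~ f_i
    Jfun  : convex function w -> j(w) on the nodal degrees of freedom
    k, b  : time step and relaxation parameter
    """
    n = A_mat.shape[0]
    u = np.zeros(n)          # previous displacement, used as a warm start
    s = np.zeros(n)          # vector with s @ v = ((S u)_{i-1}, v)_V
    history = []
    for f_i in loads:
        Q = A_mat + k * b * K            # quadratic part of L_i
        lin = s - f_i                    # linear part of L_i

        def L_i(w):
            # L_i(w) = 1/2 (A w + k b w, w)_V + j(w) + ((S u)_{i-1} - f_i, w)_V
            return 0.5 * w @ (Q @ w) + Jfun(w) + lin @ w

        u = minimize(L_i, u, method="Powell").x   # generic stand-in solver
        s = s + k * b * (K @ u)          # recursive memory update, cf. (6.4)
        history.append(u.copy())
    return history
```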
It can be shown that the operators appearing in inequality (6.1) satisfy the assumptions considered in [8] and, furthermore, the optimization Problem \(\mathcal{P}^{Oh}_{1}\) is equivalent to Problem \(\mathcal{P}^{Vh}_{1}\). For a deeper insight into the theory related to the optimization approach and the appropriate potential energy functional we mention the book [28].
Moreover, a more detailed analysis of the selected approach compared to other widely used methods can be found in [19]. To solve Problem \(\mathcal{P}_{1}^{Oh}\) we use our original software _Connech_, a user-friendly tool written entirely in Python for conducting contact simulations and analyzing results. To enhance the performance of native Python, we utilized the just-in-time compiler _Numba_ [13]. The package provides comprehensive support for simulations, ranging from easy definition of the shape and material properties of the body to generating computational meshes and performing empirical error analysis. It supports static, quasistatic and dynamic simulations, in two or three dimensions. The goal of the package is to make it easy to extend existing models with additional physical effects, which is achieved thanks to the modularity of the software. The package is open-source and provided under the GPL-3.0 license.
Our numerical results are presented in Figures 2-4 and are described in what follows.
First, in the upper row of Figure 2 we plot the graph of the function \(f_{2y}(t)=10\,\sin t\) scaled by a factor 0.2. In the lower row we plot the evolution in time of the minimum of the normal displacement on the potential contact surface \(\Gamma_{3}\). We consider two cases: the case when the body has a purely elastic behavior (i.e., the relaxation coefficient \(b\) vanishes, see the left column of the figure) and the case when the body has a viscoelastic behavior, i.e., the relaxation is taken into account (\(b=10^{4}\,\mathrm{Nm}^{-2}\mathrm{s}^{-1}\), see the right column of the figure). This figure shows that, as expected, the relaxation reduces the sensitivity of the body to changes in applied forces and constrains it to return to its reference configuration faster. This is particularly visible at the time moment marked by the dashed line, which represents the moment when the contact between the body and the foundation arises along the entire boundary \(\Gamma_{3}\). We denote this time moment by \(t_{c}\). In the elastic case
Figure 2: Time evolution of tractions (top row) and normal displacements (bottom row), in the case without relaxation (right column) and with relaxation (left column).
Figure 3: Deformed configuration of the body and stress vectors at \(t=1.5\,\)s and \(t=2.75\,\)s.
Figure 4: Deformed configuration of the body and stress vectors at \(t=4\,\)s and \(t=5\,\)s. |
2309.03439 | Personalized Tucker Decomposition: Modeling Commonality and Peculiarity
on Tensor Data | We propose personalized Tucker decomposition (perTucker) to address the
limitations of traditional tensor decomposition methods in capturing
heterogeneity across different datasets. perTucker decomposes tensor data into
shared global components and personalized local components. We introduce a mode
orthogonality assumption and develop a proximal gradient regularized block
coordinate descent algorithm that is guaranteed to converge to a stationary
point. By learning unique and common representations across datasets, we
demonstrate perTucker's effectiveness in anomaly detection, client
classification, and clustering through a simulation study and two case studies
on solar flare detection and tonnage signal classification. | Jiuyun Hu, Naichen Shi, Raed Al Kontar, Hao Yan | 2023-09-07T01:43:47Z | http://arxiv.org/abs/2309.03439v1 | # Personalized Tucker Decomposition: Modeling Commonality and Peculiarity on Tensor Data
###### Abstract
We propose personalized Tucker decomposition (perTucker) to address the limitations of traditional tensor decomposition methods in capturing heterogeneity across different datasets. perTucker decomposes tensor data into shared global components and personalized local components. We introduce a mode orthogonality assumption and develop a proximal gradient regularized block coordinate descent algorithm that is guaranteed to converge to a stationary point. By learning unique and common representations across datasets, we demonstrate perTucker's effectiveness in anomaly detection, client classification, and clustering through a simulation study and two case studies on solar flare detection and tonnage signal classification.
_Keywords:_ Tucker decomposition, Personalization, Heterogeneous data
## 1 Introduction
In recent years, tensor decomposition methods have grown rapidly, providing the ability to analyze and utilize high-dimensional data structures, which are essentially multi-dimensional arrays or matrices. These decompositions are powerful mathematical tools that facilitate the extraction of latent features and patterns from complex data, enabling efficient data representation, dimensionality reduction, compression, completion, noise removal, and prediction. Indeed, tensor decomposition has seen immense success across a wide variety of applications that include: natural image and video processing (Gatto et al., 2021; Momeni and Ebrahimkhanlou, 2022), health care systems (Ren et al., 2022; Sandhu et al., 2018), point cloud data (Du et al., 2022; Yan et al., 2019) and manufacturing (Yan et al., 2014; Zhen et al., 2023), amongst many others.
Among the widely used techniques in this area, Tucker decomposition stands out as a prominent approach that has been successfully tested and deployed in various settings and applications (Kolda and Bader, 2009; Li et al., 2020; Zubair and Wang, 2013). The Tucker approach generalizes singular value decomposition (SVD) to higher-order tensors, providing a core tensor and a set of factor matrices that capture the interactions between dimensions (Tucker, 1966). The factor matrices represent the underlying patterns and structures in the data, while the core tensor captures the interaction between these patterns. By analyzing factor matrices and the core tensor, one can identify and extract meaningful features that can be used for further analysis, such as anomaly detection (Yan et al., 2014) and process optimization (Yan et al., 2019).
Despite the efficacy of tensor decomposition methods, they assume that complex, heterogeneous data can be adequately represented by a single set of global factor matrices and a core tensor. This assumption may oversimplify the intrinsic disparities that exist when the datasets come from different sources, clients, or modalities, potentially compromising the accuracy of the resulting representations. In practice, nowadays, it is common to collect data across various edge devices, such as sensors and phones, which exhibit unique local data patterns due to various local conditions, system status, or data collection methodologies (Kontar et al., 2021).
Using a universal tensor decomposition strategy may not accurately capture these distinct data patterns, leading to suboptimal representations. An alternative strategy involves fitting a local tensor decomposition for the data source. However, this does not utilize the rich data available across sources and may excessively overfit the peculiarities of each dataset while neglecting the shared patterns or commonalities among the datasets. More importantly, both strategies overlook the opportunity to model heterogeneity across data sources and exploit this for improved downstream analysis, be it in prediction, clustering,
classification, or anomaly detection.
For example, in the context of one of our case studies on tonnage signal monitoring processes, numerous sensors are employed to monitor the tonnage force at various locations within a production system. The data collected at each location features common patterns of normal signal variations and heterogeneous failure-related patterns. Here, the heterogeneous nature of the data and the presence of diverse failure patterns pose significant challenges for traditional tensor decomposition methods. Therefore, it is essential to develop a personalized tensor decomposition that can effectively capture and represent commonality and peculiarity across the data collected from each location. Consequently, these methods could reveal previously hidden patterns and relationships, allowing more effective data analysis, decision-making, and system optimization.
Inspired by a recent personalization technique for vector datasets coined as personalized principal component analysis (PCA) (Shi and Kontar, 2022), we propose the personalized Tucker decomposition (perTucker) to decompose tensor data collected from different sources into shared global components and personalized components to capture the heterogeneity of the data. Global components model the common patterns shared across the datasets, while local components model the unique features of a specific dataset. At the heart of our approach is (i) a mode orthogonality constraint to distinguish global and local features and (ii) a proximal gradient-regularized block coordinate descent algorithm that operates within feasible regions of the constraint and can provably recover stationary solutions.
Using two case studies and simulated data, we highlight the ability of perTucker to benefit (i) anomaly detection as one can monitor changes in the local features to better (and faster) detect anomalies in data collected over time and (ii) classification & clustering as operating on local features may yield better statistical power than the raw data since differences are more explicit when global features are removed.
The remainder of the paper is organized as follows. Sec. 2 reviews relevant literature on tensor decomposition methods. Sec. 3 introduces perTucker, proposes an algorithm to estimate model parameters, proves convergence, and sheds light on potential applications of perTucker. Sec. 4 uses simulated data to highlight the advantageous properties of our model. Two case studies on solar flare detection and tonnage signal classification are then presented in Sec. 5. Finally, Sec. 6 concludes the paper with a discussion about open problems.
We note that hereon we will use data source, client and modality interchangeably to index the origin from which each dataset was created.
## 2 Literature Review
Various tensor decomposition methods have been proposed in the literature. Among them, Tucker (Tucker, 1966) and CP decompositions (Hitchcock, 1927) have received the most attention. They have been applied to both supervised and unsupervised learning tasks.
For unsupervised tasks, great emphasis was placed on anomaly detection and clustering. In anomaly detection, starting from the work of Nomikos and MacGregor (1994), tensor-based detection has grown dramatically in the literature. Some examples include Li et al. (2011), where a robust tensor subspace learning algorithm is used for online anomaly detection, and Yan et al. (2014), which studied the relationship between Tucker decomposition, CP decomposition, multilinear principal component analysis, and tensor rank one decomposition and proposed monitoring statistics for each method. Interested readers are referred to Fanaee-T and Gama (2016) for an overview of existing tensor-based methods for anomaly detection.
In clustering, various methods have been proposed to improve the accuracy and efficiency of clustering algorithms. These methods include tensor-based subspace clustering (Fu et al., 2016), multi-view clustering (Li et al., 2023; Zhang et al., 2023), and multi-mode clustering (He and Atia, 2022).
Within these areas, Sun and Li (2019) developed a dynamic tensor clustering method based on CP tensor decomposition, which does not limit the tensor order and can be learned efficiently. Wu et al. (2016) proposed tensor spectral co-clustering based on a stochastic Markov process. This method works for general tensors of any order and can simultaneously cluster all dimensions. Zhou et al. (2019) proposed a tensor low-rank reconstruction technique (TLRR). The reconstruction consists of a low-rank dictionary recovery component and sparse noise. The dictionary is then used to obtain a similarity matrix to cluster the data.
For supervised tasks, such as regression and classification, various tensor-based classification methods have been developed, including logistic tensor regression (Tan et al., 2013), support tensor machine (Hao et al., 2013), and tensor Fisher discriminant analysis (Yan et al., 2005). Furthermore, different forms of tensor regression have been proposed, depending on the dimensionality of the input and output variables. These include scalar-to-tensor regression (Zhou et al., 2013), tensor-to-scalar regression (Yan et al., 2019), and tensor-to-tensor regression (Gahrooei et al., 2021).
Here it is worth noting that a large body of literature has focused on separating noise from a low-rank background, starting from pioneering work on robust PCA (Candes et al., 2011). Subsequently, this separation has been extended to anomaly detection by decomposing the data into three parts: background, anomaly, and noise. For example, Yan et al.
(2017) proposed a smooth sparse decomposition (SSD) by decomposing large-scale image data into a smooth background, a sparse anomaly, and noise and outperformed many other image-based detectors. Similar approaches can be found in crime monitoring (Zhao et al., 2022c), public health surveillance (Dulal et al., 2022; Zhao et al., 2020), and transfer learning applications (Li et al., 2022). Unfortunately, such methods suffer from an inability to learn the data representation as they assume that the basis functions or data representation are known, which limits their ability to handle complex datasets. To mitigate this, recent methods have been proposed to learn the representation of background components, including Bayesian methods (Guo et al., 2022) and deep neural networks (Zhao et al., 2022b). Still, such approaches cannot simultaneously learn the basis functions for the background and anomaly components of the dataset.
Given the above literature, to the best of our knowledge, to date, there are no tensor decomposition methods capable of learning shared and common representations across different datasets. The closest work along this line is personalized PCA perPCA. perPCA introduces a novel technique to perform PCA on data originating from disparate sources or modalities that exhibit heterogeneous trends but also possess common characteristics (Shi and Kontar, 2022). perPCA uses orthogonal global and local components to capture both shared and unique characteristics of each source. The paper offers a sufficient identifiability condition, theoretical guarantees, and competitive empirical results. Unfortunately, perPCA requires vectorization of datasets and cannot directly handle tensor data. The extension of perPCA to tensor data faces fundamental challenges due to the large degree of freedom and nonclosed-form solutions with tensor decompositions, the difficulty in defining tensor-based orthogonal constraints, and computational challenges involving high-order tensors.
Our work aims to bring personalization to a tensor paradigm and address the challenges imposed to make that possible.
## 3 Model Development
In this section, we first set the notation in Sec. 3.1, followed by the motivation and formulation of perTucker in Sec. 3.2. In Sec. 3.3, we propose an efficient algorithm to learn perTucker. Convergence, practical implementation, and potential applications are, respectively, highlighted in Sec. 3.4, Sec. 3.5, and Sec. 3.6. We note that the proofs of all propositions, lemmas, and theorems are deferred to the Appendix.
### Preliminary
A tensor can be regarded as a data structure with more than 2 dimensions, also known as modes in tensor analysis (see Fig. 1). For example, in images, we use a vector of length 3 to represent the RGB channel of the pixel. Thus, a picture can be represented by a 3-dimensional tensor with dimensions height\(\times\)width\(\times\)RGB. If we have multiple pictures of the same dimension, a dataset can be represented by a 4-dimensional tensor.
**Notation.** Throughout this paper, real numbers are denoted by letters, e.g., \(N\), \(i\); vectors by bold lowercase letters, e.g., \(\mathbf{c}\); matrices by bold uppercase letters, e.g., \(\mathbf{U}\); sets by script letters, e.g., \(\mathcal{K}\); and tensors by bold script letters, e.g., \(\mathbf{\mathcal{X}}\).
**Mode-\(k\) product and tensor unfolding.** We briefly review the notion of a Tucker tensor product. A \(K\)-mode tensor is represented by \(\mathbf{\mathcal{X}}\in\mathbb{R}^{I_{1}\times\cdots\times I_{K}}\), where \(I_{k}\) denotes the mode-\(k\) dimension of \(\mathbf{\mathcal{X}}\) for \(k=1,\cdots,K\). We use \(\mathbf{\mathcal{X}}[i_{1},i_{2},\cdots,i_{K}]\) to denote the \((i_{1},i_{2},\cdots,i_{K})\)-th entry of \(\mathbf{\mathcal{X}}\). The mode-\(k\) product of a tensor \(\mathbf{\mathcal{X}}\) with a matrix \(\mathbf{V}\in\mathbb{R}^{J_{k}\times I_{k}}\) produces a tensor defined by \((\mathbf{\mathcal{X}}\times_{k}\mathbf{V})[i_{1},\cdots,i_{k-1},j_{k},i_{k+1},\cdots, i_{K}]=\sum_{i_{k}}\mathbf{\mathcal{X}}[i_{1},\cdots,i_{k},\cdots,i_{K}]\,\mathbf{V}[j_{k},i_{k}]\). For a tensor \(\mathbf{\mathcal{X}}\) and a specific mode \(k\), we use the subscript with parentheses \(\mathbf{\mathcal{X}}_{(k)}\in\mathbb{R}^{I_{k}\times\prod_{q=1,q\neq k}^{K}I_{q}}\) to denote the unfolding of \(\mathbf{\mathcal{X}}\) with respect to dimension \(k\), \(\mathbf{\mathcal{X}}_{(k)}[i_{k},j]=\mathbf{\mathcal{X}}[i_{1},i_{2},\cdots,i_{K}]\) where \(j=1+\sum_{q=1,q\neq k}^{K}(i_{q}-1)J_{q}\) and \(J_{q}=\prod_{m=1,m\neq k}^{q-1}I_{m}\). The columns of the mode-\(k\) unfolding \(\mathbf{\mathcal{X}}_{(k)}\) are the mode-\(k\) vectors of \(\mathbf{\mathcal{X}}\).
**Tucker decomposition.** Tucker decomposition (Tucker, 1966) decomposes a tensor into a core tensor multiplied by a factor matrix along each mode, \(\mathbf{\mathcal{X}}\approx\mathbf{\mathcal{C}}\times_{1}\mathbf{U}_{1}\times_{2}\mathbf{U}_{2}\cdots\times_{K}\mathbf{U}_{K}\), where \(\mathbf{U}_{k}\) is an \(I_{k}\times J_{k}\) factor matrix with orthonormal columns, typically with \(J_{k}<I_{k}\). \(\mathbf{U}_{k}\) can be regarded as a principal component in mode-\(k\).
Tucker decomposition has an equivalent formulation in terms of the unfolded tensor, that is, \(\mathbf{\mathcal{X}}_{(k)}=\mathbf{U}_{k}\mathbf{\mathcal{C}}_{(k)}(\mathbf{U}_{K}\bigotimes\cdots\bigotimes\mathbf{U}_{k+1}\bigotimes\mathbf{U}_{k-1}\cdots\bigotimes\mathbf{U}_{1})^{\top}\). Here, \(\bigotimes\) is the Kronecker product.
Figure 1: Example of Vector, Matrix and Tensor Data
**Tensor inner product.** The inner product of two tensors of the same shape, \(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}}\in\mathbb{R}^{I_{1}\times\cdots\times I_{K}}\), is defined as
\[\langle\mathbf{\mathcal{A}},\mathbf{\mathcal{B}}\rangle=\sum_{i_{1},\cdots,i_{K}}\mathbf{\mathcal{A}}[i_{1},\cdots,i_{k},\cdots,i_{K}]\,\mathbf{\mathcal{B}}[i_{1},\cdots,i_{k},\cdots,i_{K}].\]
Then the Frobenius norm of a tensor \(\mathbf{\mathcal{A}}\) can be defined as \(\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}=\langle\mathbf{\mathcal{A}},\mathbf{\mathcal{ A}}\rangle\), which is the sum of squares of all elements.
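The operations above are easy to reproduce with a general-purpose array library. The short sketch below uses plain NumPy (our choice of library, not something prescribed by the paper) to illustrate mode-\(k\) unfolding, the mode-\(k\) product, and the tensor inner product; the column ordering of the unfolding differs from the exact index map in the text, but it is consistent with the companion `fold`.

```python
import numpy as np

def unfold(X, k):
    """Mode-k unfolding: move mode k to the front and flatten the remaining modes."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def fold(M, k, shape):
    """Inverse of unfold: rebuild a tensor of the given shape from a mode-k unfolding."""
    full = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape(full), 0, k)

def mode_k_product(X, V, k):
    """Mode-k product X x_k V for a matrix V whose number of columns equals X.shape[k]."""
    shape = list(X.shape)
    shape[k] = V.shape[0]
    return fold(V @ unfold(X, k), k, tuple(shape))

def inner(A, B):
    """Tensor inner product <A, B>: sum of elementwise products."""
    return float(np.sum(A * B))

X = np.random.randn(4, 5, 6)
V = np.random.randn(3, 6)                 # contracts the third mode (length 6) down to length 3
Y = mode_k_product(X, V, 2)               # resulting shape is (4, 5, 3)
fro_norm = np.sqrt(inner(X, X))           # Frobenius norm of X
```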
### Motivation & formulation
Suppose we have tensor data from \(N\) sources. Each source has tensor data of order \(K\). We use \(\mathbf{\mathcal{Y}}_{n}\) to denote the data from source \(n\), and assume that \(\mathbf{\mathcal{Y}}_{n}\) has dimensions \(I_{1}\times I_{2}\times\ldots\times I_{K}\times s_{n}\). Here, all dimensions across sources have the same length except for the last one. In particular, in practical applications, the last dimension \(s_{n}\) denotes the number of samples from the source \(n\), which often differs between sources.
Our approach relies on defining global and local components to model commonality and heterogeneity across different sources. To do so, we let the global components consist of shared global factor matrices \(\mathbf{U}_{G,1},\ldots,\mathbf{U}_{G,K}\) and individual global core tensors \(\mathbf{\mathcal{C}}_{G,1},\ldots,\mathbf{\mathcal{C}}_{G,N}\) for each source. The local components consist of individual core tensors \(\mathbf{\mathcal{C}}_{L,1},\ldots,\mathbf{\mathcal{C}}_{L,N}\) and individual local factor matrices \(\mathbf{V}_{n,1},\ldots,\mathbf{V}_{n,K}\). As such, the reconstructions of the global and local components for source \(n\) are \(\mathbf{\mathcal{C}}_{G,n}\times_{1}\mathbf{U}_{G,1}\ldots\times_{K}\mathbf{U}_{G,K}\) and \(\mathbf{\mathcal{C}}_{L,n}\times_{1}\mathbf{V}_{n,1}\ldots\times_{K}\mathbf{V}_{n,K}\), respectively.
Based on the above definitions, we assume our data-generating process to be
\[\mathbf{\mathcal{Y}}_{n}=\underbrace{\mathbf{\mathcal{C}}_{G,n}\times_{1}\mathbf{U}_{G,1 }\ldots\times_{K}\mathbf{U}_{G,K}}_{\text{global}}+\underbrace{\mathbf{\mathcal{C}}_{ L,n}\times_{1}\mathbf{V}_{n,1}\ldots\times_{K}\mathbf{V}_{n,K}}_{\text{local}\,i}+\mathbf{ \mathcal{E}}_{n}, \tag{1}\]
where \(\mathbf{\mathcal{E}}_{n}\) are tensors that represent additive noise.
Since global and local components should convey different information, they need to be distinguished so that each part can vary independently of each other. To do so, we require the orthogonality of the global and local tensors. Specifically, we assume that:
\[\langle\mathbf{\mathcal{Y}}_{G,n},\mathbf{\mathcal{Y}}_{L,n}\rangle=0,\quad\forall \mathbf{\mathcal{C}}_{G,n},\mathbf{\mathcal{C}}_{L,n},\]
where \(\mathbf{\mathcal{Y}}_{G,n}=\mathbf{\mathcal{C}}_{G,n}\times_{1}\mathbf{U}_{G,1}\ldots \times_{K}\mathbf{U}_{G,K}\) and \(\mathbf{\mathcal{Y}}_{L,n}=\mathbf{\mathcal{C}}_{L,n}\times_{1}\mathbf{V}_{n,1}\ldots \times_{K}\mathbf{V}_{n,K}\). Interestingly, it turns out that this condition is equivalent to having the global and local factor matrices orthogonal in at least one dimension, as stated in Proposition 1.
**Proposition 1**.: _For each \(n=1,\ldots,N\), the following two conditions are equivalent._
* \(\langle\mathbf{\mathcal{Y}}_{G,n},\mathbf{\mathcal{Y}}_{L,n}\rangle=0,\quad\forall\mathbf{ \mathcal{C}}_{G,n},\mathbf{\mathcal{C}}_{L,n}\)_._
* _There exists a mode_ \(k\in\{1,\ldots,K\}\)_, where_ \(\mathbf{U}_{G,k}^{\top}\mathbf{V}_{n,k}=0\)_._
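A quick numerical check of Proposition 1 takes only a few lines. In the sketch below (NumPy; the dimensions, ranks, and variable names are all illustrative, and a single sample with no sample mode is used), the local factors are forced into the orthogonal complement of the global factors in one mode, and the inner product of the two reconstructions is verified to be numerically zero regardless of the core tensors.

```python
import numpy as np

rng = np.random.default_rng(0)
dims, g, l = (10, 12, 8), (3, 4, 2), (2, 3, 2)     # tensor dims, global ranks, local ranks

# Global factors with orthonormal columns in every mode.
U = [np.linalg.qr(rng.standard_normal((dims[k], g[k])))[0] for k in range(3)]

# Local factors: mode 0 is projected onto the orthogonal complement of U[0].
V = []
for k in range(3):
    A = rng.standard_normal((dims[k], l[k]))
    if k == 0:
        A -= U[0] @ (U[0].T @ A)                   # enforce U_{G,k}^T V_{n,k} = 0 in one mode
    V.append(np.linalg.qr(A)[0])

C_G = rng.standard_normal(g)                       # arbitrary global core
C_L = rng.standard_normal(l)                       # arbitrary local core

Y_G = np.einsum('abc,ia,jb,kc->ijk', C_G, U[0], U[1], U[2])
Y_L = np.einsum('abc,ia,jb,kc->ijk', C_L, V[0], V[1], V[2])

print(np.sum(Y_G * Y_L))                           # ~0 up to floating-point error
```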
Given Proposition 1, we require local factor matrices to be orthogonal to global factor matrices for all sources in at least one mode. We define the set of such orthogonal modes by \(\mathcal{K}\), \(|\mathcal{K}|\geq 1\). Then our objective is to minimize the reconstruction loss of the data across all \(N\) sources. This is written as
\[\min_{\{\mathbf{\mathcal{C}}_{G,n}\},\{\mathbf{U}_{G,k}\},\{\mathbf{\mathcal{ C}}_{L,n}\},\{\mathbf{V}_{n,k}\}} \sum_{n=1}^{N}\|\mathbf{\mathcal{Y}}_{n}-\mathbf{\mathcal{C}}_{G,n}\times_ {1}\mathbf{U}_{G,1}\ldots\times_{K}\mathbf{U}_{G,K}-\mathbf{\mathcal{C}}_{L,n}\times_{1} \mathbf{V}_{n,1}\ldots\times_{K}\mathbf{V}_{n,K}\|_{F}^{2} \tag{2}\] \[s.t.\ \mathbf{U}_{G,k}^{\top}\mathbf{U}_{G,k}=I,\mathbf{V}_{n,k}^{\top}\mathbf{V}_{ n,k}=I,n=1,\ldots,N,k=1,\ldots,K\] \[\mathbf{U}_{G,k}^{\top}\mathbf{V}_{n,k}=0,n=1,\ldots,N,k\in\mathcal{K}.\]
We assume that the global core tensor of source \(n\) has dimension \(\mathbf{\mathcal{C}}_{G,n}\in\mathbb{R}^{g_{1}\times\cdots\times g_{K}\times s_{n}}\), with the ranks \(g_{1},\ldots,g_{K}\) shared by all sources, and that the local core tensor of source \(n\) has dimension \(\mathbf{\mathcal{C}}_{L,n}\in\mathbb{R}^{l_{n,1}\times\cdots\times l_{n,K}\times s_{n}}\); the sample mode is not compressed. This also defines the dimensions of the global and local factor matrices.
### Personalized Tucker algorithm
A natural algorithm to solve the objective in (2) is block coordinate descent (BCD), where we iteratively optimize each variable. A general framework for BCD in our context is outlined in Algorithm 1.
```
Data: \(\mathbf{\mathcal{Y}}_{n}\), \(n=1,\ldots,N\)
Output: \(\{\mathbf{\mathcal{C}}_{G,n}\},\{\mathbf{U}_{G,k}\},\{\mathbf{\mathcal{C}}_{L,n}\},\{\mathbf{V}_{n,k}\}\)
Initialization: \(\{\mathbf{\mathcal{C}}_{G,n}\},\{\mathbf{U}_{G,k}\},\{\mathbf{\mathcal{C}}_{L,n}\},\{\mathbf{V}_{n,k}\}\)
for each iteration do
    for \(k=1,\ldots,K\) do
        Update the global factor matrix \(\mathbf{U}_{G,k}\)
        for \(n=1,\ldots,N\) do
            Update the global core tensor \(\mathbf{\mathcal{C}}_{G,n}\)
            Update the local factor matrix \(\mathbf{V}_{n,k}\)
            Update the local core tensor \(\mathbf{\mathcal{C}}_{L,n}\)
        end for
    end for
end for
Return: \(\{\mathbf{\mathcal{C}}_{G,n}\},\{\mathbf{U}_{G,k}\},\{\mathbf{\mathcal{C}}_{L,n}\},\{\mathbf{V}_{n,k}\}\)
```
**Algorithm 1** Pseudo code of the algorithm
In the rest of Sec. 3.3, we explain the update steps in Algorithm 1 in detail. We start with the update of global and local core tensors in Sec. 3.3.1 since the closed-form solution is a direct projection similar to the traditional Tucker decomposition owing to the
orthogonality between global and local components. This optimal solution is then assumed to hold throughout the updates of the global and local factor matrices introduced in Sec. 3.3.2, which simplifies those updates by removing the core tensors from the corresponding subproblems. Despite the challenges that pertain to the two distinct decomposition components within (2) and their orthogonality, _a key result is that all updates can be done in closed form owing to the nice properties of the orthogonality constraint imposed on the model_.
#### 3.3.1 Update global and local core tensors
In this section, Proposition 2 provides the closed-form solution to update global and local core tensors, given the global and local factor matrices. The closed-form solution is the direct projection of the data to the global or local factor matrices. When updating the global and local factor matrices, we assume that the global and local core tensors are always the optimal solution. This simplifies the formula to update the global and local factor matrices by removing the core tensors from the optimization problem.
**Proposition 2**.: _(Closed-form solutions to the core tensor) If \(|\mathcal{K}|\geq 1\), when the global factor matrices \(\{\mathbf{U}_{G,k}\}\) and the local factor matrices \(\{\mathbf{V}_{n,k}\}\) are given, the global core tensors \(\mathbf{\mathcal{C}}^{\star}_{G,n}\) that minimize (2) satisfy_
\[\mathbf{\mathcal{C}}^{\star}_{G,n}=\mathbf{\mathcal{Y}}_{n}\times_{1}\mathbf{U}^{\top}_{G,1}\ldots\times_{K}\mathbf{U}^{\top}_{G,K},\]
_and the local core tensors \(\mathbf{\mathcal{C}}^{\star}_{L,n}\) that minimize (2) satisfy_
\[\mathbf{\mathcal{C}}^{\star}_{L,n}=\mathbf{\mathcal{Y}}_{n}\times_{1}\mathbf{V}^{\top}_{n,1}\ldots\times_{K}\mathbf{V}^{\top}_{n,K}.\]
The closed-form solutions presented in Proposition 2 take advantage of the orthogonality between the two components. As a result, the cross-terms cancel, making the computation of the core tensors for both components efficient and straightforward. In the following sections, \(\mathbf{\mathcal{C}}^{\star}_{G,n}\) and \(\mathbf{\mathcal{C}}^{\star}_{L,n}\) are used to denote the optimized global and local core tensors.
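In code, the projections of Proposition 2 amount to one `tensordot` per mode. A minimal sketch (NumPy; the factor lists and the data tensor `Y_n` are assumed to come from the surrounding algorithm, and any unfactored sample mode is simply left untouched):

```python
import numpy as np

def core_projection(Y, factors):
    """Optimal core of Proposition 2: C = Y x_1 F_1^T ... x_K F_K^T for orthonormal factors."""
    C = Y
    for k, F in enumerate(factors):
        C = np.tensordot(C, F, axes=(k, 0))   # contract mode k of C with the rows of F
        C = np.moveaxis(C, -1, k)             # tensordot appends the new mode; move it back to k
    return C

# C_G_n = core_projection(Y_n, [U_G_1, ..., U_G_K])   # global core of source n
# C_L_n = core_projection(Y_n, [V_n_1, ..., V_n_K])   # local core of source n
```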
#### 3.3.2 Update global and local factor matrices
In this section, we will discuss the closed-form solutions to update the global and local factor matrices. For the simplicity of notation, we define the global residual tensor and local residual tensor from each source \(n=1\ldots,N\) as:
\[\mathbf{\mathcal{R}}_{G,n}=\mathbf{\mathcal{Y}}_{n}-\mathbf{\mathcal{C}}^{\star}_{L,n} \times_{1}\mathbf{V}_{n,1}\ldots\times_{K}\mathbf{V}_{n,K},\]
\[\mathbf{\mathcal{R}}_{L,n}=\mathbf{\mathcal{Y}}_{n}-\mathbf{\mathcal{C}}^{\star}_{G,n} \times_{1}\mathbf{U}_{G,1}\ldots\times_{K}\mathbf{U}_{G,K}.\]
The global residual is the reconstruction error from local components and the local residual is the reconstruction error from global components. Therefore, global reconstruction tends to model the global residual, and local reconstruction tends to model the local residual.
**Proximal update.** In practice, when updating the global and local factor matrices, we can incorporate a proximal term into the optimization problem to regulate the update of the factor matrices (Shen et al., 2022). More specifically, we define \(\varrho\) as the difference between the subspaces spanned by the current factor matrix \(\mathbf{U}_{t}\) and by the target factor matrix \(\mathbf{U}\) to be optimized,
\[\varrho(\mathbf{U},\mathbf{U}_{t})=\|\mathbf{U}\mathbf{U}^{\top}-\mathbf{U}_{t}\mathbf{U}_{t}^{\top} \|_{F}^{2}. \tag{3}\]
The proximal penalty term is defined as this subspace difference multiplied by a parameter \(\rho\). The proximal gradient algorithm stabilizes the update of the factor matrices by regularizing the change of subspace. Since the global and local components are chosen to minimize the reconstruction error, we can write the optimization problem that solves for the global factor matrix in mode \(k\) at iteration \(t\) as
\[\mathbf{U}_{G,k,t+1}=\arg\min_{\mathbf{U}_{G,k}}\sum_{n=1}^{N}\left\|\mathbf{\mathcal{R}}_ {G,n}-\mathbf{\mathcal{C}}_{G,n}^{\star}\times_{1}\mathbf{U}_{G,1}\ldots\times_{K}\bm {U}_{G,K}\right\|_{F}^{2}+\rho\|\mathbf{U}_{G,k}\mathbf{U}_{G,k}^{\top}-\mathbf{U}_{G,k,t} \mathbf{U}_{G,k,t}^{\top}\|_{F}^{2}, \tag{4}\]
and the optimization problem to solve for the local factor matrix of source \(n\) is that when \(k\in\mathcal{K}\),
\[\mathbf{V}_{n,k,t+1}=\arg\min_{\mathbf{V}_{n,k}\perp\mathbf{U}_{G,k}}\left\|\mathbf{\mathcal{R }}_{L,n}-\mathbf{\mathcal{C}}_{L,n}^{\star}\times_{1}\mathbf{V}_{n,1}\ldots\times_{K} \mathbf{V}_{n,K}\right\|_{F}^{2}+\rho\left\|\mathbf{V}_{n,k}\mathbf{V}_{n,k}^{\top}-\mathbf{V} _{n,k,t}\mathbf{V}_{n,k,t}^{\top}\right\|_{F}^{2}, \tag{5}\]
and when \(k\not\in\mathcal{K}\),
\[\mathbf{V}_{n,k,t+1}=\arg\min_{\mathbf{V}_{n,k}}\left\|\mathbf{\mathcal{R}}_{L,n}-\mathbf{ \mathcal{C}}_{L,n}^{\star}\times_{1}\mathbf{V}_{n,1}\ldots\times_{K}\mathbf{V}_{n,K} \right\|_{F}^{2}+\rho\left\|\mathbf{V}_{n,k}\mathbf{V}_{n,k}^{\top}-\mathbf{V}_{n,k,t}\bm {V}_{n,k,t}^{\top}\right\|_{F}^{2}, \tag{6}\]
where \(\mathbf{U}_{G,k,t}\) and \(\mathbf{V}_{n,k,t}\) represent the global and local factor matrices for source \(n\), mode \(k\), and iteration \(t\); \(\mathbf{U}_{G,k,t+1}\) and \(\mathbf{V}_{n,k,t+1}\) are the corresponding updated global and local factor matrices. The objectives of (5) and (6) are the same. They consist of a Frobenius norm of the fitting error and a regularization on the change of subspace. The difference is that in (5), we explicitly require \(\mathbf{V}_{n,k,t+1}\) to be orthogonal to \(\mathbf{U}_{G,k,t+1}\), while in (6) we do not add constraints on \(\mathbf{V}_{n,k,t+1}\).
Though the optimization problems (4) to (6) seem complicated, it turns out we can obtain closed-form solutions. To achieve this, we first transform the minimization problem into a maximization problem and remove the core tensors in the optimization by Lemma
3.1 and Lemma 3.2.
**Lemma 3.1**.: _For any orthonormal factor matrices \(\mathbf{U}\) and \(\mathbf{U}_{t}\), the subspace error between \(\mathbf{U}\) and \(\mathbf{U}_{t}\) defined in (3) can be formulated as,_
\[\varrho(\mathbf{U},\mathbf{U}_{t})=2c-2Tr\left[\mathbf{U}^{\top}\mathbf{U}_{t}\mathbf{U}_{t}^{\top} \mathbf{U}\right], \tag{7}\]
_where \(c\) is the number of columns in \(\mathbf{U}\)._
Lemma 3.1 shows that the subspace error is differentiable with respect to \(\mathbf{U}\). This property is useful when we design the update rules for the BCD algorithm and the evaluation metrics. Furthermore, Lemma 3.1 places a negative sign in front of the matrix trace term, which transforms the minimization problem into a maximization problem.
Before deriving the solutions to (4) to (6), we introduce the following lemma that significantly simplifies our objective.
**Lemma 3.2**.: _For each \(n=1,\ldots,N\) and \(k=1,\ldots,K\), we have,_
\[\sum_{n=1}^{N}\|\mathbf{\mathcal{R}}_{G,n}-\mathbf{\mathcal{C}}_{G,n}^{*} \times_{1}\mathbf{U}_{G,1}\ldots\times_{K}\mathbf{U}_{G,K}\|_{F}^{2}=-\sum_{n=1}^{N} \|\mathbf{\mathcal{R}}_{G,n}\times_{1}\mathbf{U}_{G,1}^{\top}\ldots\times_{K}\mathbf{U}_ {G,K}^{\top}\|_{F}^{2}+\|\mathbf{\mathcal{R}}_{G,n}\|_{F}^{2}\,, \tag{8}\] \[\|\mathbf{\mathcal{R}}_{L,n}-\mathbf{\mathcal{C}}_{L,n}^{*}\times_{1}\bm {V}_{n,1}\ldots\times_{K}\mathbf{V}_{n,K}\|_{F}^{2}=-\|\mathbf{\mathcal{R}}_{L,n} \times_{1}\mathbf{V}_{n,1}^{\top}\ldots\times_{K}\mathbf{V}_{n,K}^{\top}\|_{F}^{2}+\| \mathbf{\mathcal{R}}_{L,n}\|_{F}^{2}\,. \tag{9}\]
Lemma 3.2 plays two fundamental roles in deriving the closed-form solutions for updating the global and local factor matrices. First, it also places a negative sign in front of the term containing the factor matrices, which transforms the minimization problems (4) to (6) into maximization problems. Second, by plugging in the closed-form solutions of the global and local core tensors from Proposition 2, it simplifies the optimization problem by reducing the number of decision variables.
**Update of global factors.** With all the prerequisites in place, we are now ready to present, in Proposition 3, the closed-form solution of sub-problem (4) for updating the global factor matrix \(\mathbf{U}_{G,k}\) in a specific mode \(k\).
**Proposition 3**.: _We use \(\mathbf{W}_{G,n}\) to denote \(\mathbf{W}_{G,n}=\left(\mathbf{\mathcal{R}}_{G,n}\right)_{(k)}(\bigotimes_{q\neq k} \mathbf{U}_{G,q}^{\top})^{\top}\), where \(\bigotimes_{q\neq k}\) is the Kronecker product in reverse order of the factor matrices except \(kth\) factor matrix. If \(\mathbf{U}_{G,k,t+1}\) is the optimal solution to (4), the columns of \(\mathbf{U}_{G,k,t+1}\) are the unit eigenvectors of the matrix \(\sum_{n=1}^{N}\mathbf{W}_{G,n}\mathbf{W}_{G,n}^{\top}+2\rho\mathbf{U}_{G,k,t}\mathbf{U}_{G,k,t} ^{\top}\) corresponding to the largest \(g_{k}\) eigenvalues._
Proposition 3 shows that with proximal regularization, global components can be updated efficiently through singular value decomposition. In practice, we can use the equivalent form of \(\mathbf{W}_{G,n}=(\mathbf{\mathcal{R}}_{G,n}\times_{1}\mathbf{U}_{G,1}\cdots\times_{k-1} \mathbf{U}_{G,k-1}\times_{k+1}\mathbf{U}_{G,k+1}\cdots\times_{K}\mathbf{U}_{G,K})_{(k)}\) to improve
the efficiency of the computation. Note that when updating the global components, we do not impose the orthogonality between the global and local components. The reason is that enforcing orthogonality between the global components and each of the local components is too restrictive and may leave no feasible space for updating when the number of sources is large. Therefore, we update the global components freely and enforce the local components to be orthogonal to the global components.
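A sketch of this global update step, under the same assumptions as the previous snippets (NumPy; `R_G` holds one global residual tensor per source and `U_G` the current global factor matrices; any unfactored sample mode is left alone; all names are illustrative):

```python
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def update_global_factor(R_G, U_G, k, g_k, rho):
    """Proposition 3: top-g_k eigenvectors of sum_n W_n W_n^T + 2*rho*U_{G,k} U_{G,k}^T."""
    S = 2.0 * rho * (U_G[k] @ U_G[k].T)
    for R in R_G:
        W = R
        for q in range(len(U_G)):
            if q == k:
                continue
            # project mode q onto the current global factor, i.e. R x_q U_{G,q}^T
            W = np.moveaxis(np.tensordot(W, U_G[q], axes=(q, 0)), -1, q)
        Wk = unfold(W, k)
        S += Wk @ Wk.T
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, np.argsort(vals)[::-1][:g_k]]   # eigenvectors of the g_k largest eigenvalues
```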
**Update of local factors.** We provide, in Proposition 4, the closed-form solutions to sub-problems (5) and (6) for updating the local factor matrices with or without the orthogonality constraint. The component optimized in Proposition 4 is the local factor matrix \(\mathbf{V}_{n,k}\) for a specific source \(n\) and mode \(k\). We denote the current local factor matrices at iteration \(t\) by \(\mathbf{V}_{n,k,t}\).
**Proposition 4**.: _Problems (5) and (6) have closed-form solutions. We denote \(\mathbf{W}_{L,n}\) as \(\mathbf{W}_{L,n}=\left(\mathbf{\mathcal{R}}_{L,n}\right)_{(k)}(\bigotimes_{q\neq k}\mathbf{V}_{n,q}^{\top})^{\top}\). Then,_
1. _if_ \(k\not\in\mathcal{K}\)_, the updated columns of the local factor matrix_ \(\mathbf{V}_{n,k,t+1}\) _is the unit eigenvectors of_ \(\mathbf{W}_{L,n}\mathbf{W}_{L,n}^{\top}+2\rho\mathbf{V}_{n,k,t}\mathbf{V}_{n,k,t}^{\top}\) _corresponding to top_ \(l_{n,k}\) _eigenvalues._
2. _if_ \(k\in\mathcal{K}\)_, the update of the local factor matrix_ \(\mathbf{V}_{n,k,t+1}\) _is as follows. Denote_ \(\mathbf{S}^{\prime}=(I-\mathbf{U}_{G,k}\mathbf{U}_{G,k}^{\top})[\mathbf{W}_{L,n}\mathbf{W}_{L,n}^{ \top}+2\rho\mathbf{V}_{n,k,t}\mathbf{V}_{n,k,t}^{\top}](I-\mathbf{U}_{G,k}\mathbf{U}_{G,k}^{ \top})\)_. The columns of the local factor matrix_ \(\mathbf{V}_{n,k,t+1}\) _are the eigenvectors of_ \(\mathbf{S}^{\prime}\) _corresponding to top_ \(l_{n,k}\) _eigenvalues._
The proof of Proposition 3 and Proposition 4 is shown in Appendix E and Appendix F. Having completed all the steps required to update each component of the algorithm, we present the complete algorithm in Algorithm 2. In Algorithm 2, we use the subscript \(t\) to denote the current iteration index for the global and local factor matrices. Despite the orthogonality constraint between the local and global components, each update step in Algorithm 2 can be efficiently implemented via a closed-form solution.
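The local update mirrors the global one; the only new ingredient is the projection \((I-\mathbf{U}_{G,k}\mathbf{U}_{G,k}^{\top})\) applied on both sides of the matrix before the eigendecomposition when \(k\in\mathcal{K}\). A hedged sketch of this step (NumPy; variable names illustrative, unfactored sample mode left untouched):

```python
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def update_local_factor(R_L_n, V_n, U_G_k, k, l_nk, rho, orthogonal):
    """Proposition 4: local factor update for one source n and one mode k."""
    W = R_L_n
    for q in range(len(V_n)):
        if q == k:
            continue
        W = np.moveaxis(np.tensordot(W, V_n[q], axes=(q, 0)), -1, q)   # R x_q V_{n,q}^T
    Wk = unfold(W, k)
    S = Wk @ Wk.T + 2.0 * rho * (V_n[k] @ V_n[k].T)
    if orthogonal:                                    # mode k belongs to the set K
        P = np.eye(S.shape[0]) - U_G_k @ U_G_k.T      # projector onto the complement of U_{G,k}
        S = P @ S @ P
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, np.argsort(vals)[::-1][:l_nk]]
```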
### Convergence analysis of Algorithm 2
In this section, we provide the convergence analysis of Algorithm 2. The special update rule in Proposition 4 brings challenges to the convergence analysis. In Algorithm 2, the update of the local factors \(\mathbf{V}_{n,k}\) differs from the standard Tucker decomposition update, as \(\mathbf{V}_{n,k}\) is required to be orthogonal to \(\mathbf{U}_{G,k}\). As a result, updating the local factors does not necessarily decrease the objective value in (2); thus, Algorithm 2 is not a strictly descent algorithm. Despite such subtleties, we can show that, when the proximal parameter \(\rho\) is not too small, our algorithm converges to stationary solutions.
We will present our theorem on global convergence in the following theorem. Recall that we use \(\mathbf{U}_{G,k,t}\) to denote the \(k\)-th global factor \(\mathbf{U}_{G,k}\) after iteration \(t\), and \(\mathbf{V}_{n,k,t}\) to denote the \(k\)-th local factor of source \(n\) after iteration \(t\).
**Theorem 5**.: _If \(\left|\mathcal{K}\right|\geq 2\) and there exists a constant \(B>0\) such that \(\left\|\mathbf{\mathcal{Y}}_{n}\right\|_{F}\leq B\) for each \(n\), when we choose \(\rho=O(B^{2})\), then Algorithm 2 will converge to stationary points where_
\[\min_{t=1,\cdots,T}\sum_{k=1}^{K}\left\|\mathbf{U}_{G,k,t+1}\mathbf{U}_{G,k,t+1}^{ \top}-\mathbf{U}_{G,k,t}\mathbf{U}_{G,k,t}^{\top}\right\|_{F}^{2}=O\left(\frac{1}{T} \right), \tag{10}\]
_and_
\[\min_{t=1,\cdots,T}\sum_{n=1}^{N}\sum_{k=1}^{K}\left\|\mathbf{V}_{n,k,t+1}\mathbf{V}_{ n,k,t+1}^{\top}-\mathbf{V}_{n,k,t}\mathbf{V}_{n,k,t}^{\top}\right\|_{F}^{2}=O \left(\frac{1}{\sqrt{T}}\right). \tag{11}\]
Theorem 5 provides many key insights. First, it shows that the subspaces spanned by the column vectors of global and local factors all converge into fixed solutions. The result establishes the global convergence of factors, as it does not require careful initialization.
Second, the convergence rates for global and local factors differ. Global factors converge at a rate of \(O(\frac{1}{T})\), which is standard in non-convex optimization. However, since some local factors must be perpendicular to the global factors, they converge at a slightly slower rate of \(O(\frac{1}{\sqrt{T}})\). Third, our result is based on having \(|\mathcal{K}|\geq 2\). This requirement ensures that orthogonality can be maintained while each mode is updated.
To validate the convergence rates, we provide a proof-of-concept simulation study. Fig. 2 displays an example of the convergence of the global and local factor matrices in this simulation. The subspace errors of both the global and local components go to \(0\), with the local components converging more slowly; this slower rate is primarily a result of the orthogonality requirement, and the result verifies Theorem 5. The details of this simulation study are relegated to Appendix H.
### Model initialization
One simple approach is to initialize all components randomly. Alternatively, one may use a Tucker decomposition on all the data for initialization. To do so, let \(s=\sum_{n=1}^{N}s_{n}\) be the total number of samples from all sources and recall that the data \(\boldsymbol{\mathcal{Y}}_{n}\) from source \(n\) has dimension \(I_{1}\times\ldots\times I_{K}\times s_{n}\). Now the following steps can be taken:
1. Construct a tensor \(\boldsymbol{\mathcal{Y}}\) that encompasses all samples from every source, with dimensions \(I_{1}\times\ldots\times I_{K}\times s\).
2. Use Tucker decomposition \(\boldsymbol{\mathcal{Y}}\approx\boldsymbol{\mathcal{C}}\times_{1}\boldsymbol {U}_{G,1}\ldots\times_{K}\boldsymbol{U}_{G,K}\) to initialize global factors.
3. Use Proposition 2 to initialize the global core tensor for each source.
4. For each source \(n\), perform the Tucker decomposition on the local residual tensor \(\mathbf{\mathcal{R}}_{L,n}=\mathbf{\mathcal{Y}}_{n}-\mathbf{\mathcal{C}}_{G,n}^{*}\times_{1}\mathbf{U}_{G,1}\ldots\times_{K}\mathbf{U}_{G,K}\) to initialize the local core tensor and the local factor matrices.
Figure 2: Subspace error for global component and different local sources
This initialization does not require orthogonality between global and local components. Therefore, we may observe an increase in the reconstruction error in the first iteration. But we have found that in practice, this method yields faster convergence.
In addition, in our model, one needs to choose the modes on which orthogonality is imposed. Our theorem requires \(|\mathcal{K}|\geq 2\), although we found the result to also hold for \(|\mathcal{K}|=1\) in various simulation studies. In practice, one can simply impose the orthogonality constraint on the mode with the largest dimension; alternatively, cross-validation can be utilized to select the mode.
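A minimal version of this warm start can be written with a truncated higher-order SVD standing in for the Tucker decompositions in steps 2 and 4 (an implementation choice on our part; any Tucker solver would do). The sketch below (NumPy) assumes each `Y` in `Y_list` is an order-\((K{+}1)\) array whose last mode indexes samples, and uses a single set of local ranks for all sources for brevity:

```python
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def project(Y, factors):
    """Y x_1 F_1^T ... x_K F_K^T over the first K modes (sample mode untouched)."""
    for k, F in enumerate(factors):
        Y = np.moveaxis(np.tensordot(Y, F, axes=(k, 0)), -1, k)
    return Y

def reconstruct(C, factors):
    """C x_1 F_1 ... x_K F_K over the first K modes."""
    for k, F in enumerate(factors):
        C = np.moveaxis(np.tensordot(C, F.T, axes=(k, 0)), -1, k)
    return C

def initialize(Y_list, g_ranks, l_ranks):
    Y_all = np.concatenate(Y_list, axis=-1)                         # step 1: pool all samples
    U_G = [np.linalg.svd(unfold(Y_all, k), full_matrices=False)[0][:, :r]
           for k, r in enumerate(g_ranks)]                          # step 2: HOSVD-style global factors
    C_G = [project(Y, U_G) for Y in Y_list]                         # step 3: global core per source
    V, C_L = [], []
    for Y, C in zip(Y_list, C_G):                                   # step 4: local parts from residuals
        R = Y - reconstruct(C, U_G)
        V_n = [np.linalg.svd(unfold(R, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(l_ranks)]
        V.append(V_n)
        C_L.append(project(R, V_n))
    return U_G, C_G, V, C_L
```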
### Practical usage of perTucker
In this section, we introduce some practical applications of perTucker. Specifically, we shed light on its potential utility for improved classification, anomaly detection, and clustering. The key idea for all applications is to operate only on local components. This may allow for improved clustering, classification, and detection as differences become more explicit when shared knowledge is removed.
#### 3.6.1 Classification via perTucker
To use perTucker for classification, we assume that each source corresponds to a class. Then we perform perTucker and can get the estimated local factor matrices \(\hat{\mathbf{V}}_{n,k},\,k=1,\ldots,K\) for each class. When a new piece of data \(\mathbf{\mathcal{Y}}^{\text{new}}\) is sent, we can use the following decision rule to classify the new data.
\[\hat{n}=\arg\max_{n}\|\mathbf{\mathcal{C}}_{L,n}^{*}\|_{F}^{2}=\arg\max_{n}\|\mathbf{ \mathcal{Y}}^{\text{new}}\times_{1}\hat{\mathbf{V}}_{n,1}^{\top}\ldots\times_{K} \hat{\mathbf{V}}_{n,K}^{\top}\|_{F}^{2}. \tag{12}\]
The decision rule (12) demonstrates that we can efficiently classify the data by selecting the class that maximizes the Frobenius norm of the local core tensor. This is because the largest norm of the core components indicates that the local subspace is most suitable for representing the original data, since the local core tensor is a projection of the original tensor onto the corresponding local subspace. We have found that such a decision rule is equivalent to finding the smallest possible reconstruction error across all classes. The discussion is relegated to Appendix I.
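A short sketch of this decision rule (NumPy; `local_factors_per_class[c]` is the list \([\hat{\mathbf{V}}_{c,1},\ldots,\hat{\mathbf{V}}_{c,K}]\) estimated for class \(c\); names are illustrative):

```python
import numpy as np

def local_energy(Y, factors):
    """||Y x_1 V_1^T ... x_K V_K^T||_F^2: energy captured by one class's local subspace."""
    C = Y
    for k, F in enumerate(factors):
        C = np.moveaxis(np.tensordot(C, F, axes=(k, 0)), -1, k)
    return float(np.sum(C ** 2))

def classify(Y_new, local_factors_per_class):
    """Decision rule (12): pick the class whose local subspace captures the most energy."""
    scores = [local_energy(Y_new, V) for V in local_factors_per_class]
    return int(np.argmax(scores))
```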
We want to emphasize that such a classification approach differs from traditional tensor-based classifiers (Klus and Gelss, 2019), which directly train supervised learning models
for tensor classification. Here, we focus on a generative approach, which first trains \(C\) data generation models (i.e., local subspaces) and then utilizes the representation error to decide to which class the data belong. The algorithm will construct global and local subspaces, which is beneficial not only for classification purposes but also for feature interpretation and visualization.
#### 3.6.2 Anomaly detection via perTucker
By monitoring only local components, perTucker can improve anomaly detection methods as the changes in the underlying data become more explicit when common factors are removed. Specifically, we propose using \(\|\mathbf{\mathcal{C}}_{L}\|_{F}^{2}\) as the key monitoring statistic for online anomaly detection.
Here we emphasize that perTucker does not implement a sparsity penalty as often used in tensor-based anomaly detection (Yan et al., 2018). This is a unique benefit of perTucker as we do not assume that the anomaly patterns are sparse, which is too restrictive in some applications. As a result, perTucker can accommodate a wide range of anomalous pattern distributions.
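For online monitoring, the statistic is simply the local energy of each incoming observation compared against a control limit. A hedged sketch follows (NumPy; the mean-plus-three-sigma limit is a generic heuristic of ours, not a rule prescribed by the paper):

```python
import numpy as np

def local_statistic(Y_t, local_factors):
    """Monitoring statistic ||C_L||_F^2 for one new observation Y_t."""
    C = Y_t
    for k, F in enumerate(local_factors):
        C = np.moveaxis(np.tensordot(C, F, axes=(k, 0)), -1, k)
    return float(np.sum(C ** 2))

def control_limit(in_control_stats, n_sigma=3.0):
    """Heuristic limit estimated from an in-control reference stretch of the statistic."""
    s = np.asarray(in_control_stats)
    return float(s.mean() + n_sigma * s.std())
```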
#### 3.6.3 Clustering via perTucker
perTucker provides an alternative approach for client clustering based on the local factors. Specifically, we focus on the setting of subspace clustering, which aims to group clients that lie within the same local subspaces. The subspace distance between clients \(n_{1}\) and \(n_{2}\), \(\rho_{n_{1},n_{2}}=\|\hat{\mathbf{V}}_{n_{1}}\hat{\mathbf{V}}_{n_{1}}^{\top}-\hat{\mathbf{V}}_{n_{2}}\hat{\mathbf{V}}_{n_{2}}^{\top}\|_{F}^{2}\), can be calculated, where \(\hat{\mathbf{V}}_{n}\) is defined by the Kronecker product of the local factor matrices for client \(n\). Then we can use spectral clustering to cluster the clients and further use multidimensional scaling to produce the clustering plot (Hastie et al., 2009).
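A sketch of the pairwise subspace distance, computed through the identity of Lemma 3.1 so that the (potentially very large) projection matrices never need to be formed; the spectral-clustering call at the end uses scikit-learn and is only indicative:

```python
import numpy as np

def subspace_distance(V1, V2):
    """rho = ||V1 V1^T - V2 V2^T||_F^2 for orthonormal bases with the same number of columns."""
    c = V1.shape[1]
    return 2.0 * c - 2.0 * float(np.sum((V1.T @ V2) ** 2))

def distance_matrix(bases):
    n = len(bases)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = subspace_distance(bases[i], bases[j])
    return D

# With bases[n] the Kronecker product of client n's local factor matrices, the distances
# can feed spectral clustering, e.g. (scikit-learn, Gaussian affinity as one possible choice):
#   D = distance_matrix(bases); A = np.exp(-D / D.mean())
#   labels = sklearn.cluster.SpectralClustering(3, affinity="precomputed").fit_predict(A)
```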
## 4 Numerical Studies
Now that we have introduced perTucker and its potential applications, we validate its claimed advantages through numerical simulations. Sec. 4.1 introduces the data generation procedure. Sec. 4.2, Sec. 4.3, and Sec. 4.4 evaluate the performance of perTucker in terms of data reconstruction, classification, and clustering, respectively.
### Data generation
In this simulation work, each sample of the data is a grayscale image with dimensions 50 by 50. Each sample is constructed as the sum of a low-rank global component, a heterogeneous local component, and i.i.d. standard normal noise, as in Eq. (13)
\[\boldsymbol{\mathcal{Y}}_{n}=\boldsymbol{\mathcal{Y}}_{G,n}+\boldsymbol{ \mathcal{Y}}_{L,n}+\boldsymbol{\mathcal{E}}_{n}. \tag{13}\]
We generate \(N=3\) clients, each defined by one of three patterns for the heterogeneous local component: a Swiss pattern, an oval pattern, and a rectangle pattern, as in Fig. 3. Pixels inside each pattern take the value 5, while the remaining pixels are 0. For each pattern, we generate 10 sample images. There is some variability within each pattern, as shown in the left part of Fig. 3: the Swiss can be thin or thick; the oval can be vertical, horizontal, or circular; the rectangle can be wide, tall, or square.
For the global component, we randomly create orthonormal matrices \(\boldsymbol{U}_{G,1}\) and \(\boldsymbol{U}_{G,2}\) with dimension \(50\times 5\), shared by the 3 clients. Then we randomly generate the global core tensor \(\boldsymbol{\mathcal{C}}_{G,n}\) with dimension \(5\times 5\times 10\) for each client, where each entry of the global core tensors follows i.i.d. \(N(0,100)\). The global components are constructed by \(\boldsymbol{\mathcal{Y}}_{G,n}=\boldsymbol{\mathcal{C}}_{G,n}\times_{1}\boldsymbol{U}_{G,1}\times_{2}\boldsymbol{U}_{G,2}\), \(n=1,2,3\).
Therefore, the full data dimension is \(3\times 50\times 50\times 10\). Some examples of the data generation structure are shown in the right part of Fig. 3. The three rows correspond to the three patterns. The first two columns show the global and local components, respectively, and the third column shows the sum of the global and local components along with the error term. With noise and the global background, the local pattern can barely be recognized, which makes accurate identification of the local patterns challenging.
Figure 3: **Left**: examples of variability in each pattern. **Right**: examples of different components in each pattern.
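For concreteness, a hedged sketch of this generation procedure is given below (NumPy). The exact Swiss/oval/rectangle shapes are not specified in the text, so the pattern generator here is a simple stand-in (each client would use its own shape); the global part follows the \(50\times 5\) factors and \(N(0,100)\) core entries described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, n_samples, size, g = 3, 10, 50, 5

# Shared orthonormal global factors and client-specific global cores (entries ~ N(0, 100)).
U1 = np.linalg.qr(rng.standard_normal((size, g)))[0]
U2 = np.linalg.qr(rng.standard_normal((size, g)))[0]
C_G = 10.0 * rng.standard_normal((n_clients, g, g, n_samples))

def local_pattern(ratio):
    """Illustrative stand-in for one local pattern: a 'Swiss'/plus shape of value 5."""
    img = np.zeros((size, size))
    w = int(5 * ratio)
    img[size // 2 - w:size // 2 + w, 10:40] = 5.0
    img[10:40, size // 2 - w:size // 2 + w] = 5.0
    return img

Y = np.empty((n_clients, size, size, n_samples))
for n in range(n_clients):
    for s in range(n_samples):
        global_part = U1 @ C_G[n, :, :, s] @ U2.T            # C_G x_1 U1 x_2 U2 for one sample
        local_part = local_pattern(rng.uniform(0.7, 1.4))     # placeholder local shape
        Y[n, ..., s] = global_part + local_part + rng.standard_normal((size, size))
```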
### Performance
In the generated data, we apply perTucker to decouple the global and local components. For comparison, we also evaluate the performance of some benchmark algorithms.
1. globalTucker: We first concatenate the samples of all clients into tensor \(\mathbf{\mathcal{Y}}\) and then apply the Tucker decomposition on \(\mathbf{\mathcal{Y}}\).
2. localTucker: We apply a standard Tucker decomposition on each client \(\mathbf{\mathcal{Y}}_{n}\) individually.
3. robustTucker: We first concatenate samples from all clients in \(\mathbf{\mathcal{Y}}\), then apply the method in Lu et al. (2019) to identify the low-rank and sparse components.
4. perPCA: We apply perPCA(Shi and Kontar, 2022) on the vectorized dataset where we vectorize \(50\times 50\) images into vectors of length \(2500\) and use perPCA to find global and local components. Although perPCA is designed for vector datasets, this comparison can highlight the need for personalized Tensor decompositions when data is in tensor form.
Fig. 4 depicts examples of global and local component reconstruction via perTucker. The first row represents the data, and the second row shows the reconstruction from perTucker. The columns represent global and local components. The three examples with different patterns indicate that perTucker can effectively reconstruct the shared and unique data patterns.
Furthermore, to numerically compare perTucker with the benchmark algorithms, we calculate the following performance metrics, where \(\hat{\mathbf{U}}_{G,k}\) and \(\hat{\mathbf{V}}_{n,k}\) represent the estimated global and local factor matrices, and \(\hat{\mathbf{\mathcal{Y}}}_{G,n}\) and \(\hat{\mathbf{\mathcal{Y}}}_{L,n}\) represent the estimated reconstructions of the global and local components.
Figure 4: Reconstruction result example for three patterns
1. **Global subspace error**: We compute the regularized global subspace error between the ground truth global factors \(\{\mathbf{U}_{G,k}\}\) and the estimated global factors \(\{\hat{\mathbf{U}}_{G,k}\}\) by \(\varrho(\bigotimes_{k=1}^{K}\mathbf{U}_{G,k},\bigotimes_{k=1}^{K}\hat{\mathbf{U}}_{G,k}) /\|\bigotimes_{k=1}^{K}\mathbf{U}_{G,k}\|_{F}^{2}\).
2. **Local subspace error**: We first generate 100 images for each pattern, and use Tucker decomposition to estimate the local factors \(\{\mathbf{V}_{n,k}\}\) for each pattern. Then the regularized local subspace error is calculated by \(\varrho(\bigotimes_{k=1}^{K}\mathbf{V}_{n,k},\bigotimes_{k=1}^{K}\hat{\mathbf{V}}_{n,k })/\|\bigotimes_{k=1}^{K}\mathbf{V}_{n,k}\|_{F}^{2}\), and take the average of 3 patterns.
3. **Global component error**: defined by \(\sum_{n}\|\hat{\mathbf{\mathcal{Y}}}_{G,n}-\mathbf{\mathcal{Y}}_{G,n}\|_{F}^{2}/\sum_ {n}\|\mathbf{\mathcal{Y}}_{G,n}\|_{F}^{2}\).
4. **Local component error**: defined by \(\sum_{n}\|\hat{\mathbf{\mathcal{Y}}}_{L,n}-\mathbf{\mathcal{Y}}_{L,n}\|_{F}^{2}/\sum_ {n}\|\mathbf{\mathcal{Y}}_{L,n}\|_{F}^{2}\).
5. **Denoised error**: defined by \(\sum_{n}\|(\mathbf{\mathcal{Y}}_{G,n}+\mathbf{\mathcal{Y}}_{L,n})-(\hat{\mathbf{\mathcal{Y}}}_{G,n}+\hat{\mathbf{\mathcal{Y}}}_{L,n})\|_{F}^{2}/\sum_{n}\|\mathbf{\mathcal{Y}}_{G,n}+\mathbf{\mathcal{Y}}_{L,n}\|_{F}^{2}\).
We run each experiment 10 times from 10 different random seeds and report their mean and standard deviation. The results of the measuring statistics for different methods are summarized in Table 1. From Table 1, we can conclude the following statements.
1) perTucker yields the best results in separating the global and local components because it utilizes the low-rank tensor structure of both components. This follows from two observations: (a) Comparing the global component error of perTucker and perPCA, we can conclude that perTucker identifies the global component with better accuracy due to its use of low-rank tensor structures. (b) Comparing the local component error of perTucker and robustTucker, although robustTucker identifies the global subspace with decent accuracy, it yields a much larger error in local component reconstruction because its local component does not assume any low-rank structure.
2) For all methods, the reconstruction error for the local components is larger than that for the global components. Two factors contribute to this: (a) reconstructing a local component is generally harder than reconstructing the global components since it is only shared within the same client, so fewer data are available, resulting in lower accuracy and larger variance; (b) the local components are generated by the shape variations of the Swiss, oval, and rectangle patterns, which are not exactly low-rank; thus, the true local low-rank subspace is itself only an approximation estimated from the dataset.
| Metric | perTucker | perPCA | globalTucker | localTucker | robustTucker |
|---|---|---|---|---|---|
| Global subspace error (\(10^{-3}\)) | **2.3** (0.8) | 536 (6) | **2.4** (0.8) | N/A | 3.7 (0.9) |
| Local subspace error (\(10^{-1}\)) | **6.4** (0.4) | 9.88 (0.07) | N/A | 9.34 (0.03) | N/A |
| Global component error (\(10^{-3}\)) | **5.7** (1.6) | 372 (16) | **5.7** (1.6) | N/A | 264 (12) |
| Local component error (\(10^{-1}\)) | **2.8** (0.6) | 23.1 (1.3) | N/A | 63 (3) | 10 (0.09) |
| Denoised error (\(10^{-2}\)) | 4 (0.7) | 13.7 (0.7) | 14.1 (0.7) | **2** (2) | 13.7 (0.7) |

Table 1: Component-wise reconstruction error with standard deviation in parentheses
### Classification
In this section, we evaluate the classification performance of the perTucker algorithm. The training sample size ranges from 10 to 50 with a step of 10 for each pattern. Then we perform the perTucker decomposition and obtain the local factor matrices. Next, we create 50 new images for each pattern and use the method described in Sec. 3.6.1 to classify the 150 images. This procedure is repeated 100 times. Table 2 displays the mean and standard deviation of the classification accuracy for different training sample sizes. From Table 2 we can see that perTucker exhibits excellent classification performance even when the sample size is small, and increasing the training sample size leads to improved prediction accuracy. In comparison, if we perform local Tucker on the three clients and classify the new figures, the accuracy is around 33% regardless of the training sample size, which is close to simply guessing the class. This is because local Tucker models the commonality and peculiarity simultaneously: global features, whose coefficients are randomly generated and thus carry no class information, also enter the decision.
To visualize the classification process, we use the boxplot to show the summary of the test statistics in (12) in Fig. 5. The sample size is set to 50. The test statistics are centralized by the median statistics within each pattern. The test statistics for each specific true pattern are significantly higher than those for the other two patterns in the corresponding cases, implying that the classification procedure is effective and robust.
### Clustering
In this section, we study the clustering performance of perTucker under a different problem setting. When generating the data, the variability within each pattern is measured by a "ratio" variable, and the clustering performance is evaluated by the ability to group clients with similar ratios. We focus on the Swiss pattern and cut the range of the ratio from 0.7 to 1.4 into 7 intervals, each with a width of 0.1. For each ratio interval, we generate 3 clients, each with a sample size of 100 figures. We then make clustering plots for the 21 clients and observe that clients with similar ratios aggregate, as shown in Fig. 6.
| Training sample size | 10 | 20 | 30 | 40 | 50 |
|---|---|---|---|---|---|
| Accuracy | 86.7% | 91.2% | 92.7% | 93.9% | 95.0% |
| SD of accuracy | 6% | 4% | 4% | 4% | 3% |

Table 2: Classification accuracy and the standard deviation for different training sample sizes
## 5 Case Study
In this section, we use two case study experiments to demonstrate the application of the perTucker algorithm. Sec. 5.1 provides an anomaly detection example that illustrates the power of the perTucker algorithm in detecting solar flares. The subsequent section, Sec. 5.2, demonstrates the effectiveness of the perTucker technique in classifying tonnage fault signals.
Figure 5: Box plot for test statistics defined in (12) for different patterns and different classes
Figure 6: Clustering plot for swiss patterns
### Anomaly detection in solar flare data
The first example involves monitoring solar activities and detecting solar flares from a stream of solar images. A solar flare is a significant event in the sun that emits enormous amounts of energetic charged particles that could cause power grids to fail (Marusek, 2007). Therefore, detecting solar flares promptly is critical to implementing preemptive and corrective measures. However, monitoring solar activities is challenging due to the high dimensionality of the solar thermal imaging data and the gradual changes in solar temperature over time. Existing detection methods, which rely on subtracting the functional mean (background) using the sample mean, are insufficient to detect small transient flares in the dynamic system (Zhao et al., 2022a). Other studies focus on the decomposition of solar images into their background and anomaly components (Yan et al., 2018).
This dataset, publicly available in (Zhao et al., 2022a), comprises a sequence of images of size 232 \(\times\) 292 pixels captured by a satellite. We use a sample of 300 frames in this case study. To detect solar flares in real time, we begin by subtracting the mean from each sample and then preprocess the data using the method proposed by (Aharon et al., 2006). Following this, we use a sliding window of \(8\times 8\) to divide each frame into small patches, resulting in a total of 1044 patches. The four right-most columns of pixels are discarded. Each patch or tile is then vectorized, yielding a data dimension of \(300\times 1044\times 64\). After the preprocessing step, we apply the perTucker algorithm to the data to break them down into two components: global components representing the slowly changing background and local components indicating the detected solar flares.
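The patch extraction described above can be reproduced in a few lines; the sketch below (NumPy) covers only the tiling and vectorization step and omits the mean subtraction and the dictionary-based preprocessing of Aharon et al. (2006).

```python
import numpy as np

def to_patches(frames, patch=8):
    """Split each (H, W) frame into non-overlapping patch x patch tiles and vectorize them.

    With 300 frames of 232 x 292 pixels, the 4 right-most columns are dropped and the
    output has shape (300, 29 * 36, 64) = (300, 1044, 64), matching the text.
    """
    T, H, W = frames.shape
    H, W = (H // patch) * patch, (W // patch) * patch      # discard pixels that do not tile evenly
    x = frames[:, :H, :W].reshape(T, H // patch, patch, W // patch, patch)
    return x.transpose(0, 1, 3, 2, 4).reshape(T, (H // patch) * (W // patch), patch * patch)

frames = np.random.rand(300, 232, 292)                     # stand-in for the solar image sequence
print(to_patches(frames).shape)                            # (300, 1044, 64)
```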
Figure 7: Detection of solar flare at \(t=191\) and \(t=217\)
Fig. 7 shows two frames where there is an abrupt change in the almost-stationary background. In the original images, such changes are not visible against the complicated background. However, after using perTucker to extract the global and local components, one can clearly see the location and movement of small and rapid changes in the local components. The experiments highlight perTucker's ability to magnify small change signals by separating them from the background.
Moreover, since the local component represents the anomaly, we can use the Frobenius norm of the local core tensor \(\left\|\boldsymbol{\mathcal{C}}_{L,n}\right\|_{F}\) as a monitoring statistic to detect the anomaly. The results are shown in Fig. 8, where we plot the logarithm of the Frobenius norm of the local core tensor \(\left\|\boldsymbol{\mathcal{C}}_{L,n}\right\|_{F}\) for the 300 frames. Most of the time, the curve in Fig. 8 remains at a low level, suggesting that no significant changes in solar activity are detected. However, there are 2 clear peaks above the red control line, indicating two solar flares. Therefore, Fig. 8 provides an intuitive approach for detecting anomalous behaviors.
### Classification for tonnage data
perTucker also applies to monitoring tonnage profiles in a multi-operation forging process that utilizes multiple strain gauge sensors. Specifically, the forging process employs four columns, each equipped with a strain gauge sensor that measures the tonnage force exerted by the press uprights, as illustrated in Fig. 9. Consequently, each operational cycle generates a four-channel tonnage profile. The dataset for this case study consists of 305 in-control profiles, collected under normal production conditions, and 69 out-of-control profiles for each of four different fault classes. The length of each channel profile is 1201, resulting in a data dimension of \(\boldsymbol{\mathcal{Y}}\in\mathbb{R}^{4\times 1201\times 305}\) for the "normal" class and \(\boldsymbol{\mathcal{Y}}\in\mathbb{R}^{4\times 1201\times 69}\) for each fault class. Fig. 10 presents examples of profiles for the normal condition and the four different fault conditions.
Figure 8: The logarithm of Frobenius norm of local core tensors for 300 frames
In this case study, we define the five "clients" as normal and four fault conditions. We select the first 50 normal samples and the first 10 samples from each fault condition to form the training dataset. The test dataset comprises 255 normal samples and 59 samples for each fault condition. We further assume that only the global and local factor matrices along the signal length dimension are orthogonal to each other.
Applying the classification method in Sec. 3.6.1, we are able to achieve 100% accuracy for classifying all fault modes and normal cases. Fig. 11 shows the relative test statistics by computing the difference between the test statistics of the corresponding fault modes and the normal cases. Consequently, normal test statistics have been normalized as \(y=0\) for each sample, as shown in a red line in Fig. 11.
We can see that in every fault mode, the range of the test statistics for the corresponding fault mode is much higher than those of the other three faults and the normal baseline.
Figure 11: Relative decision test statistics compared to normal when the true label is (a) Fault 1, (b) Fault 2, (c) Fault 3, (d) Fault 4, (e) Normal
Figure 10: Tonnage Data
Figure 9: Tonnage Signal Monitoring
When the true label is normal, the relative test statistics for all fault modes are less than the normal baseline. Thus, perTucker provides informative statistics for anomaly detection and fault diagnostics.
## 6 Conclusion
In this paper, we propose the perTucker method to decompose tensor data into global components with shared factor matrices and local components that are orthogonal to the global components. Global and local components model the commonality and peculiarity of the data, respectively. We then present an efficient algorithm to solve the model, based on block coordinate descent. The inclusion of proximal terms in the factor-matrix update steps guarantees convergence to a stationary point. Several applications of perTucker, such as anomaly detection, classification, and clustering, are then demonstrated by simulation and case studies.

Future work will explore additional structural properties of the global and local components. For example, the local components can be constrained to be sparse to better model anomaly components in different applications.
2309.06541 | Text Encoders Lack Knowledge: Leveraging Generative LLMs for
Domain-Specific Semantic Textual Similarity | Amidst the sharp rise in the evaluation of large language models (LLMs) on
various tasks, we find that semantic textual similarity (STS) has been
under-explored. In this study, we show that STS can be cast as a text
generation problem while maintaining strong performance on multiple STS
benchmarks. Additionally, we show generative LLMs significantly outperform
existing encoder-based STS models when characterizing the semantic similarity
between two texts with complex semantic relationships dependent on world
knowledge. We validate this claim by evaluating both generative LLMs and
existing encoder-based STS models on three newly collected STS challenge sets
which require world knowledge in the domains of Health, Politics, and Sports.
All newly collected data is sourced from social media content posted after May
2023 to ensure the performance of closed-source models like ChatGPT cannot be
credited to memorization. Our results show that, on average, generative LLMs
outperform the best encoder-only baselines by an average of 22.3% on STS tasks
requiring world knowledge. Our results suggest generative language models with
STS-specific prompting strategies achieve state-of-the-art performance in
complex, domain-specific STS tasks. | Joseph Gatto, Omar Sharif, Parker Seegmiller, Philip Bohlman, Sarah Masud Preum | 2023-09-12T19:32:45Z | http://arxiv.org/abs/2309.06541v1 | _Text Encoders Lack Knowledge_: Leveraging Generative LLMs for Domain-Specific Semantic Textual Similarity
###### Abstract
Amidst the sharp rise in the evaluation of large language models (LLMs) on various tasks, we find that semantic textual similarity (STS) has been under-explored. In this study, we show that STS can be cast as a text generation problem while maintaining strong performance on multiple STS benchmarks. Additionally, we show generative LLMs significantly outperform existing encoder-based STS models when characterizing the semantic similarity between two texts with complex semantic relationships dependent on _world knowledge_. We validate this claim by evaluating both generative LLMs and existing encoder-based STS models on three newly collected STS challenge sets which require world knowledge in the domains of Health, Politics, and Sports. All newly collected data is sourced from social media content posted after May 2023 to ensure the performance of closed-source models like ChatGPT cannot be credited to memorization. Our results show that, on average, generative LLMs outperform the best encoder-only baselines by an average of 22.3% on STS tasks requiring world knowledge. Our results suggest generative language models with STS-specific prompting strategies achieve state-of-the-art performance in complex, domain-specific STS tasks.
## 1 Introduction
The NLP community has seen a rapid advancement in many areas since the onset of large language models (LLMs) trained using Reinforcement Learning with Human Feedback, including text summarization, machine translation, and problem solving, amongst others (Yang et al., 2023). One area that has not been well explored is the applicability of generative LLMs to Semantic Textual Similarity (STS) tasks.
In recent works, it has been explicitly suggested that LLMs are not well-suited for the STS-B task. Zhong et al. (2023) support this claim by showing ChatGPT is inferior to pre-trained RoBERTa models on a small (n=50) set of STS samples. Yang et al. (2023) suggest that STS-B, and more generally regression tasks, have "no use case" in the context of LLMs -- citing the extreme misalignment between LLM training and the prediction of a continuous value. In this study, we aim to show that there are two intuitive reasons as to why _LLMs are highly applicable to Semantic Textual Similarity_. 1) **World Knowledge:** LLMs do not rely on human-labeled data, allowing them to be exposed to a broad range of world knowledge. Very little human-annotated domain-specific data exists for direct STS training or contrastive learning of sentence embeddings (Gao et al., 2021), making applications of text encoders to niche domains challenging. Thus, if we can apply LLMs to STS, we may greatly expand the set of problem domains where STS is impactful. 2) **STS Regression May Align with Language Modeling:** The STS task can be formulated such that the output space is constrained to prediction of a continuous value between [0-1]. Such a formulation reduces the task to outputting similarity as a percentage (e.g. Text A and Text B are 60% similar). During pre-training, LLMs are very likely to see many texts that use percentages in various contexts, as humans frequently cite percentages in natural language. Thus, when we combine LLMs' strong pairwise textual reasoning capabilities with their predisposition to percentages in natural language -- LLMs appear well-suited to the STS task.
A limitation of using LLMs for STS is they can be highly expensive and inefficient. For example, STS models are often used in information retrieval, where the goal may be to compare a query text to a large number of documents and then rank the documents based on their similarity to the query (Nguyen et al., 2016). It may not be viable to leverage generative LLMs for such a task in production, as text generation can suffer from low throughput
and high cost. However, there are many small-scale tasks in academic settings where the poor efficiency of LLMs for STS are often of lesser concern. In the literature, we find small-scale applications of STS in the fields of psychology (Marjieh et al., 2022), community question answering (Hoogeveen et al., 2018), computational social science (Maldeniya et al., 2017), and propaganda detection (Mohtaj and Moller, 2022) which use generic text encoders for knowledge-intensive/domain-specific problems. In this study, we aim to show that LLMs are more well-suited than generic text encoders for such tasks.
We confirm our intuition that LLMs like ChatGPT are well-suited to perform STS by conducting the first thorough exploration of STS in the context of text generation. We evaluate two LLMs (i.e., ChatGPT, Llama2) for STS in the context of both existing STS benchmarks and domain-specific STS challenge sets. Our work identifies STS-specific prompting strategies that significantly outperform prompts from prior works (Zhong et al., 2023). Specifically, we find that mapping the original [0-5] similarity scale used in STS benchmarks to be between [0-1] significantly improves performance of LLMs on the STS task. In other words, asking LLMs to infer similarity as a percentage improves performance vs. asking LLMs to utilize an arbitrary scale. See Figure 1 for an example STS prompt used in this study.
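To make the formulation concrete, the sketch below shows one way to build a 0-shot STS prompt on the [0-1] scale and query ChatGPT. The prompt wording, helper names, and parsing are simplified assumptions rather than the exact prompt from Figure 1 or Appendix B.2, and the call assumes the pre-1.0 `openai` Python client with an API key configured in the environment.

```python
import re
import openai  # assumes the pre-1.0 openai client; adapt the call for newer versions

def sts_prompt(text1, text2):
    # Illustrative prompt: similarity is requested on a [0-1] scale (i.e., as a
    # percentage), which Table 1 shows works better than the original [0-5] scale.
    return (
        "Rate the semantic similarity of the two texts on a scale from 0.0 "
        "(completely dissimilar) to 1.0 (semantically equivalent). "
        "Respond with a single number.\n"
        f"Text 1: {text1}\nText 2: {text2}"
    )

def chatgpt_similarity(text1, text2, model="gpt-3.5-turbo-0301"):
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0,   # decoding settings reported in the paper
        top_p=1,
        messages=[{"role": "user", "content": sts_prompt(text1, text2)}],
    )
    reply = response["choices"][0]["message"]["content"]
    match = re.search(r"\d*\.?\d+", reply)           # first number in the reply
    return float(match.group()) if match else 0.0    # default to 0 if unparseable

print(chatgpt_similarity("A man is playing a guitar.", "Someone plays an instrument."))
```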
On existing benchmarks, we find that a 0-Shot ChatGPT pipeline provides SOTA performance on the STS13 and STS15 datasets, with near-SOTA performance on STS14 and SICK-R (i.e. 0.45% and 0.51% difference in correlation respectively) when compared to unsupervised SOTA models. Given the opaque nature of ChatGPT's training data, we confirm our results are not the result of memorization by collecting 3 new STS challenge datasets using texts written after May 2023 across three domains: health, sports, and politics. We develop each dataset such that similarity is difficult to quantify without significant world knowledge and demonstrate that ChatGPT provides SOTA performance for challenging domain-specific STS. A summary of our contributions is as follows:
* We introduce three new domain-specific STS challenge sets in the domains of Health, Politics, and Sports. We show that ChatGPT outperforms the closest text encoder baseline by an average of 22.3% on STS challenge sets.
* We show that with STS-specific prompting strategies, ChatGPT achieves SOTA performance on two STS benchmark datasets and competitive performance in other datasets when compared to SOTA text encoders.
* We analyze errors made by ChatGPT to guide future works on LLMs for STS.
Figure 1: Comparing the performance of ChatGPT vs a RoBERTa-based STS cross encoder on a sample from our STS-Sports challenge set. This sample requires significant world knowledge as proper inference requires knowing 1) that the Cowboys NFL team are often referred to as “America’s Team” and 2) that “recovering” an onside kick is equivalent to “getting the ball back” with an onside kick. The prompt corresponds to our best-performing ChatGPT 0-Shot prompt found in Table 2.
## 2 Related Work
### Supervised STS
In the supervised setting, STS is commonly evaluated as a part of the GLUE benchmark -- specifically on the STS-B dataset, where texts can be cross-encoded by an LLM and fine-tuned for regression. Supervised STS is largely limited to training on samples sourced from news headlines and image captions -- making such models limited in scope when applied to new domains. LLMs are well-suited to generalize to domain-specific STS data as they contain vast world knowledge. We compare LLMs to both RoBERTa-base and RoBERTa-large Liu et al. (2019) fine-tuned on the STS-B dataset on our 3 domain-specific datasets.
### Unsupervised STS
Unsupervised STS occurs when two texts are independently encoded and then compared using measures of embedding similarity. A seminal work in the field of unsupervised STS is SBERT Reimers and Gurevych (2019), which displays how NLI samples can be used to teach BERT Devlin et al. (2019) how to pool sequences of token embeddings to provide a single vector representation of a given text. Later improvements on SBERT include SimCSE Gao et al. (2021) which leveraged contrastive learning to produce better sentence representations. Current state-of-the-art models such as GenSE Chen et al. (2022) produces SOTA results on STS tasks via large-scale synthetic generation of contrastive training triplets.
LLMs and unsupervised STS use different approaches for text encoding, making their direct comparison difficult. For example, unsupervised STS models excel at this specific task but have fewer parameters, while LLMs are not designed for regression, but have far more parameters and are trained on large-scale unsupervised data. Nonetheless, evaluating LLMs in the 0-shot setting on unsupervised STS datasets can provide insights into their capabilities for STS.
## 3 Methods
### Experimental Setup
Benchmarking LLMs on 0-Shot STS:We evaluate various STS-specific 0-shot prompting strategies. An example of our 0-shot inference can be found in Figure 1. We compare our approach to three baseline unsupervised STS models, which use encoder-only LMs to evaluate sentence representations. Specifically, we explore SBERT1Reimers and Gurevych (2019), SimCSE Gao et al. (2021), and GenSE+ Chen et al. (2022).
Footnote 1: Huggingface model string: ‘sentence-transformers/all-MiniLM-L6-v2’
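For contrast with the generative approach, the unsupervised encoder baselines score a pair by embedding each text independently and taking the cosine similarity of the embeddings. A minimal sketch using the SBERT checkpoint named in the footnote (the function name and example texts are illustrative; SimCSE and GenSE+ are used analogously with their own checkpoints):

```python
from sentence_transformers import SentenceTransformer, util

# SBERT baseline named in the footnote above.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def encoder_similarity(text1, text2):
    emb = encoder.encode([text1, text2], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()   # cosine similarity in [-1, 1]

print(encoder_similarity("A man is playing a guitar.", "Someone plays an instrument."))
```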
Domain-Specific STS:We explore the performance of 0-shot, few-shot, and chain-of-thought (COT) prompting strategies on our domain-specific datasets. Our 0-shot methodology on domain-specific texts follows our best 0-shot prompt as determined by performance on the benchmark STS datasets. For few-shot prompting, we use 5 examples which were manually crafted by the authors. Note, we did no prompt optimization but rather aimed to write a simple prompt that introduced the LLM to the label space as suggested by Min et al. (2022). In each example, we use the same sentence 1, but a different sentence 2, producing evenly spaced similarity scores between 0 and 1, exposing the model to the complete spectrum of label space. Our COT prompting strategy follows a 1-shot paradigm, showing the model one example of how to reason about the solution step-by-step. The authors wrote the COT example and instructed the model to output the score between a set of brackets (e.g. [semantic similarity = 0.3]) to enable easy prediction extraction. All prompts used in this study can be found in Section B.2.
We compare LLMs to both supervised and unsupervised STS models. For supervised models, we use the RoBERTa-base and RoBERTa-large cross-encoders provided by the Sentence-Transformers library2, which are fine-tuned on the STS-B dataset.
Footnote 2: sbert.net
Evaluation Details:The evaluation pipeline follows Gao et al. (2021), which reports the Spearman's rank correlation between all predicted and ground truth similarity scores for all samples in a given dataset. To conduct our experiments, we evaluate two LLMs 1) ChatGPT ('gpt-3.5-turbo-0301') from OpenAI and 2) Llama2-7b Touvron et al. (2023) from Meta3. We choose these two models as they are extremely popular, easy to access, and represent the highest-performing LLMs at their given scales Touvron et al. (2023). Note, we exclude GPT-4 from the experimentation due to its significantly higher cost.
Footnote 3: Huggingface model string: ‘Llama-2-7b-chat-hf’
We report results after a small grid search over the temperature and top-p parameters for both LLMs. For both models, we use temperature = 0, top-p = 1. Since Llama2 requires a non-zero temperature, we use 0.0001 as our zero-temperature parameter. Additional details regarding our hyper-parameter selection can be found in Appendix B.1.
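The evaluation protocol above reduces to a single Spearman rank correlation per dataset; a minimal sketch, with illustrative function and variable names:

```python
from scipy.stats import spearmanr

def evaluate_sts(predictions, gold_scores):
    """Spearman rank correlation (in %) between predicted and gold similarity scores."""
    rho, _ = spearmanr(predictions, gold_scores)
    return 100.0 * rho

# e.g., model predictions on the [0-1] scale vs. benchmark labels rescaled from [0-5] to [0-1]
print(evaluate_sts([0.9, 0.2, 0.55], [5.0 / 5, 1.0 / 5, 3.0 / 5]))
```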
### Extracting Predictions from LLMs
We use a simple string parsing mechanism to extract predictions from generative LLMs. For 0-Shot and Few-Shot models, we simply return the first number outputted by the model. For COT methods, we extract the decimal found in the set of brackets which the LLM is instructed to produce during inference. If a text cannot be parsed (i.e. no number is output by the model) then we default to a prediction of 0 similarity.
We note some qualitative observations regarding the above design choices. First, our highest performing model, ChatGPT, is very good at following STS prompt instructions and thus almost exclusively outputs a single number, so we rarely default to 0. For lesser-performing models like Llama2, unparseable outputs occur more frequently, but are still rare.
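The parsing rules described in this subsection can be sketched as follows; the regular expressions and the bracketed-output format shown are an assumed implementation of the stated behavior (first number for 0-shot/few-shot, bracketed decimal for COT, defaulting to 0 when nothing can be parsed):

```python
import re

def parse_prediction(reply, cot=False):
    if cot:
        # COT prompts instruct the model to emit e.g. "[semantic similarity = 0.3]".
        match = re.search(r"\[[^\]]*?(\d*\.?\d+)[^\]]*\]", reply)
    else:
        # 0-shot / few-shot: take the first number the model outputs.
        match = re.search(r"\d*\.?\d+", reply)
    return float(match.group(1) if cot else match.group()) if match else 0.0

print(parse_prediction("The similarity is 0.8"))                        # 0.8
print(parse_prediction("... [semantic similarity = 0.3]", cot=True))    # 0.3
print(parse_prediction("I cannot answer that."))                        # 0.0
```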
### Datasets
#### 3.3.1 Benchmark Datasets
Each model is evaluated on the standard 7 STS benchmark datasets: STS 12-16 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS-B (Cer et al., 2017), and SICK-R (Marelli et al., 2014). All samples in each dataset are annotated on a scale of [0-5], where the mean similarity score across multiple annotators is the final continuous value.
#### 3.3.2 Challenge Datasets
We additionally evaluate each model on 3 newly collected datasets with data collected after May 2023 to ensure ChatGPT's performance is not due to memorization of any information regarding the standard STS benchmarks. Furthermore, this data allows us to evaluate each model's capacity to perform STS when greater world knowledge is required. Our three datasets are **1) STS-Sports:** Reddit headlines about the National Football League (NFL) and National Basketball Association (NBA); **2) STS-Health**: Texts sourced from online discussions on Reddit regarding Long COVID; and **3) STS-News**: A Reddit dataset of recent political headlines. Each dataset has (n=100) text pairs. The data was collected by the authors with the goal of semantic similarity labels being driven by world knowledge relationships.
Each sample in each dataset consists of 1 real sample from a given source and one human-generated sample. Human-generated texts were written by the authors and crafted to contrast with the source sample in a manner that produces a diverse set of scores across the similarity spectrum. Specifically, high-similarity pairs often employ complex variations of the same information, which require world knowledge, while low-similarity pairs are often constructed to have high token overlap but low semantic similarity, requiring the model to focus deeply on the semantics.
We chose to manually construct texts as it is ex
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & STS12 & STS13 & STS14 & STS15 & STS16 & STS-B & SICK-R \\ \hline SBERT & 72.37 & 80.60 & 75.59 & 85.39 & 78.99 & 82.03 & 77.15 \\ SimCSE-BERT-B & 75.30 & 84.67 & 80.19 & 85.40 & 80.82 & 84.26 & 80.39 \\ SimCSE-RoBERTa-L & 77.46 & 87.27 & 82.36 & 86.66 & 83.93 & 86.70 & **81.95** \\ GenSE+ & **80.66** & 88.18 & **84.69** & 89.03 & **85.82** & **87.88** & 80.10 \\ \hline Llama2-7b (Baseline Prompt [0-5]) & 44.05 & 50.27 & 43.03 & 46.02 & 27.23 & 44.37 & 45.33 \\ Llama2-7b (STS Prompt [0-5]) & 42.59 & 41.66 & 30.37 & 33.30 & 26.62 & 35.79 & 39.30 \\ Llama2-7b (STS Prompt [0-1]) & 51.83 & 67.74 & 60.77 & 57.48 & 61.73 & 64.56 & 62.48 \\ \hline ChatGPT (Baseline Prompt [0-5]) & 64.86 & 85.66 & 79.05 & 86.15 & 79.75 & 82.62 & 81.44 \\ ChatGPT (STS Prompt [0-5]) & 64.58 & 86.07 & 80.15 & 85.99 & 79.27 & 81.31 & 78.77 \\ ChatGPT (STS Prompt [0-1]) & 68.97 & **89.09** & 84.24 & **89.11** & 84.54 & 84.73 & 79.84 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results comparing baseline encoder-only LMs to ChatGPT on standard 7 STS datasets based on Spearman correlation. We find that ChatGPT achieves SOTA results on STS13 and STS15 as well as extremely competitive performance on STS14 and SICK-R. Note: [0-5] prompts use the original similarity score scale of [0.0-5.0]. Our results show that mapping the labels to be between [0.0-1.0] provides a significant performance increase.
tremely difficult to collect samples such as those presented in Figure 1, where the texts are on the exact same topic but differ drastically in terms of their presentation. Each pair was annotated by three different researchers at the authors' institution and averaged to produce the final similarity score. Each annotator was ensured to be sufficiently knowledgeable about the domain within which they were annotating. The annotation guidelines provided were identical to those released for the STS13 task. The inter-annotator agreement for each dataset can be found in Appendix A Table 3. Please refer to the appendix A for additional details on data collection, data statistics, and example data.
## 4 Results
### 0-Shot STS
Our 0-shot STS results on benchmark datasets are summarized in Table 1. We find that ChatGPT outperforms text encoders on the STS13 and STS15 datasets. Additionally, ChatGPT shows competitive performance on STS14 and SICK-R, where there is only a 0.45% and 0.51% difference between ChatGPT and the best encoder baseline. We find that the only dataset on which encoder models significantly outperform ChatGPT is STS12. This is in part due to the large number of linguistically incoherent texts in STS12. We further discuss the limitations of ChatGPT on certain types of texts in Section 5. Llama2, we find, performs poorly on 0-Shot STS on existing benchmarks. This suggests that STS may be an ability emergent at scale for LLMs, as our 7b-parameter Llama2 baseline significantly under-performs all other baselines on STS.
We find that the prompts explored in previous works, which prompt ChatGPT to perform STS on the original [0-5] similarity scale, perform significantly worse than when we map the labels to [0-1]. For example, our mapping translates to asking ChatGPT to predict that two texts have 80% similarity instead of 4/5 similarity. As shown in Table 1, "Baseline Prompt [0-5]" (taken from Zhong et al. (2023)) and "STS Prompt [0-5]" perform worse on 6/7 tasks, often by a large margin. We find it intuitive that LLMs have an easier time understanding and representing semantic similarity as a percentage, as percentages are commonly used to describe various phenomena in a variety of texts (thus making them more likely to appear in LLM training data), unlike comparisons which use a Likert scale.
### Domain-Specific STS
In Table 2 we see the results of four different model families on our newly collected STS datasets which heavily depend on world knowledge from three different domains. We find that across all domains, ChatGPT performs significantly better than Llama2 as well as both supervised and unsupervised STS models, beating the next closest model by an average of 22.3%. ChatGPT's competitive performance on the standard STS benchmarks demonstrates its ability to perform the task; it is thus intuitive that a model with diverse world knowledge should outperform existing off-the-shelf STS models, which contain limited current world knowledge. For example, success on STS-Sports requires a model to know that Lebron James plays for the Los Angeles Lakers. STS-News requires the model to know that congresswoman Alexandria Ocasio-Cortez is known as AOC. STS-Health requires the model to know that "brain fog" is related to "confusion" and "lack of focus". This sort of niche knowledge seems
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & Sports & News & Health \\ \hline \multicolumn{4}{c}{_Unsupervised Models_} \\ \hline SimCSE-R-L & 58.87 & 62.47 & 50.98 \\ GenSE+ & 42.88 & 56.03 & 40.67 \\ \hline \multicolumn{4}{c}{_Supervised Models_} \\ \hline RoBERTa-B & 63.17 & 58.29 & 31.56 \\ RoBERTa-L & 63.59 & 65.56 & 50.33 \\ \hline \multicolumn{4}{c}{_Llama2 Experiments_} \\ \hline
0-Shot & 47.34 & 44.58 & 37.10 \\ Few-shot & 66.52 & 58.04 & 46.51 \\ COT & 18.73 & 30.98 & 25.55 \\ \hline \multicolumn{4}{c}{_ChatGPT Experiments_} \\ \hline
0-Shot & 80.99 & 87.21 & **78.11** \\ Few-shot & 82.28 & 80.81 & 68.28 \\ COT & **83.42** & **87.74** & 73.71 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results comparing our two best-unsupervised models (i.e., SimCSE-RoBERTa-Large and GenSE+) and two RoBERTa models fine-tuned on STS-B to LLMs on our three newly collected domain-specific datasets. We find that ChatGPT outperforms encoder-only models on all tasks by a significant margin. Note: All 0-Shot prompts follow the best 0-shot strategy as determined by results in Table 1.
unreasonable for many encoder models to contain -- which is why we argue that ChatGPT is the best option for domain-specific, STS-dependent NLP tasks looking to employ an off-the-shelf model.
We note that while Llama2 under-performs ChatGPT on all experiments, it does get a significant performance increase in the Few-Shot setting when compared to 0-shot. This may suggest that smaller LLMs require more explicit instruction to perform well on the STS task. Future works may explore STS-specific in-context learning strategies that enable the use of smaller-scale LLMs on this task.
## 5 Where Does ChatGPT Fail on STS?
In this section, we analyze the top 500 predicted samples from ChatGPT with the largest absolute difference between prediction and ground truth across five STS datasets in the 0-shot setting (STS 12-16 ). We aim to surface the types of text pairs ill-suited for semantic similarity modeling with ChatGPT.
### Linguistic Acceptability
We qualitatively observed that ChatGPT struggles with samples that are syntactically or grammatically incoherent. We validate this claim by running a RoBERTa-base model fine-tuned on the COLA Warstadt et al. (2018) dataset 4, which tests if a text is linguistically acceptable. We find that **34.6% of highly inaccurate predictions contain a linguistically unacceptable text**. For example, consider the following sample from STS14:
Footnote 4: Huggingface model string: ‘textattack/roberta-base-CoLA’
**Text 1:** what isn 't how what was sold?
**Text 2:** it's not how it was sold, gb.
**Ground Truth Similarity Score:** 0.32
ChatGPT has very little content or semantics to rely on when analyzing two linguistically unacceptable texts. Thus, it outputs a high similarity score of 0.8 potentially due to token overlap.
To further verify our claim, we evaluate ChatGPT on STS12 in two different contexts -- all samples vs. only text pairs that are both linguistically acceptable. We choose STS12 as it has a high number of linguistically unacceptable texts. We find that on the linguistically acceptable subset (2195/3108 samples in STS12), we get a correlation of 75.95%, which is a 6.62% increase in performance compared to evaluation on all samples.
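The acceptability screen used above can be reproduced with the COLA-fine-tuned checkpoint named in the footnote; a sketch is given below, where the mapping of `LABEL_1` to "acceptable" is an assumption about that checkpoint's label convention and should be checked against its model card:

```python
from transformers import pipeline

# COLA-fine-tuned RoBERTa named in the footnote; flags linguistically unacceptable texts.
cola = pipeline("text-classification", model="textattack/roberta-base-CoLA")

def both_acceptable(text1, text2, acceptable_label="LABEL_1"):
    # Assumption: LABEL_1 corresponds to "linguistically acceptable" for this checkpoint.
    preds = cola([text1, text2])
    return all(p["label"] == acceptable_label for p in preds)

print(both_acceptable("it's not how it was sold, gb.",
                      "what isn 't how what was sold?"))
```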
### Numeric Reasoning
It is well-documented that large language models have trouble with numeric reasoning tasks Chen et al. (2023). In this study, we find that ChatGPT's definition of what constitutes a semantically similar text is not very sensitive to differences in numeric quantities. In other words, ChatGPT commonly gives high semantic equivalence to linguistically similar texts with very different numeric quantities. This is in contrast to the annotation of the STS12-16 benchmarks, where similarity scores can be very sensitive to numeric differences.
If we assume that samples with numeric quantities in each text require some numeric comparison, we specifically find that, of the top-500 worst predictions made by ChatGPT, **12.4% require a numeric comparison**. Consider the following example:
**Text 1:** singapore stocks end up 0.26 percent
**Text 2:** singapore stocks end up 0.11 pct
**Ground Truth Similarity Score:** 0.4
ChatGPT is good at recognizing that both texts pertain to Singapore stocks, however ChatGPT's prediction of 0.95 similarity shows little sensitivity to the numeric difference between the texts. Such a prediction by ChatGPT may be considered accurate in different settings, however under the STS12-16 annotation guidelines produced poor results.
## 6 Conclusion
In this study, we show that while smaller LLMs like Llama2 struggle on STS, larger models like ChatGPT are highly capable of performing semantic similarity tasks, with ChatGPT achieving SOTA performance on 2/7 standard STS datasets. We additionally show that ChatGPT is far superior to existing STS models on world knowledge-dependent comparisons -- outperforming existing models by an average of 22.3% on domain-specific STS tasks. In conclusion, ChatGPT shows promising results for domain-specific STS tasks.
## 7 Limitations
A limitation of this work is the use of a closed-source model, making it impossible to verify whether the model has already encountered the benchmark evaluation data, which was collected prior to September 2021. Also, frequent updates to ChatGPT make it challenging to anticipate how results may change in the future.
Additionally, our STS solution may not be suitable for large-scale pairwise comparison tasks
due to API costs and slow inference speeds. As it stands, our approach is primarily designed for small-scale analyses seeking high-quality outcomes. To demonstrate this, we introduce three new challenging domain-specific STS datasets. The size of the new datasets is limited because it is expensive to scale the annotation process while ensuring high-quality data with reliable annotations. However, the number of samples in our domain-specific evaluation sets is on par with other domain-specific STS datasets (Sogancioglu et al., 2017).
Finally, we note that we did not do any prompt optimization as a part of this study, which limits the performance potential of our experiments. Future iterations of this work may find that performance can be increased by employing different few shot/COT examples, or by optimizing the problem description.
## 8 Ethical Considerations
The datasets introduced in this paper collect samples from a total of 6 different subreddits. All of this information was collected manually from the public-facing site. Samples in STS-Sports and STS-News are headlines or texts that are describing public events and thus contain no sensitive information. We note that while samples in STS-Health do contain posts and comments describing personal health experiences, none of the selected samples contain any personally identifying information and are publicly available on the internet. Additionally, this is not human subjects research and thus qualifies for IRB exemption at authors' institution. Reddit was chosen as a data source because it is a suitable platform to collect time-stamped anonymous data in specific domains and on timely topics. However, in the interest of protecting user privacy we plan to provide paraphrased versions of the user-generated samples in STS-Health so that users cannot be identified via internet search of our dataset as suggested in (Benton et al., 2017).
|
2309.16752 | Synchrotron Signatures of Cosmic Ray Transport Physics in Galaxies | Cosmic rays (CRs) may drive outflows and alter the phase structure of the
circumgalactic medium, with potentially important implications on galaxy
formation. However, these effects ultimately depend on the dominant mode of
transport of CRs within and around galaxies, which remains highly uncertain. To
explore potential observable constraints on CR transport, we investigate a set
of cosmological FIRE-2 CR-MHD simulations of L$_{\ast}$ galaxies which evolve
CRs with transport models motivated by self-confinement (SC) and extrinsic
turbulence (ET) paradigms. To first order, the synchrotron properties diverge
between SC and ET models due to a CR physics driven hysteresis. SC models show
a higher tendency to undergo `ejective' feedback events due to a runaway
buildup of CR pressure in dense gas due to the behavior of SC transport
scalings at extremal CR energy densities. The corresponding CR wind-driven
hysteresis results in brighter, smoother, and more extended synchrotron
emission in SC runs relative to ET and constant diffusion runs. The differences
in synchrotron arise from different morphology, ISM gas and \textbf{B}
properties, potentially ruling out SC as the dominant mode of CR transport in
typical star-forming L$_{\ast}$ galaxies, and indicating the potential for
non-thermal radio continuum observations to constrain CR transport physics. | Sam B. Ponnada, Iryna S. Butsky, Raphael Skalidis, Philip F. Hopkins, Georgia V. Panopoulou, Cameron Hummels, Dušan Kereš, Eliot Quataert, Claude-André Faucher-Giguère, Kung-Yi Su | 2023-09-28T18:00:01Z | http://arxiv.org/abs/2309.16752v2 | # Synchrotron Signatures of Cosmic Ray Transport Physics in Galaxies
###### Abstract
Cosmic rays (CRs) may drive outflows and alter the phase structure of the circumgalactic medium, with potentially important implications on galaxy formation. However, these effects ultimately depend on the dominant mode of transport of CRs within and around galaxies, which remains highly uncertain. To explore potential observable constraints on CR transport, we investigate a set of cosmological FIRE-2 CR-MHD simulations of L\({}_{*}\) galaxies which evolve CRs with transport models motivated by self-confinement (SC) and extrinsic turbulence (ET) paradigms. To first order, the synchrotron properties diverge between SC and ET models due to a CR physics driven hysteresis. SC models show a higher tendency to undergo 'ejective' feedback events due to a runaway buildup of CR pressure in dense gas due to the behavior of SC transport scalings at extremal CR energy densities. The corresponding CR wind-driven hysteresis results in brighter, smoother, and more extended synchrotron emission in SC runs relative to ET and constant diffusion runs. The differences in synchrotron arise from different morphology, ISM gas and **B** properties, potentially ruling out SC as the dominant mode of CR transport in typical star-forming L\({}_{*}\) galaxies, and indicating the potential for non-thermal radio continuum observations to constrain CR transport physics.
keywords: ISM: cosmic rays - ISM: magnetic fields - galaxies: formation - methods: numerical
## 1 Introduction
Relativistic charged particles, or cosmic rays (CRs), are ubiquitous in the Universe. Injected and accelerated at supernovae (SNe), stellar winds, and associated shocks fronts, CRs are known to be a considerable component of the Milky Way (MW) interstellar medium (ISM) (Boulares and Cox, 1990; Bell, 1978) and are observed in other L\({}_{*}\) galaxies via their \(\gamma\)-ray and non-thermal synchrotron radiation (Lacki et al., 2011; Tang et al., 2014).
In the past decade, the importance of CRs as a source of feedback in galaxies has come to be appreciated (for recent reviews, see Owen et al., 2023; Ruszkowski and Pfrommer, 2023). A host of theoretical studies employing varied numerical and physical prescriptions have established that CRs can play an important role in driving and altering the structure of winds (Booth et al., 2013; Girichidis et al., 2016; Huang and Davis, 2022; Huang et al., 2022; Thomas et al., 2023; Modak et al., 2023) and providing a potentially key source of non-thermal pressure support in the circum-galactic medium (CGM) (Butsky and Quinn, 2018; Chan et al., 2019; Buck et al., 2020; Hopkins et al., 2020; Farcy et al., 2022).
These effects can manifestly change the star formation histories of L\({}_{*}\) galaxies by preventing cool gas from precipitating onto the disk, altering the dynamics of gas in the tenuous inner CGM (Butsky et al., 2022) or 'disk-halo interface' (Chan et al., 2021) with potential implications on the amplification of magnetic fields (Ponnada et al., 2022) as well as the phase structure and ionization state of halo gas (Ji et al., 2020; Butsky et al., 2020; Tsung et al., 2023).
However, a major caveat remains that all of the aforementioned effects depend sensitively on the dominant mode of transport of CRs through the ISM and into the CGM, which is highly uncertain with elusive observational constraints (Hopkins et al., 2021). An understanding of CR transport is thus crucial to contextualize the importance of CRs for galaxy formation and evolution, as CR effects in the ISM and CGM are heavily dependent on the macroscopic transport speed, often parameterized through the diffusion coefficient \(\kappa\) (more specifically, \(\kappa_{\parallel}\)), or streaming speed v\({}_{\rm st}\).
The transport of CRs on \(\sim\)kpc-Mpc galactic scales is fundamentally tied to the scattering of CRs on orders-of-magnitude smaller gyro-resonant scales (\(\sim\) 0.1 AU for \(\sim\)GeV CRs). Thus, there has been increasing theoretical interest in understanding the macro-physical transport properties of CRs motivated by models of plasma-scale CR transport (Jokipii, 1966; Skilling, 1975) and how their predicted observables compare to observations (Hopkins et al., 2021, 2022; Kempski and Quataert, 2022; Butsky et al., 2023).
Despite some constraining power of existing observations, there is a dire need for further observational comparison to narrow the broad theoretical parameter space, which radio-continuum synchrotron observations may provide. In this Letter, we forward-model synchrotron emission from cosmological, zoom-in simulations of galaxy formation including CRs with different physically-motivated CR transport models from the Feedback in Realistic Environments (FIRE) suite1(Hopkins et al., 2021, 2021) and explore the physical basis for corresponding observable differences which emerge owing to CR physics. In Section 2, we briefly describe the simulations and our methods. Then, we present our results for models with varied CR transport physics in Section 3. Lastly, we discuss our conclusions in Section 4.
Footnote 1: [https://fire.northwestern.edu/](https://fire.northwestern.edu/)
## 2 Simulations and Methods
In this study, we utilize a subset of the simulations presented in (Hopkins et al., 2021, 2021) which evolve a 'single-bin' of 1-10 GeV CRs and utilize FIRE-2 (Hopkins et al., 2018) physics. We summarize the most pertinent aspects here, but refer the reader to the aforementioned papers for a more in-depth discussion of numerical details.
The simulations are all fully cosmological, magnetohydrodynamic (Hopkins and Raines, 2016; Hopkins, 2016) simulations of galaxy formation which include baryons and dark matter, fully anisotropic Spitzer-Braginskii conduction and viscosity (Hopkins, 2017) at a Lagrangian mass resolution of 56000 M\({}_{\odot}\). Prescriptions for explicit stellar feedback and gas cooling (for T \(\sim\) 10-10\({}^{10}\) K) follow (Hopkins et al., 2018); stars form in dense (n \(>\) 1000 cm\({}^{-3}\)), self-shielded, Jeans unstable gas with multi-band radiation, mass-loss, and explosive feedback from Types Ia and II SNe (evolved self-consistently following stellar evolution models) coupled to gas.
Cosmic rays are injected from SNe and OB/WR stellar winds with an energy efficiency of \(\epsilon_{\rm CR}=0.1\) of the initial ejecta kinetic energy. In these 'single-bin' simulations, we solely evolve the \(\sim\)1-10 GeV CR energy density (\(e_{\rm CR}\)), or equivalently a constant spectral distribution, as a relativistic fluid with \(\gamma_{\rm CR}=4/3\). The CR dynamics are coupled to the gas and evolve self-consistently, with transport coupled to magnetic field lines according to the CR transport equations and loss terms (collisional, streaming) computed in-code (again, see details in Hopkins et al., 2021).
These simulations invoke scalings for \(\nu\) with various plasma properties motivated by micro-physical scenarios of CR scattering. One such model class includes "extrinsic turbulence" (ET) scenarios (Jokipii, 1966), where CRs are scattered off of gyro-resonant fluctuations in \(\mathbf{B}\) on scales of order the CR gyro-radius that arise from a turbulent cascade down to those (small) scales. Model variants in this general class vary widely (as shown in Hopkins et al., 2021) according to uncertainties in the shape of the turbulent cascade at small scales, which turbulent modes are of primary importance for scattering on these scales, the importance of certain damping terms, and geometric considerations of the (an)isotropy of said turbulent modes. But broadly speaking, the assumption for our purposes is that the scattering rate \(\nu\) varies with the local Alfven scale (\(\ell_{\rm A}\)) and Alfven Mach number (\(\mathcal{M}_{\rm A}\)) of turbulence on _resolved_ simulation scales as \(\nu\propto\mathcal{M}_{\rm A}^{2}/\ell_{\rm A}\). The normalization of \(\nu\) for these models at \(\sim\)1 GeV is fitted by Hopkins et al. (2021) to the Voyager, AMS-02, and Fermi data.
The second primary class of models are "self-confinement" scenarios (Skilling, 1975), in which CRs excite Alfven waves as they stream down their pressure gradients, which dominates the generation of gyro-resonant fluctuations in \(\mathbf{B}\) which subsequently scatter CRs. The CR scattering is determined by the balance of the growth and damping of these gyro-resonant Alfven waves, and so model variants within this class are sensitive to the choice of Alfven speed, assumptions regarding the wave damping and growth terms, and uncertainties in the turbulent dissipation timescales. The key scaling here is \(\nu\propto\left(\frac{e_{\rm CR}}{e_{\rm B}}\right)\left(\frac{v_{\rm A}\,c}{\ell_{\rm CR}\,r_{\rm L}\,\Gamma}\right)\) in terms of the magnetic and CR energy densities \(e_{\rm B}\), \(e_{\rm CR}\); Alfven speed \(v_{\rm A}\); gradient scale length \(\ell_{\rm CR}\); gyro radius \(r_{\rm L}\); and plasma damping terms \(\Gamma\). These are again re-normalized in Hopkins et al. (2021) to fit the aforementioned \(\sim\)1-10 GeV observations.
The subset of model variants from Hopkins et al. (2021) explored here was shown to reasonably reproduce observables of \(\gamma\)-ray emission, effective isotropic diffusivities, and cosmic ray energy densities at the "Solar circle", though we will also describe results for simulations which were not consistent with the above constraints to illustrate qualitative differences tying the physics of the model class to the synchrotron properties.
We also compare these model variants to a FIRE-2 simulation that uses a spatially and temporally constant scattering rate (hereafter called the 'constant diffusivity' or CD run) presented in Hopkins et al. (2020), and whose magnetic field properties were detailed extensively in Ponnada et al. (2022). This run's constant parallel diffusivity is \(\kappa_{\parallel}=3\times10^{29}\) cm\({}^{2}\)/s, which was chosen to be consistent with the aforementioned constraints (Chan et al., 2019).
To generate our synchrotron predictions, we follow the procedure outlined in Ponnada et al. (2023), with the caveat that as these are 'single-bin' simulations, we assume a constant CR electron spectral shape from Bisschoff et al. (2019) and scale the spectrum by the ratio of each gas cell's self-consistently evolved \(e_{\rm CR}\) to the local ISM value. Consequently, the following analysis cannot capture the effects of potential spectral variation owing to the varied CR transport models and their coupling to gas properties.
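A heavily simplified sketch of this forward-modelling step is given below: each cell's CR electron normalization is rescaled by its \(e_{\rm CR}\) relative to a local-ISM reference value, and the emissivity is then evaluated with the standard power-law approximation \(j_{\nu}\propto N_{0}\,B_{\perp}^{(p+1)/2}\,\nu^{-(p-1)/2}\). The spectral index, normalization, and reference value below are placeholders, not the Bisschoff et al. (2019) spectrum or the full pipeline of Ponnada et al. (2023).

```python
import numpy as np

def synchrotron_emissivity(e_cr, B_perp, nu, e_cr_ism=1.0, p=2.7, norm=1.0):
    """Relative synchrotron emissivity per cell (arbitrary units).

    The CR-electron normalization is scaled by e_cr / e_cr_ism (the single-bin
    rescaling described in the text); the B and nu dependence follows the
    standard power-law result  j_nu ~ N0 * B_perp**((p+1)/2) * nu**(-(p-1)/2).
    """
    N0 = norm * (e_cr / e_cr_ism)
    return N0 * B_perp ** ((p + 1) / 2.0) * nu ** (-(p - 1) / 2.0)

# Placeholder per-cell quantities (arbitrary units), for illustration only.
e_cr = np.array([0.5, 1.0, 4.0])        # CR energy density per cell
B_perp = np.array([1e-6, 3e-6, 5e-6])   # perpendicular magnetic field strength
j_nu = synchrotron_emissivity(e_cr, B_perp, nu=0.33e9)
print(j_nu / j_nu.max())                # relative emissivities at 0.33 GHz
```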
## 3 Synchrotron Emission and the Physics of Cosmic Ray Transport
We examine the synchrotron emission and magnetic field structure from two representative model variants in the ET and SC model classes in Figure 1 and characterize key differences in the properties of the gas giving rise to the emission.
There appears to be a dichotomy, on average, in the physical morphologies of the galaxies in the two model classes. ET runs exhibit more typical spiral structure and SC runs have a more central bulge-dominated, lenticular-like appearance. The SC runs tend to show brighter, smoother, and more extended emission and have more ordered magnetic field structure relative to the ET runs; ET runs look qualitatively similar to the constant diffusivity run, with brighter emission coincident with the spiral arms and neutral gas structures in the galactic center. The physical differences underpinning the visual differences between the ET and SC runs become clear in the intensity weighted histograms (Figure 1, bottom panels). Figure 1 shows that the extended emission in the ET runs is primarily arising from the denser cool and warm neutral gas while the SC runs have emission mostly arising from warmer and more diffuse gas.
In Figure 2, we examine these differences more quantitatively with radial profiles of the forward-modeled synchrotron emission for all the CR physics model variants simulated in Hopkins et al. (2021) that met their reasonable observational \(\gamma\)-ray and \(\epsilon_{\rm CR}\) constraints. We see significant variation in the profiles depending on CR transport
physics. We see a separation between the ET and SC model variants: SC runs typically exhibit brighter emission averaged at a given radius by a factor of \(\sim\)3-10 relative to ET runs, despite brighter clumped peaks in the spiral arms of ET runs. The SC runs also exhibit smoother emission that falls off more gradually with radius relative to ET and constant diffusivity runs. We stress that the correlation is _not_ one-to-one; we can see many earlier (higher-redshift) snapshots where the SC models look more like ET. And some simulations with very low constant diffusivity (2 dex lower than observationally allowed) look similar to the SC runs. We discuss this below.
While the radial profiles for the SC runs appear to be qualitatively more similar to a couple of the known observational profiles (Basu & Roy, 2013; Beck, 2015), the apparent morphological features of the galaxies look markedly different. We defer a comprehensive observational comparison to future work using spectrally-resolved runs. The variation in the synchrotron profiles between classes of CR transport models indicate the potential for the comparison of larger samples of spatially resolved synchrotron images to models predictions to constrain deeply uncertain CR transport physics.
The shape, normalization, and scatter in the profiles is a function of the phase of the ISM dominating the galaxy. The smoothness of the SC profiles is induced by the emission arising mostly from the warm neutral/warm ionized media (WNM/WIM), while on the other hand, the synchrotron intensity profiles of the ET and CD runs are dominated by emission coming from the WNM and denser cold neutral medium (CNM). This key physical difference appears to be driven by differences in the CR transport physics between the SC and ET models, as we will describe in the next section.
### A Cosmic Ray Physics Driven Hysteresis
The striking differences between the observables and properties of the CD, ET and SC models boil down to some crucial differences in the physics of CR transport. One of the main features of SC models is the (general) scaling of the scattering rate (see Section 2) as \(\nu\propto e_{\rm CR}\), i.e., the effective/emergent diffusion coefficient is inversely proportional to \(e_{CR}\) (\(\kappa_{\parallel}\propto e_{\rm CR}^{-1}\) in SC model variants, which is the defining characteristic of these types of models; for exact scalings of the models variants considered, see Hopkins et al., 2021). This scaling is true when the linear damping term dominates the gyro-resonant Alfven waves, and the CR flux is in approximate local steady state. This inverse scaling of the diffusion coefficient with the CR energy density can lead to scenarios in which regions of high \(e_{\rm CR}\) are prone to more efficient trapping of CRs. This trapping
Figure 1: _Visualisations of the synchrotron emission at 0.33 GHz; and intensity-weighted phase diagrams for FIRE-2 simulations of \(\blacksquare\)121 with varied CR transport physics at \(z=0\)._**Row 1:** Specific intensity maps with superimposed lines showing the orientation of the mass-averaged components of the magnetic field. A model variant with spatially and temporally constant \(\kappa_{\parallel}\) \(\times\) \(2\sigma\) is shown on the left, a variant within the ET class of models (‘Alfvén-Max’) is shown in the middle, and a SC model (‘feas-50’) on the right. CD and ET models generally exhibit more turbulent structure in the magnetic fields, weaker emission, and more variation in brightness contrast to highly ordered **B** and brighter and smoother emission in the SC models. **Row 2:** Intensity-weighted histograms for \(2<R\)/kpc \(<10\) and \(|z|<3\) kpc for the CD, ET and SC runs above. We exclude the central 2 kpc in order to characterize the extended emission properties rather than the bright central cores. In SC models, the synchrotron emission primarily arises from the WIM/WNM compared to the CNM/WNM dominated scenario in the CD and ET runs.
of CRs then leads to the limit of increasing \(\rm e_{CR}\), therefore increasing \(\nu\) and so on until \(\nu\rightarrow\infty\), and the CRs are trapped to move strictly with Alfven wave packets in the gas. This means a large CR pressure has built up and been "trapped" in the dense ISM gas. This build-up of CR pressure eventually blows apart ISM gas, and thus the galaxy is largely filled with warm/hot and diffuse phases, with dense, magnetized, CR-laden gas spread via these outflows into a much larger, smoother distribution. In contrast, regions of high \(\rm e_{CR}\) in ET runs would rapidly diffuse/escape, and due to high \(\rm e_{CR}\) compressive modes can be effectively damped, even further "de-confining" CRs locally.
This difference in the behavior of CRs especially at high \(\rm e_{CR}\) seems to underpin a CR physics driven hysteresis between the SC model variants and the rest. In SC runs, at \(z=0\) we typically see a warmer and more diffuse phase structure, lower gas surface densities outside R \(\sim\)4 kpc, stronger and more ordered **B** at a given \(n_{\rm gas}\) and at a given radius, and a steeper \(\rm e_{CR}\) - \(n_{\rm gas}\) relation. These differences primarily appear to arise after a non-linear feedback event owing to the SC-runaway in which CRs expel most of the cool and neutral gas outside of R \(\sim\)4 kpc. At the earlier snapshots this has not yet occurred; it is of course possible that no runaway occurs, but it happens eventually more often than not.
To see this in more detail, in Figure 3, we show PDFs of the vertical component of velocity (\(|v_{z}|\)) weighted by \(\rm u_{B}\) and \(\rm e_{CR}\) for two snapshots \(\sim\)820 Myr apart of the SC run 'fcas-50' at displacements of 0.5-3 kpc from the disk mid-plane. The later snapshot has clear signatures of a feedback event, with the \(\rm e_{CR}\)-weighted velocity PDF shifting to having many gas cells with \(|v_{z}|>\) 100 km/s, and the magnetic energy density-weighted PDF shifting similarly, though with lower magnitude. The presence of these \(\rm e_{CR}\)-loaded winds corresponds directly with a transition in these SC runs away from morphological spirals with gas distributions, ISM phase structure, and magnetic field properties relatively similar to those of the ET and CD runs.
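The weighted velocity PDFs of Figure 3 can be constructed from per-cell quantities as sketched below; array names, the velocity floor, and the binning are illustrative assumptions:

```python
import numpy as np

def weighted_vz_pdf(v_z, weights, bins=50):
    """PDF of log10|v_z| weighted by a per-cell energy density (u_B or e_CR)."""
    log_vz = np.log10(np.abs(v_z) + 1e-3)          # small floor avoids log(0)
    hist, edges = np.histogram(log_vz, bins=bins, weights=weights, density=True)
    return hist, 0.5 * (edges[:-1] + edges[1:])

# Placeholder per-cell data for gas at 0.5-3 kpc from the mid-plane.
v_z = np.random.randn(10000) * 80.0                # vertical velocities [km/s]
e_cr = np.random.rand(10000)                        # CR energy densities (arbitrary units)
u_B = np.random.rand(10000)                         # magnetic energy densities
pdf_cr, centers = weighted_vz_pdf(v_z, e_cr)
pdf_B, _ = weighted_vz_pdf(v_z, u_B)
print(np.trapz(pdf_cr, centers))                    # ~1 by construction (density=True)
```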
While we show only the velocity PDFs for 'fcas-50', this general picture of \(\rm e_{CR}\)-loaded winds, which drive substantial changes in the galaxy properties and synchrotron observables appears to emerge for the other SC models explored in this paper as well. As further confirmation of this process, we note that we see a similar effect of CR and \(\rm u_{B}\)-loaded winds from "trapped" CRs in runs not shown here but run in Hopkins et al. (2021) where they adopted a constant but extremely large scattering rate (very low diffusivity, factors \(>\) 100 lower than the observationally-allowed values). As noted by those authors, those particular runs were strongly ruled out by CR spectra, primary-to-secondary ratios, and \(\gamma\)-ray emission in the Galaxy, hence our not comparing them further here. But, by definition, they produce efficient CR trapping, so it should not be surprising that they can produce a similar "blowout" event to the SC runs here. This demonstrates a new prediction for variations of CR transport models in the SC regime: if CR transport at 1-10 GeV is dominated by modulation from self-excited, gyro-resonant Alfven waves, galaxies may be more conducive to 'ejective feedback' scenarios through CR-driven winds.
Figure 3: _PDFs of the gas velocity \(\log_{10}(|v_{z}|)\) weighted by \(\rm u_{B}\) (pink) and \(\rm e_{CR}\) (black) at two snapshots 820 Myr apart (filled and unfilled) for R \(<\) 14 kpc at heights from the mid-plane of 0.5-3 kpc for a SC run (‘fcas-50’)._ Runs with SC model variants for CR transport appear to be more likely to undergo extreme feedback scenarios in which a build-up of \(\rm e_{CR}\) runs away until expelling highly magnetized and \(\rm e_{CR}\)-loaded winds from the galaxy. These winds carry away cool, neutral gas and transform the phase structure and corresponding observable properties of the synchrotron emission.
Figure 2: _Azimuthally averaged, face-on radial profiles of synchrotron specific intensity for FIRE-2 simulations of m12i with varied CR transport physics at \(z=0\). Lines show simulations with ET (dot-dashed) and SC (dashed) model variants of CR transport. Shaded regions show the 5-95 percent range at a given radial bin. Our predictions show significant differences in the shape and normalization of the synchrotron emission profiles, with pathologically different behaviors exhibited between model classes. SC models tend to show brighter, smoother, and more extended profiles in comparison to ET and CD models. The differences in the profiles arise from qualitative differences in the phase structure, magnetic field properties, and gas distribution modulated by a CR-physics driven hysteresis._
## 4 Discussion and Conclusions
In this work, we explore the effects of different physically-motivated models for the CR scattering rate \(\nu\) which allow it to vary dynamically as function of local plasma properties, heuristically motivated by self-confinement (SC) and extrinsic turbulence (ET) models, in "single-bin" simulations (not evolving the full CR spectrum) calibrated to give reasonable mean \(\langle\nu\rangle\) at \(\sim\)GeV energies in Hopkins et al. (2021).
Simulated galaxies with SC models of CR transport tend to have brighter, more spatially extended and smoother synchrotron emission than ET and CD models. The brighter emission in the SC models corresponds with a relatively featureless, warm-hot phase dominated ISM, an elevated **B**-n\({}_{\rm gas}\) relation, and a more ordered and mean-field dominated **B**. This apparent hysteresis seems to be CR physics driven, as SC runs have the potential for a runaway at high e\({}_{\rm CR}\) which leads to CR energy concentrating until cold and dense gas is blown out via e\({}_{\rm CR}\)- and u\({}_{\rm B}\)-loaded winds, resulting in the stark morphological and physical differences between SC and ET/CD runs.
Already, the sheer lack of detailed cold, neutral phase structure diverges from typical \(\sim\) L\({}_{*}\) spiral galaxies, which may indicate that SC is not the dominant mode of CR transport in these types of galaxies, though it may operate more so within galaxies with a lenticular-like morphology with a more featureless gas/dust distribution. These differences may be probed in a spatially resolved manner with larger samples with future radio instruments like the DSA-2000 (Hallinan et al., 2019), ngVLA (Murphy et al., 2018), and Square Kilometer Array (Dewdney et al., 2009) and with already existing and future HI 21 cm surveys (Walter et al., 2008).
We emphasize also that the differences seen in the model variations here are highly nonlinear, and do not indicate that SC models of CR transport will _always_ exhibit these differences relative to ET/CD models. Rather, the predictions made here are for SC transport models which have undergone the 'SC runaway,' and simulations which do not undergo this nonlinear process do not exhibit the same characteristic synchrotron properties. And we stress that, as shown in more detail in Hopkins et al. (2021, 2022), qualitative and order-of-magnitude uncertainties remain in first-principles models for the CR scattering rate \(\nu\) and indeed _no_ first-principles model has been demonstrated to predict the correct CR spectra and primary-to-secondary ratios at \(\sim\)MeV-TeV energies (Hopkins et al., 2022).
And although the differences explored here appear to be driven by the CR physics, there are several other interrelated factors that may be important. Notably, the non-linear interplay of our stellar feedback model, the coupling of CR feedback, and the physics of gas cooling altogether influence the corresponding gas properties and are not cleanly separable i.e., these are the predictions of these CR transport models _given_ the FIRE-2 feedback and cooling physics and numerics. Changing the feedback and cooling prescriptions might lead to different results for the effect of the CR transport models on the synchrotron emission properties of simulated galaxies. The exact timing and prominence of these "blowout" events may also potentially depend on the gas resolution, which we will increase in future studies to \(\sim\) 7000 M\({}_{\odot}\), though we have checked the same CR transport variants for an intermediate-mass simulated galaxy (m11f in Hopkins et al. (2021)), factor of \(\sim\)2 lower in halo mass than the simulations presented here) at a higher Lagrangian mass resolution of 12000 M\({}_{\odot}\) and found similar results. The dynamical interaction of CRs again highlights the need for explicit evolution of CRs in galaxy formation simulations, as tracer particle or post-processing approaches to CR transport, for instance, popular methods like those of GALPROP (Strong & Moskalenko, 1998) would by construction fail to capture these important effects.
Future work will include the exploration of more FIRE-3 simulations which vary CR transport and explicitly evolve CR(e) spectra beyond the "single-bin" simulations explored in this work. These FIRE-3 simulations will allow for the generation of more robust synchrotron predictions (i.e., spectral variation) that may generate new predictions for conducting observational tests of CR transport models. In a similar vein, multi-wavelength analysis of varied CR transport models, for example with spatial cross-correlations, may prove fruitful in generating more predictive constraints that can be tested against observations.
## Acknowledgements
We wish to recognize and acknowledge the past and present Gabrielino-Tongva people and their Indigenous lands upon which this research was conducted. Additionally, we thank the staff at our institutes, without whose endless efforts this work would not be possible during the ongoing pandemic. Support for SP and PFH was provided by NSF Research Grants 1911233, 20009234, 2108318, NSF CAREER grant 1455342, NASA grants 80NSSC18K0562, HST-AR-15800. GVP acknowledges support by NASA through the NASA Hubble Fellowship grant #HST-HF2-51444.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. CBH is supported by NSF grant AAG-1911233 and NASA grants HST-AR-15800, HST-AR-16633, and HST-GO-16703. Numerical calculations were run on the Caltech compute cluster "Wheeler," allocation AST21010 supported by the NSF and TACC, and NASA HEC SMD-16-7592. The Flatiron Institute is supported by the Simons Foundation. CAFG was supported by NSF through grants AST-2108230 and CAREER award AST-1652522; by NASA through grants 17-ATP17-0067 and 21-ATP21-0036; by STScI through grant HST-GO-16730.016-A; by CXO through grant TM2-23005X; and by the Research Corporation for Science Advancement through a Cottrell Scholar Award. ISB was supported by the DuBridge Postdoctoral Fellowship at Caltech. DK was supported by NSF grant AST2108314. KS acknowledges support from the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation. This work was supported by NSF grant AST-2109127.
## Data Availability
The data supporting the plots within this article are available on reasonable request to the corresponding author. A public version of the GIZMO code is available at [http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html](http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html). FIRE-2 simulations are publicly available (Wetzel et al., 2022) at [http://flathub.flatironinstitute.org/fire](http://flathub.flatironinstitute.org/fire), though simulations including the physics of MHD and cosmic rays like those analyzed in this study are not yet publicly available. Additional data, including initial conditions and derived data products, are available at [https://fire.northwestern.edu/data/](https://fire.northwestern.edu/data/). |
2302.14237 | Towards Surgical Context Inference and Translation to Gestures | Manual labeling of gestures in robot-assisted surgery is labor intensive,
prone to errors, and requires expertise or training. We propose a method for
automated and explainable generation of gesture transcripts that leverages the
abundance of data for image segmentation. Surgical context is detected using
segmentation masks by examining the distances and intersections between the
tools and objects. Next, context labels are translated into gesture transcripts
using knowledge-based Finite State Machine (FSM) and data-driven Long Short
Term Memory (LSTM) models. We evaluate the performance of each stage of our
method by comparing the results with the ground truth segmentation masks, the
consensus context labels, and the gesture labels in the JIGSAWS dataset. Our
results show that our segmentation models achieve state-of-the-art performance
in recognizing needle and thread in Suturing and we can automatically detect
important surgical states with high agreement with crowd-sourced labels (e.g.,
contact between graspers and objects in Suturing). We also find that the FSM
models are more robust to poor segmentation and labeling performance than
LSTMs. Our proposed method can significantly shorten the gesture labeling
process (~2.8 times). | Kay Hutchinson, Zongyu Li, Ian Reyes, Homa Alemzadeh | 2023-02-28T01:39:36Z | http://arxiv.org/abs/2302.14237v2 | # Towards Surgical Context Inference and Translation to Gestures
###### Abstract
Manual labeling of gestures in robot-assisted surgery is labor intensive, prone to errors, and requires expertise or training. We propose a method for automated and explainable generation of gesture transcripts that leverages the abundance of data for image segmentation. Surgical context is detected using segmentation masks by examining the distances and intersections between the tools and objects. Next, context labels are translated into gesture transcripts using knowledge-based Finite State Machine (FSM) and data-driven Long Short Term Memory (LSTM) models. We evaluate the performance of each stage of our method by comparing the results with the ground truth segmentation masks, the consensus context labels, and the gesture labels in the JIGSAWS dataset. Our results show that our segmentation models achieve state-of-the-art performance in recognizing needle and thread in Suturing and we can automatically detect important surgical states with high agreement with crowd-sourced labels (e.g., contact between graspers and objects in Suturing). We also find that the FSM models are more robust to poor segmentation and labeling performance than LSTMs. Our proposed method can significantly shorten the gesture labeling process (\(\sim\)2.8 times).
## I Introduction
Surgical robots for minimally invasive surgery (MIS) enable surgeons to operate with greater flexibility and precision, thus reducing incision size, recovery time, and scarring. Their widespread adoption into surgical specialties such as urology, gynecology, and general surgery has opened up new fields of interdisciplinary research. Gesture segmentation and classification has been one of those research areas where both supervised [1, 2, 3, 4, 5, 6] and unsupervised learning [7, 8, 9, 10, 11] approaches have been developed for gesture recognition. However, these approaches either rely on black-box deep learning models that are hard to verify and need extensive training data or do not capture the human interpretable contextual information of the gestures.
The JIGSAWS dataset [12] with its surgical gesture labels has been the foundation of many advancements in surgical gesture recognition [13], surgical process modeling [1], skill assessment [14, 15], error detection [16, 17], and autonomy [18]. However, unlike annotations for surgical instrument segmentation, annotations for surgical workflow such as gestures need guidance from surgeons [19]. Labeling using descriptive gesture definitions is tedious and subjective, leaving uncertainty as to exactly when gestures start and end, and can have annotation errors that can adversely impact machine learning models and analyses [13, 20]. Recent studies using the JIGSAWS dataset have found errors in \(\sim\)2-10% of the gesture labels [21, 20]. As emphasized in [13], larger labeled datasets using a common surgical language are needed to support collaboration and comparative analysis.
Some recent works have focused on finer-grained surgical actions such as action triplets [22, 23, 24] and motion primitives [25] based on the interactions between robotic tools and objects in the surgical environment. [25] presented a formal framework for modeling surgical tasks with a unified set of motion primitives that cause changes in surgical context captured from the physical environment. These motion primitives were shown to be generalizable across different surgical tasks and can be used to combine data from different datasets. [25] suggests a relation between context and existing gesture labels, but does not define direct relations between the two.
Furthermore, despite limited availability of datasets that include kinematic data from surgical robots, datasets for instrument and object segmentation in MIS procedures are plentiful and have been the subject of imaging competitions [26, 27]. We propose methods that leverage the abundance of data with image annotations for surgical instruments and important surgical objects to address the challenges of manual labeling and relate surgical context to gestures. Our goal is to develop an automated, independent, and explainable way of generating gesture transcripts based on video data that does not rely on expensive training data on gestures. Such a method would be easier to verify by humans/experts and can be used as the ground truth for evaluating the black-box gesture recognition models that directly detect gestures from kinematic data.
The main contributions of the paper are as follows:
* We present a method for the automated inference of surgical context based on detecting important surgical tool and object interactions using image segmentation.
* We propose two methods for automated translation of context labels to gesture labels based on a knowledge-based finite state machine model and a data-driven machine learning model.
* We use the JIGSAWS dataset as a case study to demonstrate that our proposed approach results in shorter labeling time using the segmentation masks.
## II Preliminaries
### _Surgical Process Modeling_
Surgical process modeling [28] defines how surgical procedures can be decomposed into steps, tasks, and gestures
as shown in Figure 1(a). Gestures are defined as actions with semantic meaning for a specific intent and involve particular tools and objects. Thus, they explicitly include the surgical context, capturing important states and interactions in the physical environment. The formal framework in [25] extended this hierarchy to further include the finer-grained motion primitives (or verbs in action triplets [24, 29]) as the atomic units of surgical activity (e.g., grasp, push) that lead to changes in context, without explicitly including the semantics of physical context (e.g. needle through tissue).
### _Surgical Context_
Surgical context is defined as a set of state variables describing the status of a task and interactions among the surgical instruments, objects, and anatomical structures in the physical environment [30, 16, 25]. As shown in Figure 1(b), the first four state variables represent objects held by or in contact with the surgical instruments and are the general state variables for all tasks. The fifth state variable is task-specific and represents task progress; i.e., the needle's relation to the fabric or ring in the Suturing and Needle Passing tasks, or the knot's status in the Knot Tying task. Figure 1(b) shows the general and task-specific state variables with their possible values in the Suturing and Knot Tying tasks of the JIGSAWS dataset. In Figure 1(b), the example context of \(00202\) in the Suturing task means that the right grasper is holding the needle and the needle is in the fabric.
The COMPASS dataset [25] has context labels for all three tasks in the JIGSAWS dataset based on consensus among three annotators. But, it does not provide translations from context or motion primitives to gestures which limits comparisons to existing works. Manual labeling was needed to create the context labels which is still subjective and time consuming, despite achieving near-perfect agreement with expert surgeons. However, [25] showed that high quality surgical workflow labels can be generated by examining state variables that comprise the context. With recent improvements in surgical scene segmentation, we show that context can be detected automatically from video data.
### _Surgical Scene Segmentation_
To advance analysis on video data and provide insights on surgeon performance, the 2017 and 2018 EndoVis workshops at MICCAI introduced a challenge to perform robotic instrument and scene segmentation using images from a da Vinci Xi robot in porcine procedures [26]. Various models have been proposed in the challenge, but segmenting all objects in a surgical scene has been challenging. The DeepLab V3+ model [31] achieved the best overall performance in [27] (see Table I). Other DeepLab models [32, 31] have also shown promise in surgical tool and object segmentation.
Most existing works on robot instrument or surgical scene segmentation were based on real surgery videos using publicly available datasets such as MICCAI EndoVis 17 [26], MICCAI EndoVis 18 [27] and Cata7 [33]. Popular frameworks include UNet [34], TernausNet [35], and LinkNet [36]. Surgical scene segmentation in the dry-lab settings with the JIGSAWS dataset was done in [37] and [38], but we go further by segmenting additional objects and using tool and object segmentation for context inference. Although surgical scene segmentation and instrument tracking can be used for skill assessment [39], they have not yet been used for automatic context and gesture inference. Hence, our approach could be used as an independent source to evaluate context or gesture segmentation models trained using kinematic data.
Further, we aim to integrate data-driven segmentation with knowledge-driven context inference and context to gesture translation to perform gesture recognition. Compared to the above deep learning approaches for gesture recognition, this approach enables improvements by integrating human input. Our method also benefits from the availability of large open source image segmentation datasets that provide pretrained weights for segmentation models and could also improve segmentation performance via fine-tuning on smaller datasets.
Fig. 1: (a) Surgical hierarchy and relation between gestures and context in a suturing task. (b) State variables and object encodings that comprise context for the JIGSAWS tasks (see Figure 2). In the Suturing and Needle Passing tasks, a needle is used to throw four sutures through the fabric and rings, respectively, while two knots are tied in the Knot Tying task.
## III Methods
This section presents our overall pipeline for the automated inference of surgical context and translation to gesture labels based on the video data as depicted in Figure 2. Surgical context can be inferred from the video or kinematic data by estimating the values of the state variables. In this work, we specifically focus on context inference solely based on video data as an independent method to verify gestures predicted from kinematic data or when kinematic data is not available. Our methods are presented for a case study of the JIGSAWS dataset [40] using the context labels from [25], but are applicable to other datasets and sets of gestures.
### _Tool and Object Segmentation_
The detection of general and task-specific state variables for surgical context requires identifying the status and relative distance of the instruments and the objects of interest in a task. As shown in Figure 1(b) for the JIGSAWS tasks, these include the left and right graspers, needle, thread, and rings.
We modified the Deeplab V3 model [32] to perform binary segmentation that classifies the background vs. one object class in the video frames of a task trial. Specifically, we train separate binary classification models to classify background vs. left grasper, background vs. right grasper, background vs. needle, background vs. thread, and background vs. ring. The input to each model is a matrix \(A_{H\times W\times 3}\) representing an RGB image of a video frame with Height (H) and Width (W). The output is a binary matrix \(M_{H\times W}\) representing the segmentation mask with 0 for the background class, and 1 for the segmented object class. We need to infer the intersections between objects for generating context, which cannot be done with the existing multi-class segmentation models that classify each pixel to a single object class. Binary segmentation models for each object class enable the analysis of intersections and overlaps among separate object masks to infer interactions between objects.
For each object, we combine the data from all tasks to train a single model to classify that object in all tasks. We leveraged transfer learning by initializing the model with a ResNet-50 [41] backbone pre-trained on the COCO dataset [42]. We obtained tool and object annotations for the JIGSAWS dataset and used a subset of 70 videos for fine-tuning the model. However, the test set for the whole pipeline was significantly limited since much of the data from JIGSAWS was needed to train the image segmentation models. We trained our models for up to 20 epochs using Adam optimization [43] with a learning rate of \(10^{-5}\).
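A minimal PyTorch sketch of this per-object binary segmentation setup is given below. Only the choices stated above (a COCO-pretrained ResNet-50 backbone, one foreground-vs-background head per object class, Adam with a learning rate of \(10^{-5}\), up to 20 epochs) are taken from the text; the data loader, batch size, and other details are omitted or hypothetical.

```python
# Sketch of the binary segmentation model and training loop (assumptions noted above).
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

def make_binary_deeplab():
    # ResNet-50 backbone with weights pre-trained by torchvision on a COCO subset.
    model = deeplabv3_resnet50(weights="DEFAULT")
    # Replace the final 1x1 convolution so the head emits a single logit per pixel
    # (background vs. one object class); one such model is trained per object.
    model.classifier[4] = nn.Conv2d(256, 1, kernel_size=1)
    model.aux_classifier = None
    return model

def train(model, loader, epochs=20, lr=1e-5, device="cuda"):
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, masks in loader:        # images: (B,3,H,W); masks: (B,1,H,W) in {0,1}
            images, masks = images.to(device), masks.to(device).float()
            logits = model(images)["out"]   # (B,1,H,W) per-pixel logits
            loss = loss_fn(logits, masks)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```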
### _Automated Context Inference_
The masks from the segmentation models provide us with information about the area and position of the instruments and objects which can enable state variable estimation at each frame. By calculating intersections and distances between the object masks in a given frame, we can detect interactions such as _contact_ and _hold_ as shown in Figure 1(b).
In the mask matrices \(M_{H\times W}\) generated by the segmentation models, each element \(m_{hw}\in\{0,1\}\) indicates if the pixel \((h,w)\) belongs to an object mask. We first perform a pre-processing step on \(M\) to eliminate the noise around masks such as the needles and threads. Contour extraction is done to help eliminate the rough edges of the masks and improve intersection detection. This step uses the OpenCV library [44] to iteratively construct contours around every element \(m_{hw}\in M\), thus reducing the input matrix to a list of points \(p\in C\subset M\) for each instrument class where \(C\) is the boundary of \(M\). Using simplified polygons instead of binary masks greatly reduces the time needed to calculate intersections and distances between objects for each frame. We experimentally determined that dropping polygons with areas under 15 pixel units squared and smoothing the polygons using the Ramer-Douglas-Peucker (RDP) algorithm [45, 46] results in better accuracy based on training set.
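A sketch of this pre-processing step, assuming OpenCV and Shapely as in the text, might look as follows; Shapely's `simplify()` implements Douglas-Peucker, and the tolerance value is a hypothetical stand-in for the paper's RDP setting.

```python
# Convert a binary mask into a list of cleaned, simplified polygons.
import cv2
import numpy as np
from shapely.geometry import Polygon

def mask_to_polygons(mask, min_area=15.0, tolerance=2.0):
    """mask: (H, W) array with values in {0, 1}."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for c in contours:
        if len(c) < 3:                      # need at least 3 points to form a polygon
            continue
        poly = Polygon(c.reshape(-1, 2))
        if not poly.is_valid or poly.area < min_area:
            continue                        # drop noisy fragments (< 15 px^2)
        polygons.append(poly.simplify(tolerance))  # Ramer-Douglas-Peucker smoothing
    return polygons
```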
Next, we detect overlaps between masks by taking a list of valid polygons and calculating a feature vector \(v\) of distances (\(D\)) and intersection areas (\(Inter\)) between pairs of input masks. The input polygons Left Grasper \((LG)\), Right Grasper \((RG)\), Thread \((T)\) are common for all tasks. Task-specific objects are the Needle \((N)\) appearing in Needle Passing and Suturing, the manually labeled Tissue Points \((Ts)\) representing the markings on the tissue where the needle makes contact in Suturing, and the Rings \(R\) in Needle Passing.
Fig. 2: Pipeline for automatic context inference based on segmentation of video data and context to gesture translation.
We define the distance functions \(D(I,J)\) and \(d(i,j)\) and the intersection function \(Inter(I,J)\) to, respectively, calculate the pixel distance between two object masks \(I\) and \(J\), the pixel distance between the individual polygons \(i_{1},j_{1},...\) that constitute an object mask, and the area of intersection between two object masks \(I\) and \(J\). For any object polygon \(I\) which is comprised of several polygon segments \(i_{1},i_{2},...,i_{n}\), the distance to any other object \(J\) can be calculated as: \(D(I,J)=\text{average}([d(i,j)\text{ for }i\in I\text{ and }j\in J])\). The intersection function \(Inter(I,J)\) is implemented using a geometric intersection algorithm from the Shapely [47] library. We also define the components \(I.x,I.y\) for an object I as the horizontal and vertical coordinates of the midpoint of its polygon \(I\), calculated as the average of every point in \(I\). In order to determine the Boolean function \((\alpha)\) for each grasper, if the distance between the manually labeled pixel coordinates of the grasper jaw ends was less than 18 pixels, then the grasper was closed (\(\neg\alpha\)), else it was open (\(\alpha\)).
\[\text{Left Hold}\begin{cases}2&\text{if }D(LG,N)<1\land\neg\alpha\\ 3&\text{if }Inter(LG,T)>0\land\neg\alpha\\ 0&\text{otherwise}\end{cases} \tag{1}\]
\[\text{Left Contact}\begin{cases}2&\text{if }D(LG,N)<1\land\alpha\\ 3&\text{if }Inter(LG,T)>0\land\alpha\\ 0&\text{otherwise}\end{cases} \tag{2}\]
\[\text{Needle}\begin{cases}2&\text{if}(Inter(Ts,N)>0\land N.x<Ts.x)\\ 1&\text{if}(Inter(Ts,N)=0\lor N.x\geq Ts.x)\land\\ &(D(RG,T)>1\lor D(LG,N)>1)\\ 0&\text{otherwise}\end{cases} \tag{3}\]
The feature vector \(v=<D(LG,N),Inter(LG,T),...>\) (see Figure 2) is then used to estimate the values of different state variables using a set of task-specific functions. An example set of functions is shown in Equations 1-3 for the state variables relating to the left robot arm and needle in Suturing task. A similar set of functions are used for the right arm. For example, if the distance between the left grasper and needle is less than one pixel (\(D(LG,N)<1\)) and the grasper is closed (\(\neg\alpha\)), then a value of 2 is estimated for the _Left Hold_ variable. Or the _Needle_ state is detected as touching (2) when the relative horizontal distance of the needle polygon \((N.x)\) is less than the average (midpoint) of the tissue points \((Ts.x)\) and these two objects intersect (\(Inter>0\)).
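The helper functions below sketch how these quantities and state values can be computed from the object polygons. The thresholds (1 pixel for the hold/contact distances, 18 pixels for the grasper-jaw opening) follow the text; the function names and the representation of each object as a list of Shapely polygons are our own.

```python
# Distance, intersection, and an example state-variable function (Equation 1 style).
from itertools import product
from shapely.ops import unary_union

def D(I, J):
    """Average pixel distance between the polygon pieces of objects I and J."""
    pairs = [i.distance(j) for i, j in product(I, J)]
    return sum(pairs) / len(pairs) if pairs else float("inf")

def Inter(I, J):
    """Area of intersection between objects I and J (unions of polygons)."""
    return unary_union(I).intersection(unary_union(J)).area

def grasper_open(jaw_end_a, jaw_end_b, threshold=18.0):
    """alpha in the text: the grasper counts as open iff its jaw ends are far apart."""
    dx, dy = jaw_end_a[0] - jaw_end_b[0], jaw_end_a[1] - jaw_end_b[1]
    return (dx * dx + dy * dy) ** 0.5 >= threshold

def left_hold(LG, N, T, alpha):
    """Equation 1: 2 = holding needle, 3 = holding thread, 0 = nothing."""
    if D(LG, N) < 1 and not alpha:
        return 2
    if Inter(LG, T) > 0 and not alpha:
        return 3
    return 0
```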
The input sample rate of the context to gesture translation was 3Hz, so the final estimated variables were downsampled from 30Hz to 3Hz using a rolling mode for each state variable with a window of 10 frames.
### _Context to Gesture Translation_
The last step in our pipeline translates the automatically generated context labels into gesture labels. The input to the translation model is a 2-dimensional time series matrix \(\chi_{State\times n}\), where \(State\) represents the 5 state variables describing the context (see Figure 1b) and \(n\) represents the total number of samples in the trial. We map each time step \(State_{t}\) to a corresponding gesture \(G_{i}\) in the JIGSAWS dataset. The translation output is a 1-dimensional time series \(Y_{n}\in\{\mathbb{G}\}\) with each time step mapped to a gesture. We present two approaches based on domain knowledge and data.
#### Iii-C1 Finite State Machine Model
Our first approach relies on a finite state machine (FSM) defined based on the knowledge of surgical tasks which directly relates context to gestures and is more explainable than deep learning models. The grammar graphs from [1] for each task were overlaid on top of the ideal context models from [25] so that each gesture could be mapped into the groups of contextual changes that happen as the result of executing the gesture (see Figure 3 for the Suturing task). For example, G2 (positioning needle) corresponds to a change from a '0' to a '1' in the fifth state variable. Or G4 (transferring needle from left to right) is the context sequence \(20000\to 20020\to 20200\to 02200\to 00200\) which means the needle is initially held in the left grasper, then touched and grasped by the right grasper, and released by the left grasper. In Figure 3, the G4 and G8 groupings overlap since G8 (orienting needle) is performed by passing the needle from the right to the left grasper and back to the right grasper while changing its orientation.
Given the context transcript of a trial, the FSM is evaluated for each context and a transition to the next gesture is detected if the input context is part of the next gesture. The FSM for each task was initialized in the 'Start' state since not all of the trials started with G1. Also, G11 was assumed to be last and so it was appended to the gestures following the last detected gesture. In addition, in the Suturing and Needle Passing tasks, G9 and G10 had low rates of occurrence and were not included in the final translation. This allowed us to focus only on state changes involving the needle and thus ignore grasps and touches of the thread and rings with the added benefit of simplifying the FSMs and limiting the total number of valid context changes.
We also consider gesture duration as a trigger for transitions between gestures. If the current gesture's duration exceeds a certain threshold based on the average duration of that gesture class, a transition to the next gesture is enforced. This is to address the cases where a gesture transition does not happen due to inaccuracies in context detection. For example, the segmentation models tend to have lower accuracy in detecting the needle and thread states, leading to not detecting transitions that are dependent on those states.
Fig. 3: Grouping and mapping of context to gestures in the grammar graph of the Suturing task. * denotes transitions due to duration limits as follows: G2\(>\)6.0 s \(\rightarrow\) G3, G3\(>\)11.1 s \(\rightarrow\) G6, G4\(>\)5.2 s \(\rightarrow\) G2, G6\(>\)6.1 s \(\rightarrow\) G4.
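A simplified sketch of such an FSM for the Suturing task is shown below. The gesture cycle and the duration limits follow Figure 3, and the G4 context sequence is taken from the text; the remaining per-gesture context sets are illustrative placeholders.

```python
# Knowledge-based context-to-gesture translation (simplified Suturing FSM).
ORDER = ("G2", "G3", "G6", "G4")                               # simplified gesture cycle
MAX_DURATION = {"G2": 6.0, "G3": 11.1, "G6": 6.1, "G4": 5.2}   # seconds (Fig. 3)
GESTURE_CONTEXTS = {                                           # hypothetical except G4
    "G2": {"00201"},                                           # positioning needle
    "G3": {"00202"},                                           # pushing needle through fabric
    "G6": {"30000", "33000"},                                  # pulling suture with left hand
    "G4": {"20000", "20020", "20200", "02200", "00200"},       # needle hand-off (from text)
}

def translate(contexts, hz=3.0):
    """contexts: sequence of 5-character context strings sampled at `hz`."""
    gestures, idx, duration = [], 0, 0.0
    for ctx in contexts:
        nxt = ORDER[(idx + 1) % len(ORDER)]
        if ctx in GESTURE_CONTEXTS[nxt]:                       # context evidence for next gesture
            idx, duration = (idx + 1) % len(ORDER), 0.0
        elif duration > MAX_DURATION[ORDER[idx]]:              # duration-forced transition
            idx, duration = (idx + 1) % len(ORDER), 0.0
        gestures.append(ORDER[idx])
        duration += 1.0 / hz
    return gestures
```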
#### Iii-A2 LSTM Model
Our second approach for translation of context to gesture transcripts relies on sequential deep learning methods to learn relationships in the data that are not captured by the FSM models. We trained an LSTM model to perform automated context to gesture translation for each task. We chose the LSTM model for its ability to learn temporal features. Specifically, we used a simple double layer LSTM network with 64 hidden units for the Suturing and Needle Passing tasks and 256 hidden units for the Knot Tying task. We used Adam optimization [43] and the cross entropy loss function to train the models. The hidden layers, number of hidden units and learning rates were determined by hyperparameter tuning. The final models were trained with the best model configurations and used to perform inference on the automatically generated context labels using the segmentation masks in the test set. Note that the LSTM model is a black box model and does not provide transparency like the FSM model in the previous section.
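A minimal PyTorch sketch of such a translation model is shown below; the hidden sizes follow the text, while the number of gesture classes, the learning rate, and the treatment of the categorical state variables as raw numeric inputs are simplifying assumptions.

```python
# Data-driven context-to-gesture translation: a two-layer LSTM with a linear head.
import torch
import torch.nn as nn

class ContextToGestureLSTM(nn.Module):
    def __init__(self, n_state_vars=5, hidden=64, n_gestures=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_state_vars, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_gestures)

    def forward(self, x):                   # x: (batch, time, 5) context values
        out, _ = self.lstm(x)
        return self.head(out)               # (batch, time, n_gestures) logits

model = ContextToGestureLSTM(hidden=64)     # 256 hidden units for Knot Tying
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # learning rate is a placeholder
criterion = nn.CrossEntropyLoss()
# One training step, given contexts (float tensor) and gestures (long tensor):
# logits = model(contexts)
# loss = criterion(logits.reshape(-1, logits.shape[-1]), gestures.reshape(-1))
```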
## IV Experimental Evaluation
### _Experimental Setup_
We use an 80/20 train/test split of the JIGSAWS dataset for evaluating our pipeline. The original videos are 30Hz and we obtained binary masks for the tools and objects at 2Hz which we then used to train/test the segmentation models. The LSTM networks are trained with the 3Hz context labels from [25]. We evaluate both the FSM and LSTM for context to gesture translation with the test set context labels.
The experiments were conducted on a PC with an Intel Core i7 CPU 3.60GHz, 32GB RAM, and an NVIDIA GeForce RTX 2080 Ti GPU running Ubuntu 18.04.2 LTS.
### _Metrics_
The following metrics were used to evaluate the pipeline.
**Accuracy:** Accuracy is the ratio of samples with correct labels divided by the total number of samples in a trial.
**Edit Score:** Edit score is calculated using Equation 4 from [2] where the normalized Levenshtein edit distance, \(edit(G,P)\), quantifies the number of insertions, deletions, and replacements needed to transform the sequence of predicted labels \(P\) to match the ground truth sequence of labels \(G\). This is then normalized by the maximum length of the sequences so that a higher edit score is better.
\[\text{Edit Score}=(1-\frac{edit(G,P)}{max(len(G),len(P))})\times 100 \tag{4}\]
**Intersection over Union (IOU):** Mean IOU, as calculated in Equation 5, is the standard for assessing the segmentation and translation models [42].
\[IOU=TP/(TP+FP+FN) \tag{5}\]
Each predicted segment is matched to a corresponding segment in the ground truth. Then, the average IOU for each class is calculated and the mean of class IOUs is returned.
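A small reference implementation of the frame-wise accuracy and the segment-level edit score of Equation 4 might look as follows; it is our own minimal version and assumes label sequences sampled at a fixed rate.

```python
# Frame-wise accuracy and segment-level edit score (Equation 4).
def segments(labels):
    """Collapse a frame-wise label sequence into its segment-level sequence."""
    return [l for i, l in enumerate(labels) if i == 0 or l != labels[i - 1]]

def levenshtein(a, b):
    d = [[max(i, j) if i == 0 or j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def edit_score(ground_truth, predicted):
    g, p = segments(ground_truth), segments(predicted)
    return (1 - levenshtein(g, p) / max(len(g), len(p))) * 100

def accuracy(ground_truth, predicted):
    return sum(g == p for g, p in zip(ground_truth, predicted)) / len(ground_truth)
```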
### _Results_
#### Iv-C1 Tool and Object Segmentation
Table I shows the performance of our segmentation models in comparison to the related work. Although the MICCAI 18 challenge [27] dataset is from real porcine procedures, and differs from the JIGSAWS dataset collected from dry-lab experiments, it has similar objects including the clasper (similar to the graspers in JIGSAWS), needle and thread. The Deeplab V3+ model achieved the best performance on the thread class. The top models from MICCAI 18 do not perform as well as our binary models on the needle and thread classes in the Suturing task. However, the Mobile-U-Net [37] achieved the highest performance for grasper and needle segmentation in the JIGSAWS Suturing task. [38] reported tool segmentation IOUs for all the JIGSAWS tasks with up to 0.8 for KT using a Trained LinkNet34, but did not do object segmentation. Among the JIGSAWS tasks, we achieved the best performance in Suturing for the right grasper, needle and thread, while the model performance on the Needle Passing task was the worst. This is likely due to Needle Passing's background having less contrast with the foreground compared to the other two tasks, as shown in Figure 2. We can also see that the needle and thread masks are thinner compared to the grasper masks. So, the mask boundary errors could contribute to a lower score for the needle and thread classes. The estimated time for segmenting the whole JIGSAWS dataset is 8.6 hours.
#### Iv-C2 Automated Context Inference
Table II shows the performance of the context inference method in terms of IOU achieved for each state variable with the predicted segmentation masks and the ground truth masks from crowd-sourcing.
The left column of Table II shows that left and right contact have higher IOUs compared to left and right hold, and the needle or knot state has the lowest IOU. This is because errors in estimating the position of the grasper jaw ends affect accurate inference of the hold state, while contact is relatively simple to detect by checking whether the two masks intersect. Better performance in detecting contact compared to hold states is also observed in the right column of Table II, where ground truth segmentation masks are used. Hence, the lower performance of the left hold and right hold could primarily be due to the difficulty in detecting these states.
| Model | Data | Graspers (L / R) | Needle | Thread | Ring |
| --- | --- | --- | --- | --- | --- |
| Deeplab V3+ [27] | M [27] | 0.78 | 0.014 | **0.48** | N/A |
| U-net [27] | M [27] | 0.72 | 0.02 | 0.33 | N/A |
| Mobile-U-Net [37] | S | **0.82** | **0.56** | N/A | N/A |
| Trained UNet [38] | S | 0.69 | N/A | N/A | N/A |
| Trained UNet [38] | NP | 0.66 | N/A | N/A | N/A |
| Trained LinkNet34 [38] | KT | 0.80 | N/A | N/A | N/A |
| Deeplab V3 (ours) | S | 0.71 / **0.64** | **0.19** | **0.52** | N/A |
| Deeplab V3 (ours) | NP | 0.61 / 0.49 | 0.09 | 0.25 | **0.37** |
| Deeplab V3 (ours) | KT | **0.74** / 0.61 | N/A | 0.44 | N/A |

TABLE I: Tool and object segmentation performance on the test set (mean IOU for each object class) on the MICCAI18 (M) and JIGSAWS Suturing (S), Needle Passing (NP), and Knot Tying (KT) tasks.
For the needle/knot state, we need to detect if the needle is in the fabric/tissue for the Suturing task, in/out of the ring for the Needle Passing task, and if the knot is loose or tight in the Knot Tying task. Detecting the state of the needle and knot is difficult even with the ground truth segmentation masks in the right column of Table II. This is because the needle and thread have the lowest segmentation performance compared to graspers as shown in Table I. The total time to perform automatic context inference is estimated to be about 30 seconds for the whole JIGSAWS dataset.
#### Iv-B3 Context to Gesture Translation
The right column of Table III shows the performance of the FSM and LSTM methods in translating ground truth context labels to gestures. The FSM model achieves higher accuracies and edit scores than the LSTM. The left column of Table III shows the performance of the overall pipeline with automated context labels. We see that using automated context from predicted masks degrades the performance of both models because the segmentation models perform poorly at generating masks for the needle and for all tools and objects in Needle Passing. This effect is propagated through the pipeline, resulting in low accuracies and IOUs. The FSM generally outperforms the LSTM likely due to its knowledge-based structure and setting limits on gesture durations that prevent the model from becoming stuck in any one gesture even with degraded context labels. The FSM pipeline achieves accuracies lower than unsupervised models from [10] and [11] for Suturing, but outperforms them in terms of edit score. These observations suggest that there are benefits to incorporating knowledge into context to gesture translation that can make the model more robust to degraded context labels. However, the FSM is manually developed based on domain knowledge and relies on defined inputs and transitions while the LSTM requires labeled data for training. The time to generate the entire JIGSAWS gesture translation from context is less than 3 minutes for both models.
## V Discussion and Conclusions
Our proposed pipeline for automated inference of surgical context and translation to gesture labels can perform automatic and explainable gesture inference given video segmentation masks. It can be used as an efficient and fast inference method by significantly shortening manual gesture labeling time (\(\sim\)9 hours vs. \(\sim\)26 hours for the case study of the JIGSAWS dataset). We rely on models pre-trained on general images and publicly-available datasets which lowers the cost of manually labeling video data and makes our model generalizable to other datasets and tasks.
For the case study of JIGSAWS, our binary segmentation models achieve comparable performance to state-of-the-art models on the grasper and thread classes, and better performance on the needle class. However, they do not perform well enough for the needle and thread classes which are important for accurate context inference. Our context inference method also does not perform equally well for all the states. Given the ground truth segmentation masks, it achieves \(\sim\)85% IOU for states such as left/right contact, but only \(\sim\)45% IOU for the needle/knot state. The FSM and LSTM models for context to gesture translation have better performance given ground truth context labels compared to predicted context which may be due to imperfect models at each stage of the pipeline and error propagation.
Manual annotations for the grasper end points and tissue points were used for context inference. Also, our method relies on 2D images to infer context from a 3D environment which can particularly complicate detecting the contact states. Future work will focus on addressing these limitations and improving the performance and robustness of the overall pipeline to apply it to runtime error detection [16, 17].
## Acknowledgment
This work was supported in part by the National Science Foundation grants DGE-1829004 and CNS-2146295.
|
2309.00130 | Missing digits points near manifolds | We consider a problem concerning the distribution of points with missing
digits coordinates that are close to non-degenerate analytic submanifolds. We
show that large enough (to be specified in the paper) sets of points with
missing digits coordinates distribute 'equally' around non-degenerate
submanifolds. As a consequence, we show that intersecting those missing digits
sets with non-degenerate submanifolds always achieve the optimal dimension
reduction. On the other hand, we also prove that there is no lack of points
with missing digits that are contained in non-degenerate submanifolds. Among
the other results,
1. we prove that the pinned distance sets of those missing digits sets
contain non-trivial intervals regardless of where the pin is.
2. we prove that for each $\epsilon>0,$ for missing digits sets $K$ with
large bases, simple digit sets (to be specified in the paper), and $\dim_{H}
K>3/4+\epsilon,$ the arithmetic product sets $K\cdot K$ contain non-trivial
intervals. | Han Yu | 2023-08-31T20:42:51Z | http://arxiv.org/abs/2309.00130v1 | # Missing digits points near manifolds
###### Abstract.
We consider a problem concerning the distribution of points with missing digits coordinates that are close to non-degenerate analytic submanifolds. We show that large enough (to be specified in the paper) sets of points with missing digits coordinates distribute 'equally' around non-degenerate submanifolds. As a consequence, we show that intersecting those missing digits sets with non-degenerate submanifolds always achieve the optimal dimension reduction. On the other hand, we also prove that there is no lack of points with missing digits that are contained in non-degenerate submanifolds. Among the other results,
1. we prove that the pinned distance sets of those missing digits sets contain non-trivial intervals regardless of where the pin is.
2. we prove that for each \(\epsilon>0,\) for missing digits sets \(K\) with large bases, simple digit sets (to be specified in the paper), and \(\dim_{\rm H}K>3/4+\epsilon,\) the arithmetic product sets \(K\cdot K\) contain non-trivial intervals.
2010 Mathematics Subject Classification: 11Z05, 11J83, 28A80
## 1. Introduction
We discuss a problem concerning the distribution of missing digits points around as well as on manifolds. Before we state the most general results, we list three special cases with some mixtures of number theory and geometric measure theory.
In what follows, let \(K_{1}\) be the set of points on \([0,1]\) whose base \(10^{9000}\) expansions1 contain only digits in \(\{0,\ldots,10^{8100}-1\}.\) Let \(K_{2}\) be the set of points on \([0,1]\) whose base \(10^{9000}\) expansions contain only digits in \(\{0,\ldots,10^{7000}-1\}.\) We see that \(\dim_{\rm H}K_{1}=9/10\) and \(\dim_{\rm H}K_{2}=7/9.\)
**Theorem A**.: _Let \((t,t^{2},t^{3})_{t\in\mathbb{R}}\) be the Veronese curve in \(\mathbb{R}^{3}.\) There is an integer \(l\geq 0\) such that there are infinitely many \(t>0\) such that the fractional parts of_
\[10^{9000l}t,10^{9000l}t^{2},10^{9000l}t^{3}\]
_are contained in \(K_{1}.\) Loosely speaking, there are many points on the Veronese curve whose coordinates have special expansions in base \(10^{9000}.\) Moreover, the upper box dimension of the set consisting such numbers \(t\in[0,1]\) is in \([1/30,7/10].\)_
**Remark**.: _We expect that the (upper) box dimension of such \(t\) should be exactly equal to \(7/10.\) This is obtained via_
\[7/10=3*9/10-(3-1).\]
_Here \(3*9/10\) is the dimension of the missing digits set \(K_{1}\times K_{1}\times K_{1}\) in \(\mathbb{R}^{3}\) considered in this theorem, \(1\) is the dimension of the Veronese curve. For more discussions, see Theorem 2.8._
**Theorem B**.: _Let \(K=K_{2}\times K_{2}\subset[0,1]^{2}\) be the twofold Cartesian product of \(K_{2}.\) Then for each \(x\in\mathbb{R}^{2},\) the pinned distance set_
\[\Delta_{x}(K)=\{|x-y|:y\in K\}\]
_contains non-trivial interior and in particular, positive Lebesgue measure. Moreover, for each circle, \(C\subset\mathbb{R}^{2},\) we have_
\[\overline{\dim}_{\mathrm{B}}C\cap K\leq\dim_{\mathrm{H}}K-(2-\dim C)=\frac{5} {9}.\]
_As a consequence, we can take \(x=(0,0)\) and see that the set_
\[K_{2}^{2}+K_{2}^{2}=\{x^{2}+y^{2}:x,y\in K_{2}\}\]
_contains intervals._
**Remark**.: _Previously, it is possible to show that:_
_1. \(\Delta_{x}(K)\) has positive Lebesgue measure for many choices of \(x\in\mathbb{R}^{2}.\) Notice that the set \(K\) has Hausdorff dimension larger than \(1.5\). This statement follows from a classical result of Falconer. See [31, Section 7]._
_2. \(\Delta_{x}(K)\) has full Hausdorff dimension for all \(x\in\mathbb{R}^{2}.\) This follows by adapting the arguments in [19]._
_This theorem pushes the result further, i.e. \(\Delta_{x}(K)\) has a positive Lebesgue measure for all \(x\in\mathbb{R}^{2}.\) This result holds in \(\mathbb{R}^{n},n\geq 3\) as well. Also, instead of a
missing digits set in \(\mathbb{R}^{2}\), \(K\) can also be chosen to be the Cartesian product of two missing digits sets with possibly different bases on \(\mathbb{R}.\) See Section 8._
**Theorem C**.: _Consider the set \(K_{2}.\) The arithmetic product set_
\[K_{2}\cdot K_{2}=\{xy:(x,y)\in K_{2}\times K_{2}\}\]
_has non-trivial interior._
**Remark**.: _The sum set \(K_{2}+K_{2}\) does not contain non-trivial intervals because it is contained in the set of numbers whose base \(10^{9000}\) expansions contain only digits in \(\{0,\ldots,2\times 10^{7000}-2\}.\) Thus the nonlinearity of the multiplication map makes a significant difference._
The role of missing digits sets in this paper is by no means restrictive. It really just depends on Theorem 2.15. In fact, it is possible to extend Theorem 2.15 to homogeneous self-similar measures with rational scaling ratios, finite rotation groups and rational translation vectors. However, in this paper, we anyway restrict the discussions to missing digits measures because of their intrinsic connections with number theory.
The roles of the specific missing digits sets \(K_{1},K_{2}\) with base \(10^{9000}\) are also not restrictive. The precise conditions are on their Fourier \(l^{1}\)-dimensions, see Section 2.1. We need that \(\dim_{l^{1}}K_{1}>8/9\) and \(\dim_{l^{1}}K_{2}>3/4.\) These conditions are in some sense optimal. See Theorem 2.12 and Remark 2.13.
For missing digits sets with simple digit sets (e.g. with consecutive digits), their Fourier \(l^{1}\)-dimensions are almost equal to their Hausdorff dimensions, see Theorem 2.15. Theorem C can be seen as a partial answer to the following folklore conjecture.
**Conjecture 1.1**.: _Let \(K\) be a missing digits set with \(\dim_{\rm H}K>1/2,\) then \(K\cdot K\) contains intervals._
**Remark 1.2**.: _Let's see why this conjecture is plausible. By adapting the arguments in [19], it can be shown that \(\dim_{\rm H}K\cdot K=1\). Due to the nonlinearity of the multiplication map, it is expected that \(K\cdot K\) has a positive measure. Finally, due to the results in [34], it is expected that \(K\cdot K\) contains intervals._
**Remark 1.3**.: _If \(K\) has Newhouse thickness (see [24, Definition 3.6]) \(\tau(K)\) larger than \(1\), then \(K\cdot K\) contains intervals. In fact, with the help of Newhouse thickness
of Cantor sets ([22],[46],[24],[41]), it is possible to prove some of the topological results in Theorems A, B, C for Cantor sets with thickness at least one. To see where \(K_{1},K_{2}\) sit in the thickness story: \(K_{1}\), defined at the beginning, has Newhouse thickness \(1/(10^{900}-1-10^{-900});\) \(K_{2}\) has Newhouse thickness \(1/(10^{2000}-1-10^{-2000}).\) Thus, in the sense of Newhouse thickness, they are very thin._
**Remark 1.4**.: _By [35], it is possible to prove that for each missing digits set \(K\), there is an integer \(N\) so that the \(N\)-fold multiplication set \(K\cdot K\cdot\dots\cdot K\) contains intervals. In fact, it is even possible to show that the corresponding measure \(m_{*}((\mu_{K})^{N})\) is a smooth function where \(m:(x_{1},\dots,x_{N})\to x_{1}\dots x_{N}\) is the multiplication map and \(\mu_{K}\) is the natural missing digits measure supported on \(K.\)_
To motivate the general results in this paper, we first introduce three open problems (Questions 1.5, 1.10, 1.11). They are all linked together with a generalized version of a difficult conjecture of E. Borel. See [8]. At the end of this paper, we will provide some more specific applications of our general results. See Sections 8, 9.
### Missing digits solutions for algebraic equations
The first problem is related to the consideration of numbers of missing digits satisfying algebraic equations. Arithmetic properties of numbers with restricted digits have been studied for example in [7], [6], [13], [14], [32] as well as [33].
Consider the equation
\[x^{3}+y^{3}=1,\]
and we want to ask whether or not there are irrational solutions \((x,y)\) such that both \(x\) and \(y\) do not have digit \(1\) in their ternary expansions. More generally, we introduce the notion of missing digits sets: Let \(n\geq 1\) be an integer. Let \(p\geq 3\) be an integer. Let \(D\subset\{0,\dots,p-1\}^{n}.\) Consider the set
\[K_{p,D}=\mathrm{cl}\{x\in\mathbb{R}^{n}:[p\{p^{k}x\}]\in D,k\geq 0\},\]
where \(\{x\},[x]\) are the componentwise fractional part and integer part, respectively, of \(x\in\mathbb{R}^{n}\) and \(\mathrm{cl}A\) is the closure of \(A\subset\mathbb{R}^{n}\) under the standard topology. More precisely, \(\{x\}\) is the unique point \(y\in[0,1)^{n}\) with \(y-x\in\mathbb{Z}^{n},\) and \([x]=x-\{x\}.\)
For convenience, we use \(\hat{D}\) for the complement \(\{0,\ldots,p-1\}^{n}\setminus D.\) For example, if \(n=2\), then \(K_{3,\widehat{\{(1,1)\}}}\cap[0,1]^{2}\) is the Sierpinski carpet with base \(3\). Later on, we will call such a set \(K_{p,D}\) a missing digits set.
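To make the definition concrete, the following small sketch tests, in exact arithmetic and up to a finite depth, whether the base-\(p\) digit vectors of a point lie in \(D\); the depth and the sample point are arbitrary choices.

```python
# Membership test for K_{p,D} up to resolution p^{-depth} (illustrative sketch).
from fractions import Fraction

def digits(x, p, depth):
    """First `depth` base-p digits of the fractional part of x."""
    x = Fraction(x) % 1
    out = []
    for _ in range(depth):
        x *= p
        d = int(x)          # the digit [p {p^k x}]
        out.append(d)
        x -= d
    return out

def in_missing_digits_set(point, p, D, depth=20):
    """point: tuple of coordinates; D: set of allowed digit vectors (tuples)."""
    digit_vectors = zip(*(digits(c, p, depth) for c in point))
    return all(tuple(v) in D for v in digit_vectors)

# 1/4 = 0.020202..._3 lies in the middle third Cantor set K_{3,{0,2}}:
print(in_missing_digits_set((Fraction(1, 4),), 3, {(0,), (2,)}))   # True
```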
**Question 1.5**.: _Let \(n\geq 1\) be an integer. Let \(M\) be a non-degenerate analytic manifold. Let \(p>2\) be an integer and \(D\subset\{0,\ldots,p-1\}^{n}\) be a choice of digits. Determine whether or not \(M\cap K_{p,D}\) is infinite._
To guide our intuitions, we formulate the following perhaps ambitious conjecture.
**Conjecture 1.6**.: _Let \(n\geq 1\) be an integer. Let \(M\) be a strongly non-degenerate analytic manifold. Let \(p>2\) be an integer and \(D\subsetneq\{0,\ldots,p-1\}^{n}\) be a choice of at least two digits. For \(\delta>0,\) let \(M^{\delta}\) be the \(\delta\)-neighbourhood of \(M\) in \(\mathbb{R}^{n}.\) Then \((M\cap K_{p,D})^{\delta}\) can be covered by_
\[\ll\left(\frac{1}{\delta}\right)^{\max\{0,\dim_{\mathrm{H}}K_{p,D}-(n-\dim_{ \mathrm{H}}M)\}}\]
_many \(\delta\)-balls._
**Remark 1.7**.: _Strongly non-degeneracy is not a common notion and we will not use it anywhere in this paper. If \(M\) is strongly non-degenerate, it means that \(M\) is non-degenerate and for each affine subspace \(L\subset\mathbb{R}^{n},\)_
\[\dim(M\cap L)\leq\max\{0,\dim M-(n-\dim L)\}.\]
_Intuitively, this condition says that \(M\) is'sufficiently twisted'. This condition is to avoid the cases when \(K_{p,D}\) is contained in an affine subspace \(L\) and \(M\cap L\) has a larger dimension than expected so that \(M\cap K_{p,D}\) can be larger than what is stated in the conjecture. Simple examples of strongly non-degenerate manifolds include unit spheres, Veronese curves, etc._
**Remark 1.8**.: _If the exponent of \((1/\delta)\) is equal to zero, this conjecture says that \(M\cap K_{p,D}\) is a finite set. We can push a bit further. If in addition, \(M\) is also an algebraic variety over \(\mathbb{Q}\) and \(\dim_{\mathrm{H}}K_{p,D}-(n-\dim_{\mathrm{H}}M)<0\), then we expect that \(M\cap K_{p,D}\) consists of only rational points. For example, when \(n=1\), this falls in the range of the aforementioned conjecture of E. Borel which says that all algebraic irrational numbers cannot miss digits in any base._
**Remark 1.9**.: _We can formulate a slightly weaker conjecture with the covering number being_
\[\ll(1/\delta)^{\max\{0,\dim_{\rm H}K_{p,D}-(n-\dim_{\rm H}M)\}+\epsilon}\]
_for each \(\epsilon>0\). This weaker conjecture is also open in general._
One of the results we will prove is that Conjecture 1.6 holds when \(K_{p,D}\) is large enough. In that case, we also provide a natural lower counting estimate for missing digits points in \(M\), see Theorem 2.8. It is of interest to find the 'smallest possible' missing digits set for the above conjecture to hold. As long as \(M\) is fixed, we are able to provide a positive number \(\sigma(M)>0\) and examples of \(K_{p,D}\) with \(\dim_{\rm H}K_{p,D}\) being larger than and arbitrarily close to \(n-\sigma(M)\). Thus the missing digits sets need not be too large in the sense of Hausdorff dimension. In Section 5.8, we demonstrate a particularly subtle difficulty in Conjecture 1.6 for small missing digits sets.
### Intersecting manifolds with fractals
We now discuss the number theoretic problem from a slightly different point of view. Let \(n\geq 2\) and \(M\subset\mathbb{R}^{n}\) be a manifold. Let \(F\subset\mathbb{R}^{n}\) be a fractal set, e.g. a Sierpinski sponge. We are interested in considering the intersection \(M\cap F.\) In view of the classical Marstrand slicing theorem (see [31, Theorem 6.9 and Section 7]), we see that for a 'generic' translation vector \(a\in\mathbb{R}^{n},\)
(Dimension Reduction)
\[\dim_{\rm H}((M+a)\cap F)\leq\max\{0,\dim_{\rm H}F-(n-\dim_{\rm H}M)\}.\]
Of course, it is possible to quantify the word 'generic' in a precise way. Loosely speaking, there is a small (in terms of the Lebesgue measure or Hausdorff dimension) exceptional set \(E\) such that the above holds for all \(a\notin E.\) In this direction, a great amount of efforts have been made to discover the occasions in which one can reduce the exceptional set from a small set to the empty set. For \(M\) being affine subspaces, see [17], [20], [38], [43].
Intuitively speaking, the only chance for the above (Dimension Reduction) not to hold would be that \(M\) and \(F\) share similar structures in small scales. For example, if \(M\) is a line parallel to one of the coordinate axis, then it is possible that
\[\dim_{\rm H}(M\cap F)>\max\{0,\dim_{\rm H}F-(n-\dim_{\rm H}M)\}.\]
This phenomenon happens in \(\mathbb{R}^{2}\) already: just consider \(F\) being the twofold Cartesian product of the middle third Cantor set and \(M\) being the \(X\) coordinate line. In [38], Shmerkin showed that for \(M\) being lines, those are essentially the only cases for (Dimension Reduction) not to hold: (Dimension Reduction) holds for all lines with irrational slopes.
Now, suppose that \(M\) has some curved structures. Then intuitively, we can think that \(M\) cannot share any structures with \(F\) in small scales and (Dimension Reduction) should hold without exceptions. Towards this direction, we pose the following question.
**Question 1.10**.: _Let \(M\subset\mathbb{R}^{n}\) be a 'curved' submanifold. Let \(F\) be the Sierpinski sponge (with base 3, say). Then we have_
\[\dim_{\mathrm{H}}M\cap F\leq\max\{0,\dim_{\mathrm{H}}F-n+\dim_{\mathrm{H}}M\}.\]
Although neither the arguments in [43] nor those in [38] work directly to gain information about intersections between 'curved' manifolds and fractals, it is perhaps possible to adapt the arguments to work out the case when the manifold is a hypersurface with non-vanishing curvatures.2 Here we do not fix the notion of a submanifold (possibly with codimension larger than one) being 'curved'. We leave the interpretation open. One of the results in this paper is to answer this question with a specific meaning given to the word 'curved'.
Footnote 2: We thank P. Shmerkin for explaining the details.
### Counting missing digits points near manifolds
It turns out that the intersection problem above is closely related to a special lattice counting problem that we now introduce.
Recall the notion of missing digits sets \(K_{p,D}.\) It is possible to introduce a natural Lebesgue measure \(\lambda_{p,D}\) on \(K_{p,D}\) which will we discuss later. For now, we just think \(\lambda=\lambda_{p,D}\) to be a probability measure supported on \(K=K_{p,D}.\)
Let \(M\subset\mathbb{R}^{n}\) be a submanifold. Let \(\delta>0.\) Consider the \(\delta\)-neighbourhood \(M^{\delta}\) of \(M.\) We want to study the quantity
\[\lambda(M^{\delta})\]
for \(\delta\to 0\). Heuristically, if \(\lambda\) and \(M\) are somehow 'independent', we expect that
(Independence) \[\lambda(M^{\delta})\asymp\delta^{n-\dim M}.\]
Here \(\dim M\) is the standard notion of the dimension of the submanifold \(M\). In our situation, \(\dim M=\dim_{\rm H}M.\) The value \(n-\dim M\) is usually called the codimension of \(M.\) The value \(\delta^{n-\dim M}\) is roughly the Lebesgue measure of \(M^{\delta}.\) Now, assuming (Independence), we see that \(M^{\delta}\cap K\) can be covered with
\[\ll\delta^{n-\dim M}/\delta^{\dim_{\rm H}K}\]
many balls with radius \(\delta.\) For this, we need to assume that \(\lambda\) is AD-regular with exponent \(\dim_{\rm H}K.\) See Section 4.2. From here, we directly deduce that
\[\dim_{\rm H}(M\cap K)\leq\dim_{\rm H}K-(n-\dim M)\]
if \(\dim_{\rm H}K>n-\dim M.\) Otherwise, we simply have
(Finiteness) \[\#M\cap K<\infty.\]
The conclusion (Finiteness) is rather strong, it says that if \(K\) is a small enough missing digits set then \(K\cap M\) has only finitely many points. Of course, we would have to assume the asymptotic bound \(\lambda(M^{\delta})\asymp\delta^{n-\dim M}\) which is not easy to be tested. A particular special case (of Question 1.5) in this direction can be formulated as follows.
**Question**.: _Consider the circle \(x^{2}+y^{2}=1\) in \(\mathbb{R}^{2}.\) Can we find infinitely many points \((x,y)\) on the circle with \(x,y\in K_{5,\{0,4\}}\)?_
This is an open problem. Methods in [38] or [43] can be probably used to deduce that the points under consideration form a set with zero Hausdorff dimension but this is not enough to deduce the finiteness. More generally, we shall consider the following problem.
**Question 1.11**.: _Find examples of \(M\) and \(K_{p,D}\) (and \(\lambda_{p,D}\)) for which the asymptotic estimate holds_
\[\lambda_{p,D}(M^{\delta})\asymp\delta^{n-\dim M}.\]
## 2. results in this paper
To state the results, we first introduce some terminologies. The reader can skip to Section 2.4 and return here later when it is necessary.
### Fourier norm dimensions
We need some notions of Fourier norm dimensions. They are useful in e.g. [45] where a problem of counting rational points near missing digits sets was considered. In an ongoing project, [2], an algorithm for computing the Fourier \(l^{1}\) dimensions of missing digits sets is developed together with many applications to metric Diophantine approximation. In this paper, we do not need to consider the precise values of the Fourier \(l^{1}\)-dimensions. We only provide some bounds which are rather crude but enough for the discussions in this paper. See Theorem 2.15.
Let \(n\geq 1\) be an integer. Let \(\mu\) be a compactly supported Borel probability measure on \(\mathbb{R}^{n}.\) Consider the Fourier transform
\[\hat{\mu}(\xi)=\int_{\mathbb{R}^{n}}e^{-2\pi i(x,\xi)}d\mu(x),\]
where \((.,.)\) is the standard Euclidean bilinear form.
**Definition 2.1**.: _Let \(p>0.\) We define_
\[\dim_{l^{p}}\mu=\sup\left\{s>0:\sup_{\theta\in[0,1]^{n}}\sum_{|\xi|\leq R,\xi \in\mathbb{Z}^{n}}|\hat{\mu}(\xi+\theta)|^{p}\ll R^{n-s}\right\}.\]
With the help of the Cauchy-Schwarz inequality, it is possible to show that
\[\frac{\dim_{l^{2}}\mu}{2}\leq\dim_{l^{1}}\mu.\]
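One way to see this: for \(0<s<\dim_{l^{2}}\mu\), the Cauchy-Schwarz inequality together with the counting bound \(\#\{\xi\in\mathbb{Z}^{n}:|\xi|\leq R\}\ll R^{n}\) gives, uniformly in \(\theta\in[0,1]^{n},\)

\[\sum_{|\xi|\leq R,\xi\in\mathbb{Z}^{n}}|\hat{\mu}(\xi+\theta)|\leq\left(\#\{\xi\in\mathbb{Z}^{n}:|\xi|\leq R\}\right)^{1/2}\left(\sum_{|\xi|\leq R,\xi\in\mathbb{Z}^{n}}|\hat{\mu}(\xi+\theta)|^{2}\right)^{1/2}\ll R^{n/2}R^{(n-s)/2}=R^{n-s/2},\]

so that \(\dim_{l^{1}}\mu\geq s/2\); letting \(s\to\dim_{l^{2}}\mu\) gives the displayed inequality.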
Moreover, we have for each AD-regular (see Section 4.2) measure \(\mu\)
\[\dim_{l^{2}}\mu=\dim_{\rm H}\mu=\dim_{\rm H}\text{supp}(\mu).\]
Furthermore, let \(n\geq 1\) be an integer. Let \(\mu_{1},\dots,\mu_{n}\) be Borel probability measures on \(\mathbb{R}.\) The \(n\)-fold Cartesian product \(\mu^{\prime}=\mu_{1}\times\dots\times\mu_{n}\) satisfies
\[\dim_{l^{1}}\mu^{\prime}\geq\dim_{l^{1}}\mu_{1}+\dots+\dim_{l^{1}}\mu_{n}.\]
In fact, the equality holds when \(\mu_{1},\dots,\mu_{n}\) are missing digits measures but we do not need this fact.
We have seen above that \(\dim_{l^{2}}\mu\) is closely related to \(\dim_{\rm H}\mu.\) The reason that we also study \(\dim_{l^{1}}\mu\) is that it gauges, in some sense, how 'close' \(\mu\) is to being a continuous function. Observe that if the exponent in the definition of \(\dim_{l^{1}}\mu\) can be chosen to be negative, then \(\mu\) has an absolutely integrable Fourier transform.
This says that \(\mu\) can be chosen to be the distribution associated with a continuous density function. In this case, \(\text{supp}\mu\) can be seen as a topological manifold.
For computations, it is often not very convenient to have two sup's. For this reason, we also introduce the following two additional definitions.
**Definition 2.2** (Integral).: _Let \(p>0.\) We define_
\[\dim_{l^{p}}^{I}\mu=\sup\left\{s>0:\int_{|\xi|\leq R}|\hat{\mu}(\xi)|^{p}d\xi\ll R^{n-s}\right\}.\]
**Definition 2.3** (Sum).: _Let \(p>0.\) We define_
\[\dim_{l^{p}}^{S}\mu=\sup\left\{s>0:\sum_{|\xi|\leq R,\xi\in\mathbb{Z}^{n}}|\hat{\mu}(\xi)|^{p}\ll R^{n-s}\right\}.\]
In most of the situations, \(\dim_{l^{p}}^{I}\) and \(\dim_{l^{p}}^{S}\) can provide useful information individually. Notice that in general,
\[\dim_{l^{p}}\mu\leq\min\{\dim_{l^{p}}^{I}\mu,\dim_{l^{p}}^{S}\mu\}.\]
This is because
\[\int_{|\xi|\leq R}|\hat{\mu}(\xi)|^{p}d\xi\leq\sum_{\xi\in\mathbb{Z}^{n},|\xi |\leq 1.5R}\int_{\theta\in[0,1]^{n}}|\hat{\mu}(\xi+\theta)|^{p}d\theta\]
\[=\int_{\theta\in[0,1]^{n}}\sum_{\xi\in\mathbb{Z}^{n},|\xi|\leq 1.5R}|\hat{\mu}(\xi+\theta)|^{p}d\theta\]
\[\leq\sup_{\theta\in[0,1]^{n}}\sum_{|\xi|\leq 2R,\xi\in\mathbb{Z}^{n}}|\hat{\mu}( \xi+\theta)|^{p}\]
and
\[\sum_{|\xi|\leq R,\xi\in\mathbb{Z}^{n}}|\hat{\mu}(\xi)|^{p}\leq\sup_{\theta\in [0,1]^{n}}\sum_{|\xi|\leq R,\xi\in\mathbb{Z}^{n}}|\hat{\mu}(\xi+\theta)|^{p}.\]
We also suspect that for missing digits measures, the three notions
\[\dim_{l^{p}},\dim_{l^{p}}^{I},\dim_{l^{p}}^{S}\]
are identical.
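As a purely illustrative aside (not used anywhere in the arguments), the quantities in Definitions 2.1-2.3 can be estimated empirically. The following minimal Python sketch does this for the middle third Cantor-Lebesgue measure \(\lambda_{3,\{0,2\}}\), using the standard product formula for its Fourier transform recalled in Section 4.4; the function names, the truncation depth of the product and the log-log regression over a few scales are ad hoc choices.

```python
import numpy as np

def cantor_fourier_abs(xi, terms=60):
    """|hat(lambda)(xi)| for the middle third Cantor-Lebesgue measure (base 3, digits {0, 2})."""
    j = np.arange(1, terms + 1)
    return np.abs(np.prod(np.cos(2 * np.pi * xi / 3.0 ** j)))

def partial_sum(R, p):
    """Sum over integers 0 < |xi| <= R of |hat(lambda)(xi)|^p (the sum in Definition 2.3)."""
    vals = np.array([cantor_fourier_abs(x) for x in range(1, R + 1)])
    return 2.0 * np.sum(vals ** p)          # the summand is even in xi

# Fit the exponent n - s in S_p(R) ~ R^{n - s} (here n = 1) by log-log regression.
Rs = np.array([3 ** k for k in range(4, 10)])
for p in (1, 2):
    S = np.array([partial_sum(R, p) for R in Rs])
    slope = np.polyfit(np.log(Rs), np.log(S), 1)[0]
    print(f"p = {p}: fitted exponent n - s ~ {slope:.3f}, corresponding dimension ~ {1 - slope:.3f}")
# For p = 2 the fitted value should be close to dim_H = log 2 / log 3 ~ 0.631, in line with Section 2.1.
```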
### Manifolds of finite type
Let \(n\geq 2\) be an integer. A smooth submanifold \(M\subset\mathbb{R}^{n}\) is of finite type if for each \(x\in M,\) the manifold \(M\) only has finite order contacts with affine hyperplanes at \(x\). For more details, see [40, Chapter VIII, Section 3]. We do not repeat the definition here and only give a few examples.
(1) Let \(M\) be an analytic submanifold. If \(M\) is not contained in any affine hyperplane, then \(M\) is of finite type.
(2) As a particular example, consider the Veronese curve \((t,t^{2},t^{3},\ldots,t^{n}),t\in\mathbb{R}.\) This curve is analytic and it is not contained in any affine hyperplane. Therefore it is of finite type.
(3) If \(M\) is a smooth hypersurface with non-vanishing Gaussian curvature, then \(M\) is of finite type.
### Missing digits sets and measures
We recall the notion of missing digits sets. Let \(n\geq 1\) be an integer. Let \(p\geq 3\) be an integer. Let \(D\subset\{0,\ldots,p-1\}^{n}.\) Consider the set
\[K_{p,D}=\operatorname{cl}\{x\in\mathbb{R}^{n}:[p\{p^{k}x\}]\in D,k\geq 0\},\]
where \(\{x\}\) and \([x]\) denote the component-wise fractional part and integer part of \(x\in\mathbb{R}^{n}\), respectively. Let \(p_{1},\ldots,p_{\#D}\) be a probability vector, i.e. they are non-negative and sum to one. We can then assign each element in \(D\) a probability weight. To be specific, one can first introduce an ordering on \(D\) and assign the probabilities accordingly. We can now construct the random sum
\[S=\sum_{i\geq 1}p^{-i}\mathbf{d}_{i}\]
where \(\mathbf{d}_{i}\in D,i\geq 1\) are randomly and independently chosen from the set \(D\) with the assigned probabilities.
If \(p_{1}=\cdots=p_{\#D}=1/\#D,\) the distribution of \(S\) is a Borel probability measure supported on \([0,1]^{n};\) we call this measure \(\lambda_{p,D}.\) It is a Borel probability measure supported on \(K_{p,D}\cap[0,1]^{n}.\) Moreover, it is AD-regular with exponent \(\dim_{\mathrm{H}}K_{p,D}.\) We also write
\[\dim_{l^{1}}K_{p,D}=\dim_{l^{1}}\lambda_{p,D}.\]
We provide some examples of missing digits sets. Recall that \(\hat{D}=\{0,\ldots,p-1\}^{n}\setminus D.\)
(1) If \(n=1\), then \(K_{3,\{0,2\}}\cap[0,1]\) is the middle third Cantor set and \(\lambda_{3,\{0,2\}}\) is the natural middle third Cantor-Lebesgue measure.
(2) If \(n=2\), then \(K_{3,\widehat{\{(1,1)\}}}\cap[0,1]^{2}\) is the Sierpinski Carpet with base \(3.\) In general, if \(n\geq 3\), \(K_{3,\widehat{\{(1,\ldots,1)\}}}\cap[0,1]^{n}\) are Sierpinski sponges.
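As a purely illustrative aside, the constructions above are easy to experiment with on a computer. The following minimal Python sketch (all function names and the truncation depth of the random sum are ad hoc choices of ours) samples the random sum \(S\) with uniform weights, checks the digit condition defining \(K_{p,D}\), and evaluates the dimension formula \(\log\#D/\log p\) recalled in Section 4.2 for the two examples above.

```python
import itertools
import math
import random
from fractions import Fraction

def base_p_digits(x, p, k):
    """First k base-p digit vectors of x in [0,1)^n, i.e. d_i = [p {p^{i-1} x}] componentwise."""
    ds = []
    for _ in range(k):
        x = [p * c for c in x]
        d = tuple(int(c) for c in x)
        ds.append(d)
        x = [c - dc for c, dc in zip(x, d)]
    return ds

def sample_lambda(p, D, depth=25):
    """One (exact, rational) sample of the random sum S defining lambda_{p,D}, truncated after `depth` digits."""
    D = list(D)
    n = len(D[0])
    pt = [Fraction(0)] * n
    for i in range(1, depth + 1):
        d = random.choice(D)
        for j in range(n):
            pt[j] += Fraction(d[j], p ** i)
    return pt

cantor_D = {(0,), (2,)}                                                  # middle third Cantor set
carpet_D = set(itertools.product(range(3), repeat=2)) - {(1, 1)}         # Sierpinski carpet, base 3
for name, p, D in [("Cantor set", 3, cantor_D), ("Sierpinski carpet", 3, carpet_D)]:
    print(name, ": dim_H K_{p,D} = log #D / log p =", math.log(len(D)) / math.log(p))
    x = sample_lambda(p, D)
    print("   first 10 digit vectors of a sample lie in D:",
          all(d in D for d in base_p_digits(x, p, 10)))
```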
**Remark 2.4**.: _It is often interesting to consider Cartesian products of missing digits sets (measures) which are not necessarily missing digits sets (measures) themselves. In fact, they are not self-similar in general. Although we only consider missing digits sets (measures) in this paper, many results can be extended to deal with Cartesian products of missing digits sets (measures). We will discuss this in Section 8. Before that, one can ignore this technical consideration._
### Results
Towards Conjecture 1.6, we prove the following theorem. Later on, we will provide examples that fall into the range of this theorem. The conditions are related to the \(l^{1}\)-dimensions of missing digits sets. This is not a common notion of dimension. Loosely speaking, missing digits sets with large bases have almost equal Hausdorff and \(l^{1}\)-dimensions. So it is helpful to think of \(\dim_{l^{1}}\) below just as \(\dim_{\rm H}.\)3
Footnote 3: In Theorem 2.15(2) we will see a precise version of this ’loose’ statement.
**Theorem 2.5**.: _Let \(n\geq 2\) be an integer. Let \(M\subset\mathbb{R}^{n}\) be a manifold of finite type. Then there is a number \(\sigma=\sigma(M)>0\) such that for each compactly supported Borel probability measure \(\lambda\) with \(\dim_{l^{1}}\lambda>n-\sigma,\)_
\[\lambda(M^{\delta})\ll\delta^{n-\dim M}.\]
This proves Conjecture 1.6 for large missing digits sets. The number \(\sigma(M)\) can be explicitly determined once we know \(M.\) It is related to the Fourier decay properties of smooth surface measures carried by \(M.\) This number is always \(\leq\dim M/2.\) In the case when \(M\) is a hypersurface with non-vanishing Gaussian curvature, it can be chosen to be \(\dim M/2=(n-1)/2.\) The condition we have in this theorem is in some sense sharp. We postpone the discussion to Theorem 2.12.
We will provide some crude bounds for \(\dim_{l^{1}}\lambda_{p,D},\) see Theorem 2.15. In particular, the following result can be deduced.
**Corollary 2.6** (Missing digits points near manifolds).: _Let \(n\geq 2\) be an integer. Let \(M\subset\mathbb{R}^{n}\) be a manifold of finite type. Let \(k\geq 1\) be an integer. There is a number \(p_{0}(M,k)\) so that for each integer \(p>p_{0}(M,k)\), Theorem 2.5 holds for \(\lambda_{p,D}\) where the digit set \(D\) satisfies \(\#D\geq p^{n}-k.\)_
The number \(p_{0}(M,k)\) can be explicitly determined once we know \(\sigma(M)\) and \(k\). Theorem 2.5 leads to an intersection result.
**Corollary 2.7** (Intersections).: _Let \(n\geq 2\) be an integer. Let \(M\subset\mathbb{R}^{n}\) be a manifold of finite type. Then there is a number \(\sigma=\sigma(M)>0\) such that for each compactly supported Borel probability measure \(\mu\) which is AD-regular and \(\dim_{l^{1}}\mu>n-\sigma,\)_
\[\overline{\dim}_{\mathrm{B}}(M\cap\operatorname{supp}(\mu))\leq\max\{0,\dim_{\mathrm{H}}\operatorname{supp}(\mu)-(n-\dim M)\}.\]
On the other hand, we are also interested in a lower estimate for \(M\cap K_{p,D}.\) However, this question is not always sensible because \(K_{p,D}\) has holes and it can happen that \(M\) is contained in one of them. This phenomenon certainly does not capture the real relation between \(M\) and \(K.\) In order to unveil the underlying truth, we propose two compromises, modifying \(K\) or \(M\) in 'acceptable' manners that will be clarified soon.
First, we can remedy this problem by relaxing our requirement on missing digits sets. Let \(l\geq 0\) be an integer. We define
\[K_{p,D,l}=\operatorname{cl}\{x\in\mathbb{R}^{n}:[p\{p^{k}x\}]\in D,k\geq l\}.\]
So \(K_{p,D,l}\) fills some of the holes of \(K_{p,D}.\) The purpose of doing this is to get rid of the special cases when the manifold just happens to be contained in the holes of the missing digits set. We do not relax the situation too much: \(l\) is required to be a fixed integer, so the digital representations of \(x\in K_{p,D,l}\cap[0,1]^{n}\) are unrestricted only in the first finitely many digits. This is the 'acceptable' modification for \(K\) we mentioned earlier.
**Theorem 2.8**.: _[Compromise 1] Let \(M\) be a manifold of finite type. Let \(K=K_{p,D}\) be a missing digits set. Suppose that \(\dim_{l^{1}}K_{p,D}+\sigma(M)-n>0\). Then there is an \(l\geq 0\) such that the set_
\[M\cap K_{p,D,l}\]
_is infinite. In fact, there is an \(l\geq 0\) such that for all small enough \(\delta>0,\) to cover \(M^{\delta}\cap K^{\delta}_{p,D,l},\) at least \(\gg(1/\delta)^{\dim_{\rm H}K+\dim M-n}\) many \(\delta\)-balls are needed. Moreover, we have_
\[\underline{\dim}_{\rm B}M\cap K_{p,D,l}\geq\dim_{\rm H}K-(n-\sigma(M)).\]
_Finally, \(\bigcup_{l\geq 0}K_{p,D,l}\cap M\) is dense in \(M.\)_
**Remark 2.9**.: _It is tempting to believe that_
\[\dim_{\rm B}K_{p,D,l}\cap M=\dim_{\rm H}K-(n-\dim M).\]
_So far, we can conclude that_
\[\dim_{\rm H}K-n+\sigma(M)\leq\underline{\dim}_{\rm B}M\cap K_{p,D,l}\leq \overline{\dim}_{\rm B}M\cap K_{p,D,l}\leq\dim_{\rm H}K-(n-\dim M).\]
_Since \(\sigma(M)\leq\dim M/2,\) the above chain of inequalities will never be close to being optimal. Some other ideas need to be used in order to fill in this gap._
_The last statement seems trivial. The set \(\cup_{l\geq 0}K_{p,D,l}\) is dense in \(\mathbb{R}^{n}.\) However, without the previous statement, it is even unclear whether or not \(\cup_{l\geq 0}M\cap K_{p,D,l}\) is empty._
Another possible way to compromise the fact that \(K=K_{p,D}\) has holes is to consider a family of transformed manifolds of \(M.\) In this paper, we deal with the group \(E_{n}=(0,\infty)\times\mathbb{R}^{n}\times\mathbb{O}(n).\) For \((t,v,g)\in E_{n},\) it acts on \(\mathbb{R}^{n}\) by first applying the rotation \(g,\) then applying the scaling \(t,\) then applying the translation \(v\):
\[x\in\mathbb{R}^{n}\to T_{t,v,g}(x)=tg(x)+v.\]
Observe that \(E_{n}\) also acts on the space of non-degenerate manifolds and it will not distort Fourier transforms too much. Thus we expect that
(Dimension Conservation) \[\dim_{\rm B}T_{t,v,g}(M)\cap K=\dim_{\rm H}K-(n-\dim M)\]
and in particular \(T_{t,v,g}(M)\cap K\neq\emptyset\) for many choices of \((t,v,g)\in E_{n}.\) In view of Mattila's intersection theorem ([31, Section 7]), we already know that the above holds for 'many' choices of \((t,v,g)\) in a metric sense. We now upgrade the metric result at the cost of dropping the (Dimension Conservation).
**Theorem 2.10**.: _[Compromise 2] Let \(n\geq 2\) be an integer. Let \(M\) be a manifold of finite type. Suppose further that \(\lambda\) is a missing digits measure supported on \(K.\)_
_The function_
\[f:(t,v,g)\to\limsup_{\delta\to 0}\frac{\lambda((T_{t,v,g}(M))^{\delta})}{\delta^{n- \dim M}}\]
_is well-defined and non-negative valued. If \(M\) is compact, then there are a constant \(c>1\) and a continuous and non-vanishing function \(h:E_{n}\to[0,\infty)\) such that_
\[c^{-1}h\leq f\leq ch.\]
**Remark 2.11**.: _(a). The compactness of \(M\) in the last statement is not strictly necessary. In fact, if \(M\) is not compact, one can replace \(M\) with a compactly supported and smooth surface measure on \(M\)._
_(b). This result is likely to hold for other, possibly nonlinear, classes of transformations of manifolds beyond scalings, translations and rotations, for example the evolution of \(M\) under a smooth vector field with some regularity conditions. As \(E_{n}\) is already enough for most of our arithmetic applications, we do not pursue this degree of generality.4_
Footnote 4: This paragraph is not mathematically precise. We only want to illustrate an idea rather than a rigorous definition. One possible way of seeing the idea in this paragraph is to consider a light and soft tissue floating in the air.
_(c). We are curious about whether or not the function \(f\) can be defined via \(\mathcal{H}^{s}\) for \(s=\dim_{\rm H}K-(n-\dim M).\) More precisely, let_
\[h:(t,v,g)\to\mathcal{H}^{s}(T_{t,v,g}(M)\cap K).\]
_Is \(h\) continuous? From Theorem 2.5, it is not hard to show that \(h\) takes real values (not \(\infty\)). In this way, one can then understand the Hausdorff dimension of \(T_{t,v,g}(M)\cap K.\)_
_(d). Notice that whenever \(f(t,v,g)>0,\) it follows that_
\[T_{t,v,g}(M)\cap K\neq\emptyset.\]
_However, this is not enough to conclude that \(\dim_{\rm B}\) or \(\dim_{\rm H}T_{t,v,g}(M)\cap K\geq s.\) In this case, we have the weaker result_
\[\underline{\dim}_{\rm B}T_{t,v,g}(M)\cap K\geq\dim_{\rm H}K-(n-\sigma(M))\]
_as in Theorem 2.8._
The above results also hold when \(M\) is replaced with other sets supporting measures with polynomial Fourier decay. We will discuss this part in Section 9.
Finally, we emphasize that the condition \(\dim_{l^{1}}\lambda>n-\sigma(M)\) cannot be dropped although we believe that it might not be the optimal condition to look at. See Section 10.
**Theorem 2.12** (Sharpness).: _Let \(M\subset\mathbb{R}^{2}\) be the curve_
\[M:|x|^{k}+|y-1|^{k}=1\]
_where \(k\geq 2\) is an integer. Let \(p>3\) be an integer and let \(D_{1}=\{0,\ldots,l_{1}\}\), \(D_{2}=\{0,\ldots,l_{2}\}\) for some integers \(0<l_{1},l_{2}<p-1.\) Let \(K=K_{p,D_{1}}\times K_{p,D_{2}}\cap[0,1]^{2}.\) Then for \(\delta>0,\)\(M^{\delta}\cap K\) cannot be covered with_
\[\ll\left(\frac{1}{\delta}\right)^{(k-1)s/k}\]
_many \(\delta\)-balls where \(s=\max\{\dim_{\mathrm{H}}K_{p,D_{1}},\dim_{\mathrm{H}}K_{p,D_{2}}\}.\)_
**Remark 2.13**.: _In particular, if \(k=2,\) then \(M\) is a circle. In this case, the threshold for \(\dim_{l^{1}}\) in Theorem 2.5 is \(n-\sigma(M)=2-1/2=3/2.\) It is possible to choose \(D_{1},D_{2}\) such that \(\dim_{\mathrm{H}}K_{p,D_{1}}\) is close to one and \(\dim_{\mathrm{H}}K_{p,D_{2}}\) is close to \(1/2.\) Then \(\dim_{\mathrm{H}}K_{p,D_{1}}\times K_{p,D_{2}}\) is close to \(3/2.\) Moreover, if \(p\) is large enough, then \(\dim_{l^{1}}K_{p,D_{1}}\times K_{p,D_{2}}\) is also close to \(3/2\). See Theorem 2.15. In this case we see that \(M^{\delta}\cap K\) cannot be covered with_
\[\ll\left(\frac{1}{\delta}\right)^{s/2}\]
_many \(\delta\)-balls. Notice that \(s/2\) can be made arbitrarily close to \(1/2\). On the other hand, if the conclusion of Theorem 2.5 held, then \(M^{\delta}\cap K\) could be covered by_
\[\ll\left(\frac{1}{\delta}\right)^{\dim_{\mathrm{H}}K-1}\]
_many \(\delta\)-balls. Therefore if \(\dim_{\mathrm{H}}K-1<s/2,\) then the conclusion of Theorem 2.5 cannot hold. Since \(s\) can be chosen to be close to one, we are able to find examples of missing digits measures \(\lambda\) with \(\dim_{l^{1}}\lambda\) smaller than but arbitrarily close to \(3/2\) such that the conclusion of Theorem 2.5 does not hold for \(\lambda\) and \(M.\)_
_In general, the above discussion works for \(k\geq 3\) as well. In this case, \(M\) is a 'flatter' curve than the circle. We have \(n-\sigma(M)=2-(1/k).\) This is because \(M\) has \(k\)th order contact with the \(X\)-axis. See Section 5.8 for more details. On the other hand, as above we see that if \(\dim_{\rm H}K-1<(k-1)s/k,\) then Theorem 2.5 cannot hold for the missing digit measure on \(K.\) Thus we are able to find examples of missing digits measures \(\lambda\) with \(\dim_{l^{1}}\lambda\) smaller but close to \(2-(1/k)\) such that the conclusion of Theorem 2.5 does not hold for \(\lambda\) and \(M.\)_
**Remark 2.14**.: _Theorem 2.12 does not disprove Conjecture 1.1. However, it does show that it is perhaps a challenging task to reduce the \(3/4\) threshold. Theorem 2.12 does not disprove Conjecture 1.6 either, because in the statement of Conjecture 1.6, what we are interested in is to cover the set \((M\cap K)^{\delta},\) which is in general smaller than \(M^{\delta}\cap K^{\delta}.\)_
### Fourier norm dimensions of missing digits measures
Missing digits sets and measures provide us with a large class of examples of fractal sets and measures for the results in the previous section to be applied. In this section, we list a few general results regarding the \(l^{1}\)-dimensions of missing digits sets whose proofs are provided to make this paper self-contained5.
Footnote 5: In [2], a much more precise method is developed.
**Theorem 2.15**.: _Let \(n\geq 1\) be an integer. The following results hold._
* _(1) Let_ \(t\geq 1\) _be an integer. We have_ \[\liminf_{p\to\infty,\#D\geq p^{n}-t}\dim_{l^{1}}\lambda_{p,D}=n.\] _In particular, for each number_ \(\epsilon>0,\) _as long as_ \(p\) _is large enough,_ \(\dim_{l^{1}}\lambda_{p,D}>n-\epsilon\) _holds for each_ \(D\) _with_ \(\#D=p^{n}-1.\)_
* _(2) For each integer_ \(p\geq 4,\) _let_ \(D\subset\{0,\ldots,p-1\}^{n}\) _be a 'rectangle', i.e. a set of the form_ \([a_{1},b_{1}]\times[a_{2},b_{2}]\times\cdots\times[a_{n},b_{n}]\cap\{0,\ldots,p-1\}^{n}.\) _Then we have_6
(T) \[\dim_{l^{1}}\lambda_{p,D}\geq\dim_{\rm H}\lambda_{p,D}-\frac{n\log\log p^{2}}{\log p}.\]
_The significance of this result is to show that missing digits sets can have almost equal Hausdorff and_ \(l^{1}\)_-dimensions._
Footnote 6: The base of \(\log\) in this paper is \(e.\)
**Remark 2.16**.: _The digit set structure is rather special for part 2. We emphasize that it can be a little bit more complicated. More precisely, part 2 of this theorem also works for rectangles with gaps larger than \(1.\) For example, instead of being a product set of sets of consecutive integers, it can be a product set of sets of consecutive even numbers. Also, one can consider unions of rectangles as long as the number of rectangles in the union is not too large compared with \(p,\) e.g. \(O(p^{\epsilon})\). We are interested in finding the most general condition for the digit sets so that a lower bound like (T) holds, i.e. as \(p\to\infty,\)_
\[|\dim_{l^{1}}\lambda_{p,D}-\dim_{\rm H}\lambda_{p,D}|=o(1).\]
Towards the question in this remark, we pose the following possibly too optimistic conjecture.
**Conjecture 2.17**.: _Let \(p>2\) be a prime number. Let \(\epsilon\in(0,1).\) Then we have_
\[|\dim_{l^{1}}\lambda_{p,D}-\dim_{\rm H}\lambda_{p,D}|=o(1),\]
_where the \(o(1)\) term is uniform across all digit set \(D\) with_
\[\#D\geq p^{\epsilon}.\]
The condition that \(p\) is a prime number cannot be completely dropped. In fact, if \(p=3^{n}\) for an integer \(n,\) then it is possible to choose \(D\) such that \(K_{p,D}\) is the middle third Cantor set. The statement of this conjecture is obviously not true in this case.
## 3. Other related works
The problem in this paper can be generally described as "understanding the distribution of a special class of points near another set." Consider the following three classes of sets:
1. Manifolds
2. Rational numbers with bounded heights (denominators).
3. Missing digits sets.
We study the distribution of 3 near 1 in this paper. On the other hand, there has been great interest in the distribution of 2 near 1 (see [3], [5],
[4], [27], [21], [36]) as well as 2 near 3 (see [1], [2], [9], [10], [15], [25], [26], [28], [37], [39], [42], [45]).7
Footnote 7: Those are just a small fraction of all the works on those topics.
The Fourier analysis method in this paper can be found in many of the above references. Other than this Fourier analysis method, another major tool for the above counting problems is the theory of dynamics on homogeneous spaces, e.g. [27] (2 near 1), [25] (2 near 3). A dynamical approach to the problems considered in this paper (3 near 1) is likely to be possible.
## 4. Preliminaries
Before we prove Theorem 2.5, we need to introduce some more definitions as well as some results which will be used without proofs. See [16], [30] for more details on the notions of dimensions and regularities of measures.
### Hausdorff dimension, box dimensions
Let \(n\geq 1\) be an integer. Let \(F\subset\mathbb{R}^{n}\) be a Borel set. Let \(g:[0,1)\to[0,\infty)\) be a continuous function such that \(g(0)=0\). Then for all \(\delta>0\) we define the quantity
\[\mathcal{H}^{g}_{\delta}(F)=\inf\left\{\sum_{i=1}^{\infty}g(\operatorname{ diam}(U_{i})):\bigcup_{i}U_{i}\supset F,\operatorname{diam}(U_{i})<\delta \right\}.\]
The \(g\)-Hausdorff measure of \(F\) is
\[\mathcal{H}^{g}(F)=\lim_{\delta\to 0}\mathcal{H}^{g}_{\delta}(F).\]
When \(g(x)=x^{s}\) then \(\mathcal{H}^{g}=\mathcal{H}^{s}\) is the \(s\)-Hausdorff measure and Hausdorff dimension of \(F\) is
\[\operatorname{dim}_{\mathrm{H}}F=\inf\{s\geq 0:\mathcal{H}^{s}(F)=0\}=\sup\{s \geq 0:\mathcal{H}^{s}(F)=\infty\}.\]
Next, let \(F\subset\mathbb{R}^{n}\) be a Borel set. Let \(\delta>0\) and \(N_{\delta}(F)\) be the minimum amount of \(\delta\)-balls needed to cover \(F.\) Then the upper box dimension of \(F\) is
\[\overline{\operatorname{dim}_{\mathrm{B}}}F=\limsup_{\delta\to 0}\frac{-\log N_{\delta}(F)}{\log\delta}.\]
The lower box dimension is
\[\underline{\operatorname{dim}_{\mathrm{B}}}F=\liminf_{\delta\to 0}\frac{-\log N_{\delta}(F)}{\log\delta}.\]
If the upper and lower box dimensions of \(F\) are equal, we call the common value the box dimension of \(F\)
\[\dim_{\mathrm{B}}F=\underline{\dim}_{\mathrm{B}}F=\overline{\dim}_{\mathrm{B}}F.\]
If \(F\) is compact, then we have the general result
\[\dim_{\mathrm{H}}F\leq\underline{\dim}_{\mathrm{B}}F\leq\overline{\dim}_{ \mathrm{B}}F.\]
### AD-regularity
Let \(n\geq 1\) be an integer. Let \(\mu\) be a Borel measure. Let \(s>0\) be a number. We say that \(\mu\) is \(s\)-regular, or AD-regular with exponent \(s\) if there is a constant \(C>1\) such that for all \(x\in supp(\mu)\) and all small enough \(r>0\)
\[C^{-1}r^{s}\leq\mu(B_{r}(x))\leq Cr^{s},\]
where \(B_{r}(x)\) is the Euclidean ball of radius \(r\) and centre \(x\). For an AD-regular measure \(\mu\), the exponent can be seen as
\[s=\dim_{\mathrm{H}}supp(\mu).\]
For this reason, we simply define for AD-regular measure \(\mu\),
\[\dim_{\mathrm{H}}\mu=\dim_{\mathrm{H}}supp(\mu).\]
Missing digits measures \(\lambda_{p,D}\) in \(\mathbb{R}^{n}\) are AD-regular measures with exponent
\[s=\dim_{\mathrm{H}}\lambda_{p,D}=\dim_{\mathrm{H}}K_{p,D}=\frac{\log\#D}{\log p}.\]
### Fourier transform of surface measures on manifolds of finite type
Discussions in this section follow [40, Chapter VIII, Section 3].
Let \(n\geq 2\) be an integer and \(M\subset\mathbb{R}^{n}\) be a manifold of finite type. Let \(\mu\) be the surface measure, i.e. the natural Lebesgue measure carried by the manifold. If \(M\) is compact, we can normalize \(\mu\) to be a probability measure. Otherwise, we shall truncate the measure to vanish at infinity by choosing a smooth, compactly supported function \(\phi\) and denote
\[\mu^{\prime}=c\phi\mu\]
where \(c\) is the normalization factor so that \(\mu^{\prime}\) is a probability measure.
We will always perform the truncating procedure. This will cause no loss of generality for the problems considered in this paper.
A standard result for smooth surface measures carried by manifolds of finite type is that their Fourier transforms decay polynomially,
\[\widehat{\phi\mu}(\xi)=O(|\xi|^{-\sigma}),\]
where \(\phi\) is a smooth, compactly supported function on \(\mathbb{R}^{n}.\) Of course, if \(M\) is compact, one can also let \(\phi=1.\)
Here lower bounds for \(\sigma\) can be effectively determined. It is related to the type of \(M.\) Roughly speaking, \(\sigma=1/k\) where \(k\) is the smallest integer such that \(M\) does not have \(k\)-th order contact with affine hyperplanes. For the Veronese curve, one can choose \(\sigma=1/n.\) If \(M\) is a hypersurface with non-vanishing curvatures, then the Fourier transform has a much better decay: one can choose \(\sigma=(n-1)/2=2^{-1}\dim M.\)
The choice of \(\sigma(M)\) for non-degenerate manifolds is a challenging topic in harmonic analysis and differential geometry. Apart from some general results in Stein's book [40], the article [11] contains some more technical results that are useful to provide estimates of \(\sigma(M)\) for some specially defined \(M\), e.g. joint graphs of analytic maps.
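As a concrete illustration of the decay discussed in this subsection (ours, not needed later), the sketch below numerically evaluates the Fourier transform of the normalized arc-length measure on the unit circle in \(\mathbb{R}^{2}\) along the frequency direction \((r,0)\) and checks the decay rate \(|\xi|^{-1/2}\), matching \(\sigma=(n-1)/2\) for hypersurfaces with non-vanishing Gaussian curvature; the quadrature size is an arbitrary choice.

```python
import numpy as np

def circle_ft(r, N=200000):
    """|hat(mu)(xi)| for mu = normalized arc-length on the unit circle, at xi = (r, 0).
    Uniform-grid quadrature of (1/2pi) * int_0^{2pi} exp(-2*pi*i*r*cos t) dt."""
    t = 2 * np.pi * np.arange(N) / N
    return abs(np.mean(np.exp(-2j * np.pi * r * np.cos(t))))

for r in [10, 40, 160, 640]:
    val = circle_ft(r)
    print(r, val, val * np.sqrt(r))    # the last column stays bounded: decay like r^{-1/2}
```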
### Fourier transform of missing digits measures
The discussion in this subsection works for all self-similar measures with a uniform contraction ratio. We nonetheless focus only on missing digits measures.
Let \(n\geq 1\) be an integer. Let \(p>2\) be an integer and \(D\subset\{0,\ldots,p-1\}^{n}.\) Consider the missing digit measure \(\lambda_{p,D}.\) In this case, \(\hat{\lambda}_{p,D}(\xi)\) can be computed with the help of the formula
\[\hat{\lambda}_{p,D}(\xi)=\prod_{j\geq 0}\frac{1}{\#D}\sum_{d\in D}e^{-2\pi i(d, \xi)/p^{j}}.\]
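The product formula is straightforward to evaluate numerically. The sketch below (ours; the truncation of the product after finitely many factors is an ad hoc choice) implements it for the Sierpinski carpet measure. Note that \(|\hat{\lambda}_{p,D}|\) does not decay along frequencies divisible by high powers of \(3\); this is one reason why averaged quantities such as those in Section 2.1, rather than pointwise decay, are the relevant objects.

```python
import itertools
import numpy as np

def g(xi, D):
    """g(xi) = (1/#D) * sum_{d in D} exp(-2*pi*i*(d, xi))."""
    Darr = np.asarray(list(D), dtype=float)                      # shape (#D, n)
    return np.exp(-2j * np.pi * Darr @ np.asarray(xi, dtype=float)).mean()

def hat_lambda(xi, p, D, terms=40):
    """Truncated product formula for hat(lambda)_{p,D}(xi); `terms` is an ad hoc cutoff."""
    val = 1.0 + 0.0j
    for j in range(terms):
        val *= g(np.asarray(xi, dtype=float) / p ** j, D)
    return val

# Example: the base-3 Sierpinski carpet measure (all digit pairs except (1,1)).
D = set(itertools.product(range(3), repeat=2)) - {(1, 1)}
for xi in [(1, 0), (2, 1), (5, 7), (27, 0), (100, 33)]:
    print(xi, abs(hat_lambda(xi, 3, D)))     # compare (27,0) = 27*(1,0) with (1,0): no decay along powers of 3
```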
### Asymptotic notations
We use both the Vinogradov \((\ll,\gg,\asymp)\) and the Bachmann-Landau \((O(),o())\) notations:
Let \(f(\delta),g(\delta)\) be two real valued quantities depending on \(\delta>0.\) Then
* \(f\ll g\) or \(f=O(g)\) if \(|f(\delta)|\leq C|g(\delta)|\) for a constant \(C>0\) and all \(\delta>0.\)
* \(f=o(g)\) if for each \(\epsilon>0,\) there is a \(\delta_{0}>0\) such that for all \(\delta<\delta_{0},\) \[|f(\delta)|\leq\epsilon|g(\delta)|.\]
* \(f\asymp g\) if \(f\ll g\) and \(g\ll f.\)
## 5. Proof of the results
### Thickening the surface measure
Let \(n\geq 2\) be an integer. Let \(M\subset\mathbb{R}^{n}\) be a smooth submanifold of finite type. Let \(\mu\) be a probability measure supported on \(M.\) We assume that \(\mu\) is compactly supported and smooth. Those conditions are not necessary for the following arguments; we only assume them for convenience.
For later considerations, we need to thicken the measure \(\mu.\) Let \(\delta>0.\) If \(\delta\) is small enough, the neighbourhood \(M^{\delta}\) containing points in \(\mathbb{R}^{n}\) that are \(\delta\)-close to \(M\) provides us with a good approximation of \(M.\) Let \(\phi\) be a compactly supported and smooth function which equals one on \(B_{1/2}(0)\) and vanishes outside of \(B_{1}(0).\) We also arrange that \(\phi\) and \(\hat{\phi}\) are spherically symmetric positive valued functions. See [44]. Furthermore, we shall assume that \(\int\phi=1.\) Let \(\phi_{\delta}(x)=\delta^{-n}\phi(x/\delta).\)
We see that the convoluted measure \(\mu_{\delta}=\phi_{\delta}\ast\mu\) is a probability measure and it is also a smooth function supported on \(M^{\delta}.\) It can also be checked that on \(M^{\delta/2},\) \(\mu_{\delta}\asymp\delta^{-(n-\dim M)},\) and in general
\[\mu_{\delta}(x)\ll\delta^{-(n-\dim M)}\]
uniformly across \(x\in\mathbb{R}^{n}.\)
Observe that
\[\widehat{\phi_{\delta}\ast\mu}(\xi)=\hat{\phi}_{\delta}(\xi)\hat{\mu}(\xi).\]
The function \(\hat{\phi}\) decays fast at infinity. In fact, for each number \(N>0,\) we have
\[|\hat{\phi}(\xi)|=O(|\xi|^{-N}).\]
Intuitively (after scaling by \(\delta\)), this tells us that \(\hat{\phi}_{\delta}\) is a smooth function which is essentially supported on \(B_{1/\delta}(0).\) Within the ball \(B_{\delta}(0)\) we have
\[\phi_{\delta}\asymp\delta^{-n},\]
that is, the value of \(\phi_{\delta}/\delta^{-n}\) on \(B_{\delta}(0)\) is bounded and strictly positive (in a manner that does not depend on \(\delta\)). Similarly, on \(B_{1/\delta}(0),\) we have
\[\hat{\phi}_{\delta}\asymp 1.\]
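As a rough numerical check of the claim \(\mu_{\delta}\asymp\delta^{-(n-\dim M)}\) on \(M^{\delta/2}\) (ours, purely illustrative), the sketch below takes \(\mu\) to be the normalized arc-length measure on the unit circle in \(\mathbb{R}^{2}\), so that \(n-\dim M=1\), and evaluates \(\mu_{\delta}\) at a point of \(M\) by Monte Carlo. It uses a generic smooth bump rather than one that is exactly \(1\) on \(B_{1/2}(0)\), and the grid and sample sizes are arbitrary choices.

```python
import numpy as np

def bump(x):
    """A standard smooth bump on R^2 supported in the unit ball (a stand-in for phi)."""
    r2 = np.sum(x * x, axis=-1)
    out = np.zeros_like(r2)
    inside = r2 < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - r2[inside]))
    return out

# Normalize the bump so that it integrates to 1 over R^2 (crude grid quadrature).
grid = np.mgrid[-1:1:401j, -1:1:401j].reshape(2, -1).T
Z = bump(grid).sum() * (2.0 / 400) ** 2

def mu_delta(x, delta, N=200000, rng=np.random.default_rng(1)):
    """(phi_delta * mu)(x) with phi_delta(y) = delta^{-2} phi(y/delta) and mu = arc-length on the unit circle."""
    t = 2 * np.pi * rng.random(N)
    pts = np.stack([np.cos(t), np.sin(t)], axis=1)      # Monte Carlo samples from mu
    return np.mean(bump((x - pts) / delta) / Z) / delta ** 2

x0 = np.array([1.0, 0.0])                                # a point on the circle
for delta in [0.2, 0.1, 0.05, 0.025]:
    print(delta, delta * mu_delta(x0, delta))            # delta * mu_delta(x0) stays of order one
```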
### Littlewood-Paley decomposition, proof of Theorem 2.5
Let \(\lambda\) be a Borel probability measure, by Plancherel's theorem, we have
\[\int\mu_{\delta}(x)d\lambda(x)=\int\hat{\mu}_{\delta}(\xi)\overline{\hat{ \lambda}(\xi)}d\xi.\]
For each \(r>0,\) let \(B_{r}\subset\mathbb{R}^{n}\) be the metric ball with radius \(r\) centred at the origin. For each number \(j\geq 0,\) let \(S_{j}=B_{2^{j+1}}\setminus B_{2^{j}}.\) Then we see that,
(I) \[\int|\hat{\phi}_{\delta}(\xi)\hat{\mu}(\xi)\overline{\hat{\lambda}(\xi)}|d\xi =\int_{B_{1}}|\hat{\phi}_{\delta}(\xi)\hat{\mu}(\xi)\overline{\hat{ \lambda}(\xi)}|d\xi+\sum_{j\geq 0}\int_{S_{j}}|\hat{\phi}_{\delta}(\xi)\hat{\mu}( \xi)\overline{\hat{\lambda}(\xi)}|d\xi.\]
Let \(\kappa_{1}<\dim_{l^{1}}\lambda\leq\dim_{l^{1}}^{I}\lambda.\) Observe that
(II) \[\int_{S_{j}}|\hat{\phi}_{\delta}(\xi)\hat{\mu}(\xi)\overline{\hat{\lambda}( \xi)}|d\xi\ll\frac{1}{2^{j\sigma(M)}}\int_{S_{j}}|\overline{\hat{\lambda}(\xi) }|d\xi\ll\frac{2^{(j+1)(n-\kappa_{1})}}{2^{j\sigma(M)}},\]
where \(\sigma(M)>0\) can be chosen according to the discussions in Section 4.3. Thus, as long as \(\sigma(M)+\dim_{l^{1}}\lambda>n,\) we can choose \(\kappa_{1}\) such that
\[n-\kappa_{1}<\sigma(M).\]
This implies that (now without taking absolute values)
\[\int\hat{\phi}_{\delta}(\xi)\hat{\mu}(\xi)\overline{\hat{\lambda}(\xi)}d\xi=O(1).\]
Next, observe that
\[\lambda(M^{\delta/2})=\int_{M^{\delta/2}}d\lambda(x)\ll\delta^{n-\dim M}\int \mu_{\delta}(x)d\lambda(x)\ll\delta^{n-\dim M}.\]
This finishes the proof of Theorem 2.5.
### From the counting estimate to an intersection estimate
Now let \(\lambda\) be an AD-regular measure. Suppose that \(\dim_{l^{1}}\lambda\) is large enough so that Theorem 2.5 applies.
We then see that
\[\lambda(M^{\delta})\ll\delta^{n-\dim M}.\]
Since \(\lambda\) is AD-regular, we see that \(M^{\delta}\cap\operatorname{supp}(\lambda)\) can be covered by
\[\ll\delta^{n-\dim M}/\delta^{\dim_{\operatorname{H}}\lambda}\]
many \(\delta\)-balls. By letting \(\delta\to 0\) we see that
\[\overline{\dim_{\mathrm{B}}}(M\cap\operatorname{supp}(\lambda))\leq\dim_{ \mathrm{H}}\lambda-(n-\dim M)\]
if \(\dim_{\mathrm{H}}\lambda-(n-\dim M)>0.\) Otherwise we have
\[\overline{\dim_{\mathrm{B}}}(M\cap\operatorname{supp}(\lambda))=0.\]
This proves Corollary 2.7.
### Lower estimate, proof of Theorem 2.8
We first reduce the problem a little. We pick a compact subset of \(M\) by performing the smooth truncation discussed in Section 4.3. This step is not necessary if \(M\) is already compact. Let \(\mu\) be the chosen smooth compactly supported measure on \(M\). Next, we choose an integer \(l\geq 0.\) We consider the measure
\[\mu_{l}=\frac{1}{p^{nl}}\sum_{d\in\{0,\ldots,p^{l}-1\}^{n}/p^{l}} \mu(.+d).\]
It is an average of \(p^{nl}\) many translated copies of \(\mu.\) Next, we perform the \(\mod\mathbb{Z}^{n}\) action. We use \(\mu_{l}\) to denote also the image of \(\mu_{l}\) under the action \(\mod\mathbb{Z}^{n}.\) The mod \(\mathbb{Z}^{n}\) can be performed because \(K_{p,D}\) is already \(\mathbb{Z}^{n}\) periodic and \(\operatorname{supp}(\mu_{l})\) is compact.8 This will let us concentrate on the unit cube \([0,1]^{n}.\)
Footnote 8: It is important that \(\operatorname{supp}(\mu_{l})\) is compact. Otherwise the image of \(\mu_{l}\) under \(\mod\mathbb{Z}^{n}\) might be supported on a dense subset.
Now we view the whole situation on \(\mathbb{R}^{n}/\mathbb{Z}^{n}\approx[0,1)^{n}.\) It can be checked that for \(\xi\in\mathbb{Z}^{n}\)
\[\hat{\mu}_{l}(\xi)=\Delta_{p^{l}|\xi}\hat{\mu}(\xi),\]
where \(\Delta_{p^{l}|\xi}=1\) if \(p^{l}\) divides all the components of \(\xi\) or else \(\Delta_{p^{l}|\xi}=0.\)
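This orthogonality can be verified directly on a toy example. The sketch below (ours; the atomic measure and the parameters \(p=3\), \(l=2\) are arbitrary) forms the average \(\mu_{l}\) of the \(p^{l}\) translates of a small atomic probability measure on \([0,1)\) and checks that its integer Fourier coefficients vanish unless \(p^{l}\) divides the frequency.

```python
import numpy as np

rng = np.random.default_rng(0)
p, l = 3, 2
atoms = rng.random(5)                              # a toy probability measure mu on [0,1): five atoms
weights = rng.random(5); weights /= weights.sum()

def fourier(points, w, xi):
    """hat(mu)(xi) = sum_k w_k exp(-2*pi*i*xi*x_k)."""
    return np.sum(w * np.exp(-2j * np.pi * xi * points))

# mu_l: the average of p^l translated copies of mu, taken mod Z.
shifts = np.arange(p ** l) / p ** l
atoms_l = (atoms[None, :] + shifts[:, None]).ravel() % 1.0
weights_l = np.tile(weights, p ** l) / p ** l

for xi in range(20):
    lhs = fourier(atoms_l, weights_l, xi)
    rhs = fourier(atoms, weights, xi) if xi % p ** l == 0 else 0.0
    print(xi, abs(lhs - rhs) < 1e-12)              # True for every xi
```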
Let \(\delta>0\) be a small number. We consider the thickened measure \(\lambda_{\delta}=\phi_{\delta}\ast\lambda_{p,D}.\) Just as \(\mu_{\delta},\) we see that \(\lambda_{\delta}\asymp\delta^{-(n-\dim_{\mathrm{H}}K_{p,D})}\) on \(K_{p,D}^{\delta/2}.\) We see that
(III) \[\mu_{l}(K_{p,D}^{\delta})\gg\delta^{n-\dim_{\mathrm{H}}K_{p,D}} \int_{\mathbb{R}^{n}/\mathbb{Z}^{n}}\lambda_{\delta}(x)d\mu_{l}(x)=\delta^{n- \dim_{\mathrm{H}}K_{p,D}}\sum_{\xi\in\mathbb{Z}^{n}}\hat{\lambda}_{\delta}( \xi)\hat{\mu}_{l}(-\xi).\]
The above sum converges because \(\lambda_{\delta}\) is a Schwartz function. We can now perform the arguments in Section 5.2. First observe that
\[\sum_{\xi\in\mathbb{Z}^{n}}\hat{\lambda}_{\delta}(\xi)\hat{\mu}_{l}(-\xi)=\hat{ \lambda}_{\delta}(0)\hat{\mu}_{l}(0)+\sum_{p^{l}|\xi,\xi\neq 0}\hat{\lambda}_{ \delta}(\xi)\hat{\mu}_{l}(-\xi).\]
For the second sum above, we see that \(|\xi|\geq p^{l}\) because at least one of the components of \(\xi\) is non-zero and divisible by \(p^{l}.\)
We can perform the summation version of the argument in (I), (II) in Section 5.2. The effect of considering \(\mu_{l}\) instead of \(\mu\) is to push the non-zero frequencies in the sum in (III) away from the origin. Since \(\dim_{l^{1}}\lambda_{p,D}\leq\dim_{l^{1}}^{S}\lambda_{p,D},\) we see that
\[\dim_{l^{1}}^{S}\lambda_{p,D}+\sigma(M)\geq\dim_{l^{1}}\lambda_{p,D}+\sigma(M) >n.\]
Then we see that (similar to (II)) as \(l\to\infty\)
\[\sum_{p^{l}|\xi,\xi\neq 0}|\hat{\lambda}_{\delta}(\xi)\hat{\mu}_{l} (-\xi)|\] \[\leq\sum_{\xi\in\mathbb{Z}^{n},|\xi|\geq p^{l}}|\hat{\lambda}_{ \delta}(\xi)\hat{\mu}(-\xi)|\] \[\leq\sum_{j\geq l}\sum_{\xi\in\mathbb{Z}^{n},p^{j-1}\leq|\xi|<p^{ j}}|\hat{\lambda}_{\delta}(\xi)\hat{\mu}(-\xi)|\] \[\ll\sum_{j\geq l}p^{-j\sigma(M)}p^{j(n-\kappa_{1})},\]
where on the last line \(\kappa_{1}\) is a positive number that can be chosen to be arbitrarily close to \(\dim_{l^{1}}\lambda_{p,D}.\) The implicit constant in \(\ll\) depends on \(\kappa_{1}\) and does not depend on \(l,\delta\). In particular, \(\kappa_{1}\) can be chosen such that
\[\kappa_{1}+\sigma(M)>n.\]
For this fixed \(\kappa_{1}\), we see that
\[\sum_{j\geq l}p^{-j\sigma(M)}p^{j(n-\kappa_{1})}\leq p^{l(n-\sigma(M)-\kappa_ {1})}\frac{1}{1-p^{n-\sigma(M)-\kappa_{1}}}.\]
Thus we see that as \(l\to\infty,\)
\[\sum_{p^{l}|\xi,\xi\neq 0}|\hat{\lambda}_{\delta}(\xi)\hat{\mu}(-\xi)|\to 0\]
uniformly across \(\delta>0.\) Now observe that
\[\hat{\lambda}_{\delta}(0)\hat{\mu}_{l}(0)=1\]
because \(\lambda_{\delta},\mu_{l}\) are all probability measures. Thus as long as \(l\) is large enough we have for all small enough \(\delta,\)
(Lower) \[\mu_{l}(K_{p,D}^{\delta})>c\delta^{n-\dim_{\rm H}K_{p,D}}\]
for a constant \(c>0\) which depends on the choice of the bump function \(\phi.\)
Observe that \(\mu_{l}\) is essentially a smooth surface measure carried by a manifold. This is not exactly the case because \(\mu_{l}\) is actually carried by a finite union of manifolds. We denote the finite union of manifolds to be \(\tilde{M}.\) The estimate
\[\mu_{l}(B_{\delta}(x))\asymp\delta^{\dim M}\]
holds uniformly for all \(x\in\operatorname{supp}(\mu_{l})\) and all small enough \(\delta>0.\) Of course, the implicit constants depend on \(l.\) From here we see that (Lower) implies that in order to cover \(K_{p,D}^{\delta}\cap\tilde{M}\) with \(\delta\)-balls, one needs
\[\gg\delta^{n-\dim_{\rm H}K_{p,D}-\dim M}=\delta^{-s}\]
many of them where \(s=\dim_{\rm H}K_{p,D}+\dim M-n>0.\)
As a simple observation, from (Lower), we see that \(\tilde{M}\cap K\) cannot be empty. Indeed, if \(\tilde{M}\cap K=\emptyset,\) then there is a \(\delta_{0}>0\) such that \(d(\tilde{M},K)>\delta_{0}.\) This is because \(M\) and \(K\) are closed and compact (we already reduced the whole situation to \([0,1]^{n}\)). This means that \(\mu_{l}(K_{p,D}^{\delta})=0\) as long as \(\delta<\delta_{0}/10.\) This contradicts (Lower).
Having shown that \(\tilde{M}\cap K\) is not empty, we now show that it has positive dimension. Let \(\epsilon>0.\) Let \(\delta>0\) be a power of \(p^{-1}\) and let \(\mathcal{C}\) be any collection of \(\delta\)-balls with
\[\#\mathcal{C}\leq(1/\delta)^{\epsilon}.\]
We want to show that \(\mathcal{C}\) cannot cover \(\tilde{M}\cap K\) as long as \(\epsilon\) is small enough. Let \(C\in\mathcal{C}.\) Denote by \(10C\) the ball co-centered with \(C\) of radius \(10\delta.\) Then for each
small enough \(\delta_{C}>0\), there is a number \(\eta>0\) such that
\[\tilde{M}\cap K^{\delta_{C}}\cap 10C\]
can be covered with at most
(Counting Down) \[\delta^{\eta}\left(\frac{1}{\delta_{C}}\right)^{s}\]
many \(\delta_{C}\)-balls. This can be seen by rescaling the whole situation by a factor \(1/(10\delta).\) Such a zoom action does not change the Fourier decay properties for \(K\) or for \(M\). Of course, some explicit constants are changed. This results in the additional scaling factor \(\delta^{\eta}\) for some \(\eta>0.\) We will discuss this later.
We apply the above argument for each \(C\in\mathcal{C}\) and find a small enough number \(\delta^{\prime}>0\) such that each \(10C\cap\tilde{M}\cap K\) can be covered by at most
\[\delta^{\eta}\left(\frac{1}{\delta^{\prime}}\right)^{s}\]
many \(\delta^{\prime}\)-balls. Thus in total, one needs at most
\[\delta^{\eta}\left(\frac{1}{\delta}\right)^{\epsilon}\left(\frac{1}{\delta^{ \prime}}\right)^{s}=\delta^{-\epsilon+\eta}\left(\frac{1}{\delta^{\prime}} \right)^{s}.\]
many \(\delta^{\prime}\)-balls to cover \(\bigcup_{C\in\mathcal{C}}10C\cap\tilde{M}\cap K^{\delta^{\prime}}.\) If \(0<\epsilon<\eta\) we can make \(\delta^{-\epsilon+\eta}\) arbitrarily small by choosing \(\delta\) to be small. Now we use (Lower) for \(\delta^{\prime}.\) We need at least
\[\gg\left(\frac{1}{\delta^{\prime}}\right)^{s}\]
many \(\delta^{\prime}\)-balls to cover
\[\tilde{M}\cap K^{\delta^{\prime}}.\]
We thus see that for each small enough \(\delta>0\), as long as \(\delta^{\prime}\) is small enough, there must exist (possibly many) \(\delta^{\prime}\)-balls intersecting \(\tilde{M}\cap K\) that do not intersect \(\cup_{C\in\mathcal{C}}2C.\) Thus, we apply the above result with \(\delta^{\prime}\to 0\). As a result, we find balls \(B_{\delta^{\prime}}\) of radius \(\delta^{\prime}\to 0\),
\[B_{\delta^{\prime}}\cap K\cap\tilde{M}\neq\emptyset\]
and
\[d\left(B_{\delta^{\prime}},\bigcup_{C\in\mathcal{C}}C\right)>\delta.\]
Since \(K\) and \(\tilde{M}\) are compact, we see that there is a point \(x\in K\cap\tilde{M}\) with
\[d\left(x,\bigcup_{C\in\mathcal{C}}C\right)\geq\delta>0.\]
Thus \(\mathcal{C}\) cannot cover \(K\cap\tilde{M}.\) We have seen that \(K\cap\tilde{M}\) cannot be covered by \((1/\delta)^{\epsilon}\) many \(\delta\)-balls as long as \(\delta\) is small enough. This shows that
\[\underline{\dim}_{\mathrm{B}}K\cap\tilde{M}\geq\epsilon.\]
Now we discuss how we can choose \(\epsilon,\eta\) in (Counting Down). Let \(\delta>0\) be a power of \(p^{-1}.\) We consider a \(\delta\)-branch of \(K,\) say \(K^{\prime}.\) Notice that \(K\) is self-similar and the \(\delta\)-branch \(K^{\prime}\) is just \(K\) scaled down by the factor \(\delta.\) This scaling procedure affects the Fourier transform as follows. Let \(\lambda^{\prime}\) be the natural missing digits measure supported on \(K^{\prime},\) i.e. \(\lambda^{\prime}=(\lambda(K^{\prime}))^{-1}\lambda_{|K^{\prime}}\). Denote \(\kappa_{2}=\dim_{\mathrm{H}}K_{p,D}.\) Then we see that
(Rescale) \[|\hat{\lambda}^{\prime}(\xi)|=|\hat{\lambda}(\delta\xi)|.\]
We can now argue as in Section 5.2 with \(\lambda^{\prime}\) in the place of \(\lambda,\)
\[\int_{\mathbb{R}^{n}}|\hat{\mu}(\xi)\hat{\lambda}^{\prime}(\xi)| d\xi=\int_{\mathbb{R}^{n}}|\hat{\mu}(\xi)||\hat{\lambda}(\delta\xi)|d\xi\] \[=\int_{\mathbb{R}^{n}}|\hat{\mu}(\xi/\delta)||\hat{\lambda}(\xi) |\delta^{-n}d\xi\] \[\ll\delta^{-n}\int_{\mathbb{R}^{n}}|\xi/\delta|^{-\sigma(M)}| \hat{\lambda}(\xi)|d\xi\] \[=\delta^{-n+\sigma(M)}\int_{\mathbb{R}^{n}}|\xi|^{-\sigma(M)}| \hat{\lambda}(\xi)|d\xi.\]
From here we can use the inequalities in (II) to deduce that for \(\delta^{\prime}\to 0,\)
\[\lambda^{\prime}(M^{\delta^{\prime}})\ll(1/\delta)^{n-\sigma(M)}\delta^{\prime n -\dim M},\]
where the multiplicative constant in \(\ll\) is the same as at the end of Section 5.2. Since
\[\lambda(K^{\prime})\asymp\delta^{\kappa_{2}},\]
we see that \(K^{\prime}\cap M^{\delta^{\prime}}\) can be covered by at most
\[\ll\frac{(1/\delta)^{n-\sigma}\,\delta^{\prime\,n-\dim M}}{\delta^{\prime\,\kappa_{2}}/\delta^{\kappa_{2}}}\ll\delta^{\kappa_{2}-(n-\sigma)}\left(\frac{1}{\delta^{\prime}}\right)^{\kappa_{2}+\dim M-n}\]
many balls of radius \(\delta^{\prime}.\) Thus we can choose \(\eta=\kappa_{2}-(n-\sigma)>0.\)9 Finally, we can choose any \(\epsilon<\eta.\) This shows that
Footnote 9: Notice that \(\kappa_{2}\geq\dim_{l^{1}}K\) and by assumption we already have \(\dim_{l^{1}}K+\sigma>n.\)
\[\underline{\dim}_{\rm B}K\cap\tilde{M}\geq\kappa_{2}-(n-\sigma).\]
Finally, if \(x\in\tilde{M}\cap K\), then there is a translation vector \(d\in\{0,\ldots,p^{l}-1\}^{n}/p^{l}\) such that
\[x+d\in M\cap K_{p,D,l}.\]
Thus \(M\cap K_{p,D,l}\neq\emptyset\) and
\[\underline{\dim}_{\rm B}K_{p,D,l}\cap M\geq\kappa_{2}-(n-\sigma).\]
To see the latter, observe that
\[\tilde{M}\cap K\subset K_{p,D,l}\cap\bigcup_{d\in\{0,\ldots,p^{l}-1\}^{n}/p^{l}}(M+d).\]
For different \(d,\) the sets
\[M_{d}=K_{p,D,l}\cap(M+d)\]
are translation copies of each other because \(K_{p,D,l}\) is invariant with respect to such translations \(d.\) Thus \(M_{d}\) for different \(d\) all have the same box covering properties. From here we conclude the lower bound for the lower box dimension of \(K_{p,D,l}\cap M.\)
For the last statement of Theorem 2.8, it is enough to observe the following: for each \(\delta>0\) and each \(\delta\)-cube \(C\) with \(C\cap M\neq\emptyset,\) the previous argument in this section shows that there is some possibly large integer \(l\geq 0\) depending on \(C\) such that
\[C\cap M\cap K_{p,D,l}\neq\emptyset.\]
### A slightly more generalized argument for 'pushing away' the non-zero coefficients
This section is not needed outside of Section 8. The reader can skip it for now and come back later. In the previous section, we replaced a measure \(\mu\) with
\[\mu_{l}=\frac{1}{p^{nl}}\sum_{d\in\{0,\ldots,p^{l}-1\}^{n}/p^{l}}\mu(.+d).\]
The effect of the above averaging process is that for \(\xi\in\mathbb{Z}^{n}\), \(\hat{\mu}_{l}(\xi)\) is non-zero only when the components of \(\xi\) are all divisible by \(p^{l}\).
Now, we can formulate a slightly more generalized way of performing the above averaging process. Let \(p_{1},\ldots,p_{n}\) be integers larger than \(1.\) Consider the measure
\[\mu^{\prime}=\mu_{p_{1},\ldots,p_{n}}=\frac{1}{p_{1}\ldots p_{n}}\sum_{d\in\{0,\ldots,p_{1}-1\}p_{1}^{-1}\times\cdots\times\{0,\ldots,p_{n}-1\}p_{n}^{-1}} \mu(.+d).\]
This measure \(\mu^{\prime}\) is an average of \(p_{1}\ldots p_{n}\) translated copies of \(\mu.\) The Fourier coefficient \(\hat{\mu}^{\prime}(\xi)\) at \(\xi\in\mathbb{Z}^{n}\) is non-zero only when \(\xi=(\xi_{1},\ldots,\xi_{n})\) satisfies that for each \(i\in\{1,\ldots,n\}\), \(p_{i}|\xi_{i}.\) By choosing \(p_{1},\ldots,p_{n}\) to be all large enough, we again achieve the goal of 'pushing away' the non-zero coefficients as in the previous section.
### Proof of Theorem 2.10 part 1
Theorem 2.5 has another consequence. Let \(M\) be a non-degenerate manifold (or a manifold of finite type) and \(K\) be a missing digits set with a large enough \(l^{1}\)-dimension such that Theorem 2.5 applies. Let \(\lambda\) be the corresponding missing digits measure.
From Theorem 2.5, we know that
\[\limsup_{\delta\to 0}\frac{\lambda(M^{\delta})}{\delta^{n-\dim M}}<\infty.\]
In fact, with an extra technical argument, it is likely that the above \(\limsup\) can be replaced with \(\lim\), but we do not need this. A much easier observation is that one can replace \(\limsup\) with \(\liminf\) in all the later arguments. We need to use this fact to prove the last assertion of this theorem.
Now we can define a function from \([0,\infty)\times\mathbb{R}^{n}\times\mathbb{O}(n)\) to \([0,\infty)\),
\[f_{K,M}(t,v,g)=\limsup_{\delta\to 0}\frac{\lambda(T_{t,v,g}(M)^{\delta})}{ \delta^{n-\dim M}}\]
where \(T_{t,v,g}(M)=t\times g(M)+v\), i.e., it is the image of \(M\) under the rotation \(g\), then scaled by \(t\) and then translated by \(v\). From here the first part of Theorem 2.10 follows.
Let \(\mu\) be a smooth and compactly supported surface measure on \(M\). Then we replace \(M^{\delta}\) with the Schwartz function \(\mu_{\delta}\) (as in Section 5.2) and define the function
\[f_{K,\mu}(t,v,g)=\limsup_{\delta\to 0}\int T_{t,v,g}(\mu_{\delta})(x)d\lambda(x).\]
For each fixed \(\delta>0\), the quantity in the lim symbol is continuous viewed as a function with variables \(t,v,g\). Observe that \(\mu_{\delta}\) is a smooth function with \(\text{supp}(\mu_{\delta})\subset\text{supp}(\mu)^{\delta}\) and \(\mu_{\delta}\asymp\delta^{-(n-\dim M)}\) on \(\text{supp}(\mu)^{\delta/2}\). From here, we see that
(*) \[T_{t,v,g}(\mu_{\delta})(x)\asymp\delta^{-(n-\dim M)}\]
uniformly for \(x\in T_{t,v,g}(\text{supp}(\mu))^{t\delta/2}\). From here we see that for all \(t,v,g\) and all small enough \(\delta>0\),
\[\frac{\lambda((T_{t,v,g}(M))^{t\delta/2})}{\delta^{n-\dim M}}\ll\int T_{t,v,g }(\mu_{\delta})(x)d\lambda(x)\ll\frac{\lambda((T_{t,v,g}(M))^{t\delta})}{ \delta^{n-\dim M}}.\]
Thus there is a constant \(c>1\) such that
\[c^{-1}t^{n-\dim M}f_{K,M}(t,v,g)\leq f_{K,\mu}(t,v,g)\leq ct^{n-\dim M}f_{K,M} (t,v,g).\]
We want to show that \(f_{K,\mu}\) is continuous. Then \(h=t^{-(n-\dim M)}f_{K,\mu}\) is continuous. In the next section, we will show that \(h\) is non-vanishing and this will conclude Theorem 2.10.
Observe that
\[\int T_{t,v,g}(\mu_{\delta})(x)\,d\lambda(x)=\int\widehat{T_{t,v,g}(\mu_{\delta})}(\xi)\,\overline{\hat{\lambda}(\xi)}\,d\xi.\]
From (I), (II) we see that the above integrals converge absolutely in a manner that is uniform across \(\delta>0\). This is because under the action \(T_{t,v,g}\), the Fourier transform is changed accordingly in a manner that preserves the polynomial decay with the same exponent. Thus we can apply (I), (II). Moreover, if we restrict \(t\) within a compact interval away from \(0\), then there are constants \(c_{1},c_{2},C_{1},C_{2}>0\)
with
\[C_{1}\min_{|\xi^{\prime}|\in[c_{1}|\xi|,c_{2}|\xi|]}|\hat{\mu}_{\delta}(\xi^{\prime})|\leq|\widehat{T_{t,v,g}(\mu_{\delta})}(\xi)|\leq C_{2}\max_{|\xi^{\prime}|\in[c_{1}|\xi|,c_{2}|\xi|]}|\hat{\mu}_{\delta}(\xi^{\prime})|\]
for all \(\xi.\) The above discussion does not depend on \(\delta.\) From here we see that
\[\limsup_{\delta\to 0}\int T_{t,v,g}(\mu_{\delta})(x)\,d\lambda(x)=\limsup_{\delta\to 0}\int\widehat{T_{t,v,g}(\mu_{\delta})}(\xi)\,\overline{\hat{\lambda}(\xi)}\,d\xi.\]
The RHS above is continuous viewed as a function with variable \((t,v,g).\) This follows from the following two facts:
a. \(T_{t,v,g}\) acts on each of the Fourier coefficients of \(\mu_{\delta}\) continuously in a manner that is uniform across \(\delta>0\). Here we emphasize that \(T_{t,v,g}\) does not act continuously on the Fourier coefficients in a manner that is uniform across all the frequencies. The continuity here is only pointwise. Thus the role of the cut-off function \(\hat{\phi}_{\delta}\) is important.
b. \(\xi\to|\widehat{T_{t,v,g}(\mu_{\delta})}(\xi)\hat{\lambda}(\xi)|\) is integrable in a manner that is uniform across \(\delta>0\) and \(t,v,g\) inside any (fixed) compact set \(U\subset(0,\infty)\times\mathbb{R}^{n}\times\mathbb{O}(n)\).
From here we deduce that
\[(t,v,g)\to\limsup_{\delta\to 0}\int T_{t,v,g}(\mu_{\delta})(x)d\lambda(x)\]
is continuous. Thus \(f_{K,\mu}\) is continuous. Moreover, the convergence and the continuity are uniform when we restrict \((t,v,g)\) within a compact set \(U\subset[0,\infty)\times\mathbb{R}^{n}\times\mathbb{O}(n)\) away from the set \(\{t=0\}\subset[0,\infty)\times\mathbb{R}^{n}\times\mathbb{O}(n)\).
### The integral over group actions: Proof of Theorem 2.10 part 2
In order to show that \(h=t^{-n+\dim M}f_{K,\mu}\) is non-vanishing, it is enough to show that \(f_{K,\mu}\) is non-vanishing (i.e. not identically zero). To show that \(f_{K,\mu}\) is non-vanishing, we show that the integral of \(f_{K,\mu}\) over a large enough region is positive. Following the previous section, for each \(\delta>0\), consider the integral
\[\int_{U}\int T_{t,v,g}(\mu_{\delta})(x)d\lambda(x)d(t,v,g),\]
where \(t,v,g\) is integrated over a compact subset \(U\) with respect to the Haar measure on \(U\). We can exchange the order of the double integral. Observe that
for each \(x\in\mathbb{R}^{n}\)
\[\int_{U}T_{t,v,g}(\mu_{\delta})(x)d(t,v,g)\asymp|\{(t,v,g)\in U:T_{t,v,g}^{-1}(x )\in supp(\mu_{\delta})\}|/\delta^{n-\dim M}.\]
where \(|.|\) is with respect to the Haar measure restricted on \(U\). This measure is not always the Haar measure on \([0,\infty)\times\mathbb{R}^{n}\times\mathbb{O}(n).\) In fact, it can also be the Haar measure on a subgroup. As \(\mu_{\delta}\) is a compactly supported Schwartz function,
\[x\to\int_{U}T_{t,v,g}(\mu_{\delta})(x)d(t,v,g)\]
is non-negative and continuous. Our requirement for \(U\) is that \(U\cap\{t=0\}=\emptyset\) and that for each ball \(B\subset\mathbb{R}^{n}\) we have
(Positive) \[|\{(t,v,g)\in U:T_{t,v,g}^{-1}(x)\in supp(\mu_{\delta})\}|/\delta^{n-\dim M}>\epsilon>0\]
for some \(\epsilon>0\) and all \(x\in B\) in a manner that does not depend on \(\delta.\)
There are many possible choices of \(U.\)
**Case 1:** For example, we can fix \(t>0,g\in\mathbb{O}(n)\) and let \(v\) range over a sufficiently large ball \(B^{\prime}\subset\mathbb{R}^{n}\) centred at the origin (i.e. \(U=\{t\}\times B^{\prime}\times\{g\}\)). In this way, the Haar measure restricted to \(U\) is the Lebesgue measure on the \(v\) component restricted to \(B^{\prime}.\)
**Case 2:** Another choice is to fix \(v\) and then fix an interval \([a,b]\subset(0,\infty)\) for \(t\) with a small enough \(a>0\) and a large enough \(b>a.\) Finally, we do not restrict \(g.\) In this way, the Haar measure restricted to \(U\) is the Haar measure of the scaling and rotation group \([0,\infty)\times\mathbb{O}(n).\) In this case, the Haar measure on \(U\) is equivalent to the \(n\)-dimensional Lebesgue measure.
In both of the cases illustrated above, (Positive) is satisfied. In the first example,
\[|\{(t,v,g)\in U:T_{t,v,g}^{-1}(x)\in supp(\mu_{\delta})\}|\]
reduces to
\[|\{v\in B^{\prime}:x+v\in supp(\mu_{\delta})\}|=|supp(\mu_{\delta})|.\]
The last term is the Lebesgue measure of \(\operatorname{supp}(\mu_{\delta})\) which is the \(\delta\)-neighbourhood of a compact piece of \(M.\) It is of order \(\delta^{n-\dim M}.\) The second example can be tested via a similar method.
**Case 3:** We now introduce the third case. We restrict the discussion to \(\mathbb{R}^{2}.\) Let \(r>0\) and consider the hyperbola \(H_{r}=\{xy=r\}_{x,y>0}.\) We can choose a piece of \(H_{r}\) by considering \(\mu\) to be any compactly supported smooth measure on \(H_{r}.\) Let \(x\in(0,\infty)^{2}.\) Consider the line between \(x\) and \((0,0)\): \(l_{x}.\) If \(l_{x}\cap\operatorname{supp}(\mu)\neq\emptyset,\) then
\[|\{c>0:cx\in supp(\mu_{\delta})\}|\asymp\delta,\]
where \(|.|\) is the one dimensional Lebesgue measure. The implicit constants in \(\asymp\) depend on \(|x|.\) Furthermore, let \(r>0\) and \(P\subset\mathbb{R}^{2}\) be compact such that \(d(P,(0,0))=r.\) Then the implicit constants in \(\asymp\) can be chosen to be the same for all \(x\in P.\)
We have mentioned a few scenarios in which (Positive) holds. There are far more situations and we do not provide further examples. Notice that under the Condition (Positive),
\[\int_{U}f_{K,\mu}(t,v,g)d(t,v,g)=\limsup_{\delta\to 0}\int_{U}\int T_{t,v,g}( \mu_{\delta})(x)d\lambda(x)d(t,v,g)>0.\]
Here the order of \(\lim_{\delta\to 0}\) and the integral \(\int_{U}\) can be changed because \(U\) is compact and \(U\cap\{t=0\}=\emptyset\), which implies that the limit is uniform on \(U\). Since \(f_{K,\mu}\) is continuous, we see that there exists a non-trivial ball \(E\subset U\) such that \(f_{K,\mu}(t,v,g)>0\) for all \((t,v,g)\in E.\) This concludes the non-vanishing part of Theorem 2.10.
Finally, for the lower box dimension \(\underline{\dim}_{\mathrm{B}}T_{t,v,g}(M)\cap K,\) observe that an estimate like (Lower) holds. However, there is a slight difference. We need to replace the above arguments with \(\limsup\) being replaced by \(\liminf\). Then we have
\[\lambda((T_{t,v,g}(\operatorname{supp}(\mu)))^{\delta})\gg\delta^{n-\dim M}.\]
This means that \((T_{t,v,g}(\operatorname{supp}(\mu)))^{\delta}\cap K\) cannot be covered with \(o(\delta^{n-\dim M-\dim_{\mathrm{H}}K})\) many \(\delta\)-balls. Thus we see that
\[\mu((T_{t,v,g}(\operatorname{supp}(\mu)))^{\delta})\gg\delta^{n-\dim_{\mathrm{ H}}K}.\]
From here the rest of the arguments follow as in Section 5.4 and we have
\[\underline{\dim}_{\mathrm{B}}T_{t,v,g}(M)\cap K\geq\dim_{\mathrm{H}}K-(n- \sigma(M)).\]
### Sharpness of Theorem 2.5 for small missing digits sets: Proof of Theorem 2.12
The curve \(M\) under consideration is
\[|x|^{k}+|y-1|^{k}=1.\]
This curve passes through the origin and has an order \(k\) contact with the \(X\)-axis at \((0,0).\) This is the highest order of contact of \(M\) with hyperplanes (lines). Loosely speaking, this is because around \((0,0)\) the curve looks like \(y=|x|^{k}\) which has vanishing \((k-1)\)th derivative and non-vanishing \(k\)th derivative at \(x=0.\)
Let \(\delta_{0}=p^{-l}>0\) for a large integer \(l>0.\) Consider the square \([0,\delta_{0}]^{2}.\) We decompose this square into smaller squares with side length \(p^{-kl}.\) Consider the 'first row' of those smaller squares, i.e. those that intersect the \(X\)-axis. We see that \(M\) intersects each of those smaller squares. Because the digit set \(D\) contains \(0.\) This implies that \(M^{p^{-kl}}\cap K\) must intersect \(\gg(p^{-l}/p^{-kl})^{\dim_{\mathrm{H}}K_{p,D_{1}}}\) many of those squares.
Let \(\delta=p^{-kl}.\) We see that \(M^{\delta}\cap K\) cannot be covered with
\[\ll\left(\frac{1}{\delta^{(k-1)/k}}\right)^{\dim_{\mathrm{H}}K_{p,D_{1}}}\]
many \(\delta\)-balls. Similarly, consider the point \((1,1)\) where \(M\) has a \(k\)th order contact with a vertical line. We see that \(M^{\delta}\cap K\) cannot be covered with
\[\ll\left(\frac{1}{\delta^{(k-1)/k}}\right)^{\dim_{\mathrm{H}}K_{p,D_{2}}}\]
many \(\delta\)-balls. This proves Theorem 2.12.
## 6. Fourier norm dimensions for missing digits measures
### General Algorithm
For the case when \(n=1,\) this matter has been discussed in [2]. See also [33]. For \(n\geq 2,\) the arguments change very little. In this paper, we provide some details to keep the paper self-contained.
Let \(n\geq 1\) be an integer. Let \(p>2\) be an integer and \(D\subset\{0,\ldots,p-1\}^{n}.\) Consider the missing digit measure \(\lambda_{p,D}.\) In this case, \(\hat{\lambda}(\xi)\) can be computed with the help of the formula,
\[\hat{\lambda}_{p,D}(\xi)=\prod_{j\geq 0}\frac{1}{\#D}\sum_{d\in D}e^{-2\pi i(d, \xi)/p^{j}}.\]
For convenience, let
\[g(\xi)=\frac{1}{\#D}\sum_{d\in D}e^{-2\pi i(d,\xi)}.\]
Then we have
\[\hat{\lambda}_{p,D}(\xi)=\prod_{j\geq 0}g(\xi/p^{j}).\]
We want to estimate for large integers \(k\geq 1,\)
\[S_{k}=\sup_{\theta\in[0,1]^{n}}\sum_{\xi\in\mathbb{Z}^{n},|\xi|_{\infty}<p^{k} }|\hat{\lambda}_{p,D}(\xi+\theta)|.\]
Notice that in the sum, we conditioned on the max norm \(|\xi|_{\infty}=\max\{|\xi_{1}|,\dots,|\xi_{n}|\}.\) We now estimate \(S_{k}.\) Let \(\theta\in(0,1)^{n}\) be a vector. Consider the function
\[f(\theta)=\sum_{{\bf i}\in\{0,\dots,p-1\}^{n}}|g(({\bf i}+\theta)/p)|.\]
Clearly we have for all \(\theta,\)
\[0\leq f(\theta)\leq p^{n}.\]
Observe that for each \(\theta\in[0,1]^{n}\)
\[S_{k}(\theta)=\sum_{\xi\in\mathbb{Z}^{n},|\xi|_{\infty}<p^{k}}|\hat{\lambda}_ {p,D}(\xi+\theta)|\]
\[=\sum_{\xi\in\mathbb{Z}^{n},|\xi|_{\infty}<p^{k}}\left|\prod_{j\geq 0}g((\xi+ \theta)/p^{j})\right|.\]
Let \(\xi^{\prime}=\xi+{\bf i}\) for some \({\bf i}\in p^{k-1}\mathbb{Z}^{n}.\) Then we have
\[g((\xi^{\prime}+\theta)/p^{j})=g((\xi+\theta)/p^{j})\]
for all \(j=0,1,\dots,k-1.\) From here we see that (recall that \(|g|\leq 1\))
\[S_{k}(\theta) =\sum_{\xi\in\mathbb{Z}^{n},|\xi|_{\infty}<p^{k}}\left|\prod_{j \geq 0}g((\xi+\theta)/p^{j})\right|\] \[\leq\sum_{\xi\in\mathbb{Z}^{n},|\xi|_{\infty}<p^{k}}\left|\prod_{ j=0}^{k}g((\xi+\theta)/p^{j})\right|\]
\[=\sum_{\xi\in\mathbb{Z}^{n},|\xi|_{\infty}<p^{k-1}}\sum_{\mathbf{i} \in\{0,\ldots,p-1\}^{n}p^{k-1}}\left|\prod_{j=0}^{k}g((\xi+\mathbf{i}+\theta)/p ^{j})\right|\] \[=\sum_{\xi\in\mathbb{Z}^{n},|\xi|_{\infty}<p^{k-1}}\left|\prod_{j= 0}^{k-1}g((\xi+\theta)/p^{j})\right|\sum_{\mathbf{i}\in\{0,\ldots,p-1\}^{n}} \left|g(\mathbf{i}p^{-1}+\theta p^{-k}+\xi p^{-k})\right|\] \[\leq\sum_{\xi\in\mathbb{Z}^{n},|\xi|_{\infty}<p^{k-1}}\left|\prod_ {j=0}^{k-1}g((\xi+\theta)/p^{j})\right|\sup_{\theta^{\prime}}f(\theta^{\prime})\] \[\stackrel{{\text{Continue inductively}}}{{\leq}}\sum_{ \xi\in\mathbb{Z}^{n},|\xi|_{\infty}<p^{k-2}}\left|\prod_{j=0}^{k-2}g((\xi+ \theta)/p^{j})\right|(\sup_{\theta^{\prime}}f(\theta^{\prime}))^{2}\] \[\ldots\] \[\leq(\sup_{\theta^{\prime}}f(\theta^{\prime}))^{k}.\]
Thus we have
\[S_{k}(\theta)\leq(\sup_{\theta^{\prime}}f(\theta^{\prime}))^{k}.\]
Therefore we see that
\[S_{k}\leq(\sup_{\theta^{\prime}}f(\theta^{\prime}))^{k}.\]
This implies that (we take the Euclidean norm \(|\xi|\))
\[\sup_{\theta\in[0,1]^{n}}\sum_{\xi\in\mathbb{Z}^{n},|\xi|\leq p^{k}}|\hat{ \lambda}_{p,D}(\xi+\theta)|\leq S_{k}\leq(\sup_{\theta^{\prime}}f(\theta^{ \prime}))^{k}.\]
From here one can see that
\[n-\frac{\log\sup_{\theta}f(\theta)}{\log p}\leq\dim_{l^{1}}\lambda_{p,D}.\]
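The bound above is easy to evaluate in simple cases. The sketch below (ours) approximates \(\sup_{\theta}f(\theta)\) by maximizing over a grid; since a grid maximum only approximates the supremum, the printed value of \(n-\log\sup_{\theta}f(\theta)/\log p\) is indicative rather than a certified lower bound for \(\dim_{l^{1}}\lambda_{p,D}\) (cf. the more precise method of [2] mentioned in the footnote above). The grid size and the example are arbitrary choices.

```python
import itertools
import numpy as np

def f_sup_estimate(p, D, n, grid=2001):
    """Grid estimate of sup_theta f(theta), where f(theta) = sum_{i in {0,...,p-1}^n} |g((i + theta)/p)|."""
    Darr = np.asarray(list(D), dtype=float)                                  # (#D, n)
    thetas = np.stack(np.meshgrid(*([np.linspace(0, 1, grid)] * n)), axis=-1).reshape(-1, n)
    shifts = np.asarray(list(itertools.product(range(p), repeat=n)), dtype=float)
    best = 0.0
    for theta in thetas:
        xi = (shifts + theta) / p                                            # (p^n, n)
        gvals = np.abs(np.exp(-2j * np.pi * xi @ Darr.T).mean(axis=1))       # |g| at the p^n points
        best = max(best, gvals.sum())
    return best

p, n = 3, 1
D = {(0,), (2,)}                                     # middle third Cantor measure
F = f_sup_estimate(p, D, n)
print("sup f ~", F, " => dim_l1 lower bound ~", n - np.log(F) / np.log(p))
```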
### A crude bound: proof of Theorem 2.15 part 1
Although it is in principle possible to compute \(\dim_{l^{1}}\lambda_{p,D}\) to any precision, we do not pursue exact computations here. Instead, we provide a rather crude but still useful upper bound for the value
\[\sup_{\theta}f(\theta)\]
when \(D\) is a large set. This will give us a lower bound for \(\dim_{l^{1}}\lambda_{p,D}.\)
First, observe that
\[\#Dg(\xi)=\prod_{j=1}^{n}\frac{1-e^{2\pi ip\xi_{j}}}{1-e^{2\pi i\xi_{j}}}-\sum_{d \notin D}e^{-2\pi i(d,\xi)}.\]
Let \(\#D=p^{n}-t\) for some integer \(t>0.\) Then we have
\[-t\leq|(p^{n}-t)g(\xi)|-\left|\prod_{j=1}^{n}\frac{1-e^{2\pi ip\xi_{j}}}{1-e^{2 \pi i\xi_{j}}}\right|\leq t.\]
Now we want to consider the sum
\[f_{1}(\theta)=\sum_{{\bf i}\in\{0,\ldots,p-1\}^{n}}|g_{1}(({\bf i}+\theta)/p)|,\]
where
\[g_{1}(\xi)=\prod_{j=1}^{n}\frac{1-e^{2\pi ip\xi_{j}}}{1-e^{2\pi i\xi_{j}}}.\]
To do this, consider the function \(h:\mathbb{R}\to\mathbb{R},\)
\[h(x)=\left|\frac{1-e^{2\pi ipx}}{1-e^{2\pi ix}}\right|.\]
We want to provide an estimate for
\[H(\theta)=\sum_{j\in\{0,\ldots,p-1\}}h((j+\theta)/p).\]
Notice that \((|e^{ix}-1|\leq 2,\forall x\in\mathbb{R})\)
(*) \[H(\theta)\leq 2\sum_{j=0,\{(j+\theta)/p\}\geq 1/p}^{p-1}\frac{1}{|1-e^{2\pi i(j /p)}e^{2\pi i(\theta/p)}|}+\sum_{j:\{(j+\theta)/p\}<1/p}h((j+\theta)/p).\]
For the first sum in (*), we see that \((|1-e^{2\pi ix}|^{2}=2(1-\cos(2\pi x))\geq 16x^{2},\) for \(x\in[0,1/2])\)
\[2\sum_{j=0,\{(j+\theta)/p\}\geq 1/p}^{p-1}\frac{1}{|1-e^{2\pi i(j/p)}e^{2\pi i (\theta/p)}|}\leq 8\sum_{k=1}^{[(p-1)/2]}\frac{1}{|1-e^{2\pi i(k/p)}|}\]
\[\leq\frac{1}{\pi}\sum_{k=1}^{[(p-1)/2]}\frac{p}{k}\leq\frac{p}{\pi}(\log p+1).\]
For the second sum in (*), notice that there are at most two \(j\)'s in the sum. As \(|h|\leq p\), we have
\[\sum_{j:\{(j+\theta)/p\}<1/p}h((j+\theta)/p)\leq 2p.\]
From here we see that for \(p\geq 4\),
\[H(\theta)\leq 2p\log p.\]
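As a quick sanity check of this estimate, one can evaluate \(H(\theta)\) numerically; the following sketch (the choice \(p=101\) and the crude grid over \(\theta\) are assumptions made purely for illustration) compares the largest value found against \(2p\log p\).

```python
import cmath
import math

def h(x, p):
    # h(x) = |(1 - e^{2 pi i p x}) / (1 - e^{2 pi i x})|; the limit at integer x is p.
    den = 1 - cmath.exp(2j * math.pi * x)
    if abs(den) < 1e-12:
        return float(p)
    return abs((1 - cmath.exp(2j * math.pi * p * x)) / den)

def H(theta, p):
    return sum(h((j + theta) / p, p) for j in range(p))

p = 101
worst = max(H(k / 512, p) for k in range(512))  # crude search over theta in [0, 1)
print(worst, 2 * p * math.log(p))               # the maximum found stays well below 2p log p
```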
We can then use this estimate to see that
\[\sup_{\theta}f_{1}(\theta)\leq(2p\log p)^{n}.\]
Thus we have
\[(p^{n}-t)\sup_{\theta}f(\theta)\leq tp^{n}+(2p\log p)^{n}.\]
This implies that
\[\dim_{l^{1}}\lambda_{p,D}\geq n-\frac{\log\frac{tp^{n}+(2p\log p)^{n}}{p^{n}- t}}{\log p}.\]
This (Crude bound) tells us, for example, that if we fix \(t>0\), then as long as \(p\) is large enough, \(\dim_{l^{1}}\lambda_{p,D}\) can be arbitrarily close to \(n\).
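To see how the right-hand side of (Crude bound) behaves, the following sketch evaluates it for a fixed \(t\) and growing \(p\) (the choices \(n=2\) and \(t=5\) are arbitrary and made only for illustration).

```python
import math

def crude_bound(n, p, t):
    # n - log((t p^n + (2 p log p)^n) / (p^n - t)) / log p, as in (Crude bound).
    numerator = t * p**n + (2 * p * math.log(p))**n
    denominator = p**n - t
    return n - math.log(numerator / denominator) / math.log(p)

for p in [10, 100, 1000, 10000, 100000]:
    print(p, round(crude_bound(n=2, p=p, t=5), 4))
# The printed values increase towards n = 2 as p grows while t stays fixed.
```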
### Missing digits measures with large bases and rectangular digit sets: proof of Theorem 2.15 part 2
Let \(K_{p,D}\) be a missing digits set. In general, we have
\[\frac{1}{2}\dim_{\rm H}K_{p,D}\leq\dim_{l^{1}}K_{p,D}\leq\dim_{\rm H}K_{p,D}.\]
For missing digits sets, we expect the rightmost side should be closer to the truth. We now make this point clearer.
Let \(n\geq 1\) be an integer. Let \(p>1\) be an integer. For a missing digits set, we choose a subset \(D\subset\{0,\dots,p-1\}^{n}\) and construct \(K_{p,D}.\) We have seen that it is in principle possible to find the value of \(\dim_{l^{1}}\lambda_{p,D}\) with arbitrarily small errors. We can also find a lower bound for \(\dim_{l^{1}}\) with the help of (Crude bound). It
turns out that if the digit sets are chosen to be well structured, then we can have a much better estimate than the (Crude bound).
Let \(D\subset\{0,\ldots,p-1\}^{n}\) be a rectangle, i.e. it is of the form
\[[a_{1},b_{1}]\times\cdots\times[a_{n},b_{n}]\cap\{0,\ldots,p-1\}^{n}\]
with integers \(a_{1}\leq b_{1},\ldots,a_{n}\leq b_{n}.\) With this special digit set \(D\), we see that the corresponding function \(g_{p,D}\) is of the form
\[g_{p,D}(\theta)=(\#D)^{-1}\sum_{z\in D}e^{-2\pi i(z,\theta)}=(\#D)^{-1}\prod_{j=1}^{n}e^{-2\pi ia_{j}\theta_{j}}\frac{1-e^{-2\pi i(b_{j}-a_{j}+1)\theta_{j}}}{1-e^{-2\pi i\theta_{j}}}.\]
Next, we estimate the sum
\[(\#D)f_{p,D}(\theta)=(\#D)\sum_{\mathbf{i}\in\{0,\ldots,p-1\}^{n}}|g_{p,D}((\mathbf{i}+\theta)/p)|.\]
For each \(j\in\{1,\ldots,n\}\), define
\[S_{j}(\theta_{j})=\left|\frac{1-e^{-2\pi i(b_{j}-a_{j}+1)\theta_{j}}}{1-e^{-2 \pi i\theta_{j}}}\right|.\]
Then we see that
\[(\#D)\sum_{\mathbf{i}\in\{0,\ldots,p-1\}^{n}}|g_{p,D}((\mathbf{i}+\theta)/p)|\] \[=\sum_{\mathbf{i}\in\{0,\ldots,p-1\}^{n}}\prod_{j=1}^{n}S_{j}((\mathbf{i}_{j}+\theta_{j})/p)\] \[=\prod_{j=1}^{n}\sum_{i\in\{0,\ldots,p-1\}}S_{j}((i+\theta_{j})/p).\]
Now we need to estimate for each \(j\in\{1,\ldots,n\}\),
\[\sum_{i=0}^{p-1}S_{j}((i+\theta_{j})/p).\]
We have already considered this type of sum before, see (*). Notice that \(S_{j}(\theta_{j})\leq b_{j}-a_{j}+1.\) As a result, we have for \(p\geq 4,\)
\[\sup_{\theta_{j}}\sum_{i=0}^{p-1}S_{j}((i+\theta_{j})/p)\leq\frac{p}{\pi}(\log p +1)+2(b_{j}-a_{j}+1)\leq 2p\log p.\]
Thus we see that
\[\sup_{\theta}f_{p,D}(\theta)\leq(\#D)^{-1}(2p\log p)^{n}.\]
From here we see that
\[\dim_{l^{1}}\lambda_{p,D}\geq n-\frac{\log((\#D)^{-1}(2p\log p)^{n})}{\log p}\] \[=n+\frac{\log\#D}{\log p}-n-\frac{n\log\log p^{2}}{\log p}\] \[=\dim_{\rm H}K_{p,D}-\frac{n\log\log p^{2}}{\log p},\]
where we have used the fact that \(\dim_{\rm H}K_{p,D}=\log\#D/\log p.\)
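Since the bases appearing later are astronomically large, it is convenient to evaluate this lower bound through logarithms only. The following sketch (natural logarithms throughout; the two digit-set choices are the ones used for Theorems A and B in the next section) confirms the numerical claims made there.

```python
import math

def rect_lower_bound(n, log_p, log_card_D):
    # dim_H K_{p,D} - n * log(log(p^2)) / log p, computed from log p and log #D
    # so that the huge numbers p and #D never need to be formed explicitly.
    dim_H = log_card_D / log_p
    return dim_H - n * math.log(2 * log_p) / log_p

log10 = math.log(10)
# Base 10^9000, digits {0, ..., 10^8100 - 1}: dim_H = 9/10 and the bound exceeds 8/9.
print(rect_lower_bound(1, 9000 * log10, 8100 * log10), 8 / 9)
# Base 10^9000, digits {0, ..., 10^7000 - 1}: dim_H = 7/9 and the bound exceeds 3/4.
print(rect_lower_bound(1, 9000 * log10, 7000 * log10), 3 / 4)
```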
## 7. Proofs of Theorems A, B, C
We now prove the special results at the beginning.
### Theorem A
For Theorem A, one can use Theorems 2.5, 2.8, 2.15(2) together with the fact that for the Veronese curve in \(\mathbb{R}^{3},\) the positive number \(\sigma\) can be chosen to be \(1/3.\) Notice that the missing digits set in Theorem A is the threefold Cartesian product \(K^{3}\) where \(K\) is the set of numbers in \(\mathbb{R}\) whose base \(10^{9000}\) expansions only contain digits in \(\{0,\ldots,10^{8100}-1\}.\) With the help of Theorem 2.15(2), it can be checked that \(\dim_{l^{1}}K>8/9.\) Thus we have \(\dim_{l^{1}}K^{3}>3-3^{-1}.\) The lower bound \(1/30\) for box dimension can be derived from the numbers given above.
### Theorem B
For Theorem B, there is still one more step to perform. The missing digits set in consideration is \(K\times K\) where \(K\) is the set of numbers in \(\mathbb{R}\) whose base \(10^{9000}\) expansions only have digits in \(\{0,\ldots,10^{7000}-1\}.\) By Theorem 2.15(2), it can be checked that \(\dim_{l^{1}}K>3/4\) and hence \(\dim_{l^{1}}K\times K>1.5.\) Let \(\lambda\) be the missing digits measure on \(K\times K.\)
We first illustrate how to show that pinned distance sets have a positive Lebesgue measure. Then we upgrade the result to the non-trivial interval version with the help of Section 5.7. In this way, we hope to first illustrate clearly the idea behind the proof before the need for too many technical details (zipped in Section 5.7).
For each \(r>0,\) consider the circle \(C_{r}:x^{2}+y^{2}=r^{2}.\) For circles in \(\mathbb{R}^{2}\) (or spheres in \(\mathbb{R}^{n}\)), the Fourier decay exponent \(\sigma\) can be chosen to be \(1/2\) (or \((n-1)/2\) for spheres in \(\mathbb{R}^{n}\)). Theorem 2.5 tells us that \(\lambda(C_{r}^{\delta})\ll_{r}\delta.\) Moreover, a further insight of Section 5.2 tells us that the \(\ll_{r}\) estimate is uniform for \(r\) ranging over a bounded interval which is away from zero, i.e. for positive numbers \(b>a>0\)
\[\lambda(C_{r}^{\delta})\ll_{a,b}\delta\]
for \(r\in[a,b].\) Now we can choose \(a>0\) be sufficiently small and \(b>0\) being sufficiently large such that
(Positive) \[\lambda(B_{b}(0)\setminus B_{a}(0))>0.\]
Consider the set
\[\Sigma_{a,b}=\{r\in[a,b]:C_{r}\cap(K\times K)\neq\emptyset\}.\]
Suppose that \(\Sigma_{a,b}\) has zero Lebesgue measure. Then we can cover \(\Sigma_{a,b}\) with small intervals. More precisely, for each \(\epsilon>0,\) there is a countable collection of intervals \(I_{j},j\geq 1,\) of total length at most \(\epsilon\) which together cover \(\Sigma_{a,b}.\) Let \(I_{j}\) be one of those intervals and let \(r_{j}\) be its centre. Then we have
\[\lambda(C_{r_{j}}^{|I_{j}|/2})\leq c|I_{j}|,\]
where \(c>0\) is a constant depending on \(a,b.\) We can sum up the above estimates for all \(j,\)
\[\lambda(B_{b}(0)\setminus B_{a}(0))\leq c\sum_{j}|I_{j}|\leq c\epsilon.\]
As \(\epsilon>0\) can be arbitrarily chosen, this contradicts the statement (Positive). Thus we see that
\[\Delta_{(0,0)}(K\times K)\supset\Sigma_{a,b}\]
has positive Lebesgue measure. Of course, the choice \((0,0)\) plays no special role. One can replace it with any other point in \(\mathbb{R}^{2}.\) Thus we have shown that the pinned distance sets have positive Lebesgue measure.
Now we want to show that the pinned distance sets in fact contain non-trivial intervals. This is not very straightforward to show. We use the method introduced in Section 5.7. Since the circles are compact, we do not need to choose compactly supported smooth surface measures for them. Thus \(\mu,\mu_{\delta}\) in Section 5.7 can simply be taken to be the natural (\(\delta\)-thickened) Lebesgue measures on the circles. For the range of group actions, we take \(U=(R^{-1},R)\times\{(0,0)\}\times\mathbb{O}(2)\) for a large enough number \(R>0.\) Observe that the circles are invariant under rotations. The arguments in Section 5.7 provide us with a non-vanishing continuous function \(f:[R^{-1},R]\to[0,\infty)\) so that whenever \(f(r)>0,\) \(C_{r}\cap(K\times K)\neq\emptyset.\) This shows that \(\Delta_{(0,0)}(K\times K)\) contains non-trivial intervals. Again, the point \((0,0)\) is of no significance. One can replace it with any \(x\in\mathbb{R}^{2}.\) However, the value \(R\) needs to be changed accordingly. From here the proof of Theorem B concludes.
### Theorem C
Finally, we prove Theorem C. Consider the class of hyperbola
\[H_{r}=\{(x,y):xy=r\},r>0.\]
For each \(r>0,\) let \((x_{1},x_{2})\in(K\times K)\cap(0,1]^{2}.\) We see that the line connecting \((0,0)\) and \((x_{1},x_{2})\) will intersect \(H_{r}.\) However, the intersection might be too close to the origin. To overcome this issue, we can consider a branch of \(K\times K\) that is away from both of the coordinate lines. Such branches certainly exist, e.g. the image \(Y=T_{t,v,g}(K\times K)\) with
\[g=\mathbb{I},t=10^{-9000},v=((10^{7000}-1)10^{-9000},(10^{7000}-1)10^{-9000}).\]
Now \(Y\subset K\times K\) and \(Y\) is far away from the coordinate lines. Let \(C_{1},C_{2}>0\) be large enough numbers such that for each \(x\in Y,\) the line connecting \((0,0)\) and \(x\) intersects \(H_{r},r\in[C_{1}^{-1},C_{1}]\) in \([0,C_{2}]^{2}.\)
Notice that \(H_{r},r>0,\) is a class of curves each of which can be obtained from the others via scalings. Now we can apply Section 5.7 (more specifically, the third case) to deduce that
\[\{r\in[C_{1}^{-1},C_{1}]:H_{r}\cap Y\neq\emptyset\}\]
contains intervals. From here, the proof of Theorem C concludes.
## 8. More examples and a question on linear forms
We explain more applications in this section. For convenience, we fix three missing digits sets on \(\mathbb{R}:\)
\(K_{1}\): numbers whose base \(10^{9000}\) expansions only contain digits in
\[\{0,\ldots,10^{8100}-1\}.\]
\(K_{2}\): numbers whose base \(11^{9000}\) expansions only contain digits in
\[\{0,\ldots,11^{8100}-1\}.\]
\(K_{3}\): numbers whose base \(12^{9000}\) expansions only contain digits in
\[\{0,\ldots,12^{8100}-1\}.\]
The Hausdorff dimensions of \(K_{1},\ldots,K_{3}\) are equal to \(9/10.\) The \(l^{1}\) dimensions of \(K_{1},\ldots,K_{3}\) are all very close to \(9/10\) and in fact they are all larger than \(8/9\) by using Theorem 2.15(2).
**Example 8.1**.: _Consider the hyperbola \(\{xy=1\}\) in \(\mathbb{R}^{2}.\) We can apply Theorems 2.5, 2.8 to see that there is an integer \(l\geq 0\) and there are infinitely many numbers \(t>0\) with_
\[10^{l}t,\frac{10^{l}}{t}\]
_both in \(K_{1}.\)_
We want to list more results that go slightly beyond the scope of Theorem 2.8. Notice that \(K=K_{1}\times K_{2}\times K_{3}\) is neither a missing digits set nor a self-similar set by our definition. It is nonetheless AD-regular. Theorem 2.5 still applies to this set. However, Theorem 2.8 does not apply to the set \(K\). One can review the proof of Theorem 2.8 and find two places where it stops working.
The first place is at the beginning of Section 5.4 where we constructed \(\mu_{l}.\) We need to replace it with the argument in Section 5.5. In our case, we can set \(p_{1}=10^{l_{1}},p_{2}=11^{l_{2}},p_{3}=12^{l_{3}}\) for suitable numbers \(l_{1},l_{2},l_{3}\to\infty.\)
The second place is at the end of Section 5.4 where we used (Rescale). It depends on the self-similarity of the underlying set (measure). As our current \(K\) is not self-similar, we cannot follow the proof without further modification.
We now make this modification. Let \(\delta>0\) be a small number. We can choose integers \(l_{1},l_{2},l_{3}\) such that
\[10^{l_{1}}\leq\delta^{-1}<10^{l_{1}+1},\] \[11^{l_{2}}\leq\delta^{-1}<11^{l_{2}+1},\] \[12^{l_{3}}\leq\delta^{-1}<12^{l_{3}+1}.\]
Then we can decompose \([0,1]^{3}\) into rectangles of dimension \(10^{-l_{1}}\times 11^{-l_{2}}\times 12^{-l_{3}}.\) In this way, we can decompose \(K\) into small pieces of the above size. Let \(K^{\prime}\) be one of such pieces. Notice that \(K^{\prime}\) is roughly a box of size \(\delta\) up to some bounded multiplicative error. However, \(K^{\prime}\) is not a rescaled copy of \(K.\) We can now find the corresponding restricted and normalized measure \(\lambda^{\prime}\) on \(K^{\prime}\) given the original measure \(\lambda\) on \(K\) which is the product measure of missing-digit measures on \(K_{1},K_{2},K_{3}.\) Then one can continue the argument in Section 5.4.
After this modification, one can obtain results on products of missing-digit sets (measures). We list three results.
**Example 8.2**.: _Again consider the hyperbola \(\{xy=1\}\) in \(\mathbb{R}^{2}.\) We see that there is an integer \(l\geq 0\) and there are infinitely many numbers \(t>0\) with \(10^{l}t\in K_{1}\) and \(11^{l}/t\in K_{2}.\)_
**Example 8.3**.: _Consider the Veronese curve \((t,t^{2},t^{3})_{t\in\mathbb{R}}\) in \(\mathbb{R}^{3}.\) We see that there is an integer \(l\geq 0\) and there are infinitely many numbers \(t>0\) such that_
\[10^{l}t\in K_{1},\ 11^{l}t^{2}\in K_{2},\ 12^{l}t^{3}\in K_{3}.\]
**Example 8.4**.: _Consider the curve \(\{x^{3}+y^{3}=1\}\) in \(\mathbb{R}^{2}.\) For this curve, we can choose \(\sigma=1/3.\) There are an integer \(l\geq 0\) and points \((t_{1},t_{2})\) on this curve with \(10^{l}t_{1}\in K_{1}\) and \(11^{l}t_{2}\in K_{2}.\)_
In those examples, it is possible to study the lower box dimension of the set of points in consideration. This can be done with the same method as in the proof of Theorem A.
In this paper, we require that the manifold in consideration is of finite type. This excludes the case when it is a line. In fact, despite the simple geometry of lines, their intersections with fractals still remain mysterious:
**Question 8.5**.: _In Theorems 2.5,2.8 can the manifold \(M\) be taken to be irrational hyperplanes?_
Here, the irrationality condition is crucial. It says that the normal direction of \(M\) has rationally independent coordinates. If this condition is not satisfied, then the intersection can be larger than expected, see Section 1.2. If one allows an \(\epsilon\) uncertainty on the exponents then there are satisfactory results. See [38], [43].
## 9. Measures with polynomial Fourier decay
So far, we have only considered problems regarding manifolds intersecting missing digits sets. From the proofs of Theorems 2.5, 2.8, we see that the property we need for manifolds of finite type is that they support natural surface measures \(\mu\) whose Fourier transforms have polynomial decay, i.e. for some \(\sigma>0\)
\[|\hat{\mu}(\xi)|\ll|\xi|^{-\sigma}.\]
The proofs of Theorems 2.5, 2.8 go through for any measure satisfying the above property. There is no lack of such measures beyond surface measures of finite type. We list two special examples. Note that unlike for manifolds of finite type, here the decay exponent \(\sigma\) may not be easy to determine and \(\mu\) may not be AD-regular. The following example shows that digit expansion and continued fraction expansion are in some sense 'independent'.
**Example 9.1**.: _Gibbs measures from non-linear interval maps. See [23], [12]. In particular, from [23, Corollary 1], we can deduce the following result:_
_Let \(A\subset\mathbb{N}\) be a finite set. Consider the set \(B(A)\) of numbers whose continued fractions only contain digits in \(A\). Suppose that \(s=\dim_{\rm H}B(A)>1/2.\) Then there is a number \(\sigma>0\) such that for missing digits measure \(\lambda\) with \(\dim_{l^{1}}\lambda>1-\sigma,\)_
\[\lambda(B(A)^{\delta})\ll\delta^{1-s}.\]
_To prove this result, from [23] (or [12]) we know that there is a natural measure \(\mu\) supported on \(B(A)\) such that \(|\hat{\mu}(\xi)|\ll|\xi|^{-\sigma}\) for some \(\sigma>0.\) Loosely speaking, this example shows that the continued fraction expansions and digit expansions of numbers are somehow independent in a quantitative way. For example, since \(s<1,\) we deduce that \(\lambda(B(A))=0.\) This means that \(\lambda\)-a.e. point \(x\in K\) is not in \(B(A)\) (this can also be deduced from a much stronger result in [39])._
_Next, we want to find points in \(B(A)\) that also have missing digits in their base \(p\) expansions. From the facts that \(\dim_{l^{1}}\lambda>1-\sigma\) and \(|\hat{\mu}(\xi)|\ll|\xi|^{-\sigma}\) we deduce that_
\[\int|\hat{\mu}(\xi)\hat{\lambda}(-\xi)|d\xi<\infty.\]
_This implies that \(\mu*\lambda\) is absolutely continuous with respect to the Lebesgue measure and moreover the density function is continuous. This implies that the arithmetic sumset \(-{\rm supp}(\lambda)+{\rm supp}(\mu)\) contains intervals. Let \(K_{p,D}\) be a missing digits
set whose missing digits measure \(\lambda\) satisfies \(\dim_{l^{1}}\lambda>1-\sigma.\) Let \(I\) be one of those intervals. Let \(a\in I\) be any number with terminating base \(p\) expansion. Then we see that there are \(x\in K_{p,D}\) and \(y\in B(A)\) with_
\[-x+y=a.\]
_This implies that_
\[y=a+x.\]
_Since \(a\) has a terminating base \(p\) expansion and \(x\in K_{p,D},\) \(y\) has missing \(p\)-adic digits eventually (i.e. there is an integer \(l\geq 1\) such that \(\{p^{l}y\}\in K_{p,D}\))._
**Example 9.2**.: _Patterson-Sullivan measures. See [29]. A counting result can be deduced as in Example 9.1. We omit further details._
## 10. Further discussions
We do not have a complete picture of the distribution of missing digits points around the manifold yet. However, results in this paper provide us with rather satisfactory information in the following situation:
* The manifold is sufficiently curved and the missing digits set is sufficiently large with respect to the Fourier \(l^{1}\)-dimension.
There are several directions for further study:
* First, the largeness of missing digits sets is quantified with the help of \(\dim_{l^{1}}.\) We believe this is not the optimal condition to look at. In fact, we believe that the results in this paper can be proved with \(\dim_{\mathrm{H}}\) in the place of \(\dim_{l^{1}}.\) For example, we formulate the following conjecture.
**Conjecture 10.1**.: _Let \(n\geq 2\) be an integer. Let \(M\subset\mathbb{R}^{n}\) be a manifold of finite type. Then there is a number \(\sigma=\sigma(M)>0\) such that for each missing digits measure \(\lambda\) with \(\dim_{\mathrm{H}}\lambda>\sigma,\)_
\[\lambda(M^{\delta})\ll\delta^{n-\dim M}.\]
Part 2 of Theorem 2.15 provides us with examples of missing digits measures that satisfy the conclusion of this conjecture. However, the base of those missing digits measures are all large and the digit sets have to be chosen to be specially structured. Thus the task is to reduce the size of the bases and the structural requirement of the digit sets.
* Second, what happens if the size of the missing digits set is small? Our theory so far can only be applied when the size of the missing digits set is large. Then we have obtained an optimal intersection result by combining Theorems 2.5, 2.8. We expect that if the missing digits set has a small enough Hausdorff dimension then it should be rather rare to find those points inside the manifold. We mentioned this point briefly at the beginning of this paper. We formulate the following concrete problem. **Conjecture 10.2**.: _Consider the circle \(C:x^{2}+y^{2}=1.\) For large enough integers \(p,\) suppose that \(D\subset\{0,\ldots,p-1\}^{2}\) is small enough, say, \(\#D\leq 100,\) then \(C\cap K_{p,D,l}\) only contains rational points for each \(l\geq 0.\)_
* Third, for the method we used in this paper, there are two important factors. We have two sets \(A,B\) and we want to study \(A\cap B.\) To do this, we find nicely supported measures \(\mu_{A},\mu_{B}\) on \(A,B\) respectively. Then we need one of the measures, \(\mu_{A}\), say, to have a power Fourier decay, i.e. for some \(\sigma>0,\) \[|\hat{\mu}(\xi)|\ll|\xi|^{-\sigma}.\] For \(\mu_{B},\) we need that \(\dim_{l^{1}}\mu_{B}\) is sufficiently large. There is no lack of studies of the power Fourier decay property for various measures, e.g. as we have mentioned surface measures carried by manifolds, Gibbs measures, Patterson-Sullivan measures. On the other hand, the study of \(\dim_{l^{1}}\) is relatively new. So far, the best knowledge we have for \(\dim_{l^{1}}\mu_{B}\) is when \(\mu_{B}\) is a missing digits measure. See also [2] and [45]. In particular, in [45] a numerical method was proposed to treat self-similar measures which are not missing digits measures. This numerical method does not provide accurate estimates. Apart from these results, almost nothing is known in the case when \(\mu_{B}\) is a general Borel probability measure. We want to ask the following particular problems. **Question 10.3**.: _Estimate \(\dim_{l^{1}}\mu\) for \(\mu\) being:_ 1. _A smooth surface measure carried by non-degenerate manifolds._ 2. _A self-similar measure with the open set condition._ 3. _A self-affine measure with the strong irreducibility condition._ 4. _A self-conformal measure with the open set condition._
As in [45], answers to this question can help us gain more insights about how rational points are distributed around a specific set, e.g. a self-conformal set, e.g. a Julia set. More generally, as we have discussed in this paper, it is possible to study intersections between different sets from the above list.
* Theorem 2.8 is not satisfactory because we are only able to find a possibly not sharp lower bound for \(\underline{\dim}_{\mathrm{B}}M\cap K_{p,D,l}.\) In fact, as mentioned earlier, we believe that under the hypothesis of Theorem 2.8, it should be that \[\dim_{\mathrm{B}}M\cap K_{p,D,l}=\dim_{\mathrm{H}}K_{p,D}-(n-\dim M).\] We are in fact not too far away from such a result because Theorem 2.8 also tells us that \[M^{\delta}\cap K_{p,D,l}\] have the 'correct' size for small enough \(\delta>0.\) Thus we see that there are 'enough' points in \(K\) which are also close to \(M\) but we are not yet able to say that there are 'enough' points in \(K_{p,D,l}\cap M.\)
## 11. Acknowledgement
HY was financially supported by the University of Cambridge and the Corpus Christi College, Cambridge. HY has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 803711). HY has received funding from the Leverhulme Trust (ECF-2023-186). HY thanks P. Varjú and P. Shmerkin for various comments.
### Rights
For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising from this submission.
|
2309.10323 | The Impact of Exposed Passwords on Honeyword Efficacy | Honeywords are decoy passwords that can be added to a credential database; if
a login attempt uses a honeyword, this indicates that the site's credential
database has been leaked. In this paper we explore the basic requirements for
honeywords to be effective, in a threat model where the attacker knows
passwords for the same users at other sites. First, we show that for
user-chosen (vs. algorithmically generated, i.e., by a password manager)
passwords, existing honeyword-generation algorithms do not simultaneously
achieve false-positive and false-negative rates near their ideals of $\approx
0$ and $\approx \frac{1}{1+n}$, respectively, in this threat model, where $n$
is the number of honeywords per account. Second, we show that for users
leveraging algorithmically generated passwords, state-of-the-art methods for
honeyword generation will produce honeywords that are not sufficiently
deceptive, yielding many false negatives. Instead, we find that only a
honeyword-generation algorithm that uses the \textit{same} password generator
as the user can provide deceptive honeywords in this case. However, when the
defender's ability to infer the generator from the (one) account password is
less accurate than the attacker's ability to infer the generator from
potentially many, this deception can again wane. Taken together, our results
provide a cautionary note for the state of honeyword research and pose new
challenges to the field. | Zonghao Huang, Lujo Bauer, Michael K. Reiter | 2023-09-19T05:10:02Z | http://arxiv.org/abs/2309.10323v3 | # The Impact of Exposed Passwords on Honeyword Efficacy
###### Abstract
Honeywords are decoy passwords that can be added to a credential database; if a login attempt uses a honeyword, this indicates that the site's credential database has been leaked. In this paper we explore the basic requirements for honeywords to be effective, in a threat model where the attacker knows passwords for the same users at other sites. First, we show that for user-chosen (vs. algorithmically generated, i.e., by a password manager) passwords, existing honeyword-generation algorithms largely fail to achieve reasonable tradeoffs between false positives and false negatives in this threat model. Second, we show that for users leveraging algorithmically generated passwords, state-of-the-art methods for honeyword generation will produce honeywords that are not sufficiently deceptive, yielding many false negatives. Instead, we find that only a honeyword-generation algorithm that uses the _same_ password generator as the user can provide deceptive honeywords in this case. However, when the defender's ability to infer the generator from the (one) account password is less accurate than the attacker's ability to infer the generator from potentially many, this deception can again wane. Taken together, our results provide a cautionary note for the state of honeyword research and pose new challenges to the field.
## I Introduction
Credential database breaches have long been a widespread security problem and are only becoming more so. The 2022 Verizon Data Breach Investigations Report places credentials as one of the two most often breached types of confidential data, since they are so useful for attackers to masquerade as legitimate users on the system [43, p. 18]. Credential database breaches are the largest source of compromised passwords used in credential stuffing campaigns [39]. In turn, credential stuffing campaigns are the cause of the vast majority of account takeovers [35]. Unfortunately, there is usually a significant delay between the breach of a credential database and its discovery. Estimates of the average delay to detect a data breach range from six to eight months in one report [22, Fig. 45] to fifteen in another [35]. The resulting window of vulnerability gives attackers the opportunity to crack the passwords offline, and then sell them or leverage them directly [39, 35].
A strategy to accelerate the detection of credential database breaches, suggested by Juels and Rivest nearly a decade ago [23], is for a site to store decoy passwords, or _honeywords_, alongside real passwords in its credential database, so that if the attacker breaches the database, the correct passwords are hidden among the honeywords. The entry of a honeyword in a login attempt then alerts the site to its breach, since the legitimate user does not know the honeyword. In the time since this proposal, researchers have proposed various algorithms for generating honeywords (see Sec. III) to meet two central criteria: (i) that it be difficult for an attacker who has breached a site's credential database to distinguish the legitimate password for an account from that account's honeywords, and (ii) that it be difficult for an attacker who has not breached a site's credential database to guess honeywords for an account, since such guesses will induce false breach alarms.
The tendency of users to reuse passwords across sites (e.g., [41, 54, 31]) presents a challenge for honeywords, since an attacker can stuff an account's breached passwords at _other_ sites where the same user has accounts, thereby discovering the legitimate password as the one that works at another site. As such, previous advances in honeyword system designs [52] provide a mechanism by which one site can monitor for the entry of another site's honeywords in local login attempts. Still, however, for an account for which the attacker can obtain the user's passwords on other sites (e.g., by breaching these other sites, or by phishing their passwords), the attacker will likely need not resort to credential stuffing to differentiate the legitimate password from its honeywords. While this might seem like an unnecessarily challenging threat model, it is unfortunately realistic: A July 2020 report found more than 15 billion credentials in circulation in cybercriminal marketplaces [12], or an average of more than two for every person on the planet.
In this paper we conduct the first critical analysis of honeyword-generation algorithms in this setting, i.e., wherein the attacker knows legitimate passwords at other sites for the users represented in a database it is targeting. There is reason to suspect that this threat model would pose significant challenges to honeyword efficacy for user-chosen (versus algorithmically generated) passwords. On the one hand, if the honeyword-generation algorithm used to populate the targeted database generates honeywords that are all dissimilar from the user-chosen password, then the known password(s) for the same user might enable the attacker to distinguish the user-chosen password from its honeywords with high probability. If so, the false-negative probability (the probability that the site fails to detect the breach) would be high. On the other hand, if the honeyword-generation algorithm generates some honeywords that are similar to the user-chosen password, then this might make it easier for an attacker who has _not_ breached the database to guess and enter honeywords in login attempts, thereby inducing a false breach alarm (false positive).
Through a systematic analysis of current honeyword-generation algorithms, we quantify this tension and, by doing so, show that there appears to be no known algorithm providing
a good tradeoff for accounts with user-chosen passwords. We additionally applied two password-tweaking techniques from password guessing to improve honeyword generation. While these two algorithms relieve this tension by providing slightly lower false-negative probability, they still induce a high false-positive probability. Therefore, it remains far from clear that there is _any_ honeyword-generation algorithm that ensures low false-negative probability and provides adequate resistance to false breach alarms.
We then turn our attention to accounts with algorithmically generated passwords, as might be generated by a password manager. The critical finding that we uncover in this case is that honeyword-generation algorithms that do not take into account the method by which the legitimate password was generated will yield high false-negative probability. For example, if the user employs a password manager that generates passwords to fit a user-configured specification, and if the passwords exposed for that user permit the attacker to infer this specification, then the attacker can discard any honeywords not fitting that pattern. We will quantify the ability of the adversary to do so against existing honeyword-generation algorithms, most of which do not guarantee honeywords of the same pattern as the legitimate password. We then consider the possibility that the honeyword-generation algorithm itself leverages a password manager to generate honeywords whenever the user does. However, due to the numerous generator configurations that users might adopt, doing so is not foolproof. In particular, if the attacker knows potentially more passwords for the same user's accounts elsewhere, it can classify the user's typical configuration better than the defender can. This advantage thus implies an increase in false negatives, which we will demonstrate in certain cases.
To summarize, our contributions are as follows:
* We formalize the false-positive and false-negative rates of honeywords in a model in which the attacker possesses passwords for the same user at other sites (obtained by, e.g., breaching those sites or phishing the user).
* Using these definitions and empirical datasets of compromised passwords, we show that existing honeyword-generation algorithms (and two honeyword-generation methods adapted from password-guessing attacks) exhibit poor tradeoffs between false negatives and false positives in this threat model. All the analyzed methods have a false-negative rate much higher than random guessing (i.e., it is often easy for false-negative attackers to distinguish the account password from honeywords) or a false-positive rate much higher than zero (i.e., it is often easy for false-positive attackers to induce false breach alarms).
* We use passwords gathered from popular password managers to show that introducing honeywords without attention to the account's password being algorithmically generated offers little protection for existing honeyword-generation algorithms. We further explore the use of automatic password generators to generate honeywords when the account password is identified as being algorithmically generated itself, but find that the myriad configurations of these generators can be a pitfall for honeyword generation.
We will release our source code upon publication of this paper.
## II Related Work
### _Honeywords_
Since honeywords were first proposed [23], there have been several research efforts on designing honeyword-generation techniques [16, 1, 48, 14, 47] or evaluating their security, mostly against attacks trying to access a breached site's accounts without alerting the site to its breach. In their original proposal, Juels and Rivest defined an abstract model of a honeyword system and proposed several legacy-UI methods including _chaffing-by-tweaking_ and _chaffing-with-a-password-model (modeling syntax)_, and one modified-UI method. The modified-UI method requires the authentication system to guide the user in the selection of her account password and thus has inherent usability challenges, and so we do not consider it in this paper. The legacy-UI methods use random replacement of characters in the account password. We use one of them in this paper to represent this class of techniques, as discussed in Sec. III. We also consider a method, called the "List" model in Sec. III, that utilizes existing passwords as the honeywords for the site's accounts [46] (similar to Erguler [16]). A proposal by Dionysiou, et al. [14] leverages a machine learning model to search for similar passwords in the system and then generates \(n\) honeywords by tweaking the searched passwords randomly (e.g., by the chaffing-by-tweaking method), also described in Sec. III. More recently, Yu and Martin [57] proposed to leverage the Generative Pre-trained Transformer 3 Model (GPT-3) [5] to generate honeywords. Their method includes two steps: first, a password-specific segmentation technique called PwdSegment [56] is used to extract chunks from the input password, and second, a prompt including the chunk information is provided as the input to GPT-3, which returns a list of passwords similar to the input password, used as honeywords. Wang and Reiter [53] proposed a honeyword-selection mechanism based on a Bernoulli selection that achieves tunable false positives.
Recent works have investigated the security of honeywords under _targeted-guessing attacks_ where the attacker has personal identity information (PII) about the users. Wang, et al. [46] performed the first security analysis on honeywords under such attacks, but they focused only on the legacy-UI methods proposed by Juels and Rivest, empirically showing that these methods fail to achieve low false-negative rates. More recently, Wang, et al. [48] considered both PII and registration order (the time when the user accounts were created) as the auxiliary information available to the attacker. They proposed leveraging this auxiliary information in a password model like the List model, probabilistic context-free grammars (PCFG) [9], a Markov model [33], or a combination thereof, to generate honeywords. Their proposed methods achieved low false-negative rates under the threat model considered in their work [48]. However, our empirical results demonstrate that existing honeyword-generation techniques, including those considered by Wang, et al., have a high false-negative probability in our threat model. Setting a larger number of honeywords per account, as suggested by Wang, et al., generally lowers false-negative rates but increases false-positive rates. We are the first to systematically analyze the trade-off, showing that existing honeyword-generation methods suffer from high false-positive or false-negative rates under a threat model where the passwords of the same user from the other sites are exposed
to attackers.
### _Password Guessing_
A related topic to honeyword generation is password guessing, which is used to crack passwords [55, 15] in an online or offline manner or used to evaluate their strength [24, 11, 18]. Since a honeyword is simply a decoy password, it is reasonable that honeyword research will benefit from the development of password guessing techniques. Weir, et al. [55] proposed the first method to utilize a probabilistic model to generate passwords. They designed the model using PCFGs trained on a training set of passwords and empirically demonstrated the effectiveness compared with word-mangling rule-based methods. Ma, et al. [26] leveraged a Markov model to learn the distribution of passwords. They showed that their proposed method achieved slightly better performance than PCFGs in password cracking when normalization and smoothing were used. Melicher, et al. [28] designed a password model using a recurrent neural network [34], which achieves improved accuracy in password strength measurement. Pasquini, et al. [30] utilized Generative Adversarial Networks (GAN) [19] to train a password generative model. They showed that the trained model can be used to produce passwords more effectively if a password template is known, due to the strong locality in the latent space of the generative model.
Recent works showed that password guessing can be improved by utilizing account holder PII and passwords used by the same user at other sites. Wang, et al. [47] proposed a PCFG model named _TarGuess_ where PII is considered in the model training. Pal, et al. [29] studied the case in which the attacker utilizes the passwords used by the same users, leaked from _another_ site, to crack passwords, known as _credential tweaking_. They trained a _Pass2Path_ model by a recurrent neural network to simulate credential tweaking, which compromised at least \(16\%\) of user accounts in their tests. He, et al. [20] considered a similar threat model but improved the compromising rate using a deep neural transformer [42]. Recently, Wang, et al. [49] modeled password reuse behavior by a multi-step generative model, which improved password guessing. In this paper, we adapted some of these techniques from _credential tweaking_ (Tweak and P2P as described in Sec. III) for honeyword generation.
### _Honeyword-Based Systems_
Our study is agnostic to system designs leveraging honeywords, whether they be symmetric or asymmetric. Asymmetric designs are ones that detect honeyword entry using a secret that the attacker is presumed to be unable to capture in the breach. For example, the original honeyword-system design [23] leverages a trusted server called a _honeychecker_ that holds the index of the legitimate password for each account, which the login server consults to determine whether a login attempt uses a honeyword or the legitimate password. This honeychecker is assumed to keep its indices secret despite the login server's breach. Other asymmetric designs include ErsatzPasswords [2] and Lethe [13].
By contrast, a symmetric design is one where the attacker is allowed to capture all state used for honeyword-entry detection when he breaches the site. An example of a symmetric design is Amnesia [52]. In this design, the attacker captures all the information needed to undetectably access an account--possibly using a honeyword--at a site it breaches. However, the act of doing so configures the site to learn of its breach once the legitimate user accesses the site subsequently, using a different password. That is, in Amnesia, the use of two different passwords to enter an account is what alerts the site to its breach, since one must be a honeyword.
## III Background
### _Definitions_
Honeywords are decoy passwords added to each account entry in a credential database. The principle behind honeywords is that since the legitimate user does not know the honeywords generated for her account, the only party who is able to enter those honeywords is an attacker who discovered them by breaching the credential database. As such, login attempts using honeywords should be taken as compelling evidence of a database breach.
To make this principle precise, we define the false-positive and false-negative probabilities of a honeyword scheme in a way that abstracts away the details of the system leveraging them. We do so using the experiments shown in Fig. 1. In these experiments, a random user is modeled by a randomized algorithm \(\mathcal{U}\), by which the user selects her password \(p\in\{0,1\}^{*}\) for a site. The invocation \(\mathcal{U}()\) outputs not only the password \(p\), but also auxiliary information \(X\subset\{0,1\}^{*}\) that is correlated with \(p\) and that the attacker might learn. In this work, \(X\) will be passwords set by the same user at other sites, though other works have considered other types of auxiliary information (e.g., [46]). Given \(p\), the site selects honeywords for this account using the randomized algorithm \(\mathcal{H}_{n}\), which outputs a set \(H\) where \(|H|=n\) and \(p\not\in H\).
A _false-positive attacker_\(\mathcal{A}\) attempts to trigger a breach alarm at this site even though it has not breached the site, by leveraging its knowledge of \(p\) and \(\mathcal{H}\) to guess honeywords in \(H\). In this work, we consider the worst case where \(\mathcal{A}\) is permitted to know \(p\) since \(\mathcal{A}\) might represent a legitimate user of this site or because it might represent an outsider who, say,
Fig. 1: Measures for breach detection by honeywords
phished \(p\). \(\mathcal{A}\) might know \(X\) but \(X\) does not help in guessing \(H\) if \(p\) is already known. \(\mathcal{A}\)'s probability of triggering an alarm is defined in Fig. 1(a), where \(\alpha\geq 1\) is the number of honeywords whose entry will trigger a breach alarm and where \(\beta\geq 1\) denotes the number of login attempts \(\mathcal{A}\) is permitted to attempt for this account. In words, given \(p\) (along with \(\mathcal{U}\), \(\mathcal{H}_{n}\), \(\alpha\), and \(\beta\), which are public parameters of the experiment), \(\mathcal{A}\) wins by outputting a set \(G\) that it can enter in its budget of login attempts (\(|G|\leq\beta\)) and that will trigger an alarm (\(|G\cap H|\geq\alpha\)). Traditionally, the threshold for raising a breach alarm has been set to \(\alpha=1\), though this definition permits other values. \(\mathcal{A}\)'s _false-positive probability_ \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) is the probability that \(\mathcal{A}\) wins, and the overall false-positive probability \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}\) is \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) for the attacker algorithm \(\mathcal{A}\) that maximizes that probability.
In contrast, a _false-negative attacker_ \(\mathcal{B}\) is an attacker who attempts to access this user's account after breaching the site but without alerting the site that it has been breached. This adversary's advantage in doing so is defined in Fig. 1(b). In words, \(\mathcal{B}\) obtains the set \(H\cup\{p\}\), sometimes called the _sweetwords_ for this account, as well as auxiliary information \(X\). The set \(H\cup\{p\}\) is recovered by the attacker from the salted hash file. \(\mathcal{B}\) then wins if it outputs a set \(G\) that will not trigger an alarm (\(|G\cap H|<\alpha\)) and that permits it to access the account (\(p\in G\)). Here we presume that \(G\subseteq H\cup\{p\}\), since passwords outside \(H\cup\{p\}\) offer no help for \(\mathcal{B}\) to achieve its goals. Consequently, we drop \(\beta\) as a parameter of the experiment; since \(\beta\geq\alpha\geq|G|\), it does not constrain \(\mathcal{B}\)'s choice of \(G\). Again, traditionally the threshold for raising a breach alarm has been set to \(\alpha=1\), in which case the probability with which \(\mathcal{B}\) guesses \(p\) from \(H\cup\{p\}\) on the first try (i.e., \(|G|=1\)) is called the _flatness_ of the honeyword scheme. \(\mathcal{B}\)'s _false-negative probability_ \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) is the probability that \(\mathcal{B}\) wins, and the overall false-negative probability \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}\) is \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) for the attacker \(\mathcal{B}\) that maximizes that probability.
A honeyword-generation algorithm \(\mathcal{H}_{n}\) can at best achieve \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}\approx 0\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}=\frac{\alpha}{n+1}\). Our research evaluates the extent to which known honeyword-generation algorithms, described in Sec. III-B, approach this ideal. When considering false-negative attackers, we will evaluate an attacker who prioritizes accounts by its perceived likelihood of success in guessing the account password \(p\), by refining \(\mathcal{U}\) to represent "easy" users for whom \((H\cup\{p\})\cap X\neq\emptyset\), likely due to exact password reuse across accounts; "medium" users who are not "easy" but for whom there are elements of \(H\cup\{p\}\) and \(X\) that are close to one another (in a sense we will define later), likely because the user set passwords at her other accounts that are similar to \(p\) (partial password reuse); or "hard" users for whom neither condition holds.
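The two experiments just described can be summarized by the following sketch; the callable interfaces for \(\mathcal{U}\), \(\mathcal{H}_{n}\), \(\mathcal{A}\), and \(\mathcal{B}\) are illustrative stand-ins rather than part of the definitions.

```python
import random

def fpp_trial(U, H_n, A, alpha, beta):
    """One trial of the false-positive experiment: A knows p but not H, and wins
    if its guesses fit the login budget and still trigger a breach alarm."""
    p, X = U()          # user-chosen password p; X are her passwords at other sites
    H = H_n(p)          # the n honeywords the site stores for this account
    G = A(p)            # A's guessed honeywords
    return len(G) <= beta and len(set(G) & set(H)) >= alpha

def fnp_trial(U, H_n, B, alpha):
    """One trial of the false-negative experiment: B sees the sweetwords and X,
    and wins if it recovers p while entering fewer than alpha honeywords."""
    p, X = U()
    H = H_n(p)
    sweetwords = list(H) + [p]
    random.shuffle(sweetwords)
    G = B(sweetwords, X)
    return p in G and len(set(G) & set(H)) < alpha
```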
### _Honeyword-Generation Algorithms_
In this section, we introduce honeyword-generation algorithms, some of which have been introduced in previous works [23, 14, 48]. Generally, honeyword-generation algorithms can be classified into two groups: _password-independent_ algorithms and _password-dependent_ algorithms.
#### III-B1 Password-Independent Honeyword Generation
Password-independent algorithms generate honeywords independently of the account passwords. They do so by sampling password candidates from _password models_ pretrained on a multiset of passwords. In this work, we consider four widely used password models: list model [46], probabilistic context-free grammar model [55], Markov model [26], and recurrent neural network [28], and their combination [48]. The detailed descriptions of these password models are included in App. A-A. We denote these generation methods as List, PCFG, Markov, RNN, and Combo, respectively.
#### III-B2 Password-Dependent Honeyword Generation
Password-dependent algorithms generate honeywords that are dependent on the account passwords. These algorithms include password-strength-dependent methods and password-context-dependent methods.
Password-strength-dependent methods generate honeywords whose strength is equal or similar to the input password \(p\). These methods still leverage password models such as List, PCFG, Markov, RNN, or their combination but select a sampled candidate as a honeyword if and only if its strength is equal to that of the input password. However, if the input password is weak, it might be difficult to generate \(n\) honeywords with equal password strength, under the hypothesis that user-chosen passwords follow a Zipf distribution (e.g., [45]). So, in this work, we relax this requirement so that a sampled candidate will be used as a honeyword if its length equals the length of the input password. We denote this algorithm for generating honeywords from List, PCFG, Markov, RNN, or a combined method by List\(*\), PCFG\(*\), Markov\(*\), RNN\(*\), and Combo\(*\).
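A minimal sketch of this relaxed, length-matched selection follows; `sample_candidate`, standing in for one draw from a pretrained password model such as List, PCFG, Markov, or RNN, is a hypothetical interface.

```python
def length_matched_honeywords(password, sample_candidate, n, max_tries=100000):
    """Keep sampled candidates whose length equals that of the account password,
    as in the List*/PCFG*/Markov*/RNN*/Combo* variants described above."""
    honeywords = set()
    for _ in range(max_tries):
        candidate = sample_candidate()
        if len(candidate) == len(password) and candidate != password:
            honeywords.add(candidate)
            if len(honeywords) == n:
                break
    return sorted(honeywords)
```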
Password-context-dependent methods generate honeywords by modifying the input password. Here we consider four types of techniques: targeted password model-based generation, LLM-based generation, random replacement-based tweaking, and DNN-based tweaking.
**Targeted password model-based generation.** These methods generate honeywords from password models that learn a distribution of _password templates_ [48]. Here a password template is a pattern describing passwords set by the same user at different sites, wherein common substrings are indicated in the template using a special tag pwd_str. For example, the template "pwd_str \(\mathsf{Z}\)" might be generated from "bike123z" and "bike123" if these passwords were set by the same user at two different sites. Password models like PCFG are pretrained on a multiset of password templates, as targeted password models. Then, honeywords are generated by sampling templates from the targeted password models and replacing pwd_str in the templates with the input password. We denote these generation methods from List, PCFG, Markov, RNN, or a combined method by List\(\#\), PCFG\(\#\), Markov\(\#\), RNN\(\#\), and Combo\(\#\).
**LLM-based generation.** These techniques generate honeywords by querying a large language model like GPT-3 [5] with prompts based on the input password. We consider a recently proposed method, chunk-level GPT-3 (CGPT3) [57]. The detailed description of CGPT3 is included in App. A-B.
**Random replacement-based tweaking.** These techniques generate honeywords by randomly changing some characters of the input password or similar passwords. We
consider chaffing-by-tweaking or \(\mathsf{CBT}t\)[23], which generates honeywords by randomly replacing the last \(t\) characters of the input password with characters of the same type; \(\mathsf{CBT}*\)[14], which generates honeywords by similarly replacing all the characters; and chaffing-by-a-hybrid-model (\(\mathsf{CHM}\)[14]). Detailed descriptions of \(\mathsf{CBT}t\), \(\mathsf{CBT}*\), and \(\mathsf{CHM}\) are included in App. A-B.
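A sketch of \(\mathsf{CBT}t\) as summarized above is given below; the alphabet used for the symbol class is an assumption of the illustration.

```python
import random
import string

SYMBOLS = "!@#$%^&*?_-"   # illustrative symbol class

def _same_type(c):
    if c.isdigit():
        return random.choice(string.digits)
    if c.islower():
        return random.choice(string.ascii_lowercase)
    if c.isupper():
        return random.choice(string.ascii_uppercase)
    return random.choice(SYMBOLS)

def chaffing_by_tweaking(password, t, n, max_tries=100000):
    """Replace each of the last t characters of the password with a random
    character of the same class, collecting n distinct honeywords."""
    honeywords = set()
    for _ in range(max_tries):
        chars = list(password)
        for i in range(max(0, len(chars) - t), len(chars)):
            chars[i] = _same_type(chars[i])
        candidate = "".join(chars)
        if candidate != password:
            honeywords.add(candidate)
        if len(honeywords) == n:
            break
    return sorted(honeywords)
```

For example, `chaffing_by_tweaking("bike123z", t=3, n=5)` might return tweaks such as `bike958k`, each agreeing with the password everywhere except in its last three characters.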
**DNN-based tweaking.** These techniques leverage DNNs to tweak the chosen password to generate its honeywords. We consider a deep tweak model (Tweak) [20] and a tweaking path model (\(\mathsf{P2P}\)) [29], which are adapted from similar constructions originally developed to crack passwords [20, 29]. The deep tweak model is a DNN that, on input a password, outputs a tweaked password. The tweaking path model inputs a password and outputs an edit path that is used to change the input password. More descriptions of these two techniques are included in App. A-B.
## IV User-Chosen Passwords
The first case we consider is when \(\mathcal{U}\) is an algorithm implemented by an average human user, and \(X\) is a multiset of passwords chosen by the same user at other sites. In this case, we show that the field has yet to identify _any_ honeyword-generation algorithm that achieves small \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}\) and \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}\) simultaneously. Intuitively, this is true because when a user selects passwords without automated help (i.e., \(\mathcal{U}\) is an average user), then an attacker who guesses passwords \(G\) that are similar to passwords in \(X\) will be highly effective in either inducing false detections (a high \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\)) or avoiding true detection (a high \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\)). On the one hand, if \(\mathcal{H}_{n}(p)\) outputs honeywords dissimilar to \(p\), then since users often choose \(p\) similar to elements of \(X\), it will be relatively easy for an attacker \(\mathcal{B}\) to select \(p\) from \(H\cup\{p\}\) as the one most similar to passwords in \(X\). So, for \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) to be small, \(\mathcal{H}_{n}(p)\) must output at least some honeywords that are similar to \(p\). On the other hand, the more it does so, the easier it is for an attacker \(\mathcal{A}\) to induce false detections by guessing passwords \(G\) similar to passwords in \(X\).
### _Attack Strategies_
In this section, we introduce the false-positive attacker \(\mathcal{A}\) and the false-negative attacker \(\mathcal{B}\) that we use in the evaluation of \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\), respectively.
**False-positive attacker \(\mathcal{A}\).** In the evaluation of \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\), recall that \(\mathcal{A}\) is given access to \(p\). The attacker \(\mathcal{A}\) leverages the honeyword-generation algorithm \(\mathcal{H}\) on input \(p\) to generate a set of honeyword candidates. Then, if applicable, it sorts the candidates by the probabilities assigned by the honeyword-generation algorithm and uses the top \(\beta\) candidates as the guessed honeywords \(G\); otherwise, it picks \(\beta\) candidates uniformly at random without replacement as \(G\).
**False-negative attacker \(\mathcal{B}\).** We evaluate \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) for user-chosen passwords as follows. Given passwords \(X\), \(\mathcal{B}\) leverages a metric function \(d(\cdot,\cdot):\{0,1\}^{*}\times\{0,1\}^{*}\rightarrow\mathbb{R}\) to measure the similarity between the elements of \(X\) and the sweetwords \(H\cup\{p\}\), and ranks each sweetword based on its similarity to the most similar element of \(X\). The top \(\alpha\) ranked sweetwords are used to guess \(p\).
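Both strategies can be sketched as follows; `generate_candidates` (returning (candidate, probability) pairs from the deployed generation algorithm) and `similarity` are hypothetical stand-ins, and the uniform-sampling fallback for generators that do not assign probabilities is omitted.

```python
def false_positive_guesses(password, generate_candidates, beta):
    """Attacker A: regenerate honeyword candidates for the known password and
    submit the beta most probable ones as login attempts."""
    candidates = list(generate_candidates(password))
    candidates.sort(key=lambda cp: cp[1], reverse=True)
    return [c for c, _ in candidates[:beta]]

def false_negative_guesses(sweetwords, X, similarity, alpha):
    """Attacker B: score each sweetword by its similarity to the closest known
    password in X and guess the top alpha sweetwords."""
    ranked = sorted(sweetwords,
                    key=lambda s: max(similarity(s, x) for x in X),
                    reverse=True)
    return ranked[:alpha]
```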
### _Model to Measure Password Similarity_
In the evaluation of \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\), we need to define a metric function that inputs a pair of passwords and returns a score reflecting the similarity between the inputs. To formulate such a metric function, we designed a similarity model \(f(\cdot):\{0,1\}^{*}\rightarrow\mathbb{R}^{d}\) by a deep neural network, which takes as input a password \(p\) and outputs its _latent representation_ such that the cosine similarity between any two latent representations \(f(p)\) and \(f(p^{\prime})\) grows with the probability that \(\mathcal{U}()\) would have output both, i.e., with \(\mathbb{P}\big{(}p^{\prime}\in X\bigm{|}(p,X)\leftarrow\mathcal{U}()\big{)}\).
The similarity model is used to learn the embedding of passwords. Learning such an embedding of passwords into a latent space is essentially a _metric learning_ problem [50, 37]. Therefore, we applied contrastive learning, which is one of the most widely used frameworks to train a model to perform this embedding so as to maximize cosine similarity between positive (similar) pairs while minimizing cosine similarity of negative (dissimilar) pairs [8]. Training a contrastive model is performed in _batches_, each a multiset \(B\subseteq\{0,1\}^{*}\times\{0,1\}^{*}\). Each \((p,p^{\prime})\in B\) consists of similar passwords (intuitively, for which \(\mathbb{P}\big{(}p^{\prime}\in X\bigm{|}(p,X)\leftarrow\mathcal{U}()\big{)}\) is high), whereas for any \((p^{\prime\prime},p^{\prime\prime\prime})\in B\setminus\{(p,p^{\prime})\}\), \(p\) and \(p^{\prime\prime}\) are presumed to be dissimilar, as are \(p^{\prime}\) and \(p^{\prime\prime\prime}\). Training for a contrastive learning model of password similarity, therefore, updates \(f\) to minimize a _loss function_, which typically would take the form
\[\underset{(p,p^{\prime})\in B}{\operatorname{avg}}-\log\frac{\exp(\mathsf{ sim}(f(p),f(p^{\prime})))}{\sum_{\begin{subarray}{c}(p^{\prime\prime},p^{\prime\prime \prime})\in B:\\ (p^{\prime\prime},p^{\prime\prime\prime})\neq(p,p^{\prime})\end{subarray}} \begin{pmatrix}\exp(\mathsf{sim}(f(p),f(p^{\prime\prime})))+\\ \exp(\mathsf{sim}(f(p^{\prime}),f(p^{\prime\prime\prime})))\end{pmatrix}} \tag{1}\]
where \(\mathsf{sim}\) denotes cosine similarity (see Chen, et al. [8]). Such updates with all the data samples from the training dataset passed through the trained model constitute one _epoch_. The design and training of the similarity model are described in App. B.
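A literal (unoptimized) transcription of the loss in Eq. (1) is sketched below, assuming the embeddings \(f(p)\) and \(f(p^{\prime})\) of every pair in the batch have already been computed and that the batch contains at least two pairs.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def contrastive_batch_loss(left, right):
    """left[i] and right[i] hold the embeddings f(p) and f(p') of the i-th
    positive pair; for pair i the denominator sums exp(sim(.)) over the
    members of all *other* pairs, exactly as in Eq. (1)."""
    m = len(left)
    losses = []
    for i in range(m):
        numerator = np.exp(cosine(left[i], right[i]))
        denominator = 0.0
        for j in range(m):
            if j == i:
                continue
            denominator += np.exp(cosine(left[i], left[j]))    # exp(sim(f(p), f(p'')))
            denominator += np.exp(cosine(right[i], right[j]))  # exp(sim(f(p'), f(p''')))
        losses.append(-np.log(numerator / denominator))
    return float(np.mean(losses))
```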
### _Evaluation_
In this subsection, we detail our evaluation of the user-chosen password case, including the dataset used and the experimental results for \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\).
#### IV-C1 The Dataset
The dataset we used in the case of user-chosen passwords is the _4iQ_ dataset [6], consisting of \(1.4\) billion (email, password) pairs, of which \(1.1\) billion emails and \(463\) million passwords are unique. Others attribute the _4iQ_ dataset to various leaks from LinkedIn, Myspace, Badoo, Yahoo, Twitter, Zoosk, and Neopet, and have used it to analyze users' choices of passwords across sites [29] (despite the possibility of some being automatically generated). Our use of leaked passwords was approved by our IRB, which specified protections in our handling of this data (who could access the data, what results could be reported, etc.). In
order to use _4iQ_, we preprocessed the dataset by referring to previous works (e.g., [29]).
Fig. 2: Distribution of \(|X|\)
* **Cleaning:** We removed any (email, password) pairs that satisfied any of the following conditions: the password contained non-ASCII characters, the space character, or a substring of \(20\) (or more) hex characters; the password had a length of less than \(4\) or more than \(30\); or the email contained non-ASCII characters or the space character. (A sketch of this filter appears after this list.)
* **Joining by email and username:** For each email address addr appearing in the dataset, we collected the passwords appearing with that email address into a multiset \(S_{\mathsf{addr}}\). Then we merged some password multisets \(S_{\mathsf{addr}}\) as follows: two multisets were merged if they contained at least one password in common and if the username parts of their email addresses were the same. We then eliminated each \(S_{\mathsf{addr}}\) containing only one password or \(>1\),\(000\) passwords. In the resulting dataset, around \(48\%\) of users reused passwords, which is within the range between \(43\%\) and \(51\%\) estimated by previous work (e.g. [10]). More statistics about the resulting dataset are shown in Table I.
* **Splitting into training and testing sets:** Of the \(195{,}894{,}983\) password multisets that remained, \(80\%\) (\(156{,}722{,}455\) multisets with \(451{,}020{,}019\) passwords) were set aside as training sets \(D^{\mathsf{tr}}_{\mathsf{u}}\) used to train models. The other \(20\%\) (\(39{,}172{,}528\) multisets with \(112{,}723{,}111\) passwords) of the password multisets were set aside as testing sets \(D^{\mathsf{te}}_{\mathsf{u}}\). When evaluating \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) and \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\), the algorithm \(\mathcal{U}\) was implemented by choosing \(p\) and the members of \(X\) without replacement from a single multiset \(S_{\mathsf{addr}}\) chosen uniformly at random from the testing sets, and returning \((p,X)\) as the result with \(X=S_{\mathsf{addr}}\setminus\{p\}\). \(|X|\) represents the amount of the attacker's knowledge about this user's passwords at other sites. Its distribution in \(D^{\mathsf{te}}_{\mathsf{u}}\) is shown in Fig. 2.
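The following is a rough sketch (our paraphrase of the rules above, not the authors' code) of the cleaning filter applied in the first preprocessing step.

```python
import re

HEX20 = re.compile(r"[0-9a-fA-F]{20,}")

def keep_pair(email, password):
    """True if the (email, password) pair survives the cleaning rules above."""
    if not (4 <= len(password) <= 30):
        return False
    if not password.isascii() or " " in password or HEX20.search(password):
        return False
    if not email.isascii() or " " in email:
        return False
    return True

pairs = [("alice@example.com", "password123"), ("bob@example.com", "deadbeefdeadbeefdead1234")]
cleaned = [(e, p) for (e, p) in pairs if keep_pair(e, p)]   # keeps only the first pair
```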
#### Iv-C2 Experimental Results
We now report \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) for the attackers \(\mathcal{A}\) and \(\mathcal{B}\) described in Sec. IV-A. To depict the tradeoffs between these measurements, we plot them against one another as \(\alpha\) is varied. When evaluating \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\), we isolate three subcases, to permit modeling of an attacker who prioritizes accounts based on similarities between \(H\cup\{p\}\) and \(X\) per account. We measured such similarity based on definitions like those for password reuse introduced in previous work (e.g., [31]). Specifically, "easy" accounts are those for which \((H\cup\{p\})\cap X\neq\emptyset\); "medium" accounts are those for which \((H\cup\{p\})\cap X=\emptyset\) but there is a sweetword in \(H\cup\{p\}\) that shares a substring of length at least four characters with some password in \(X\); and "hard" accounts are those that are neither "easy" nor "medium". The percentages of accounts of different hardness are shown in Table II.
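As an illustration only (assuming our reading of the definitions above, not the authors' code), the hardness of an account can be computed from its sweetwords \(H\cup\{p\}\) and the attacker's knowledge \(X\) as follows:

```python
def has_common_substring(a, b, min_len=4):
    """True if a and b share a substring of length at least min_len."""
    subs = {a[i:i + min_len] for i in range(len(a) - min_len + 1)}
    return any(b[i:i + min_len] in subs for i in range(len(b) - min_len + 1))

def hardness(sweetwords, X):
    """Classify an account from its sweetwords (H plus p) and the attacker's knowledge X."""
    if set(sweetwords) & set(X):
        return "easy"
    if any(has_common_substring(s, x) for s in sweetwords for x in X):
        return "medium"
    return "hard"

print(hardness({"hunter2", "hunter3"}, {"hunter2!"}))   # "medium": no exact match, but "hunt" is shared
```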
Fig. 3 shows the tradeoffs between \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) for \(n=19\) honeywords and \(\beta=1000\), for the various honeyword-generation algorithms described in Sec. III. RNN and its variants achieved performance similar to List, PCFG, Markov, Combo, and their variants, and thus we show only the results for the latter in Fig. 3; results for \(\mathsf{RNN}\) and its variants are in App. C. In each plot, there are four curves presenting the overall tradeoff ("all") and those of three subcases: "easy", "medium", and "hard". In each curve, markers highlight the \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) vs. \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) tradeoff at specific values of \(\alpha\) ranging from \(\alpha=1\) to \(n\). Intuitively, a smaller \(\alpha\) yields lower \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) but higher \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) and so a marker closer to the top left corner. Increasing \(\alpha\) to \(n\) yields a higher \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) but lower \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) and so a marker closer to the bottom right corner. We stress that \(\beta=1000\) yields an optimistic evaluation of \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\). For example, Florencio, et al. [17] recommend that an account should withstand targeted online password-guessing attacks of \(10^{6}\) attempts in practice. As such, arguably \(\beta=1000\) is \(1000\times\) too small.
An ideal honeyword-generation algorithm would achieve \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\approx 0\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})=\frac{1}{n+1}\) (which is \(0.05\) when \(n=19\)) at \(\alpha=1\). Unfortunately, no known honeyword algorithm comes close. As seen in Fig. 3, the best \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) that the honeyword-generation techniques accomplish overall is \(0.54\) (P2P, Fig. 3r), \(0.56\) (Tweak, Fig. 3q), \(0.57\) (CHM, Fig. 3p), and \(0.58\) (CGPT3, Fig. 3m); all others have \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})>0.59\). When we consider the attacker prioritizing "easy" accounts, \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) of P2P, CGPT3, and Tweak are at least \(0.93\), \(0.95\), and \(0.96\), respectively, while others have \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\approx 1\). This indicates that the false-negative attacker can break at least \(43\%\) of accounts by targeting the "easy" ones, with only P2P, CGPT3, and Tweak presenting any significant chance of catching the attacker. That
\begin{table}
\begin{tabular}{l r} \hline \hline Statistic & Value \\ \hline Total number of users & \(195{,}894{,}983\) \\ Total number of passwords & \(563{,}743{,}130\) \\ Average passwords per user & \(2.877\) \\ Average distinct passwords per user & \(1.961\) \\ Percentage of users reusing passwords & \(\approx 48\%\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Statistics of the preprocessed dataset
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline & \multicolumn{2}{c}{\(n=19\)} & \multicolumn{2}{c}{\(n=99\)} \\ \(\mathcal{H}_{n}\) & easy & med & hard & easy & med & hard \\ \hline List & 43.37 & 16.00 & 40.63 & 43.56 & 19.62 & 36.82 \\ Markov & 43.36 & 15.96 & 40.68 & 43.57 & 19.78 & 36.65 \\ PCFG & 43.36 & 15.59 & 41.05 & 43.47 & 18.40 & 38.13 \\ RNN & 43.36 & 16.01 & 40.63 & 43.59 & 19.57 & 36.84 \\ Combo & 43.33 & 15.98 & 40.69 & 43.55 & 19.27 & 37.18 \\ List\(*\) & 43.37 & 16.07 & 40.56 & 43.41 & 19.39 & 37.20 \\ Markov\(*\) & 43.33 & 16.05 & 40.62 & 43.38 & 19.57 & 37.05 \\ PCFG\(*\) & 43.35 & 15.66 & 40.99 & 43.38 & 18.68 & 37.94 \\ RNN\(*\) & 43.34 & 15.92 & 40.74 & 43.40 & 19.46 & 37.14 \\ Combo\(*\) & 43.37 & 15.94 & 40.69 & 43.38 & 19.70 & 36.92 \\ List\(\#\) & 44.11 & 19.25 & 36.64 & 43.55 & 15.80 & 40.65 \\ Markov\(\#\) & 44.00 & 18.91 & 37.09 & 43.47 &
said, when such an attacker wants to guess more account passwords, i.e., targeting the "medium" accounts after the "easy" ones, the probability of inducing an alarm will increase with the number of attacked accounts since \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})<0.88\) for the "medium" subcase when \(\alpha=1\). The four most successful algorithms (P2P, Tweak, CHM, and CGPT3) are password-context-dependent techniques that generate honeywords similar to the account password, and thus it is more challenging for \(\mathcal{B}\) to distinguish the account password from honeywords produced by these algorithms than from those of the other methods. We conclude that honeywords more similar to the account password yield a lower \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\), though one that is still far from \(\frac{1}{n+1}\) due to password reuse.
However, P2P has \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\approx 0.89\) at \(\alpha=1\), where most others have lower \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\). The only exception is CHM, which includes a deterministic step that searches for nearest neighbors of the account password and thus yields a high false-positive rate, \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\approx 1\). While P2P is the best technique for generating honeywords similar to the account password, its honeywords are also among the easiest for the false-positive attacker to guess once \(p\) is known. Still, no generation method achieves \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\leq 0.27\) at \(\alpha=1\). Growing \(\alpha\) of course reduces \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) but increases \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\): all methods capable of reaching \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\approx 0\) do so with \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})>0.81\) overall, \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\approx 1\) for the "easy" subcase, and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})>0.91\) for the "medium" subcase.
A natural method to decrease \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) would be to increase the number \(n\) of honeywords, but the more pronounced effect of doing so is increasing \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\), instead. Indeed, Fig. 4 shows the impact of increasing \(n\) to \(n=99\). As seen there, this five-fold increase in \(n\) resulted in a slight improvement to \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) in each case, but a more substantial increase to \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\).
To summarize, honeyword-generation techniques like \(\mathsf{Combo}\#\) that have been demonstrated to have good flatness in previous works (e.g., [48]) fail to achieve a low false-negative rate in our threat model, particularly not at settings of \(\alpha\) that ensure a small false-positive rate. Among the honeyword-generation techniques we consider, P2P achieves the best \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}\) but has a high \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}\). Most other methods have a lower \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}\) but a higher \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}\). Regardless, in the case of user-chosen passwords, no existing algorithm achieves low rates of both false positives and false negatives. In addition, when the attacker targets the "easy" accounts, which comprise approximately \(43\%\) of users, all the honeyword-generation methods are ineffective in detecting a breach at settings of \(\alpha\) achieving \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\approx 0\).
## V Algorithmically Generated Passwords
The second case we consider is when \(\mathcal{U}\) is implemented using a password-generating algorithm. We assume there is a large but limited number \(Y\) of password generators denoted as \(\{\mathcal{U}_{y}\}_{y=1}^{Y}\), each of which is defined by an algorithm and values of user-configurable parameters. We assume that each user determines \(\mathcal{U}\) by choosing a generator uniformly at random from \(\{\mathcal{U}_{y}\}_{y=1}^{Y}\), and that each user stays with her choice. To justify this assumption, in App. D we report a brief study we did using the password policies of 20 commonly visited websites and the Tranco Top 1M websites [32], where we found that setting passwords at these websites in a random order would permit the user to retain her chosen password-generation configuration for \(>6.3\) sites in expectation, before encountering a site for which the user's configuration was inconsistent. This finding is consistent with Alroomi et al. [3], who reported that only \(15\%\) of sites have character constraints on password creation.
We assume the length of the generated passwords is one parameter that users can configure. Some password managers permit user configuration of allowable symbols, as well. Similarly, password managers that enable generation of easy-to-read passwords might avoid use of certain characters that are ambiguous in some fonts (e.g., "l" vs. "1" in sans-serif fonts). Password managers that generate easy-to-say passwords might restrict the symbols used in different positions of a password. We will see examples below. The user's choice of these parameters will generally be unknown to the defender, except as revealed by the account password \(p\).
In this section, we analyze the contribution of honeywords for detecting credential database breaches for accounts with algorithmically generated passwords. In App. F, we show that honeyword-generation methods used in the user-chosen password case fail to achieve both low false-negative rate and low false-positive rate for algorithmically generated passwords. Although utilizing password-generation algorithms to generate honeywords can do better, in this section we show that the choice of selected generator is critical to achieving a low false-negative rate.
### _Attack Strategies_
In this section, we introduce the false-positive attacker \(\mathcal{A}\) and the false-negative attacker \(\mathcal{B}\) used in the evaluation
of \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\), respectively, when account passwords are generated algorithmically.
False-positive attacker \(\mathcal{A}\). \(\mathcal{A}\) uses the same strategy used in the case of user-chosen passwords in Sec. IV-A. Specifically, the attacker \(\mathcal{A}\) leverages the honeyword-generation algorithm \(\mathcal{H}\) to generate a set of candidates and sorts the candidates by their assigned probabilities, if applicable. Finally, it picks the top \(\beta\) candidates as the guessed honeywords \(G\).
False-negative attacker \(\mathcal{B}\). \(\mathcal{B}\) was implemented as follows. Given \(X\), \(\mathcal{B}\) leverages a classifier \(f(\cdot):\{0,1\}^{*}\rightarrow[0,1]^{Y}\) that outputs a confidence score per possible class. The construction of this classifier is described in App. E. \(\mathcal{B}\) classifies each element of \(X\) using \(f\), using the highest-scored generator for each \(p^{\prime}\in X\) as a "vote" for the password generator that the user employs; the password generator obtaining the most such votes is denoted \(\mathcal{U}_{y_{\mathcal{B}}}\). Then \(\mathcal{B}\) assigns scores to the sweetwords from \(H\cup\{p\}\) as follows: if a sweetword has the same length as the passwords in \(X\), \(\mathcal{B}\) uses the classifier \(f(\cdot)\) to score it by its confidence of belonging to class \(y_{\mathcal{B}}\); otherwise, \(\mathcal{B}\) assigns it a score of \(0\). The attacker ranks the sweetwords by their assigned scores and uses the top \(\alpha\) sweetwords as \(G\).
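A compact sketch of this strategy (our illustration; `classify` is a hypothetical stand-in for the classifier \(f\) of App. E, whose construction is not reproduced here):

```python
from collections import Counter

def rank_sweetwords(sweetwords, X, classify, alpha):
    """Return B's top-alpha sweetword guesses, given known passwords X and a class-score function."""
    votes = Counter()
    for x in X:                                   # each known password votes for its top-scoring class
        scores_x = classify(x)
        votes[max(range(len(scores_x)), key=scores_x.__getitem__)] += 1
    y_B = votes.most_common(1)[0][0]              # the generator class with the most votes
    target_len = len(X[0])                        # passwords in X share a single length in this setting
    scored = {s: (classify(s)[y_B] if len(s) == target_len else 0.0) for s in sweetwords}
    return sorted(sweetwords, key=scored.__getitem__, reverse=True)[:alpha]
```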
### _Generating Honeywords Using Algorithmic Password Generators_
The honeyword-generation methods introduced in Sec. III-B do not fare well (in terms of false-negative probability) when the account password is generated algorithmically. Intuitively, the password-independent honeyword generators fail to achieve a low \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) since the honeywords they generate resemble user-chosen passwords, which makes it easy for \(\mathcal{B}\) to distinguish the algorithmically generated account password from the honeywords. Many password-dependent generators do little better, because even though the account password is algorithmically generated, these models are trained on artifacts of human behavior, which renders the honeywords recognizable to \(\mathcal{B}\). The primary exceptions are \(\mathsf{CGPT3}\) and \(\mathsf{CGPT4}\), which are not trained at all. These can achieve a low \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\), though still with a too-high \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\). We have empirically demonstrated these findings in App. F.
Therefore, here we consider the use of algorithmic password generators to generate honeywords for algorithmically generated passwords submitted by the user. Given an account password \(p\), the honeyword system selects a generator from \(\{\mathcal{U}_{y}\}_{y=1}^{Y}\) and then leverages the selected generator to generate \(n\) honeywords. We categorize the methods based on the selection strategy, as follows (a short code sketch of the three strategies appears after the list):
* \(\mathsf{FIXED}\): Given a fixed \(\mathcal{U}_{y_{\mathcal{B}}}\in\{\mathcal{U}_{y}\}_{y=1}^{Y}\), \(\mathcal{H}_{n}\) samples \(n\) distinct honeywords using \(\mathcal{U}_{y_{\mathcal{B}}}\) to build \(H\).
* \(\mathsf{RAND}\): \(\mathcal{H}_{n}\) samples a \(\mathcal{U}_{y}\) uniformly from \(\{\mathcal{U}_{y}\}_{y=1}^{Y}\) and builds \(H\) by sampling \(n\) distinct honeywords using \(\mathcal{U}_{y}\).
* \(\mathsf{CLSD}\): \(\mathcal{H}_{n}\) classifies the account password into one of \(Y\) classes, indicating the generator \(\mathcal{U}_{y}\) most likely to have generated it. \(\mathcal{H}_{n}\) then builds \(H\) by sampling \(n\) distinct honeywords using \(\mathcal{U}_{y}\).
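The following sketch (ours, with hypothetical `generators` and `classify_generator` callables) makes explicit that the three strategies differ only in how the generator is selected:

```python
import random

def gen_honeywords(strategy, p, n, generators, classify_generator, y_fixed=0):
    """Build a set H of n distinct honeywords for account password p."""
    if strategy == "FIXED":
        gen = generators[y_fixed]                 # always use the pre-selected generator
    elif strategy == "RAND":
        gen = random.choice(generators)           # generator chosen uniformly at random
    elif strategy == "CLSD":
        gen = generators[classify_generator(p)]   # generator the account password most likely came from
    else:
        raise ValueError(strategy)
    H = set()
    while len(H) < n:                             # sample n distinct honeywords, none equal to p
        w = gen()
        if w != p:
            H.add(w)
    return H
```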
### _Evaluation_
#### V-C1 Dataset
The datasets we used to evaluate honeyword-generation strategies in the case of algorithmically generated passwords were synthetically produced by querying online password generators.1 Specifically, after browser-integrated password managers (Google Password Manager and iCloud Keychain), LastPass and 1Password are two of the most widely used password managers/generators [44]. LastPass permits the user to select one of three password-generation algorithms, namely "Easy-to-say", "Easy-to-read", or "All-characters". For each type, users can further specify the generator by checking or unchecking "Uppercase", "Lowercase", "Numbers", or "Symbols", though the "Easy-to-say" generator does not permit inclusion of Symbols or Numbers. 1Password allows users to select a type of password from among "Random Password", "Memorable Password", and "Pin". The Random Password generator includes Lowercase and Uppercase letters always, but users can check or uncheck Numbers and Symbols. The Memorable Password algorithm generates memorable passwords, each of which is a sequence of word fragments connected by separators. In this option, users can select separators among "Hyphens", "Spaces", "Periods", "Commas", "Underscores", "Numbers", and "Numbers and Symbols". In addition, users could check or uncheck "Full Words" and "Capitalize" to specify the "Memorable Password" generator. In this work, we used all the configurations from LastPass and 1Password's Random Password, and selected configurations for 1Password's Memorable Password. We consider passwords generated from each specification as one class, yielding \(38\) classes in total. These classes are shown in Table III. We set the fixed \(\mathcal{U}_{y_{\mathcal{B}}}\) to be the "All characters" generator from LastPass with "U", "L", "S", and "N" checked (\(y_{\mathcal{K}}=32\)).
Footnote 1: We used PyAutoGui ([https://pyautogui.readthedocs.io/en/latest/](https://pyautogui.readthedocs.io/en/latest/)) to automate interactions with the password managers like 1Password and LastPass. That is, we automated generating random password, copying them into the clipboard, and storing them in a local file interactively.
Using these online generators, we generated three datasets, denoted \(D_{\mathsf{a}}^{\mathsf{tr}}\), \(D_{\mathsf{a}}^{\mathsf{va}}\), and \(D_{\mathsf{a}}^{\mathsf{te}}\), all consisting of passwords of length \(14\) only. We used \(D_{\mathsf{a}}^{\mathsf{tr}}\) to train a classifier to classify random passwords and evaluated it on \(D_{\mathsf{a}}^{\mathsf{va}}\). To assemble \(D_{\mathsf{a}}^{\mathsf{tr}}\) and \(D_{\mathsf{a}}^{\mathsf{va}}\), we generated \(80\),\(000\) and \(2,000\) passwords from each class, yielding \(304\),\(000\) and \(76\),\(000\) passwords in total, respectively. We applied \(D_{\mathsf{a}}^{\mathsf{te}}\) in the evaluation of \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}\). In \(D_{\mathsf{a}}^{\mathsf{te}}\), there were \(38\) classes of passwords, each containing \(10\),\(000\) sets (corresponding to \(10\),\(000\) users) with \(100\) passwords of that class. When evaluating \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}\), we implemented \(\mathcal{U}\) by sampling \(p\) and \(X\) without replacement from a set (user) chosen uniformly at random from \(D_{\mathsf{a}}^{\mathsf{te}}\).
#### V-C2 Experimental Results
We evaluated the honeyword-generation methods described in Sec. V-B, though we plot only \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) since \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}(\mathcal{A})\) was essentially perfect. We plot \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) against \(\alpha\) in Fig. 6. As seen there, both the \(\mathsf{FIXED}\) and \(\mathsf{RAND}\) methods had a high \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\). Even when \(\alpha=1\), they had \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})>0.94\) for \(n=19\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})>0.93\) for \(n=99\).
In contrast, the \(\mathsf{CLSD}\) method achieves nearly perfect \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\). This method selects the most plausible algorithmic password generator based on the account password to
generate honeywords. The confusion matrix experienced by \(\mathcal{H}_{n}\) (i.e., using \(p\)) is shown in Fig. 5(a). When \(|X|=1\), the confusion experienced by \(\mathcal{B}\) is virtually identical, of course, but the confusion experienced by \(\mathcal{B}\) when \(|X|>1\) is notably less, as shown in Fig. 5(b). As this figure shows, when \(|X|>1\), \(\mathcal{B}\) has greater ability to classify the user's password generator based on \(X\) than \(\mathcal{H}_{n}\) does based on \(p\), at least for certain classes. Since our dataset is dominated by accounts for which the number of passwords known by \(\mathcal{B}\) is \(|X|=1\) (Fig. 2), the reduced confusion shown in Fig. 5(b) (where \(|X|>1\)) cannot effectively be exploited by \(\mathcal{B}\).
However, if the fraction of accounts for which \(\mathcal{B}\) holds \(|X|>1\) passwords were larger, the better classification accuracy this would enable (Fig. 5(b)) would permit an average increase in \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\). To illustrate this, in Fig. 7 and Fig. 8 we show the effect on \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) of increasing \(|X|\) from its original distribution to \(|X|=99\) always, for \(n=19\) or \(n=99\), respectively. Each subfigure shows \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) for certain classes of the actual password \(p\); e.g., Fig. 7(a) shows this effect when \(p\leftarrow\mathcal{U}_{22}()\). As can be seen in these figures, increasing \(|X|\) to \(|X|=99\) enables \(\mathcal{B}\) to noticeably increase \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\) for these classes.
In conclusion, in the case of algorithmically generated passwords, it is critical for \(\mathcal{H}_{n}\) to identify the algorithmic password generator used by each user in order to achieve low \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}(\mathcal{B})\). Even then, as the number of passwords \(|X|\) known to the attacker grows, this protection will degrade.
## VI Discussion
### _Balancing Attention to False Positives and False Negatives_
Since honeywords' proposal, a challenge has been to design good honeyword-generation methods that achieve both low false-positives and low false-negatives, i.e., \(\mathsf{FPP}_{\mathcal{U},\mathcal{H}_{n},\alpha,\beta}\approx 0\) and \(\mathsf{FNP}_{\mathcal{U},\mathcal{H}_{n},\alpha}=\frac{\alpha}{n+1}\). However, our experimental results in Sec. IV-C2 show that no existing method achieves this goal in a threat model in which passwords from the same user at other sites are exposed to the attacker. While this leaves us skeptical that a perfect honeyword-generation method exists for this threat model (at least not when passwords are user-chosen, versus algorithmically generated), we do not mean to suggest that research in this direction should end. However, we do advocate that new honeyword-generation methods should be investigated with balanced attention to false positives and false negatives in this threat model, rather than more narrowly focusing on false negatives, as has been typical in most prior research. For example, a goal of designing honeywords could be to optimize the false-negative rate such that the false-positive rate is lower than a threshold.
Fig. 5: Confusion matrices: Probability with which a password of one class (row) is classified as another class (column) by \(\mathcal{H}_{n}\) (Fig. 5(a)) or \(\mathcal{B}\) with \(|X|>1\) (Fig. 5(b)). Box shading is scaled linearly between 0.0 (white) and 1.0 (black).
\begin{table}
\begin{tabular}{c c c c c c c} \hline Class & Manager & Type & \multicolumn{4}{c}{Alphabet} \\ & & & U & L & S & N \\ \hline \multicolumn{7}{c}{\(\vdots\)} \\ 38 & 1Password & Memorable Password & & & & \\ \hline \end{tabular}
\end{table} TABLE III: Classes of algorithmically generated passwords used in our experiments
### _Countermeasures to False-Positive Attacks_
False-positive attacks can be very costly to sites, since they require investigating the possibility of a breach and potentially inducing a password reset for every site account. Moreover, repeated false positives will eventually result in the defense being ignored or disabled outright. Despite the consequences of false positives, only a few previous works on honeywords have briefly discussed how to prevent them [23, 48]. Wang, et al. [48] suggested that applying a blocklist of popular passwords to honeyword selection can reduce false positives, since the honeyword-generation methods considered in their work generate honeywords by sampling from a public password distribution (e.g., leveraging a password model like List). As such, a blocklist would avoid using popular passwords as honeywords, which can mitigate the guessability of honeywords by their proposed methods. However, a blocklist of popular passwords is much less effective when considering password-dependent honeyword-generation algorithms (e.g., CGPT3, CHM, Tweak, and P2P), since these methods assign more likelihood to those candidates similar to the account password. A way to mitigate false positives of these methods is to avoid using passwords similar to the account password as honeywords, but doing so makes these methods suffer a high false-negative rate.
Another countermeasure to reduce false positives, as mentioned by Juels and Rivest [23], is to select \(n\) honeywords uniformly at random from a large pool of candidate honeywords that are similar to the account password. In order to achieve a small false-positive rate, the size of the pool should be much larger than \(n\). However, it is challenging to generate such a large pool of candidates that are sufficiently similar to the account password to ensure a small false-negative rate via this process. As such, an interesting direction is to explore how to generate such a large candidate pool to achieve a target false-negative rate.
### _Changes of Algorithmic Password Generator Configuration_
A limitation of our analysis in Sec. V is that it was conducted assuming that the user, once adopting a configuration for her algorithmic password generator, does not change that configuration. Our findings from surveying twenty common websites (see App. D) suggest that users are rarely _required_ to change configurations in the course of (re)setting passwords at different sites. However, an interesting direction for future work would be to confirm or refute this assumption more broadly, since as shown in Sec. V-C2, the assumption somewhat diminishes the effectiveness of honeywords generated for accounts with algorithmically generated passwords. Alternatively, an algorithmic password generator could be designed to encourage changing these configuration settings regularly, in which case an interesting research direction would be to explore the acceptability of this practice for users.
### _Password Reuse_
Our findings that password reuse across sites is so detrimental to honeyword false-negative rates (Sec. IV-C2) provide yet more evidence that moving more users toward password managers would be good policy (notwithstanding the risk of password-manager breaches, e.g., [40]). That said, a recent university survey [27] found that though a large majority (77%) of respondents reported using a password manager, another large majority (again, 77%) also reported still reusing passwords across accounts. So, while a step in the right direction, password managers are evidently not a panacea. A potentially more effective approach might be explicitly hindering attempts to reuse passwords, either through adoption of intentionally conflicting password requirements at websites (which is not commonplace, see App. D) or through explicit interventions during the password (re)setting process to interfere with reusing the same or similar passwords (e.g., [51]).
### _A Mixed Case_
In this work, we studied two cases, one where users create user-chosen passwords (Sec. IV) and one where users generate their passwords algorithmically using a password manager (Sec. V). To assess the efficacy of honeywords when users employ mixed strategies (i.e., choose some passwords themselves and algorithmically generate others), we further constructed two test datasets by mixing \(D_{u}^{\text{te}}\) and the algorithmically generated dataset. Then we generated honeywords based on the type of the account password, i.e., applying honeyword-generation methods described in Sec. III-B to generate honeywords for user-chosen passwords and password managers to generate honeywords for algorithmically generated passwords. Our study showed that increased use of password managers in password creation can ease the tensions brought on by password reuse and thus enable better trade-offs between false-positive and false-negative rates of honeywords. More details on the experiments and results are shown in App. G.
## VII Conclusion
In this paper, we have conducted the first critical analysis of honeyword-generation techniques for users who have suffered exposed passwords for their accounts at other sites. We formalized the false-positive rate and false-negative rate of honeywords in a model where the attacker has access to passwords for the same users at other sites or, in the case of false-positive attackers, even passwords for users at the defending site (as the real users would). Using these formalized definitions and a large dataset of leaked passwords, we experimentally demonstrated that existing honeyword-generation algorithms exhibit poor tradeoffs between false positives and false negatives when the account password is chosen by an average human user. Then we studied the case where the account password is algorithmically generated and used passwords from popular password managers to show that the existing honeyword-generation methods offer modest protection against false-negative attackers. We further explored the use of algorithmic password generators in honeyword generation and determined that seemingly the only effective strategy is to generate honeywords using the same password generator that the user does, if it can determine what that password generator is. In total, we believe our results paint a cautionary picture for the state of honeyword-generation algorithms to date, though they also set forth new research challenges for the field.
|
2309.10445 | Product of Rankin-Selberg convolutions and a new proof of Jacquet's
local converse conjecture | In this article, we construct a family of integrals which represent the
product of Rankin-Selberg $L$-functions of $\mathrm{GL}_{l}\times
\mathrm{GL}_m$ and of $\mathrm{GL}_{l}\times \mathrm{GL}_n $ when $m+n<l$. When
$n=0$, these integrals are those defined by Jacquet--Piatetski-Shapiro--Shalika
up to a shift. In this sense, these new integrals generalize
Jacquet--Piatetski-Shapiro--Shalika's Rankin-Selberg convolution integrals. We
study basic properties of these integrals. In particular, we define local gamma
factors using this new family of integrals. As an application, we obtain a new
proof of Jacquet's local converse conjecture using these new integrals. | Pan Yan, Qing Zhang | 2023-09-19T09:04:31Z | http://arxiv.org/abs/2309.10445v2 | # Product of Rankin-Selberg Convolutions and a new proof of Jacquet's local converse conjecture
###### Abstract.
In this article, we construct a family of integrals which represent the product of Rankin-Selberg \(L\)-functions of \(\operatorname{GL}_{l}\times\operatorname{GL}_{m}\) and of \(\operatorname{GL}_{l}\times\operatorname{GL}_{n}\) when \(m+n<l\). When \(n=0\), these integrals are those defined by Jacquet-Piatetski-Shapiro-Shalika up to a shift. In this sense, these new integrals generalize Jacquet-Piatetski-Shapiro-Shalika's Rankin-Selberg convolution integrals. We study basic properties of these integrals. In particular, we define local gamma factors using this new family of integrals. As an application, we obtain a new proof of Jacquet's local converse conjecture using these new integrals.
Key words and phrases: Rankin-Selberg convolution, \(L\)-functions, gamma factors, local converse theorem. 2010 Mathematics Subject Classification: 11F70, 22E50. The first named author is partially supported by an AMS-Simons Travel Grant. The second named author is partially supported by NSFC grant 12371010.
non-negative integers with \(m+n<l\). If \(n=0\), our integrals degenerate to those defined by Jacquet-Piatetski-Shapiro-Shalika (JPSS for abbreviation). In this sense, our integrals indeed generalize the JPSS Rankin-Selberg convolution integrals.
To give more details, we introduce some notations. For an integer \(j\) with \(0\leq j\leq l-m-n-1\), we set \(k=l-m-n-1-j\) and consider the embedding \(\iota_{j}:\operatorname{GL}_{m+n}\to\operatorname{GL}_{l}\) given by
\[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\mapsto\begin{pmatrix}I_{j}&&&&\\ &a&&b&\\ &&1&&\\ &c&&d&\\ &&&&I_{k}\end{pmatrix}\]
for \(a\in\operatorname{Mat}_{m\times m},b\in\operatorname{Mat}_{m\times n},c\in \operatorname{Mat}_{n\times m},d\in\operatorname{Mat}_{n\times n}.\) Given an irreducible cuspidal automorphic representation \(\pi\) (resp. \(\tau_{1},\tau_{2}\)) of \(\operatorname{GL}_{l}(\mathbb{A})\) (resp. \(\operatorname{GL}_{m}(\mathbb{A}),\operatorname{GL}_{n}(\mathbb{A})\)), we consider the integral
\[I_{j}(\phi,f_{\mathbf{s}})=\int_{\operatorname{GL}_{m+n}(F)\setminus \operatorname{GL}_{m+n}(\mathbb{A})}\phi_{Y_{j}}^{\psi}(\iota_{j}(h))E(h,f_{ \mathbf{s}})dh.\]
Here \(\phi\in\pi\) is a cusp form, and \(\phi_{Y_{j}}^{\psi}\) is a certain Fourier coefficient of \(\phi\) along a certain subgroup \(Y_{j}\subset\operatorname{GL}_{l}\). Moreover, \(\mathbf{s}=(s_{1},s_{2})\) is a pair of complex numbers and \(E(h,f_{\mathbf{s}})\) is the standard Eisenstein series on \(\operatorname{GL}_{m+n}(\mathbb{A})\) associated with a section \(f_{\mathbf{s}}\) in the representation induced from \(\tau_{1}|\det|^{s_{1}-1/2}\otimes\tau_{2}|\det|^{-s_{2}+1/2}\) on the standard Levi subgroup of \(\operatorname{GL}_{m+n}\) with partition \((m,n)\). See §2 for the unexplained notations.
**Theorem 1.1**.: _The integral \(I_{j}(\phi,f_{\mathbf{s}})\) converges absolutely and uniformly in vertical strips for each variable \(s_{1},s_{2}\) in \(\mathbf{s}=(s_{1},s_{2})\), away from the poles of Eisenstein series. The integral is Eulerian, and for decomposing data, for any given \(\mathbf{s}\), up to a holomorphic function, the integral is equal to_
\[\frac{L^{S}(s_{1}+\frac{k-j}{2},\pi\times\tau_{1})L^{S}(s_{2}-\frac{k-j}{2}, \widetilde{\pi}\times\widetilde{\tau}_{2})}{L^{S}(s_{1}+s_{2},\tau_{1}\times \widetilde{\tau}_{2})},\]
_where \(\widetilde{\pi}\) (resp. \(\widetilde{\tau}_{2}\)) is the contragredient representation of \(\pi\) (resp. \(\tau_{2}\)), and \(L^{S}(s_{1}+\frac{k-j}{2},\pi\times\tau_{1})\) denotes the partial Rankin-Selberg L-function of \(\pi\times\tau_{1}\). Here \(S\) is a finite set of places which contains all infinite places and outside \(S\), \(\pi,\tau_{1}\) and \(\tau_{2}\) are unramified._
Theorem 1.1 is proved in §2 and §3. In addition, we prove the existence of local gamma factors. More precisely, let \(\Psi(W,f_{\mathbf{s}};j)\) be the local zeta integral in the unfolding of \(I_{j}(\phi,f_{\mathbf{s}})\). Here \(W\) is a Whittaker function of a local representation \(\pi_{v}\) and, by abuse of notation, \(f_{\mathbf{s}}\) is a section in the local induced representation. Then we prove that there exists a local gamma factor \(\Gamma(\mathbf{s},\pi,(\tau_{1},\tau_{2}),\psi)\) such that
\[\Psi(W,M(f_{\mathbf{s}});0)=\Gamma(\mathbf{s},\pi,(\tau_{1},\tau_{2}),\psi) \Psi(W,f_{\mathbf{s}};0).\]
Here \(M\) denotes an intertwining operator. See §3 for more details. Here we remark that if \(n=0\), the local zeta integral \(\Psi(W,f_{\mathbf{s}};j)\) is exactly the JPSS local zeta integral, and that when \(l=2r+1\) and \(m=n\), the local zeta integral \(\Psi(W,f_{\mathbf{s}};r-m)\) is the local zeta integral of \(\operatorname{U}_{2r+1,E/F}\times\operatorname{Res}_{E/F}(\operatorname{GL}_{n})\) at split places as considered in [1]. In the above definition of local gamma factors, we only used the integral when \(j=0\). Although we don't address it here, it should not be too hard to consider a similar local functional equation for general \(j\) so that it will degenerate to the JPSS local functional equation for general \(j\). Moreover, as suggested by the unramified calculation, we expect that
\[\Gamma(\mathbf{s},\pi,(\tau_{1},\tau_{2}),\psi)=\frac{\gamma(s_{1}+(k-j)/2,\pi \times\tau_{1},\psi)\gamma(s_{2}+(j-k)/2,\widetilde{\pi}\times\widetilde{ \tau}_{2},\psi)}{\gamma(s_{1}+s_{2},\tau_{1}\times\widetilde{\tau}_{2})}. \tag{1.1}\]
Here the gamma factors on the right side are those defined by JPSS or by Shahidi [1, 1]. The proof of this expected property is standard and it will be addressed in our sequel paper [13].
As we mentioned above, one important application of the JPSS Rankin-Selberg integrals is the proof of the converse theorems given by Cogdell and Piatetski-Shapiro in [1, 1], which roughly says that for an admissible irreducible representation \(\pi\) of \(\operatorname{GL}_{l}(\mathbb{A})\), if \(L(s,\pi\times\tau)\) is "nice" (see [1, page 165] for the definition) for every irreducible cuspidal automorphic representation \(\tau\) of \(\operatorname{GL}_{m}(\mathbb{A})\) with \(1\leq m\leq l-2\), then \(\pi\) is cuspidal automorphic. In applications to functoriality problems, it is desirable to reduce the number of twists used in the converse theorem. In this direction, one important open question is the following
**Conjecture 1.2** (Jacquet's global converse conjecture, see [10, SS8, Conjecture 1]).: _Let \(\pi=\otimes_{v}^{\prime}\pi_{v}\) be an irreducible admissible generic representation of \(\operatorname{GL}_{l}(\mathbb{A})\) such that its central character is trivial on \(F^{\times}\) and its \(L\)-function \(L(s,\pi)\) is convergent in some half plane. If \(L(s,\pi\times\tau)\) is nice for all irreducible cuspidal automorphic representation \(\tau\) of \(\operatorname{GL}_{m}(\mathbb{A})\) with \(1\leq m\leq[l/2]\), then \(\pi\) is cuspidal automorphic._
Many years after the original proof given in [10, 11], it still seems very hard to use the original JPSS integral to attack the above conjecture. We expect that our new family of integrals might be useful in the above problem. In fact, assuming the expected property of the gamma factors (1.1), the condition that \(L(s,\pi\times\tau)\) is nice for every irreducible cuspidal automorphic representation \(\tau\) of \(\operatorname{GL}_{m}(\mathbb{A})\) with \(1\leq m\leq[l/2]\) will give us a new family of equalities of integrals besides those coming from the JPSS integrals.
Although we don't know how to attack Jacquet's global converse conjecture above at this moment, in this paper, assuming (1.1), we illustrate the above idea by giving a new proof of the following
**Conjecture 1.3** (Jacquet's local converse conjecture).: _Let \(F\) be a non-archimedean local field and let \(\pi_{1},\pi_{2}\) be two supercuspidal representations of \(\operatorname{GL}_{l}(F)\) with the same central character. If \(\gamma(s,\pi_{1}\times\tau,\psi)=\gamma(s,\pi_{2}\times\tau,\psi)\) for all irreducible generic representation \(\tau\) of \(\operatorname{GL}_{m}(F)\) with \(1\leq m\leq[l/2]\), then \(\pi_{1}\cong\pi_{2}\)._
As proved in [11], one can drop the supercuspidal condition in the above conjecture. In fact, what we proved is the following
**Theorem 1.4** (Theorem 4.1).: _Let \(F\) be a non-archimedean local field and let \(\pi_{1},\pi_{2}\) be two irreducible supercuspidal representations of \(\operatorname{GL}_{l}(F)\) with the same central character. If \(\Gamma(\mathbf{s},\pi_{1}\times(\tau_{1},\tau_{2}),\psi)=\Gamma(\mathbf{s},\pi _{2}\times(\tau_{1},\tau_{2}),\psi)\) for all irreducible generic representations \(\tau_{1}\) (resp. \(\tau_{2}\)) of \(\operatorname{GL}_{m}(F)\) (resp. \(\operatorname{GL}_{n}(F)\)) with \(0\leq n\leq[l/2],0\leq m\leq[l/2]\), then \(\pi_{1}\cong\pi_{2}\)._
Local converse theorems for \(\operatorname{GL}_{l}\) using twists up to \(l-1\) and \(l-2\) have been proved in [14, 10, 11]. Jacquet's local converse conjecture has been proved in [13] and [15] independently. Our new contribution here is to use the new family of integrals. A proof of Jacquet's local converse conjecture along this method was promised in [12, §8.2] and in [12, Introduction], where it was (incorrectly) believed that the integrals of \(\operatorname{U}_{l,E/F}\times\operatorname{Res}_{E/F}(\operatorname{GL}_{m})\) at split places for a quadratic extension \(E/F\), as developed in [1], were enough. As explained above, these integrals are just our new family of integrals when \(m=n\). It turns out that we need to use the whole new family of integrals. The proof of Theorem 1.4 uses partial Bessel functions developed in [11] and indeed follows the outline in [12, §8.2] and in [12, Introduction]. Similar methods have been successfully used in proving local converse theorems for other classical groups over \(p\)-adic fields and for \(G_{2}\) over finite fields; see [12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. See [14] for more references on local converse problems.
In this paper, we only considered the integrals which represent the product of Rankin-Selberg \(L\)-functions of \(\operatorname{GL}_{l}\times\operatorname{GL}_{m}\) and \(\operatorname{GL}_{l}\times\operatorname{GL}_{n}\) when \(m+n<l\). It is natural to ask whether a similar construction generalizes to the case when \(m+n\geq l\). We will address this question in future work.
The paper is organized as follows. In §2, we introduce the global integrals and discuss the absolute convergence, functional equation, and the unfolding computations of the global integrals. §3 is devoted to the local theory of the integrals. We prove the existence of a local gamma factor \(\Gamma(\mathbf{s},\pi,(\tau_{1},\tau_{2}),\psi)\), and carry out the local unramified computation for the local integrals when all data are unramified. In §4, we restate Theorem 1.4 and prepare some necessary tools for the proof. In particular, we recall the notions of partial Bessel functions and a result from [11]. Theorem 1.4 is proved in §5. Actually, we prove a slightly more general result (see Theorem 5.1).
To conclude the introduction, we introduce some notations which will be used throughout the paper. For a positive integer \(k\), let \(I_{k}\) be the identity \(k\times k\) matrix. Let \(B_{k}=T_{k}N_{k}\subset\operatorname{GL}_{k}\) be the standard upper triangular Borel subgroup, with \(T_{k}\) the group of diagonal matrices and \(N_{k}\) the upper triangular unipotent subgroup. Let \(\overline{N}_{k}\) be the opposite of \(N_{k}\), i.e., \(\overline{N}_{k}\) is the lower triangular unipotent subgroup of \(\operatorname{GL}_{k}\). For positive integers \(m,n\), let \(\operatorname{Mat}_{m\times n}\) be the set of \(m\times n\) matrices.
We consider the following subgroups of \(\mathrm{GL}_{m+n}\) given by
\[M_{m,n}=\left\{\begin{pmatrix}g_{1}&\\ &g_{2}\end{pmatrix}:g_{1}\in\mathrm{GL}_{m},g_{2}\in\mathrm{GL}_{n}\right\},\quad N_{m,n}=\left\{\begin{pmatrix}I_{m}&X\\ &I_{n}\end{pmatrix}:X\in\mathrm{Mat}_{m\times n}\right\},\]
and \(P_{m,n}=M_{m,n}N_{m,n}\). Denote \(w_{m,n}=\begin{pmatrix}&I_{m}\\ I_{n}\end{pmatrix}\).
## Acknowledgement
We thank our advisor Jim Cogdell for his guidance and support over the years. It is our pleasure to dedicate this paper to him on the occasion of his 70th birthday. Some ideas of this paper grew out of the second named author's thesis work under the direction of Professor Cogdell, and we would like to thank him for many fruitful communications related to this project. We would like to thank Professor Terence Tao, who answered a question of the second named author on MathOverflow about Littlewood-Richardson coefficients and generously allowed us to reproduce his answer in our paper; see §3.4. The second named author acknowledges the support of start-up funding from Huazhong University of Science and Technology.
## 2. The global integrals
In this section, let \(F\) be a global field and \(\mathbb{A}\) be its ring of adeles.
### Eisenstein series
Notice that the modulus character of \(P_{m,n}\) is given by
\[\delta_{P_{m,n}}(\mathrm{diag}(a_{1},a_{2}))=|\det(a_{1})|^{n}|\det(a_{2})|^{ -m},\quad a_{1}\in\mathrm{GL}_{m},a_{2}\in\mathrm{GL}_{n}.\]
Let \(\tau_{1}\) (resp. \(\tau_{2}\)) be an irreducible automorphic cuspidal representation of \(\mathrm{GL}_{m}(\mathbb{A})\) (resp. \(\mathrm{GL}_{n}(\mathbb{A})\)), we write \(\boldsymbol{\tau}=(\tau_{1},\tau_{2})\). Given a pair of complex numbers \(\mathbf{s}:=(s_{1},s_{2})\), we consider the normalized induced representation
\[\mathrm{I}(\mathbf{s},\boldsymbol{\tau}):=\mathrm{Ind}_{P_{m,n}(\mathbb{A})} ^{\mathrm{GL}_{m+n}(\mathbb{A})}(\tau_{1}|\det|^{s_{1}-\frac{1}{2}}\otimes\tau _{2}|\det|^{-s_{2}+\frac{1}{2}}).\]
Concretely, we associate with each \(u\in\mathrm{I}(\mathbf{s},\boldsymbol{\tau})\) the function \(f_{\mathbf{s}}(h)=(u(h))(1),h\in\mathrm{GL}_{m+n}(\mathbb{A})\). Thus the space \(\mathrm{I}(\mathbf{s},\boldsymbol{\tau})\) consists of all functions \(f_{\mathbf{s}}:\mathrm{GL}_{m+n}(\mathbb{A})\to\mathbb{C}\) satisfying
\[f_{\mathbf{s}}(\mathrm{diag}(a,b)uh)=|\det(a)|^{s_{1}+\frac{n-1}{2}}|\det(b)|^ {-s_{2}+\frac{1-m}{2}}\varphi_{h}(a,b),\]
where, \(a\in\mathrm{GL}_{m}(\mathbb{A}),b\in\mathrm{GL}_{n}(\mathbb{A}),u\in N_{m,n}( \mathbb{A}),h\in\mathrm{GL}_{m+n}(\mathbb{A})\) and for a fixed \(h\), the function \((a,b)\mapsto\varphi_{h}(a,b)\) is a cusp form in the space of \(\tau=\tau_{1}\boxtimes\tau_{2}\) of the group \(M_{m,n}(\mathbb{A})=\mathrm{GL}_{m}(\mathbb{A})\times\mathrm{GL}_{n}(\mathbb{A})\).
Denote \(\widehat{\mathbf{s}}:=(s_{2},s_{1}),1-\widehat{\mathbf{s}}:=(1-s_{2},1-s_{1})\) and \(\widehat{\boldsymbol{\tau}}:=(\tau_{2},\tau_{1})\). There is a standard intertwining operator
\[M_{w_{m,n}}:\mathrm{I}(\mathbf{s},\boldsymbol{\tau})\to\mathrm{I}(1-\widehat {\mathbf{s}},\widehat{\boldsymbol{\tau}})\]
defined by
\[M_{w_{m,n}}f_{\mathbf{s}}(g)=\int_{N_{n,m}(\mathbb{A})}f_{\mathbf{s}}\left(w_{ m,n}ug\right)du.\]
Notice that \(\mathrm{I}(1-\widehat{\mathbf{s}},\widehat{\boldsymbol{\tau}})\) is the induced representation
\[\mathrm{Ind}_{P_{n,m}(\mathbb{A})}^{\mathrm{GL}_{m+n}(\mathbb{A})}(\tau_{2}| \det|^{(1-s_{2})-\frac{1}{2}}\otimes\tau_{1}|\det|^{-(1-s_{1})+\frac{1}{2}}),\]
which consists of all functions \(f_{1-\widehat{\mathbf{s}}}\) satisfying
\[f_{1-\widehat{\mathbf{s}}}(\mathrm{diag}(a,b)uh)=|\det(a)|^{1-s_{2}+\frac{m-1} {2}}|\det(b)|^{-(1-s_{1})-\frac{n-1}{2}}\varphi_{h}(a,b).\]
In the above equation, \(\mathrm{diag}(a,b)\in M_{n,m}(\mathbb{A}),u\in N_{n,m}(\mathbb{A}),h\in \mathrm{GL}_{m+n}(\mathbb{A})\), and for a fixed \(h\), the function \((a,b)\mapsto\varphi_{h}(a,b)\) is a cusp form in the space of \(\widehat{\tau}:=\tau_{2}\otimes\tau_{1}\) of the group \(M_{n,m}(\mathbb{A})\).
Given \(f_{\mathbf{s}}\in\mathrm{I}(\mathbf{s},\boldsymbol{\tau})\), we consider the Eisenstein series
\[E(h,f_{\mathbf{s}})=\sum_{\gamma\in P_{m,n}(F)\backslash\mathrm{GL}_{m+n}(F)}f_{\mathbf{s}}(\gamma h).\]
Similarly, we can also consider the Eisenstein series
\[E(h,f_{1-\widehat{\mathbf{s}}})=\sum_{\gamma\in P_{n,m}(F)\setminus\operatorname{GL }_{m+n}(F)}f_{1-\widehat{\mathbf{s}}}(\gamma h),\]
for \(f_{1-\widehat{\mathbf{s}}}\in\operatorname{I}(1-\widehat{\mathbf{s}},\widehat{ \boldsymbol{\tau}})\).
### Global integrals
Fix a positive integer \(l\). Let \(m,n\) be non-negative integers such that \(l>m+n\). For a non-negative integer \(j\) with \(0\leq j\leq l-m-n-1\), we set \(k=l-m-n-1-j\geq 0\) and consider the embedding
\[\iota_{j,m,n}:\operatorname{GL}_{m+n}\to\operatorname{GL}_{l}\]
\[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\mapsto\begin{pmatrix}I_{j}&&&&\\ &a&&b&\\ &&1&&\\ &c&&d&\\ &&&&I_{k}\end{pmatrix}\]
for \(a\in\operatorname{Mat}_{m\times m},b\in\operatorname{Mat}_{m\times n},c\in\operatorname{Mat}_{n\times m},d\in\operatorname{Mat}_{n\times n}\). We also consider \(s_{j,m,n}\in\operatorname{GL}_{l}\) defined by
\[s_{j,m,n}=\begin{pmatrix}0&I_{m}&0&0&0\\ 0&0&0&I_{n}&0\\ I_{j}&0&0&0&0\\ 0&0&1&0&0\\ 0&0&0&0&I_{k}\end{pmatrix}.\]
Then the embedding \(\iota_{j,m,n}:\operatorname{GL}_{m+n}\to\operatorname{GL}_{l}\) can be written as
\[\iota_{j,m,n}(h)=(s_{j,m,n})^{-1}\begin{pmatrix}h&&\\ &I_{j+1+k}\end{pmatrix}s_{j,m,n},\quad h\in\operatorname{GL}_{m+n}.\]
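As a quick sanity check (not part of the paper), the identity above can be verified numerically for small block sizes; the helper names below are ours.

```python
import numpy as np

def s_matrix(j, m, n, k):
    """s_{j,m,n}: block rows of sizes (m, n, j, 1, k), block columns of sizes (j, m, 1, n, k)."""
    l = j + m + 1 + n + k
    s = np.zeros((l, l))
    s[0:m, j:j + m] = np.eye(m)                      # I_m
    s[m:m + n, j + m + 1:j + m + 1 + n] = np.eye(n)  # I_n
    s[m + n:m + n + j, 0:j] = np.eye(j)              # I_j
    s[m + n + j, j + m] = 1.0                        # the middle 1
    s[m + n + j + 1:, j + m + 1 + n:] = np.eye(k)    # I_k
    return s

def iota(h, j, m, n, k):
    """Place the blocks a, b, c, d of h in GL_{m+n} as in the definition of iota_{j,m,n}."""
    l = j + m + 1 + n + k
    g = np.eye(l)
    g[j:j + m, j:j + m] = h[:m, :m]                                    # a
    g[j:j + m, j + m + 1:j + m + 1 + n] = h[:m, m:]                    # b
    g[j + m + 1:j + m + 1 + n, j:j + m] = h[m:, :m]                    # c
    g[j + m + 1:j + m + 1 + n, j + m + 1:j + m + 1 + n] = h[m:, m:]    # d
    return g

j, m, n, k = 1, 2, 1, 2                          # so l = 7
h = np.random.rand(m + n, m + n) + np.eye(m + n)
s = s_matrix(j, m, n, k)
big = np.block([[h, np.zeros((m + n, j + 1 + k))],
                [np.zeros((j + 1 + k, m + n)), np.eye(j + 1 + k)]])
assert np.allclose(iota(h, j, m, n, k), np.linalg.inv(s) @ big @ s)
```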
Next, we consider the subgroup \(Y_{j,m,n}\) of \(\operatorname{GL}_{l}\) defined by
\[Y_{j,m,n}=\left\{\begin{pmatrix}u&*&*\\ &I_{m+n+1}&*\\ &&v\end{pmatrix},u\in N_{j},v\in N_{k}\right\}.\]
To ease the notation, if \(m,n\) are understood, we usually drop \(m,n\) from the subscripts in the above notations. For example, we may write \(Y_{j,m,n}\) as \(Y_{j}\). We now define a character \(\psi_{j}\) on \(Y_{j}(F)\backslash Y_{j}(\mathbb{A})\) by
\[\psi_{j}(y)=\psi\left(\sum_{i=1}^{j-1}y_{i,i+1}+\sum_{i=j+m+n+2}^{l-1}y_{i,i+1}+y_{j,j+m+1}+y_{j+m+1,j+m+n+2}\right),\]
for \(y=(y_{p,q})_{1\leq p,q\leq l}\in Y_{j}(\mathbb{A})\).
**Lemma 2.1**.: _For \(h\in\operatorname{GL}_{m+n}(\mathbb{A})\), \(y\in Y_{j}(\mathbb{A})\), we have_
1. \(\iota_{j}(h)^{-1}y\iota_{j}(h)\in Y_{j}(\mathbb{A})\)_, and_
2. \(\psi_{j}(\iota_{j}(h)^{-1}y\iota_{j}(h))=\psi_{j}(y)\)_._
Proof.: This follows from a simple matrix calculation.
Let \(\pi\) be an irreducible cuspidal automorphic representation of \(\operatorname{GL}_{l}(\mathbb{A})\) and for \(\phi\in V_{\pi}\), we consider the following Fourier coefficient of \(\phi\) along \(Y_{j}\):
\[\phi_{Y_{j},\psi_{j}}(h)=\int_{Y_{j}(F)\backslash Y_{j}(\mathbb{A})}\phi(y_{ j}(h))\psi_{j}^{-1}(y)dy,\quad h\in\operatorname{GL}_{m+n}(\mathbb{A}).\]
By Lemma 2.1, \(\phi_{Y,\psi}\) is left \(\operatorname{GL}_{m+n}(F)\)-invariant. Thus for \(f_{\mathbf{s}}\in\operatorname{I}(\mathbf{s},\boldsymbol{\tau})\), we can consider the integral
\[I_{j}(\phi,f_{\mathbf{s}}):=\int_{\operatorname{GL}_{m+n}(F)\setminus \operatorname{GL}_{m+n}(\mathbb{A})}\phi_{Y_{j},\psi_{j}}(h)E(h,f_{\mathbf{s}})dh.\]
Similarly, we can also consider \(I_{j}(\phi,M_{w_{m,n}}(f_{\mathbf{s}}))\).
**Proposition 2.2**.: _The integral \(I_{j}(\phi,f_{\mathbf{s}})\) converges absolutely and uniformly in vertical strips in \(\mathbb{C}\) for each variable \(s_{1},s_{2}\) in \(\mathbf{s}=(s_{1},s_{2})\), away from the poles of the Eisenstein series. Moreover, away from the poles of \(E(h,f_{\mathbf{s}})\) and \(E(h,M_{w_{m,n}}(f_{\mathbf{s}}))\), we have_
\[I_{j}(\phi,f_{\mathbf{s}})=I_{j}(\phi,M_{w_{m,n}}(f_{\mathbf{s}})).\]
Proof.: The second statement follows from the functional equation of the Eisenstein series. For the first statement, it is sufficient to show that \(\phi_{Y_{j},\psi_{j}}\) is rapidly decreasing. The proof is similar to other situations appeared elsewhere, see [1, Lemma 2.1] for one example. We provide some details below following the same argument as in [1, Lemma 2.1].
Let \(\Omega\) be a compact subset of \(B_{m+n}(\mathbb{A})\). Let \(c\) be a real number with \(0<c<1\), and we define a set \(A_{c}\) as follows. We embed the positive real numbers diagonally in the archimedean part of \(\mathbb{A}^{\times}\), and \(1\) at the finite part of \(\mathbb{A}^{\times}\). Denote the image of this embedding by \(\mathbb{R}_{+,\mathbb{A}}\). Then \(A_{c}\) is the set of all \(\operatorname{diag}(t_{1},\ldots,t_{m+n})\), such that \(t_{i}\in\mathbb{R}_{+,\mathbb{A}}\) and \(t_{1}\geq ct_{2}\geq c^{2}t_{3}\geq\cdots\geq c^{m+n-1}t_{m+n}\geq c^{m+n}\). Then \(\mathcal{S}=\Omega A_{c}K_{\operatorname{GL}_{m+n}(\mathbb{A})}\) is a Siegel domain for \(\operatorname{GL}_{m+n}(\mathbb{A})\). Similarly, let \(\mathcal{S}^{\prime}=\Omega^{\prime}A^{\prime}_{c}K_{\operatorname{GL}_{l}( \mathbb{A})}\) be a Siegel domain for \(\operatorname{GL}_{l}(\mathbb{A})\), where \(\iota_{j}(\Omega)\subset\Omega^{\prime}\) is a compact subset of \(B_{l}(\mathbb{A})\) and \(A^{\prime}_{c}\) is similarly defined. We take \(c\) small enough and \(\Omega,\Omega^{\prime}\) large enough, so that \(\operatorname{GL}_{l}(\mathbb{A})=\operatorname{GL}_{l}(F)\mathcal{S}^{\prime}\), and \(\operatorname{GL}_{m+n}(\mathbb{A})=\operatorname{GL}_{m+n}(F)\mathcal{S}\). Now let \(h=\omega ak\in\mathcal{S}\), where \(\omega\in\Omega\), \(a=\operatorname{diag}(t_{1},\ldots,t_{m+n})\in A_{c}\), and \(k\in K_{\operatorname{GL}_{m+n}(\mathbb{A})}\). Associated to \(a\), we define
\[b=\operatorname{diag}(c^{j}t_{1},c^{j-1}t_{1},\ldots,ct_{1},I_{m},t_{m},I_{n}, c^{-1}t_{m+n},c^{-2}t_{m+n},\ldots,c^{-k}t_{m+n}).\]
Then \(b\iota_{j}(a)\in A^{\prime}_{c}\). Let \(\Omega^{\prime}_{b}=\Omega^{\prime}\cup\Omega^{\prime}\cdot b^{-1}\). For fixed \(a\in A_{c}\), \(\Omega^{\prime}_{b}\) is a compact subset of \(B_{l}(\mathbb{A})\) which contains \(\Omega^{\prime}\). Let \(\mathcal{S}^{\prime}_{b}=\Omega^{\prime}_{b}A^{\prime}_{c}K_{\operatorname{GL} _{l}(\mathbb{A})}\). This is a Siegel domain for \(\operatorname{GL}_{l}(\mathbb{A})\), which contains \(\mathcal{S}^{\prime}\). Thus, \(h=(\omega b^{-1})(ba)k\in\mathcal{S}^{\prime}_{b}\). We fix a compact subset \(Y_{j,0}\subset Y_{j}(\mathbb{A})\) such that \(Y_{j}(\mathbb{A})=Y_{j}(F)Y_{j,0}\). We may assume that \(Y_{j,0}\subset\Omega^{\prime}\). Then we have
\[|\phi_{Y_{j},\psi_{j}}(h)|\leq\int_{Y_{j,0}}|\phi(y\iota_{j}(\omega b^{-1}(ba) k))|dy. \tag{2.1}\]
Let \(N>0\) be given. Since \(\phi\) is rapidly decreasing in \(\mathcal{S}^{\prime}\), there exists a constant \(c_{0}\) such that for all \(\omega^{\prime}\in\Omega^{\prime}\), \(a^{\prime}\in A^{\prime}_{c}\), and \(k^{\prime}\in K_{\operatorname{GL}_{l}(\mathbb{A})}\), we have
\[|\phi(\omega^{\prime}a^{\prime}k^{\prime})|\leq c_{0}\|a^{\prime}\|^{-N}. \tag{2.2}\]
Here, \(\|\cdot\|\) is the norm on \(\operatorname{GL}_{l}(\mathbb{A})\) defined by
\[\|g\|=\prod_{v}\|g_{v}\|_{v}\]
where \(g\in\operatorname{GL}_{l}(\mathbb{A})\), \(v\) runs over all places of \(F\), and \(\|g_{v}\|_{v}\) is the local norm on \(\operatorname{GL}_{l}(F_{v})\) defined by
\[\|g_{v}\|_{v}=\max\{|(g_{v})_{i,j}|_{v},|(g_{v}^{-1})_{i,j}|_{v}:1\leq i,j\leq l\}.\]
When passing from the Siegel domain \(\mathcal{S}^{\prime}\) to the Siegel domain \(\mathcal{S}^{\prime}_{b}\), the constant \(c_{0}\) in (2.2) can be replaced by \(c_{0}\|b^{-1}\|^{N_{0}}=c_{0}\|b\|^{N_{0}}\), for some positive number \(N_{0}\), which does not depend on \(b\) (see [13, Sec. I.2.10, I.2.11]). Thus, in the integrand in (2.1), we have
\[|\phi(y\iota_{j}(\omega b^{-1}(ba)k))|\leq c_{0}\|b\|^{N_{0}}\|b\iota_{j}(a)\|^{-N}.\]
Notice that
\[\begin{aligned}\|b\|=&\max\{c^{j}t_{1},c^{j-1}t_{1},\ldots,ct_{1},t_{m},c^{-1}t_{m+n},c^{-2}t_{m+n},\ldots,c^{-k}t_{m+n},\\ &\qquad c^{-j}t_{1}^{-1},c^{-j+1}t_{1}^{-1},\ldots,c^{-1}t_{1}^{-1},t_{m}^{-1},ct_{m+n}^{-1},c^{2}t_{m+n}^{-1},\ldots,c^{k}t_{m+n}^{-1}\}\\ =&\max\{ct_{1},c^{-j}t_{1}^{-1},t_{m},t_{m}^{-1},c^{-k}t_{m+n},ct_{m+n}^{-1}\}\\ \leq&\ c^{-\max\{j,k\}}\|a\|\end{aligned}\]
and
\[\|b\iota_{j}(a)\|= \max\{c^{j}t_{1},c^{j-1}t_{1},\ldots,ct_{1},t_{1},t_{2},\ldots,t_{ m+n},c^{-1}t_{m+n},c^{-2}t_{m+n},\ldots,c^{-k}t_{m+n},\] \[c^{-j}t_{1}^{-1},c^{-j+1}t_{1}^{-1},\ldots,c^{-1}t_{1}^{-1},t_{1}^ {-1},\ldots,t_{m+n}^{-1},ct_{m+n}^{-1},c^{2}t_{m+n}^{-1},\ldots,c^{k}t_{m+n}^{-1}\}\] \[\geq \max\{t_{1},t_{2},\ldots,t_{m+n},t_{1}^{-1},t_{2}^{-1},\ldots,t_{ m+n}^{-1}\}\] \[= \|a\|.\]
We conclude that
\[|\phi(y\iota_{j}(\omega ak))|\leq c_{1}\|a\|^{N_{0}-N} \tag{2.3}\]
where \(c_{1}\) is a positive constant depending on \(c\) and \(c_{0}\). Since \(Y_{j,0}\) is compact, we combine (2.1) and (2.3) to conclude that \(\phi_{Y_{j},\psi_{j}}\) is rapidly decreasing in \(\mathcal{S}\). This completes the proof.
### Unfolding of the global integral \(I_{j}(\phi,f_{\mathbf{s}})\)
For integers \(m,n\geq 0\), denote
\[Z_{m,n}=\left\{\begin{pmatrix}I_{m}&0&z\\ &1&0\\ &&I_{n}\end{pmatrix}:z\in\operatorname{Mat}_{m\times n}\right\}\subset \operatorname{GL}_{m+n+1}.\]
For a cusp form \(\phi\) on \(\operatorname{GL}_{m+n+1}(F)\backslash\operatorname{GL}_{m+n+1}(\mathbb{A})\), we define its constant term along \(Z_{m,n}\) by
\[\phi_{Z_{m,n}}(g)=\int_{Z_{m,n}(F)\backslash Z_{m,n}(\mathbb{A})}\phi\left( zg\right)dz.\]
We have the following expansion of \(\phi_{Z_{m,n}}\).
**Lemma 2.3**.: _For \(\phi\in\mathcal{A}_{0}(\operatorname{GL}_{m+n+1})\), the space of cusp forms on \(\operatorname{GL}_{m+n+1}(F)\backslash\operatorname{GL}_{m+n+1}(\mathbb{A})\), we have_
\[\phi_{Z_{m,n}}(g)=\sum_{\begin{subarray}{c}\gamma_{1}\in N_{m}(F)\backslash \operatorname{GL}_{m}(F),\\ \gamma_{2}\in N_{n}(F)\backslash\operatorname{GL}_{n}(F)\end{subarray}}W_{ \phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}g\right),\]
_where \(W_{\phi}^{\psi}\) is the \(\psi\)-Whittaker function of \(\phi\)._
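For instance, when \(m=n=1\), so that the group is \(\operatorname{GL}_{3}\), the expansion reads \(\phi_{Z_{1,1}}(g)=\sum_{\gamma_{1},\gamma_{2}\in F^{\times}}W_{\phi}^{\psi}\left(\operatorname{diag}(\gamma_{1},1,\gamma_{2})g\right)\), since \(N_{1}\) is trivial.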
Note that when \(n=0\), the above expansion is just the usual Fourier expansion of cusp forms, due to Piatetski-Shapiro [10] and Shalika [11]. On the other hand, the above version of the expansion is an easy consequence of the result of Piatetski-Shapiro and Shalika. We give a sketch of the proof below.
Proof.: Let
\[Q_{m}=\left\{\begin{pmatrix}g_{1}&x\\ &1\end{pmatrix}:g_{1}\in\operatorname{GL}_{m},x\in\operatorname{Mat}_{m\times 1 }\right\}\]
be the usual mirabolic subgroup of \(\operatorname{GL}_{m+1}\). We consider the function \(\phi_{1}\) on \(Q_{m}(F)\backslash Q_{m}(\mathbb{A})\) defined by
\[\phi_{1}\left(\begin{pmatrix}g_{1}&x\\ &1\end{pmatrix}\right)=\phi_{Z_{m,n}}\left(\begin{pmatrix}g_{1}&x\\ &1&\\ &&I_{n}\end{pmatrix}g\right).\]
Then \(\phi_{1}\) is a cuspidal automorphic form on \(Q_{m}(F)\backslash Q_{m}(\mathbb{A})\) in the sense that for any parabolic subgroup \(P=MU\) of \(Q_{m}\) with unipotent subgroup \(U\), we have
\[\int_{U(F)\backslash U(\mathbb{A})}\phi_{1}(uq)du=0,\quad\forall q\in Q_{m}( \mathbb{A}).\]
This can be checked easily using cuspidality of \(\phi\), see [14, Lemma 2.2] for a similar situation. Thus by the Fourier expansion for \(\phi_{1}\) we get that
\[\phi_{1}(I_{m+1})=\sum_{\gamma_{1}\in N_{m}(F)\backslash\operatorname{GL}_{m}(F)}W_{\phi_{1}}^{\psi}\left(\begin{pmatrix}\gamma_{1}&\\ &1\end{pmatrix}\right),\]
where \(W_{\phi_{1}}^{\psi}\) is the standard \(\psi\)-Whittaker function of \(\phi_{1}\). Plugging in the definitions, we get that
\[\phi_{Z_{m,n}}(g)=\sum_{\gamma_{1}\in N_{m}(F)\backslash\operatorname{GL}_{m}(F)}\int\phi\left(\begin{pmatrix}u&x&z\\ &1&\\ &&I_{n}\end{pmatrix}\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&I_{n}\end{pmatrix}g\right)\psi^{-1}(u)\psi^{-1}(x_{m})dudxdz,\]
where \(u=(u_{ij})\in N_{m}(\mathbb{A})\), \(\psi^{-1}(u)=\psi^{-1}(\sum_{i}u_{i,i+1})\) and \(x_{m}\) is the last component of \(x\). Similarly, we consider the mirabolic subgroup \(Q^{\prime}_{n}\) of \(\mathrm{GL}_{n+1}\) of the form
\[Q^{\prime}_{n}=\left\{\begin{pmatrix}1&y\\ 0&g_{2}\end{pmatrix},y\in\mathrm{Mat}_{1\times n},g_{2}\in\mathrm{GL}_{n} \right\}.\]
For fixed \(\gamma_{1}\) and \(g\), we consider the function \(\phi_{2}\) on \(Q^{\prime}_{n}(F)\backslash Q^{\prime}_{n}(\mathbb{A})\) defined by
\[\phi_{2}\left(\begin{pmatrix}1&y\\ 0&g_{2}\end{pmatrix}\right)=\int\phi\left(\begin{pmatrix}u&x&z\\ &1&y\\ &&g_{2}\end{pmatrix}\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&I_{n}\end{pmatrix}g\right)\psi^{-1}(u)\psi^{-1}(x_{m})dudxdz.\]
Again, \(\phi_{2}\) is a cusp form on \(Q^{\prime}_{n}(F)\backslash Q^{\prime}_{n}(\mathbb{A})\). By a slight variant of the Fourier expansion, see for example [1, \S 1, Proposition], we have
\[\phi_{2}(I_{n+1})=\sum_{\gamma_{2}\in N_{n}(F)\backslash\mathrm{GL}_{n}(F)}W _{\phi_{2}}^{\psi}\left(\begin{pmatrix}1&\\ &\gamma_{2}\end{pmatrix}\right).\]
Note that
\[\begin{aligned}W_{\phi_{2}}^{\psi}\left(\begin{pmatrix}1&\\ &\gamma_{2}\end{pmatrix}\right)&=\int\phi\left(\begin{pmatrix}u&x&z\\ &1&y\\ &&v\end{pmatrix}\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}g\right)\psi^{-1}(uv)\psi^{-1}(x_{m}+y_{1})dxdydudv\\ &=W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}g\right),\end{aligned}\]
where \(y_{1}\) in the first integral is the first component of \(y\). The result follows.
**Theorem 2.4**.: _The integral \(I_{j}(\phi,f_{\mathbf{s}})\) is Eulerian. More precisely, in the region of absolute convergence, we have_
\[I_{j}(\phi,f_{\mathbf{s}})=\int_{N_{m+n}(\mathbb{A})\backslash\operatorname{GL}_{m+n}(\mathbb{A})}\int_{\overline{U}^{j,m,n}(\mathbb{A})}W_{\phi}^{\psi}\left(\overline{u}\,\eta_{j}\iota_{j}(h)\right)\xi_{f_{\mathbf{s}}}^{\psi^{-1}}(h)d\overline{u}dh,\]
_where_
\[\overline{U}^{j,m,n}=\left\{\begin{pmatrix}I_{m}&&&&\\ x&I_{j}&&&\\ &&1&&\\ &&&I_{k}&\\ &&&y&I_{n}\end{pmatrix}:x\in\operatorname{Mat}_{j\times m},\ y\in\operatorname{Mat}_{n\times k}\right\},\]
_and_
\[\xi_{f_{\mathbf{s}}}^{\psi^{-1}}(h)=\int_{N_{m}(F)\backslash N_{m}(\mathbb{A})\times N_{n}(F)\backslash N_{n}(\mathbb{A})}f_{\mathbf{s}}\left(\begin{pmatrix}u_{1}&\\ &u_{2}\end{pmatrix}h\right)\psi(u_{1})\psi(u_{2})du_{1}du_{2}.\]
Proof.: Note that \(f_{\mathbf{s}}\in\operatorname{I}(\mathbf{s},\boldsymbol{\tau})\) is left invariant under \(N_{m,n}(\mathbb{A})\). Thus we have
\[\begin{aligned}I_{j}(\phi,f_{\mathbf{s}})&=\int_{P_{m,n}(F)\backslash\operatorname{GL}_{m+n}(\mathbb{A})}\phi_{Y_{j},\psi_{j}}(h)f_{\mathbf{s}}(h)dh\\ &=\int_{M_{m,n}(F)N_{m,n}(\mathbb{A})\backslash\operatorname{GL}_{m+n}(\mathbb{A})}\int_{N_{m,n}(F)\backslash N_{m,n}(\mathbb{A})}\phi_{Y_{j},\psi_{j}}(uh)f_{\mathbf{s}}(h)dudh\\ &=\int_{M_{m,n}(F)N_{m,n}(\mathbb{A})\backslash\operatorname{GL}_{m+n}(\mathbb{A})}\phi_{Y_{j},\psi_{j},N_{m,n}}(h)f_{\mathbf{s}}(h)dh,\end{aligned} \tag{2.4}\]
where
\[\phi_{Y_{j},\psi_{j},N_{m,n}}(h) =\int_{[N_{m,n}]}\phi_{Y_{j},\psi_{j}}(uh)du\] \[=\int_{[Y_{j}]\times[N_{m,n}]}\phi(y_{\ell}(uh))\psi_{j}^{-1}(y)dudy. \tag{2.5}\]
For
\[y=\begin{pmatrix}v_{1}&x_{1}&x_{2}&x_{3}&z\\ &I_{m}&&&y_{3}\\ &&1&&y_{2}\\ &&&I_{n}&y_{1}\\ &&&v_{2}\end{pmatrix}\in Y_{j}(\mathbb{A}),\quad u=\begin{pmatrix}I_{m}&t\\ &I_{n}\end{pmatrix}\in N_{m,n}(\mathbb{A}), \tag{2.6}\]
we have
\[\eta_{j}y_{\ell}{}_{j}(u)\eta_{j}^{-1}=\begin{pmatrix}I_{m}&0&0&y_{3}&t\\ x_{1}&v_{1}&x_{2}&z&x_{3}\\ 0&0&1&y_{2}&0\\ 0&0&0&v_{2}&0\\ 0&0&0&y_{1}&I_{n}\end{pmatrix}, \tag{2.7}\]
where \(v_{1}\in[N_{j}],v_{2}\in[N_{k}],(x_{1},x_{2},x_{3})\in[\text{Mat}_{j\times(m+n +1)}],z\in[\text{Mat}_{j\times k}],(y_{3},y_{2},y_{1})^{t}\in[\text{Mat}_{(m+n+ 1)\times k}],t\in[\text{Mat}_{m\times n}]\). Since \(\phi\) is left \(\text{GL}_{l}(F)\)-invariant and \(\eta_{j,m,n}\in\text{GL}_{l}(F)\), we have
\[\phi_{Y_{j},\psi_{j},N_{m,n}}(h)=\int_{[Y_{j}]\times[\iota_{j}(N_{m,n})]}\phi\left(\begin{pmatrix}I_{m}&0&0&y_{3}&t\\ x_{1}&v_{1}&x_{2}&z&x_{3}\\ 0&0&1&y_{2}&0\\ 0&0&0&v_{2}&0\\ 0&0&0&y_{1}&I_{n}\end{pmatrix}\eta_{j}\iota_{j}(h)\right)\psi_{j}^{-1}(y)dydu. \tag{2.8}\]
Write
\[Z=\begin{pmatrix}y_{3}&t\\ z&x_{3}\end{pmatrix}\in\text{Mat}_{(m+j)\times(n+k)}(\mathbb{A}).\]
In the right side integral of (2.8), there is an inner integral
\[\int_{[\text{Mat}_{(m+j)\times(n+k)}]}\phi\left(\begin{pmatrix}I_{m+j}&&Z\\ &1&&\\ &&I_{n+k}\end{pmatrix}g\right)dZ,\]
which is
\[\sum_{\begin{subarray}{c}\gamma_{1}\in N_{m+j}(F)\backslash\text{GL}_{m+j}(F )\\ \gamma_{2}\in N_{n+k}(F)\backslash\text{GL}_{m+k}(F)\end{subarray}}W_{\phi}^{ \psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}g\right) \tag{2.9}\]
by Lemma 2.3. Plugging (2.9) into (2.8), we get
\[\phi_{Y_{j},\psi_{j},N_{m,n}}(h)=\sum_{\gamma_{1},\gamma_{2}}\int W_{\phi}^{ \psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}\begin{pmatrix}I_{m}&0&0&0&0\\ x_{1}&v_{1}&x_{2}&0&0\\ 0&0&1&y_{2}&0\\ 0&0&0&v_{2}&0\\ 0&0&0&y_{1}&I_{n}\end{pmatrix}\eta_{j}\iota_{j}(h)\right)\psi_{j}^{-1}(y)dy. \tag{2.10}\]
To simplify the above integral (2.10), we consider its inner integral with respect to \(x_{2}=[x^{1},\dots,x^{j}]\in[\text{Mat}_{j\times 1}]\) first, which is
\[\int_{(F\backslash\mathbb{A})^{j}}W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{ 1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}\begin{pmatrix}I_{m}&0&0&0&0\\ 0&I_{j}&x_{2}&0&0\\ 0&0&1&0&0\\ 0&0&0&I_{k}&0\\ 0&0&0&0&I_{n}\end{pmatrix}\begin{pmatrix}I_{m}&0&0&0&0\\ x_{1}&v_{1}&0&0&0\\ 0&0&1&y_{2}&0\\ 0&0&0&v_{2}&0\\ 0&0&0&y_{1}&I_{n}\end{pmatrix}\eta_{j}\iota_{j}(h)\right)\psi^{-1}(x^{j})dx_{2}.\]
Write \(\gamma_{1}=(\gamma_{pq})_{1\leq p,q\leq m+j}\), then we have
\[\gamma_{1}\begin{pmatrix}0\\ x_{2}\end{pmatrix}=\begin{pmatrix}*\\ *\\ \vdots\\ \gamma_{m+j,m+1}x^{1}+\gamma_{m+j,m+2}x^{2}+\cdots+\gamma_{m+j,m+j}x^{j} \end{pmatrix}.\]
Thus we get
\[W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}\begin{pmatrix}I_{m}&0&0&0&0\\ 0&I_{j}&x_{2}&0&0\\ 0&0&1&0&0\\ 0&0&0&I_{k}&0\\ 0&0&0&0&I_{n}\end{pmatrix}g\right)= \psi(\gamma_{m+j,m+1}x^{1}+\cdots+\gamma_{m+j,m+j}x^{j})\] \[\cdot W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}g\right),\]
with
\[g=\begin{pmatrix}I_{m}&0&0&0&0\\ x_{1}&v_{1}&0&0&0\\ 0&0&1&y_{2}&0\\ 0&0&0&v_{2}&0\\ 0&0&0&y_{1}&I_{n}\end{pmatrix}\eta_{j}\iota_{j}(h).\]
Thus the inner integral of (2.10) with respect to \(x_{2}\) is
\[\int_{(F\backslash\mathbb{A})^{j}}\psi(\gamma_{m+j,m+1}x^{1}+\cdots+(\gamma_{ m+j,m+j}-1)x^{j})dx^{1}\ldots dx^{j}W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{ 1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}g\right).\]
The above integral over \(x^{1},\ldots,x^{j}\) is \(1\) if \(\gamma_{m+j,m+1}=\cdots=\gamma_{m+j,m+j-1}=0\) and \(\gamma_{m+j,m+j}=1\), and is zero otherwise. Note that if \(\gamma_{m+j,m+1}=\cdots=\gamma_{m+j,m+j-1}=0\), as an element of the coset \(N_{m+j}(F)\backslash\mathrm{GL}_{m+j}(F)\), we can write
\[\gamma_{1}=\begin{pmatrix}\gamma_{1}^{\prime}&\\ &1\end{pmatrix}\begin{pmatrix}I_{m}&&\\ &I_{j-1}&\\ \xi&&1\end{pmatrix},\]
with \(\gamma_{1}^{\prime}\in N_{m+j}(F)\backslash\mathrm{GL}_{m+j}(F),\xi\in \mathrm{Mat}_{1\times m}(F).\) By changing the summation notation, integral (2.10) becomes
\[\begin{aligned}\phi_{Y_{j},\psi_{j},N_{m,n}}(h)=&\sum_{\begin{subarray}{c}\gamma_{1}\in N_{m+j-1}(F)\backslash\operatorname{GL}_{m+j-1}(F)\\ \gamma_{2}\in N_{n+k}(F)\backslash\operatorname{GL}_{n+k}(F)\end{subarray}}\sum_{\xi\in F^{m}}\\ &\int W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &I_{2}&\\ &&\gamma_{2}\end{pmatrix}\begin{pmatrix}I_{m}&&&&\\ &I_{j-1}&&&\\ \xi&&1&&\\ &&&1&\\ &&&&I_{n+k}\end{pmatrix}\begin{pmatrix}I_{m}&0&0&0&0\\ x_{1}&v_{1}&0&0&0\\ 0&0&1&y_{2}&0\\ 0&0&0&v_{2}&0\\ 0&0&0&y_{1}&I_{n}\end{pmatrix}\eta_{j}\iota_{j}(h)\right)\\ &\cdot\psi^{-1}(v_{1})\psi^{-1}(v_{2})\psi^{-1}(y^{1})dx_{1}dy_{1}dy_{2}dv_{1}dv_{2}. \tag{2.11}\end{aligned}\]
Here \(y^{1}\) is the first component of the \(y_{2}\in\operatorname{Mat}_{1\times k}\). In (2.11), the summation over \(\xi\) could be absorbed into the integral over the last row of \(x_{1}\). Thus we get
\[\begin{aligned}\phi_{Y_{j},\psi_{j},N_{m,n}}(h)=&\sum_{\begin{subarray}{c}\gamma_{1}\in N_{m+j-1}(F)\backslash\operatorname{GL}_{m+j-1}(F)\\ \gamma_{2}\in N_{n+k}(F)\backslash\operatorname{GL}_{n+k}(F)\end{subarray}}\int_{(F\backslash\mathbb{A})^{*}}\int_{\mathbb{A}^{m}}\\ &\int W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &I_{2}&\\ &&\gamma_{2}\end{pmatrix}\begin{pmatrix}I_{m}&0&0&0&0&0\\ x_{1}^{\prime}&v_{1}^{\prime}&p&0&0&0\\ (x_{j1},\ldots,x_{jm})&0&1&0&0&0\\ 0&0&0&1&y_{2}&0\\ 0&0&0&0&v_{2}&0\\ 0&0&0&0&y_{1}&I_{n}\end{pmatrix}\eta_{j}\iota_{j}(h)\right)\\ &\cdot\psi^{-1}(v_{1})\psi^{-1}(v_{2})\psi^{-1}(y^{1})(\prod_{t=1}^{m}dx_{jt})dx_{1}^{\prime}dy_{1}dy_{2}dv_{1}dv_{2}, \tag{2.12}\end{aligned}\]
where we wrote \(x_{1}=\begin{pmatrix}x_{1}^{\prime}\\ (x_{j1},\ldots,x_{jm})\end{pmatrix}\), \(v_{1}=\begin{pmatrix}v_{1}^{\prime}&p\\ &1\end{pmatrix}\) with \(p\in[\operatorname{Mat}_{(j-1)\times 1}]\), and \(*\) in \((F\backslash\mathbb{A})^{*}\) denotes the number of variables other than the part in \((x_{j1},\ldots,x_{jm})\). We next compute the inner integral over the \(p\)-part, which is similar as above. Note that \(\psi(v_{1})=\psi(v_{1}^{\prime})\psi(p^{j-1})\), where \(p=(p^{1},\ldots,p^{j-1})^{t}\). For \(\gamma_{1}\in\operatorname{GL}_{m+j-1}(F)\), and \(p=(p^{1},\ldots,p^{j-1})^{t}\) we have
\[\gamma_{1}\begin{pmatrix}0_{m\times 1}\\ p\end{pmatrix}=\begin{pmatrix}*\\ &\vdots\\ *\\ \gamma_{m+j-1,m+1}p^{1}+\ldots\gamma_{m+j-1,m+j-1}p^{j-1}\end{pmatrix}.\]
Thus the inner integral over \(p\) in (2.12) is
\[\int_{(F\backslash\mathbb{A})^{j-1}}\psi(\gamma_{m+j-1,m+1}p^{1}+\cdots+( \gamma_{m+j-1,m+j-1}-1)p^{j-1})\prod_{t}dp^{t}W_{\phi}^{\psi}\left(\begin{pmatrix} \gamma_{1}&\\ &1&\\ &&\gamma_{2}\end{pmatrix}g\right),\]
for certain appropriate \(g\) which should be self-evident from the context. The above integral is \(1\) if \(\gamma_{m+j-1,m+1}=\cdots=\gamma_{m+j-1,m+j-2}=0\) and \(\gamma_{m+j-1,m+j-1}=1\), and is zero otherwise. In this case, we can write that
\[\gamma_{1}=\begin{pmatrix}\gamma_{1}^{\prime}\\ &1\end{pmatrix}\begin{pmatrix}I_{m}&&\\ &I_{j-2}&\\ &\xi&1\end{pmatrix}\]
as an element in the coset \(N_{m+j-1}(F)\backslash\operatorname{GL}_{m+j-1}(F)\), where \(\gamma_{1}^{\prime}\in N_{m+j-2}(F)\backslash\operatorname{GL}_{m+j-2}(F),\xi \in F^{m}\). Similarly as above, by absorbing the summation over \(\xi\), we get that
\[\phi_{Y_{j},\psi_{j},N_{m,n}}(h)= \sum_{\begin{subarray}{c}\gamma_{1}\in N_{m+j-2}(F)\backslash \operatorname{GL}_{m+j-2}(F)\\ \gamma_{2}\in N_{n+k}(F)\backslash\operatorname{GL}_{n+k}(F)\end{subarray}} \int_{(F\backslash\mathbb{A})^{*}}\int_{\mathbb{A}^{2m}}\] \[\int W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&\\ &I_{3}\\ &&\gamma_{2}\end{pmatrix}\begin{pmatrix}I_{m}&0&0&0&0&0&0\\ x_{1}^{\prime\prime}&v_{1}^{\prime\prime}&p^{\prime}&0&0&0&0\\ (x_{j-1,1},\ldots,x_{j-1,m})&0&1&0&0&0&0\\ (x_{j1},\ldots,x_{jm})&0&0&1&0&0&0\\ 0&0&0&0&1&y_{2}&0\\ 0&0&0&0&0&v_{2}&0\\ 0&0&0&0&0&y_{1}&I_{n}\end{pmatrix}\eta_{j}\iota_{j}(h)\right)\] \[\cdot\psi^{-1}(v_{1}^{\prime})\psi^{-1}(v_{2})\psi^{-1}(y^{1})( \prod_{i=j-1}^{j}\prod_{t=1}^{m}dx_{it})dx_{1}^{\prime}dy_{1}dy_{2}dv_{1}^{ \prime}dv_{2},\]
where \(v_{1}^{\prime}=\begin{pmatrix}v_{1}^{\prime\prime}&p^{\prime}\\ &1\end{pmatrix}\). An induction argument shows that
\[\begin{aligned}\phi_{Y_{j},\psi_{j},N_{m,n}}(h)=&\sum_{\begin{subarray}{c}\gamma_{1}\in N_{m}(F)\backslash\operatorname{GL}_{m}(F)\\ \gamma_{2}\in N_{n+k}(F)\backslash\operatorname{GL}_{n+k}(F)\end{subarray}}\int_{(F\backslash\mathbb{A})^{*}}\int_{\operatorname{Mat}_{j\times m}(\mathbb{A})}\\ &W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &I_{j+1}&\\ &&\gamma_{2}\end{pmatrix}\begin{pmatrix}I_{m}&&&&\\ x&I_{j}&&&\\ &&1&y_{2}&\\ &&&v_{2}&\\ &&&y_{1}&I_{n}\end{pmatrix}\eta_{j}\iota_{j}(h)\right)\\ &\cdot\psi^{-1}(v_{2})\psi^{-1}(y^{1})dxdy_{1}dy_{2}dv_{2}.\end{aligned}\]
The integral over \(y_{1},y_{2},v_{2}\) can be done similarly and we have
\[\begin{aligned}\phi_{Y_{j},\psi_{j},N_{m,n}}(h)=&\sum_{\begin{subarray}{c}\gamma_{1}\in N_{m}(F)\backslash\operatorname{GL}_{m}(F)\\ \gamma_{2}\in N_{n}(F)\backslash\operatorname{GL}_{n}(F)\end{subarray}}\int_{\operatorname{Mat}_{j\times m}(\mathbb{A})}\int_{\operatorname{Mat}_{n\times k}(\mathbb{A})}\\ &W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &I_{j+k+1}&\\ &&\gamma_{2}\end{pmatrix}\begin{pmatrix}I_{m}&&&&\\ x&I_{j}&&&\\ &&1&&\\ &&&I_{k}&\\ &&&y&I_{n}\end{pmatrix}\eta_{j}\iota_{j}(h)\right)dydx\\ =&\sum_{\begin{subarray}{c}\gamma_{1}\in N_{m}(F)\backslash\operatorname{GL}_{m}(F)\\ \gamma_{2}\in N_{n}(F)\backslash\operatorname{GL}_{n}(F)\end{subarray}}\int_{\operatorname{Mat}_{j\times m}(\mathbb{A})}\int_{\operatorname{Mat}_{n\times k}(\mathbb{A})}\\ &W_{\phi}^{\psi}\left(\begin{pmatrix}I_{m}&&&&\\ x&I_{j}&&&\\ &&1&&\\ &&&I_{k}&\\ &&&y&I_{n}\end{pmatrix}\eta_{j}\iota_{j}\left(\begin{pmatrix}\gamma_{1}&\\ &\gamma_{2}\end{pmatrix}h\right)\right)dydx.\end{aligned}\]
We now plug the above formula into (2.4) to get
\[\begin{split} I_{j}(\phi,f_{\mathbf{s}})=& \int_{M_{m,n}(F)N_{m,n}(\mathbb{A})\setminus\mathrm{GL}_{m+n}(\mathbb{A})} \phi_{Y_{j},\psi_{j},N_{m,n}}(h)f_{\mathbf{s}}(h)dh\\ =&\int_{(N_{m}(F)\times N_{n}(F))N_{m,n}(\mathbb{A})\setminus \mathrm{GL}_{m+n}(\mathbb{A})}\int_{\overline{U}^{j,m,n}(\mathbb{A})}W_{\phi}^{ \psi}\left(\overline{u}\eta_{j}\iota_{j}(h)\right)f_{\mathbf{s}}(h)d\overline {u}dh.\end{split} \tag{2.13}\]
In order to justify this step, we need to show that the double integral in the second line of (2.13) converges absolutely. This will be done in Subsection 2.5. From (2.13), we obtain
\[\begin{split} I_{j}(\phi,f_{\mathbf{s}})=& \int_{N_{m+n}(\mathbb{A})\setminus\mathrm{GL}_{m+n}(\mathbb{A})}\int_{V^{j,m,n}(\mathbb{A})}\int_{N_{m}(F)\setminus N_{m}(\mathbb{A})}\int_{N_{n}(F) \setminus N_{n}(\mathbb{A})}W_{\phi}^{\psi}\left(\overline{u}\eta_{j}\iota_{j }\left(\begin{pmatrix}u_{1}&\\ &u_{2}\end{pmatrix}h\right)\right)\\ &\cdot f_{\mathbf{s}}\left(\begin{pmatrix}u_{1}&\\ &u_{2}\end{pmatrix}h\right)du_{2}du_{1}dydxdh\\ =&\int_{N_{m+n}(\mathbb{A})\setminus\mathrm{GL}_{m+n}(\mathbb{A})} \int_{\overline{U}^{j,m,n}(\mathbb{A})}\int_{N_{m}(F)\setminus N_{m}(\mathbb{A })}\int_{N_{n}(F)\setminus N_{n}(\mathbb{A})}W_{\phi}^{\psi}\left(\overline{u} \eta_{j}\iota_{j}\left(\begin{pmatrix}u_{1}&\\ &u_{2}\end{pmatrix}h\right)\right)\\ &\cdot f_{\mathbf{s}}\left(\begin{pmatrix}u_{1}&\\ &u_{2}\end{pmatrix}h\right)du_{2}du_{1}dydxdh\\ =&\int_{N_{m+n}(\mathbb{A})\setminus\mathrm{GL}_{m+n}(\mathbb{A})} \int_{\overline{U}^{j,m,n}(\mathbb{A})}W_{\phi}^{\psi}\left(\overline{u}\eta_ {j}\iota_{j}(h)\right)\xi_{f_{\mathbf{s}}}^{\psi^{-1}}(h)d\overline{u}dh. \end{split}\]
The result follows.
### Unfolding of \(I_{j}(\phi,M_{w_{m,n}}(f_{\mathbf{s}}))\)
**Theorem 2.5**.: _The integral \(I_{j}(\phi,M_{w_{m,n}}(f_{\mathbf{s}}))\) is Eulerian. More precisely, in the region of absolute convergence, we have_
\[I_{j}(\phi,\widetilde{f}_{\mathbf{s}})=\int_{N_{n+m}(\mathbb{A})\setminus \operatorname{GL}_{n+m}(\mathbb{A})}\int_{\overline{V}^{j,m,n}}W_{\phi}^{\psi} \left(\overline{u}\gamma_{n,m}\begin{pmatrix}h&&\\ &I_{l-m-n}\end{pmatrix}s_{j,m,n}\right)\xi_{\widetilde{f}_{\mathbf{s}}}^{\psi ^{-1}}(h)d\overline{u}dh,\]
_where_
\[\begin{aligned}\widetilde{f}_{\mathbf{s}}&=M_{w_{m,n}}(f_{\mathbf{s}}),\\ \overline{V}^{j,m,n}&=\left\{\begin{pmatrix}I_{n}&0&0&0&0\\ x&I_{j}&0&0&0\\ 0&0&1&0&0\\ 0&0&0&I_{k}&0\\ 0&0&0&y&I_{m}\end{pmatrix}:x\in\operatorname{Mat}_{j\times n},\ y\in\operatorname{Mat}_{m\times k}\right\}=\overline{U}^{j,n,m},\\ \gamma_{n,m}&=\begin{pmatrix}I_{n}&&\\ &&I_{l-m-n}\\ &I_{m}&\end{pmatrix},\\ \xi_{\widetilde{f}_{\mathbf{s}}}^{\psi^{-1}}(h)&=\int_{N_{n}(F)\setminus N_{n}(\mathbb{A})\times N_{m}(F)\setminus N_{m}(\mathbb{A})}\widetilde{f}_{\mathbf{s}}\left(\begin{pmatrix}u_{1}&\\ &u_{2}\end{pmatrix}h\right)\psi(u_{1})\psi(u_{2})du_{1}du_{2}.\end{aligned}\]
Notice that
\[\eta_{j,m,n}s_{j,m,n}^{-1}=\begin{pmatrix}I_{m}&&\\ &&I_{l-m-n}\\ &I_{n}&\end{pmatrix}=\gamma_{m,n}.\]
The proof is similar to the proof of Theorem 2.4. We give some details for completeness.
Proof.: In the following, we assume that \(m\geq n\). If \(m<n\), the matrix calculation performed below is slightly different, but the other parts of the proof go through and the result is the same. Note that \(\widetilde{f}_{\mathbf{s}}\in\operatorname{I}(1-\widehat{\mathbf{s}},\widehat{\boldsymbol{\tau}})\) is left invariant under \(N_{n,m}(\mathbb{A})\). Thus we have
\[I_{j}(\phi,\widetilde{f}_{\mathbf{s}}) =\int_{P_{n,m}(F)\setminus\operatorname{GL}_{n+m}(\mathbb{A})} \phi_{Y_{j},\psi_{j}}(h)\widetilde{f}_{\mathbf{s}}(h)dh\] \[=\int_{M_{n,m}(F)N_{n,m}(\mathbb{A})\setminus\operatorname{GL}_{ n+m}(\mathbb{A})}\int_{N_{n,m}(F)\setminus N_{n,m}(\mathbb{A})}\phi_{Y_{j}, \psi_{j}}(uh)\widetilde{f}_{\mathbf{s}}(h)dudh\] \[=\int_{M_{n,m}(F)N_{n,m}(\mathbb{A})\setminus\operatorname{GL}_{ n+m}(\mathbb{A})}\phi_{Y_{j},\psi_{j},N_{n,m}}(h)\widetilde{f}_{\mathbf{s}}(h)dh, \tag{2.14}\]
where
\[\phi_{Y_{j},\psi_{j},N_{n,m}}(h): =\int_{N_{n,m}(F)\setminus N_{n,m}(\mathbb{A})}\phi_{Y_{j},\psi _{j}}(uh)du\] \[=\int_{N_{n,m}(F)\setminus N_{n,m}(\mathbb{A})}\int_{Y_{j}(F) \setminus Y_{j}(\mathbb{A})}\phi(y_{j}(u)_{\ell_{j}}(h))\psi_{j}^{-1}(y)dydu.\]
Since \(\phi\) is left \(\operatorname{GL}_{l}(F)\)-invariant, we have
\[\phi(y_{j}(u)_{\ell_{j}}(h))=\phi\left(\gamma_{n,m}s_{j,m,n}y_{\ell_{j}}(u)s_{ j,m,n}^{-1}\gamma_{n,m}^{-1}\gamma_{n,m}\begin{pmatrix}h&&\\ &I_{l-m-n}\end{pmatrix}s_{j,m,n}\right).\]
Write
\[y=\begin{pmatrix}v_{1}&x_{1}&x_{1}^{\prime}&x_{2}&x_{3}&z\\ &I_{n}&&&&y_{3}\\ &&I_{m-n}&&&y_{3}^{\prime}\\ &&&1&&y_{2}\\ &&&&I_{n}&y_{1}^{\prime}\\ &&&&&v_{2}\end{pmatrix}\in Y_{j}(\mathbb{A}),\quad u=\begin{pmatrix}I_{n}&t_{1}&t_{2}\\ &I_{m-n}&\\ &&I_{n}\end{pmatrix}\in N_{n,m}(\mathbb{A}),\]
with \(v_{1}\in N_{j}(\mathbb{A}),v_{2}\in N_{k}(\mathbb{A})\) and other variables in appropriate matrices spaces. A matrix calculation shows that
\[\gamma_{n,m}s_{j,m,n}\,y\,\iota_{j}(u)\,s_{j,m,n}^{-1}\gamma_{n,m}^{-1}=\begin{pmatrix}I_{n}&0&0&y_{3}&t_{1}&t_{2}\\ x_{1}&v_{1}&x_{2}&z&x_{1}^{\prime}+x_{1}t_{1}&x_{3}+x_{1}t_{2}\\ 0&0&1&y_{2}&0&0\\ 0&0&0&v_{2}&0&0\\ 0&0&0&y_{3}^{\prime}&I_{m-n}&0\\ 0&0&0&y_{1}^{\prime}&0&I_{n}\end{pmatrix}.\]
Thus we get
\[\phi_{Y_{j},\psi_{j},N_{n,m}}(h)=\int_{[Y_{j}]\times[N_{n,m}]}\phi\left(\begin{pmatrix}I_{n}&0&0&y_{3}&t_{1}&t_{2}\\ x_{1}&v_{1}&x_{2}&z&x_{1}^{\prime}&x_{3}\\ 0&0&1&y_{2}&0&0\\ 0&0&0&v_{2}&0&0\\ 0&0&0&y_{3}^{\prime}&I_{m-n}&0\\ 0&0&0&y_{1}^{\prime}&0&I_{n}\end{pmatrix}\gamma_{n,m}\begin{pmatrix}h&\\ &I_{l-m-n}\end{pmatrix}s_{j}\right)\psi_{j}^{-1}(y)dydu.\]
Denote
\[Z=\begin{pmatrix}y_{3}&t_{1}&t_{2}\\ z&x_{1}^{\prime}&x_{3}\end{pmatrix}\in[\text{Mat}_{(n+j)\times(m+k)}].\]
Then inside the integral \(\phi_{Y_{j},\psi_{j},N_{n,m}}(h)\), there is an inner integral
\[\int_{[\operatorname{Mat}_{(n+j)\times(m+k)}]}\phi\left(\begin{pmatrix}I_{n+j}&&Z\\ &1&\\ &&I_{m+k}\end{pmatrix}g\right)dZ,\]
which, by Lemma 2.3, equals
\[\sum_{\begin{subarray}{c}\gamma_{1}\in N_{n+j}(F)\backslash\text{GL}_{n+j}(F) \\ \gamma_{2}\in N_{m+k}(F)\backslash\text{GL}_{m+k}(F)\end{subarray}}W_{\phi}^{ \psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}g\right).\]
Thus we get
\[\phi_{Y_{j},\psi_{j},N_{n,m}}(h)=\sum_{\begin{subarray}{c}\gamma_{1}\in N_{n+j}(F)\backslash\operatorname{GL}_{n+j}(F)\\ \gamma_{2}\in N_{m+k}(F)\backslash\operatorname{GL}_{m+k}(F)\end{subarray}}\int W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &1&\\ &&\gamma_{2}\end{pmatrix}\begin{pmatrix}I_{n}&0&0&0&0\\ x_{1}&v_{1}&x_{2}&0&0\\ 0&0&1&y_{2}&0\\ 0&0&0&v_{2}&0\\ 0&0&0&y_{1}&I_{m}\end{pmatrix}\gamma_{n,m}hs_{j}\right)\psi_{j}^{-1}(y)dy,\]
where \(y_{1}=\begin{pmatrix}y_{3}^{\prime}\\ y_{1}^{\prime}\end{pmatrix}\in[\text{Mat}_{m\times k}]\), and \(h=\begin{pmatrix}h&\\ &I_{l-m-n}\end{pmatrix}\). Note that the above formula is similar to (2.10). By the same method as in the proof of Theorem 2.4, we get that
\[\begin{aligned}\phi_{Y_{j},\psi_{j},N_{n,m}}(h)&=\sum_{\begin{subarray}{c}\gamma_{1}\in N_{n}(F)\backslash\operatorname{GL}_{n}(F)\\ \gamma_{2}\in N_{m}(F)\backslash\operatorname{GL}_{m}(F)\end{subarray}}\int_{\overline{V}^{j,m,n}(\mathbb{A})}W_{\phi}^{\psi}\left(\begin{pmatrix}\gamma_{1}&&\\ &I_{l-m-n}&\\ &&\gamma_{2}\end{pmatrix}\overline{v}\gamma_{n,m}hs_{j}\right)d\overline{v}\\ &=\sum_{\begin{subarray}{c}\gamma_{1}\in N_{n}(F)\backslash\operatorname{GL}_{n}(F)\\ \gamma_{2}\in N_{m}(F)\backslash\operatorname{GL}_{m}(F)\end{subarray}}\int_{\overline{V}^{j,m,n}(\mathbb{A})}W_{\phi}^{\psi}\left(\overline{v}\gamma_{n,m}\begin{pmatrix}\gamma_{1}&&\\ &\gamma_{2}&\\ &&I_{l-m-n}\end{pmatrix}hs_{j}\right)d\overline{v}.\end{aligned}\]
Plugging the above equation into (2.14), we get that
\[\begin{aligned}I_{j}(\phi,\widetilde{f}_{\mathbf{s}})&=\int_{(N_{n}(F)\times N_{m}(F))N_{n,m}(\mathbb{A})\backslash\operatorname{GL}_{n+m}(\mathbb{A})}\int_{\overline{V}^{j,m,n}(\mathbb{A})}W_{\phi}^{\psi}(\overline{v}\gamma_{n,m}hs_{j})\widetilde{f}_{\mathbf{s}}(h)d\overline{v}dh\\ &=\int_{N_{n+m}(\mathbb{A})\backslash\operatorname{GL}_{n+m}(\mathbb{A})}\int_{\overline{V}^{j,m,n}(\mathbb{A})}W_{\phi}^{\psi}(\overline{v}\gamma_{n,m}hs_{j})\\ &\quad\cdot\int_{[N_{n}]\times[N_{m}]}\widetilde{f}_{\mathbf{s}}\left(\begin{pmatrix}u_{1}&\\ &u_{2}\end{pmatrix}h\right)\psi(u_{1})\psi(u_{2})du_{1}du_{2}\,d\overline{v}dh\\ &=\int_{N_{n+m}(\mathbb{A})\backslash\operatorname{GL}_{n+m}(\mathbb{A})}\int_{\overline{V}^{j,m,n}(\mathbb{A})}W_{\phi}^{\psi}(\overline{v}\gamma_{n,m}hs_{j})\xi_{\widetilde{f}_{\mathbf{s}}}^{\psi^{-1}}(h)d\overline{v}dh.\end{aligned}\]
The result follows.
### Convergence and justifications
In this subsection, we prove the convergence of the double integral in (2.13), for \(\operatorname{Re}(s_{1})\gg 0,\operatorname{Re}(s_{2})\gg 0\). This is standard and similar to many other situations like [1]. Using the Iwasawa decomposition, and the fact that \((N_{m}(F)\times N_{n}(F))\backslash(N_{m}(\mathbb{A})\times N_{n}(\mathbb{A}))\) is compact, the convergence of the double integral in (2.13), for \(\operatorname{Re}(s_{1})\gg 0,\operatorname{Re}(s_{2})\gg 0\), quickly reduces to the convergence of
\[\int_{T_{m+n}(\mathbb{A})}\|t\|^{N_{0}}|\det(a)|^{\operatorname{Re}(s_{1})+c_ {1}}|\det(b)|^{-\operatorname{Re}(s_{2})+c_{2}}\int_{\overline{U}^{j,m,n}( \mathbb{A})}|W_{\phi}^{\psi}\left(\mathbf{t}_{m,n}(a,b)\overline{u}\right)|d \overline{u}dt \tag{2.15}\]
where \(t=\operatorname{diag}(a,b)\) with \(a\in T_{m}(\mathbb{A}),b\in T_{n}(\mathbb{A})\) and \(\mathbf{t}_{m,n}(a,b)=\eta_{j}\iota_{j}(t)\eta_{j}^{-1}=\operatorname{diag}(a,I_{l-m-n},b)\). Here, \(N_{0}\), \(c_{1}\) and \(c_{2}\) are fixed given positive numbers. Note that the integration over \(K_{\operatorname{GL}_{l}(\mathbb{A})}\) is dropped, using reasoning similar to that in [1, Remark 4.7] (in conjunction with Lemma 2.7 and (2.21)). We may assume that the Whittaker function \(W_{\phi}^{\psi}\) decomposes as \(\prod_{v}W_{v}\), where \(W_{v}\) is a local \(\psi_{v}\)-Whittaker function of \(\pi_{v}\), such that outside of a finite set \(S\) of places (including the archimedean ones), \(\pi_{v}\) is unramified, and \(W_{v}=W_{v}^{0}\) is the normalized unramified \(\psi_{v}\)-Whittaker function of \(\pi_{v}\) whose value at the identity is equal to \(1\). We assume that for \(v\) outside of \(S\), \(\psi_{v}\) is unramified. It suffices to prove, for \(\operatorname{Re}(s_{1})\gg 0,\operatorname{Re}(s_{2})\gg 0\), that we have
\[\prod_{v}\int_{T_{m+n}(F_{v})}\|t\|_{v}^{N_{0}}|\det(a)|_{v}^{ \operatorname{Re}(s_{1})+c_{1}}|\det(b)|_{v}^{-\operatorname{Re}(s_{2})+c_{2} }\int_{\overline{U}^{j,m,n}(F_{v})}|W_{v}\left(\mathbf{t}_{m,n}(a,b)\overline {u}\right)|d\overline{u}dt<\infty. \tag{2.16}\]
**Lemma 2.6**.: _Let \(v\) be a finite place of \(F\). For fixed \(W_{v}\in\mathcal{W}(\pi_{v},\psi_{v})\), and \(t=\operatorname{diag}(a,b)\) with \(a\in T_{m}(F_{v}),b\in T_{n}(F_{v})\), the function_
\[\overline{u}\mapsto W_{v}(\mathbf{t}_{m,n}(a,b)\overline{u}),\quad\overline{ u}\in\overline{U}^{j,m,n}(F_{v})\]
_has compact support in \(\overline{U}^{j,m,n}(F_{v})\). If \(W_{v}=W_{v}^{0}\), then this support is in \(\overline{U}^{j,m,n}(\mathcal{O}_{v})\)._
Proof.: The proof is a standard "root killing" argument and it is similar to the proof of [1, Lemma 4.1]. We omit the details.
Suppose \(v\) is finite. By Lemma 2.6, for the local integral of (2.16) at \(v\), it suffices to show
\[\int_{T_{m+n}(F_{v})}\|t\|_{v}^{N_{0}}|\det(a)|_{v}^{\operatorname{Re}(s_{1})+c _{1}}|\det(b)|_{v}^{-\operatorname{Re}(s_{2})+c_{2}}|W_{v}\left(\mathbf{t}_{m,n}(a,b)\right)|dt<\infty \tag{2.17}\]
for \(\operatorname{Re}(s_{i})\gg 0\). Now we recall gauge estimates on Whittaker functions in [1, Section 2]. A gauge on \(\operatorname{GL}_{l}(F_{v})\) is a function \(\xi\) on \(\operatorname{GL}_{l}(F_{v})\) which is invariant on the left under \(N_{l}(F_{v})\), on the right under \(\operatorname{GL}_{l}(\mathcal{O}_{v})\), and which on \(T_{l}(F_{v})\) has the form
\[\xi(t)=|t_{1}t_{2}\cdots t_{l-1}|_{v}^{-c}\Phi(t_{1},t_{2},\cdots,t_{l-1}) \tag{2.18}\]
for
\[t=\operatorname{diag}(t_{1}t_{2}\cdots t_{l},t_{2}\cdots t_{l},\cdots,t_{l-1}t_ {l},t_{l})\in T_{l}(F_{v}),\]
where \(c\geq 0\) is a real number and \(\Phi\geq 0\) is a Schwartz-Bruhat function on \(F_{v}^{l-1}\). In particular, \(\xi\) is invariant under the center of \(\operatorname{GL}_{l}(F_{v})\). Write \(a\in T_{m}(F)\) and \(b\in T_{n}(F)\) as
\[a =\operatorname{diag}(a_{1}\cdots a_{m},a_{2}\cdots a_{m},\dots,a_ {m-1}a_{m},a_{m}),\] \[b =\operatorname{diag}(b_{1}^{-1},b_{1}^{-1}b_{2}^{-1},\dots,b_{1} ^{-1}b_{2}^{-1}\cdots b_{n}^{-1}),\]
with \(a_{i}\in F^{\times},b_{j}\in F^{\times}\). Then
\[|\det(a)|_{v} =|a_{1}a_{2}^{2}\cdots a_{m-1}^{m-1}a_{m}^{m}|_{v},\] \[|\det(b)|_{v} =|b_{1}^{n}b_{2}^{n-1}\cdots b_{n-1}^{2}b_{n}|_{v}^{-1},\]
and
\[\operatorname{\mathbf{t}}_{m,n} (\operatorname{diag}(a,b))=\] \[\operatorname{diag}(a_{1}\cdots a_{m},a_{2}\cdots a_{m},\dots,a_ {m-1}a_{m},a_{m},1,1,\dots,1,b_{1}^{-1},b_{1}^{-1}b_{2}^{-1},\dots,b_{1}^{-1}b _{2}^{-1}\cdots b_{n}^{-1}).\]
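As a quick check of these formulas, take \(m=2\) and \(n=1\): then \(a=\operatorname{diag}(a_{1}a_{2},a_{2})\) and \(b=\operatorname{diag}(b_{1}^{-1})\), so \(\det(a)=a_{1}a_{2}^{2}\) and \(\det(b)=b_{1}^{-1}\), in agreement with the displayed expressions for \(|\det(a)|_{v}\) and \(|\det(b)|_{v}\).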
Then for a gauge \(\xi\) on \(\operatorname{GL}_{l}(F_{v})\), it follows from (2.18) that there is some real number \(c\geq 0\) and a Schwartz-Bruhat function \(\Phi\) on \(F_{v}^{l-1}\) such that
\[\xi(\operatorname{\mathbf{t}}_{m,n}(\operatorname{diag}(a,b)))=|a_{1}\cdots a _{m}b_{1}\cdots b_{n}|_{v}^{-c}\Phi(a_{1},a_{2},\dots,a_{m},1,\dots,1,b_{1},b_ {2},\dots,b_{n}). \tag{2.19}\]
Write \(|\omega_{\pi}|_{v}=\alpha^{c_{0}}\), where \(\alpha\) is a non-negative real-valued function on \(F_{v}^{\times}\) and \(c_{0}\) is a real number. By [13, Proposition 2.3.6], for any Whittaker function \(W_{v}\in\mathcal{W}(\pi_{v},\psi_{v})\), there is a gauge \(\xi\) such that
\[|W_{v}\otimes\alpha^{-c_{0}/l}|\leq\xi. \tag{2.20}\]
Then (2.17) follows from (2.19) and the estimate (2.20). This proves that the product in (2.16) over finite places is convergent.
Now we turn to the archimedean places. Let \(v\) be an archimedean place, so \(F_{v}\) is either \(\mathbb{R}\) or \(\mathbb{C}\). We recall the notion of a gauge [13] in this setting, which is slightly different from the non-archimedean case. Let \(\chi\) be a sum of positive characters of \(T_{l}(F_{v})\) trivial on the center of \(\operatorname{GL}_{l}(F_{v})\). A homogeneous gauge on \(\operatorname{GL}_{l}(F_{v})\) is a function \(\xi\) on \(\operatorname{GL}_{l}(F_{v})\) of the form
\[\xi(ntk)=\chi(t)\Phi(t_{1},t_{2},\cdots,t_{l-1}),\]
where \(n\in N_{l}(F_{v})\), \(t=\operatorname{diag}(t_{1},\cdots,t_{l})\in T_{l}(F_{v})\), \(k\) is in the maximal compact subgroup \(K_{l}\) of \(\operatorname{GL}_{l}(F_{v})\), and \(\Phi>0\) is a rapidly decreasing function in \(l-1\) variables. Here, \(\Phi\) being rapidly decreasing means that, for every set of integers \(N_{i}\), \(1\leq i\leq l-1\), there is a constant \(C>0\) such that
\[\Phi(t_{1},t_{2},\cdots,t_{l-1})\leq C\prod_{i}(1+|t_{i}|_{v}^{2})^{-N_{i}}.\]
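For instance, \(\Phi(t_{1},\cdots,t_{l-1})=e^{-(|t_{1}|_{v}^{2}+\cdots+|t_{l-1}|_{v}^{2})}\) is rapidly decreasing in this sense.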
We have the following estimate.
**Lemma 2.7**.: _Let \(v\) be an archimedean place. Let \(\xi\) be an homogeneous gauge on \(\operatorname{GL}_{l}(F_{v})\). Let \(a\in T_{m}(F_{v})\) and \(b\in T_{n}(F_{v})\), with Iwasawa decompositions_
\[a=n_{1}\mathrm{diag}(t_{1},\cdots,t_{m})k_{1},\quad b=n_{2}\mathrm{diag}(t_{l -n+1},\cdots,t_{l})k_{2},\]
_where \(t_{1},\cdots,t_{m},t_{l-n+1},\cdots,t_{l}\) are positive real numbers. Set \(t_{m+1}=t_{m+2}=\cdots=t_{l-n}=1\). Given positive integers \(M_{1},\cdots,M_{j},N_{1},\cdots,N_{n}\), \(L_{1},\cdots,L_{l-1}\), there exists a positive constant \(C>0\) such that_
\[\begin{aligned}&\xi\left(\begin{pmatrix}a&&&&\\ &I_{j}&&&\\ &&1&&\\ &&&I_{k}&\\ &&&&b\end{pmatrix}\begin{pmatrix}I_{m}&&&&\\ x&I_{j}&&&\\ &&1&&\\ &&&I_{k}&\\ &&&y&I_{n}\end{pmatrix}\right)\\ &\leq C\prod_{i=1}^{j}(1+\|x_{i}\|^{2})^{-M_{i}}\prod_{i=1}^{n}(1+\|y_{i}\|^{2})^{-N_{i}}\prod_{i=1}^{l-1}\left(1+\left|\frac{t_{i}}{t_{i+1}}\right|^{2}\right)^{-L_{i}}\chi(t_{1},t_{2},\cdots,t_{l}).\end{aligned}\]
_Here, \(\chi\) is a fixed sum of positive characters of \(T_{l}(F_{v})\)._
Proof.: The proof is similar to that of [13, Lemma 5.2]. See also [1, Lemma 4.6]. We omit the details.
By [13, Proposition 2.1], for any Whittaker function \(W_{v}\in\mathcal{W}(\pi_{v},\psi_{v})\), there is a gauge \(\xi\) on \(\mathrm{GL}_{l}(F_{v})\) such that
\[|W_{v}(g)|\leq\xi(g),\quad g\in\mathrm{GL}_{l}(F_{v}). \tag{2.21}\]
Combining (2.21) with Lemma 2.7, we conclude that the archimedean part of the product in (2.16) is convergent. This completes the proof of the convergence of the double integral in (2.13).
## 3. The local integrals
In this section, let \(F\) be a local field. Let \(\psi\) be a nontrivial additive character of \(F\). We still fix a positive integer \(l\) and non-negative integers \(m,n\) such that \(m+n\leq l-1\). For \(0\leq j\leq l-m-n-1\), we set \(k=l-m-n-1-j\).
### Definition of the local zeta integrals
Let \(\pi\) be an irreducible generic representation of \(\mathrm{GL}_{l}(F)\) and let \(\mathcal{W}(\pi,\psi)\) be its Whittaker model. Let \((\tau_{1},V_{\tau_{1}})\) (resp. \((\tau_{2},V_{\tau_{2}})\)) be an irreducible generic representation of \(\mathrm{GL}_{m}(F)\) (resp. \(\mathrm{GL}_{n}(F)\)). As in the last section, we write \(\boldsymbol{\tau}=(\tau_{1},\tau_{2})\) and \(\widehat{\boldsymbol{\tau}}=(\tau_{2},\tau_{1})\). Let \(\mathbf{s}=(s_{1},s_{2})\) be a pair of complex numbers. Then we can consider the induced representation
\[\mathrm{I}(\mathbf{s},\boldsymbol{\tau}):=\mathrm{Ind}_{P_{m,n}(F)}^{\mathrm{ GL}_{m+n}(F)}(\tau_{1}||^{s_{1}-\frac{1}{2}}\boxtimes\tau_{2}||^{-s_{2}+1/2}).\]
We fix \(\psi^{-1}\)-Whittaker functionals \(\lambda_{i}\) of \(\tau_{i}\). Recall that a section \(f_{\mathbf{s}}\in\mathrm{I}(\mathbf{s},\boldsymbol{\tau})\) is a function \(f_{\mathbf{s}}:\mathrm{GL}_{m+n}(F)\to V_{\tau_{1}}\boxtimes V_{\tau_{2}}\) satisfying certain quasi-invariance properties. We consider the \(\mathbb{C}\)-valued function
\[\xi_{f_{\mathbf{s}}}:\mathrm{GL}_{m+n}(F)\times\mathrm{GL}_{m}(F)\times \mathrm{GL}_{n}(F)\to\mathbb{C}\]
defined by
\[\xi_{f_{\mathbf{s}}}(h,a_{1},a_{2})=\lambda_{1}\boxtimes\lambda_{2}(\tau_{1}( a_{1})\boxtimes\tau_{2}(a_{2})(f_{\mathbf{s}}(h))).\]
Set \(\mathcal{W}(\mathbf{s},\boldsymbol{\tau},\psi^{-1})=\{\xi_{f_{\mathbf{s}}}:f_ {\mathbf{s}}\in\mathrm{I}(\mathbf{s},\boldsymbol{\tau})\}\). Note that an element \(\xi_{\mathbf{s}}\) satisfies
\[\xi_{\mathbf{s}}\left(\begin{pmatrix}b_{1}&\\ &b_{2}\end{pmatrix}uh,a_{1},a_{2}\right)=|\det(b_{1})|^{s_{1}+\frac{n-1}{2}}| \det(b_{2})|^{-s_{2}-\frac{m-1}{2}}\xi_{\mathbf{s}}(h,a_{1}b_{1},a_{2}b_{2}),\]
for \(a_{1},b_{1}\in\mathrm{GL}_{m}(F),a_{2},b_{2}\in\mathrm{GL}_{n}(F),u\in N_{m,n} (F),h\in\mathrm{GL}_{m+n}(F)\). In particular
\[\xi_{\mathbf{s}}\left(\begin{pmatrix}u_{1}&\\ &u_{2}\end{pmatrix}uh,I_{m},I_{n}\right)=\psi^{-1}(u_{1})\psi^{-1}(u_{2})\xi_{ \mathbf{s}}(h,I_{m},I_{n}),\]
for \(u_{1}\in N_{m}(F),u_{2}\in N_{n}(F),u\in N_{m,n}(F),h\in\mathrm{GL}_{m+n}(F).\) We usually write \(\xi_{\mathbf{s}}(h,I_{m},I_{n})\) as \(\xi_{\mathbf{s}}(h)\) for simplicity.
Similarly, we can consider the space \(\mathcal{W}(1-\widehat{\mathbf{s}},\widehat{\boldsymbol{\tau}},\psi^{-1})= \left\{\xi_{f_{1-\widehat{\mathbf{s}}}}:f_{1-\widehat{\mathbf{s}}}\in\mathrm{ I}(1-\widehat{\mathbf{s}},\widehat{\boldsymbol{\tau}})\right\}\). Note that the intertwining operator on the induced representations gives an intertwining operator
\[M_{w_{m,n}}:\mathcal{W}(\mathbf{s},\boldsymbol{\tau},\psi^{-1})\to\mathcal{W} (1-\widehat{\mathbf{s}},\widehat{\boldsymbol{\tau}},\psi^{-1})\]
defined by
\[M_{w_{m,n}}(\xi_{\mathbf{s}})(h,a_{1},a_{2})=\int_{N_{n,m}(F)}\xi_{\mathbf{s}} (w_{m,n}uh,a_{2},a_{1})du,\]
where \(a_{1}\in\mathrm{GL}_{n}(F),a_{2}\in\mathrm{GL}_{m}(F)\).
For \(W\in\mathcal{W}(\pi,\psi)\), \(\xi_{\mathbf{s}}\in\mathcal{W}(\mathbf{s},\boldsymbol{\tau},\psi^{-1})\), and for \(j\) with \(0\leq j\leq l-m-n-1\), we consider the local zeta integrals
\[\Psi(W,\xi_{\mathbf{s}};j):=\int_{N_{m+n}(F)\setminus\mathrm{GL}_{m+n}(F)}\int _{\overline{U}^{j,m,n}(F)}W\left(\overline{u}\gamma_{m,n}\begin{pmatrix}h&\\ &I_{l-m-n}\end{pmatrix}\right)\xi_{\mathbf{s}}(h)d\overline{u}dh, \tag{3.1}\]
where we recall that
\[\begin{aligned}\overline{U}^{j,m,n}&=\left\{\overline{u}(x,y)=\begin{pmatrix}I_{m}&&&&\\ x&I_{j}&&&\\ &&1&&\\ &&&I_{k}&\\ &&&y&I_{n}\end{pmatrix}:\begin{subarray}{c}x\in\operatorname{Mat}_{j\times m}\\ y\in\operatorname{Mat}_{n\times k}\end{subarray}\right\},\\ \gamma_{m,n}&=\eta_{j,m,n}s_{j,m,n}^{-1}=\begin{pmatrix}I_{m}&&\\ &&I_{l-m-n}\\ &I_{n}&\end{pmatrix}.\end{aligned}\]
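For instance, when \(l=4\) and \(m=n=1\), \(\gamma_{1,1}\) is the permutation matrix
\[\gamma_{1,1}=\begin{pmatrix}1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 0&1&0&0\end{pmatrix},\]
and a direct computation gives \(\gamma_{1,1}\operatorname{diag}(a,b,1,1)\gamma_{1,1}^{-1}=\operatorname{diag}(a,1,1,b)\) for \(a,b\in F^{\times}\); that is, conjugation by \(\gamma_{1,1}\) moves the block \(\operatorname{diag}(a,b)\) to the outer diagonal positions.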
Here we remark that the natural numbers \(m,n\) appeared in the local zeta integral (3.1) are determined by the section \(\xi_{\mathbf{s}}\), which is an element of \(\operatorname{Ind}_{P_{m,n}(F)}^{\operatorname{GL}_{m+n}(F)}(\tau_{1}||^{s_{ 1}-1/2}\otimes\tau_{2}||^{-s_{2}+1/2})\). In particular, if we take \(\widetilde{\xi}_{1-\widehat{\mathbf{s}}}\in\mathcal{W}(1-\widehat{\mathbf{s }},\widehat{\boldsymbol{\tau}},\psi^{-1})\), we should have
\[\Psi(W,\widetilde{\xi}_{1-\widehat{\mathbf{s}}};j)=\int_{N_{m+n}(F)\setminus \operatorname{GL}_{m+n}(F)}\int_{\overline{U}^{j,n,m}(F)}W\left(\overline{u} \gamma_{n,m}\begin{pmatrix}h&\\ &I_{l-m-n}\end{pmatrix}\right)\widetilde{\xi}_{1-\widehat{\mathbf{s}}}(h)d \overline{u}dh. \tag{3.2}\]
**Remark 3.1**.: In this remark, we assume that \(F\) is a global field. If \(\phi=\otimes\phi_{v}\) is a cusp form on \(\operatorname{GL}_{l}(\mathbb{A})\) and \(f_{\mathbf{s}}=\otimes f_{\mathbf{s},v}\in\operatorname{I}(\mathbf{s}, \boldsymbol{\tau})\) is a pure tensor of a global section, then Theorem 2.4 and Theorem 2.5 imply that
\[I_{j}(\phi,f_{\mathbf{s}})=\prod_{v}\Psi(\rho(s_{j,m,n})W_{v},\xi_{f_{\mathbf{s },v}};j),\quad I_{j}(\phi,\widetilde{f}_{\mathbf{s}})=\prod_{v}\Psi(\rho(s_{j,m,n})W_{v},\xi_{\widetilde{f}_{\mathbf{s},v}};j).\]
Here \(\rho\) denotes the right translation.
**Remark 3.2**.: In this remark, we consider the degenerate case when \(m>0\) and \(n=0\). In this case, \(\boldsymbol{\tau}=\tau_{1}\) is just a representation of \(\operatorname{GL}_{m}(F)\), and \(\mathbf{s}=s\) is a single complex number. Moreover, an element \(\xi_{\mathbf{s}}\) has the form \(\xi_{\mathbf{s}}(h)=W^{\prime}(h)|\det(h)|^{s-1/2}\) and we have \(M_{w_{m,0}}(\xi_{\mathbf{s}})=\xi_{\mathbf{s}}.\) Thus
\[\Psi(W,\xi_{\mathbf{s}};j)=\int_{N_{m}(F)\setminus\operatorname{ GL}_{m}(F)}\int_{\operatorname{Mat}_{j\times m}(F)}W\left(\begin{pmatrix}I_{m} &&\\ x&I_{j}&\\ &&I_{l-m-j}\end{pmatrix}\begin{pmatrix}h&&\\ &I_{l-m}\end{pmatrix}\right)\] \[\cdot W^{\prime}(h)|\det(h)|^{s-1/2}dxdh,\]
and
\[\begin{aligned}\Psi(W,M_{w_{m,0}}(\xi_{\mathbf{s}});j)=&\int_{N_{m}(F)\setminus\operatorname{GL}_{m}(F)}\int_{\operatorname{Mat}_{m\times k}(F)}W\left(\begin{pmatrix}I_{j+1}&&\\ &I_{k}&\\ &y&I_{m}\end{pmatrix}\begin{pmatrix}&I_{l-m}\\ I_{m}&\end{pmatrix}\begin{pmatrix}h&\\ &I_{l-m}\end{pmatrix}\right)\\ &\cdot W^{\prime}(h)|\det(h)|^{s-1/2}dydh.\end{aligned}\]
Here we notice that \(\gamma_{m,0}=I_{l}\) while \(\gamma_{0,m}=\begin{pmatrix}&I_{l-m}\\ I_{m}&\end{pmatrix}.\) A simple change of variable shows that
\[\Psi(W,\xi_{\mathbf{s}};j)=\int_{N_{m}(F)\setminus\operatorname{GL}_{m}(F)} \int_{\operatorname{Mat}_{j\times m}(F)}W\left(\begin{pmatrix}h&&\\ x&I_{j}&\\ &&I_{l-m-j}\end{pmatrix}\right)W^{\prime}(h)|\det(h)|^{s-1/2-j}dxdh.\]
One can compare the above integral with that defined by Jacquet-Piatetski-Shapiro-Shalika in [10] and observe that
\[\Psi(W,\xi_{\mathbf{s}};j)=\Psi^{\mathrm{JPSS}}(s-j+\frac{l-m-1}{2},W,W^{\prime };j), \tag{3.3}\]
where \(\Psi^{\rm JPSS}\) denotes the integral defined in [11, p.387]. On the other hand, for \(W\in\mathcal{W}(\pi,\psi)\), we denote \(\widetilde{W}(g)=W(J_{l}\,{}^{t}g^{-1})\), which is a Whittaker function of the contragredient representation \(\widetilde{\pi}\) of \(\pi\). It is easy to check that
\[\Psi(W,M_{w_{m,0}}(\xi_{\mathbf{s}});j)=\int_{N_{m}(F)\backslash \operatorname{GL}_{m}(F)}\int_{\operatorname{Mat}_{k\times m}(F)} \widetilde{W}\left(\begin{pmatrix}h&\\ y&I_{k}&\\ &I_{j+1}\end{pmatrix}\begin{pmatrix}I_{m}&\\ &J_{l-m}\end{pmatrix}\right)\] \[\widetilde{W}^{\prime}(h)|\det(h)|^{-s+1/2-k}dydh.\]
Thus we get
\[\Psi(W,M_{w_{m,0}}(\xi_{\mathbf{s}});j)=\Psi^{\rm JPSS}\left(1-s-k+\frac{l-m-1 }{2},\rho\left(\begin{pmatrix}I_{m}&\\ &J_{l-m}\end{pmatrix}\right)\widetilde{W},\widetilde{W}^{\prime};k\right). \tag{3.4}\]
**Remark 3.3**.: If \(l=2r+1\) and \(m=n\) with \(1\leq m\leq r\), then the integral \(\Psi(W,\xi_{\mathbf{s}};r-m)\) is the local zeta integral of \(\operatorname{U}_{E/F}(2r+1)\times\operatorname{Res}_{E/F}(\operatorname{GL} _{m})\) at split places as in [1], where \(E/F\) is a quadratic extension of global fields.
**Proposition 3.4**.: _The local zeta integrals \(\Psi(W,\xi_{\mathbf{s}};j)\) are absolutely convergent for \(\operatorname{Re}(s_{i})\gg 0\) for \(i=1,2\). Over nonarchimedean local fields, there exist \(W\) and \(\xi_{\mathbf{s}}\), such that the integral is absolutely convergent and equals 1, for all \(\mathbf{s}\). Over archimedean fields, for any \(\mathbf{s}\), there are choices of data \((W^{i},\xi_{\mathbf{s}}^{i})\) such that \(\sum_{i}\Psi(W^{i},\xi_{\mathbf{s}}^{i};j)\) is holomorphic and nonzero in a neighborhood of \(\mathbf{s}\)._
Proof.: For \(n=0\), this was already proved in [11] over nonarchimedean local fields and in [11] over archimedean fields. Very similar statements can be found in many other places in the literature, for example, [12], [13], [14], [15], and [16]. We provide some details here for completeness.
First, we consider the case where \(F\) is nonarchimedean. By the Iwasawa decomposition, and the fact that smooth vectors are fixed by open compact subgroups, we get that \(\Psi(W,\xi_{\mathbf{s}};j)\) is a finite sum of integrals of the form
\[\int_{T_{m+n}(F)}\int_{\overline{U}^{j,m,n}(F)}W^{\prime}(\mathbf{t}_{m,n}(a,b)\overline{u})d\overline{u}\,W_{\tau_{1}}(a)W_{\tau_{2}}(b)|\det(a)|^{s_{1}+\frac{n-1}{2}-j}|\det(b)|^{-s_{2}-\frac{m-1}{2}+k}\delta_{B_{m+n}}(t)^{-1}dt\]
where \(W^{\prime}\in\mathcal{W}(\pi,\psi)\), \(W_{\tau_{1}}\in\mathcal{W}(\tau_{1},\psi^{-1})\), \(W_{\tau_{2}}\in\mathcal{W}(\tau_{2},\psi^{-1})\), \(t=\operatorname{diag}(a,b)\) with \(a\in T_{m}(F),b\in T_{n}(F)\) and \(\mathbf{t}_{m,n}(a,b)=\operatorname{diag}(a,I_{l-m-n},b)\). Here the term \(|\det(a)|^{-j}|\det(b)|^{k}\) comes from conjugating \(\mathbf{t}_{m,n}(a,b)\) to the left of \(\overline{u}\) and making a change of variables on \(\overline{u}\). By Lemma 2.6, the last integral is a finite sum of integrals of the form
\[\int_{T_{m+n}(F)}W^{\prime}(\mathbf{t}_{m,n}(a,b))W_{\tau_{1}}(a)W_{\tau_{2}}(b)|\det(a)|^{s_{1}+\frac{n-1}{2}-j}|\det(b)|^{-s_{2}-\frac{m-1}{2}+k}\delta_{B_{m+n}}(t)^{-1}dt. \tag{3.5}\]
Now we recall the asymptotic expansion of Whittaker functions [11, Section 2.5]. There is a finite set \(X_{l}\) of functions on \(T_{l}(F)\) such that for every \(W\in\mathcal{W}(\pi,\psi)\) we have
\[W(t)=\sum_{\chi\in X_{l}}\omega_{\pi}(t_{l})\phi_{\chi}(t_{1},t_{2},\cdots,t_{l -1})\chi(t)\]
where \(t=\operatorname{diag}(t_{1}t_{2}\cdots t_{l},t_{2}\cdots t_{l},\cdots,t_{l-1} t_{l},t_{l})\in T_{l}(F)\) and \(\phi_{\chi}\in\mathcal{S}(F^{l-1})\). Then for every \(W\in\mathcal{W}(\pi,\psi)\), we have
\[|W(t)|\leq\sum_{\eta\in Y_{l}}\phi_{\eta}(t_{1},t_{2},\cdots,t_{l-1})\eta(t) \tag{3.6}\]
where \(\phi_{\eta}\in\mathcal{S}(F^{l-1})\) is non-negative and \(\eta\) varies in another finite set \(Y_{l}\) of finite functions on \(T_{l}(F)\). Applying the majorization (3.6) to \(W^{\prime}\) (and the analogous ones for \(W_{\tau_{1}}\) and \(W_{\tau_{2}}\)), we obtain the absolute convergence of the integral (3.5) for \(\operatorname{Re}(s_{i})\gg 0\) for \(i=1,2\). Hence \(\Psi(W,\xi_{\mathbf{s}};j)\) is absolutely convergent for \(\operatorname{Re}(s_{i})\gg 0\) for \(i=1,2\).
We continue to assume that \(F\) is nonarchimedean. Since \(N_{m+n}(F)T_{m+n}(F)\overline{N}_{m+n}(F)\) is an open dense subset of \(\operatorname{GL}_{m+n}(F)\) whose complement has Haar measure zero, we may rewrite \(\Psi(W,\xi_{\mathbf{s}};j)\)
as
\[\int_{T_{m+n}(F)}\int_{\overline{N}_{m+n}(F)}\int_{\overline{U}^{j,m,n}(F)}W\left(\overline{u}\gamma_{m,n}\begin{pmatrix}t\overline{v}&\\ &I_{l-m-n}\end{pmatrix}\right)\xi_{\mathbf{s}}(\overline{v},a,b)\\ |\det(a)|^{s_{1}+\frac{n-1}{2}}|\det(b)|^{-s_{2}-\frac{m-1}{2}}\delta_{B_{m+n}}(t)^{-1}d\overline{u}d\overline{v}dt, \tag{3.7}\]
where \(t=\operatorname{diag}(a,b)\) with \(a\in T_{m}(F)\), \(b\in T_{n}(F)\). Similar to [14, Proposition 6.1], we choose \(\xi_{\mathbf{s}}\) to have support in \(B_{m+n}(F)\cdot\mathcal{V}_{1}\), where \(\mathcal{V}_{1}\) is a small open compact subgroup of \(\operatorname{GL}_{m+n}(F)\), and such that \(\xi_{\mathbf{s}}(u,b_{1},b_{2})=W_{\tau_{1}}(b_{1})W_{\tau_{2}}(b_{2})\) for \(u\in\mathcal{V}_{1}\), \(b_{1}\in T_{m}(F),b_{2}\in T_{n}(F)\). Here, \(W_{\tau_{i}}\in\mathcal{W}(\tau_{i},\psi^{-1})\) for \(i=1,2\). We choose \(\mathcal{V}_{1}\) so small that \(W\) is fixed by \(\pi(\operatorname{diag}(\overline{v},I_{l-m-n}))\) for \(\overline{v}\in\mathcal{V}_{1}\). Thus, \(\Psi(W,\xi_{\mathbf{s}};j)\) is equal to
\[\operatorname{vol}(\mathcal{V}_{1}\cap\overline{N}_{m+n}(F))\cdot \int_{T_{m+n}(F)}\int_{\overline{U}^{j,m,n}(F)}W\left(\overline{u}\gamma_{m,n }\begin{pmatrix}t&\\ &I_{l-m-n}\end{pmatrix}\right)W_{\tau_{1}}(a)W_{\tau_{2}}(b)\\ |\det(a)|^{s_{1}+\frac{n-1}{2}}|\det(b)|^{-s_{2}-\frac{m-1}{2}} \delta_{B_{m+n}}(t)^{-1}d\overline{u}dt.\]
We conjugate \(\operatorname{diag}(t,I_{l-m-n})\) to the left of \(\overline{u}\) and make a change of variable in \(\overline{u}\) to get
\[\operatorname{vol}(\mathcal{V}_{1}\cap\overline{N}_{m+n}(F))\cdot \int_{T_{m+n}(F)}\int_{\overline{U}^{j,m,n}(F)}\rho(\gamma_{m,n})W\left( \begin{pmatrix}a&\\ &I_{l-m-n}&\\ &b\end{pmatrix}\overline{u}\right)W_{\tau_{1}}(a)W_{\tau_{2}}(b)\\ |\det(a)|^{s_{1}+\frac{n-1}{2}-j}|\det(b)|^{-s_{2}-\frac{m-1}{2}+k} \delta_{B_{m+n}}(t)^{-1}d\overline{u}dt.\]
Now we choose \(W\), \(W_{\tau_{1}}\) and \(W_{\tau_{2}}\) such that the function
\[(a,b,\overline{u})\mapsto\rho(\gamma_{m,n})W\left(\begin{pmatrix}a&\\ &I_{l-m-n}&\\ &b\end{pmatrix}\overline{u}\right)W_{\tau_{1}}(a)W_{\tau_{2}}(b)\]
is the characteristic function of a small neighborhood of \((I_{m},I_{n},I_{l})\). With such a choice the integral equals a nonzero constant independent of \(\mathbf{s}\); after rescaling \(W\) we may assume it equals \(1\).
Now we assume \(F\) is archimedean. Similar to [14, Lemma 5.2], there is a positive integer \(A_{0}\), such that for any \(\xi_{\mathbf{s}}\), there is a constant \(c_{\mathbf{s}}>0\), such that
\[|\xi_{\mathbf{s}}(\operatorname{diag}(a,b)k)|\leq c_{\mathbf{s}}|\det(a)|^{ \operatorname{Re}(s_{1})+\frac{n-1}{2}}|\det(b)|^{-\operatorname{Re}(s_{2})- \frac{m-1}{2}}\|\operatorname{diag}(a,b)\|^{A_{0}},\]
where \(a\in T_{m}(F),b\in T_{n}(F)\), and \(k\) is in the maximal compact subgroup \(K_{l}\) of \(\operatorname{GL}_{l}(F)\). We then use the Iwasawa decomposition, (2.21) and Lemma 2.7 to conclude the absolute convergence of \(\Psi(W,\xi_{\mathbf{s}};j)\).
Now we prove the non-vanishing of the integrals when \(F\) is archimedean. Write \(\Psi(W,\xi_{\mathbf{s}};j)\) in the form (3.7). Choose \(\xi_{\mathbf{s}}\) to have support in \(P_{m,n}(F)\cdot\overline{N}_{m+n}(F)\), and assume
\[\xi_{\mathbf{s}}\left(\begin{pmatrix}b_{1}&\\ &b_{2}\end{pmatrix}u\overline{v},a_{1},a_{2}\right)=|\det(b_{1})|^{s_{1}+\frac{n -1}{2}}|\det(b_{2})|^{-s_{2}-\frac{m-1}{2}}\varphi_{1}(\overline{v})W_{\tau_{ 1}}(a_{1}b_{1})W_{\tau_{2}}(a_{2}b_{2}),\]
for \(a_{1},b_{1}\in\operatorname{GL}_{m}(F),a_{2},b_{2}\in\operatorname{GL}_{n}(F),u \in N_{m,n}(F),\overline{v}\in\overline{N}_{m+n}(F)\), \(W_{\tau_{i}}\in\mathcal{W}(\tau_{i},\psi^{-1})\) for \(i=1,2\), and \(\varphi_{1}\in C_{c}^{\infty}(\overline{N}_{m+n}(F))\). With this choice, \(\Psi(W,\xi_{\mathbf{s}};j)\) is equal to an integral of the form
\[\int_{T_{m+n}(F)}\int_{\overline{N}_{m+n}(F)}\int_{\overline{U}^{j,m,n}(F)}W\left(\overline{u}\gamma_{m,n}\begin{pmatrix}t\overline{v}&\\ &I_{l-m-n}\end{pmatrix}\right)\varphi_{1}(\overline{v})W_{\tau_{1}}(a)W_{\tau_{2}}(b)\\ |\det(a)|^{s_{1}+\frac{n-1}{2}}|\det(b)|^{-s_{2}-\frac{m-1}{2}}\delta_{B_{m+n}}(t)^{-1}d\overline{u}d\overline{v}dt. \tag{3.8}\]
We consider the \(d\overline{v}\) integration first. By the Dixmier-Malliavin Theorem [13], a linear combination of the \(d\overline{v}\) integrals represents a general element of \(\mathcal{W}(\pi,\psi)\). Thus, a suitable linear combination of integrals of the form (3.8) gives an integral of the form
\[\int_{T_{m+n}(F)}\int_{\overline{U}^{j,m,n}(F)}W\left(\overline{u} \gamma_{m,n}\begin{pmatrix}t&\\ &I_{l-m-n}\end{pmatrix}\right)W_{\tau_{1}}(a)W_{\tau_{2}}(b)\\ |\det(a)|^{s_{1}+\frac{n-1}{2}}|\det(b)|^{-s_{2}-\frac{m-1}{2}} \delta_{B_{m+n}}(t)^{-1}d\overline{u}dt.\]
We conjugate \(\operatorname{diag}(t,I_{l-m-n})\) to the left of \(\overline{u}\) to get
\[\begin{aligned}\int_{T_{m+n}(F)}\int_{\overline{U}^{j,m,n}(F)}&\rho(\gamma_{m,n})W\left(\begin{pmatrix}a&&\\ &I_{l-m-n}&\\ &&b\end{pmatrix}\overline{u}\right)W_{\tau_{1}}(a)W_{\tau_{2}}(b)\\ &|\det(a)|^{s_{1}+\frac{n-1}{2}-j}|\det(b)|^{-s_{2}-\frac{m-1}{2}+k}\delta_{B_{m+n}}(t)^{-1}d\overline{u}dt.\end{aligned}\]
Now we choose \(W\) so that \(\rho(\gamma_{m,n})W(t\overline{u})=\rho(\gamma_{m,n})W(t)\varphi_{2}(\overline{u})\) for \(t\in B_{l}(F)\) and \(\overline{u}\in\overline{U}^{j,m,n}(F)\), where \(\varphi_{2}\in C_{c}^{\infty}(\overline{U}^{j,m,n}(F))\). Then the above integral becomes
\[\begin{aligned}\int_{\overline{U}^{j,m,n}(F)}\varphi_{2}(\overline{u})d\overline{u}\cdot\int_{T_{m+n}(F)}&\rho(\gamma_{m,n})W\left(\begin{pmatrix}a&&\\ &I_{l-m-n}&\\ &&b\end{pmatrix}\right)W_{\tau_{1}}(a)W_{\tau_{2}}(b)\\ &|\det(a)|^{s_{1}+\frac{n-1}{2}-j}|\det(b)|^{-s_{2}-\frac{m-1}{2}+k}\delta_{B_{m+n}}(t)^{-1}dt.\end{aligned}\]
The \(d\overline{u}\) integral is a nonzero constant for appropriate \(\varphi_{2}\). For appropriate \(W,W_{\tau_{1}},W_{\tau_{2}}\), the \(dt\) integral is holomorphic and nonzero in a neighborhood of any given \(\mathbf{s}\). This proves that there is a linear combination of the local integrals \(\Psi(W,\xi_{\mathbf{s}};j)\) which is holomorphic and nonzero in a neighborhood of any given \(\mathbf{s}\).
### Local functional equations
**Proposition 3.5**.: _There exists a meromorphic function \(\Gamma(\mathbf{s},\pi\times\boldsymbol{\tau},\psi)\) such that_
\[\Psi(W,M_{w_{m,n}}(\xi_{\mathbf{s}});0)=\Gamma(\mathbf{s},\pi\times \boldsymbol{\tau},\psi)\Psi(W,\xi_{\mathbf{s}};0),\]
_for any \(W\in\mathcal{W}(\pi,\psi)\) and \(\xi_{\mathbf{s}}\in\mathcal{W}(\mathbf{s},\boldsymbol{\tau},\psi^{-1})\)._
Proof.: Recall that
\[Y_{0,m,n}=\left\{\begin{pmatrix}I_{m+n+1}&v^{\prime}\\ &v\end{pmatrix}:v^{\prime}\in\operatorname{Mat}_{(m+n+1)\times(l-m-n-1)},v\in N_{l-m-n-1}\right\},\]
and we have defined a character \(\psi_{0}\) on \(Y_{0,m,n}(F)\). One can check that
\[\Psi(\rho(y)W,\xi_{\mathbf{s}};0) =\psi_{0}(y)\Psi(W,\xi_{\mathbf{s}};0),\quad\forall y\in Y_{0,m,n }(F),\] \[\Psi(\rho\left(\begin{pmatrix}h&\\ &I_{l-m-n}\end{pmatrix}\right)W,\rho(h)\xi_{\mathbf{s}};0) =\Psi(W,\xi_{\mathbf{s}};0),\quad\forall h\in\operatorname{GL}_{ m+n}(F). \tag{3.9}\]
Denote
\[H=\left\{\begin{pmatrix}h&*&*\\ &1&*\\ &&v\end{pmatrix}:h\in\operatorname{GL}_{m+n},\ v\in N_{l-m-n-1}\right\}=\operatorname{GL}_{m+n}\ltimes Y_{0,m,n}.\]
One can define a representation \(\nu_{\mathbf{s}}\) of \(H(F)\) by \(\nu_{\mathbf{s}}|_{\operatorname{GL}_{m+n}(F)}=\operatorname{I}(\mathbf{s}, \boldsymbol{\tau})\) and \(\nu_{\mathbf{s}}|_{Y_{0,m,n}(F)}=\psi_{0}\). Then (3.9) implies that the bilinear form \((W,\xi_{\mathbf{s}})\mapsto\Psi(W,\xi_{\mathbf{s}})\) defines an element in
\[\operatorname{Hom}_{H(F)}(\pi\otimes\nu_{\mathbf{s}},1).\]
Similarly, \(\Psi(W,M_{w_{m,n}}(\xi_{\mathbf{s}}))\) satisfies the same quasi-invariance property (3.9) and thus
\[(W,\xi_{\mathbf{s}})\mapsto\Psi(W,M_{w_{m,n}}(\xi_{\mathbf{s}}))\]
also defines an element in \(\operatorname{Hom}_{H(F)}(\pi\otimes\nu_{\mathbf{s}},1)\). By the main result of [10] (or by [11, Proposition 2.11] if \(F\) is non-archimedean), we have \(\dim_{\mathbb{C}}\operatorname{Hom}_{H(F)}(\pi\otimes\nu_{\mathbf{s}},1)\leq 1\) excluding a discrete set of \(\mathbf{s}\). This shows the existence of the gamma factor.
**Remark 3.6**.: If \(m>0\) and \(n=0\), by Remark 3.2, we can see that
\[\Gamma(\mathbf{s},\pi\times\boldsymbol{\tau},\psi)=\omega_{\tau_{1}}(-1)^{l-1} \gamma^{\operatorname{JPSS}}(s+\frac{l-m-1}{2},\pi\times\tau_{1},\psi),\]
where \(\boldsymbol{\tau}=\tau_{1}\) and \(\gamma^{\operatorname{JPSS}}\) is the Jacquet-Piatetski-Shapiro-Shalika local gamma factor as defined in [11, Theorem 2.7].
**Remark 3.7**.: To get a local functional equation of \(\Psi(W,\xi_{\mathbf{s}};j)\) for general \(j\) with \(0\leq j\leq l-m-n-1\), one should be able to prove the following analogue of the main result of [10]. Recall that we have defined
\[Y_{j,m,n}=\left\{\begin{pmatrix}u&*&*\\ &I_{m+n+1}&*\\ &&v\end{pmatrix}:u\in N_{j}(F),v\in N_{k}(F)\right\}\]
and a character \(\psi_{j}\) on \(Y_{j,m,n}\). Consider
\[H_{j}=\left\{\begin{pmatrix}u&*&*&*&*\\ &a&&b&*\\ &&1&&*\\ &c&&d&*\\ &&&&v\end{pmatrix}:u\in N_{j},\ v\in N_{k},\ \begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{GL}_{m+n}(F)\right\}=\operatorname{GL}_{m+n}\ltimes Y_{j,m,n}.\]
For a representation \(\sigma\) of \(\operatorname{GL}_{m+n}(F)\), we can define a representation \(\nu_{\psi}\) of \(H_{j}(F)\) such that \(\nu_{\psi}|_{\operatorname{GL}_{m+n}}=\sigma\) and \(\nu_{\psi}|_{Y_{j,m,n}}=\psi_{j}\). Then for an irreducible smooth representation \(\pi\) of \(\operatorname{GL}_{l}(F)\), one should have
\[\dim\operatorname{Hom}_{H_{j}}(\pi,\nu_{\psi})\leq 1.\]
If \(n=0\) or \(m=0\), this was the main result of [10]. If \(m=n\), this is the uniqueness of Bessel model of \(\operatorname{GL}_{l}(F)\), which was proved in [1] using the multiplicity one result of [1]. On the other hand, when \(n=0\), Jacquet-Piatetski-Shapiro-Shalika (see [11, Theorem 4.5] and [11, Theorem 2.7]) directly showed that
\[\Psi(W,M_{w_{m,n}}(\xi_{\mathbf{s}});j)=\Gamma((s_{1}-j,s_{2}+j),\pi\times( \tau_{1},\tau_{2}),\psi)\Psi(W,\xi_{\mathbf{s}};j), \tag{3.10}\]
for any \(W\in\mathcal{W}(\pi,\psi)\) and \(\xi_{\mathbf{s}}\in\mathcal{W}(\mathbf{s},(\tau_{1},\tau_{2}),\psi^{-1})\), and for any \(j\) with \(0\leq j\leq l-m-n-1\). We expect this to hold in general, although we have not been able to extend the proof of [11] to our case.
### Unramified calculation
In this subsection, let \(F\) be a non-archimedean local field with ring of integers \(\mathcal{O}\). Let \(\varpi\in\mathcal{O}\) be a fixed uniformizer and \(q=|\mathcal{O}/(\varpi)|\). Our goal in this subsection is to compute the local zeta integral (3.1) when everything is unramified. In particular, we assume that \(\pi\) is unramified with Satake parameters \(\alpha=\operatorname{diag}(\alpha_{1},\ldots,\alpha_{l})\in\operatorname{GL}_{l}(\mathbb{C})\) and \(\tau_{1}\) (resp. \(\widetilde{\tau}_{2}\)) is unramified with Satake parameters \(\beta^{1}=\operatorname{diag}(\beta_{1}^{1},\ldots,\beta_{m}^{1})\in\operatorname{GL}_{m}(\mathbb{C})\) (resp. \(\beta^{2}=\operatorname{diag}(\beta_{1}^{2},\ldots,\beta_{n}^{2})\in\operatorname{GL}_{n}(\mathbb{C})\)). Moreover, we assume that \(W\in\mathcal{W}(\pi,\psi)\) is the Whittaker function normalized by \(W(I_{l})=1\), and that \(\xi_{\mathbf{s}}\) is the Whittaker function associated with the normalized spherical section \(f_{\mathbf{s}}\in\operatorname{I}(\mathbf{s},\boldsymbol{\tau})\). By the Iwasawa decomposition \(\operatorname{GL}_{m+n}(F)=N_{m+n}(F)T_{m+n}(F)K_{m+n}\), where \(K_{m+n}=\operatorname{GL}_{m+n}(\mathcal{O})\), we have
\[\Psi(W,\xi_{\mathbf{s}};j) =\int_{T_{m+n}(F)}\int_{\overline{U}^{j,m,n}(F)}W(\overline{u} \gamma_{m,n}\mathrm{diag}(t,I_{l-m-n}))\xi_{\mathbf{s}}(t)\delta_{B_{m+n}}(t )^{-1}d\overline{u}dt\] \[=\int_{T_{m+n}(F)}\int_{\overline{U}^{m,n}(F)}W(\mathbf{t}_{m,n} (a,b)\overline{u})\xi_{\mathbf{s}}(t)|\det(a)|^{-j}|\det(b)|^{k}\delta_{B_{m+n }}(t)^{-1}d\overline{u}dt\] \[=\int_{T_{m+n}(F)}\int_{\overline{U}^{m,n}(F)}W(\mathbf{t}_{m,n} (a,b)\overline{u})W_{\tau_{1}}(a)W_{\tau_{2}}(b)\] \[\quad\cdot|\det(a)|^{s_{1}+\frac{n-1}{2}-j}|\det(b)|^{-s_{2}- \frac{m-1}{2}+k}\delta_{B_{m+n}}(t)^{-1}d\overline{u}dt\]
where \(t=\operatorname{diag}(a,b)\) with \(a\in T_{m}(F),b\in T_{n}(F)\) and \(\mathbf{t}_{m,n}(a,b)=\operatorname{diag}(a,I_{l-m-n},b)\). Here the term \(|\det(a)|^{-j}|\det(b)|^{k}\) comes from a modulus character when we change variables on \(\overline{u}\) and the term \(\delta_{B_{m+n}}(t)^{-1}\) comes from the corresponding Haar measure when we use the Iwasawa decomposition.
By Lemma 2.6, we have
\[\Psi(W,\xi_{\mathbf{s}};j)= \int_{T_{m+n}(F)}W(\mathbf{t}_{m,n}(a,b))W_{\tau_{1}}(a)W_{\tau_{2} }(b)\] \[\cdot|\det(a)|^{s_{1}+\frac{n-1}{2}-j}|\det(b)|^{-s_{2}-\frac{m-1} {2}+k}\delta_{B_{m+n}}^{-1}\left(\begin{pmatrix}a&\\ &b\end{pmatrix}\right)dadb\] \[= \int_{T_{m+n}(F)}W(\mathbf{t}_{m,n}(a,b^{*}))W_{\tau_{1}}(a)W_{ \tau_{2}}(b^{*})\] \[\cdot|\det(a)|^{s_{1}+\frac{n-1}{2}-j}|\det(b)|^{s_{2}+\frac{m-1} {2}-k}\delta_{B_{m+n}}^{-1}\left(\begin{pmatrix}a&\\ &b^{*}\end{pmatrix}\right)dadb \tag{3.11}\]
where \(b^{*}=J_{n}\,{}^{t}b^{-1}J_{n}^{-1}\), with \(J_{n}\in\operatorname{GL}_{n}\) the matrix with ones on the anti-diagonal and zeros elsewhere.

**Proposition 3.8**.: _With the notation and the unramified assumptions as above, we have_
\[\Psi(W,\xi_{\mathbf{s}};j)=\frac{L(s_{1}+\frac{k-j}{2},\pi\times\tau_{1})L(s_{2}+\frac{j-k}{2},\widetilde{\pi}\times\widetilde{\tau}_{2})}{L(s_{1}+s_{2},\tau_{1}\times\widetilde{\tau}_{2})}.\]
If \(n=0\), the above formula is the unramified calculation of the Jacquet-Piatetski-Shapiro-Shalika integral, see [15, Proposition 2.4] and also [14, 16]. If \(l=2r+1,m=n\) and \(j=r-m=k\), the above unramified calculation is done in [16] (when \(r=1\)), in [10] (for general \(r\) when \(m=n=r\)) with slightly different normalization, and in [1] (when \(m=n<r\)), where this was the unramified calculation of \(L\)-functions for \(\mathrm{U}_{2r+1,E/F}\times\mathrm{Res}_{E/F}(\mathrm{GL}_{r})\) at split places for a quadratic extension \(E/F\).
Proof.: Without loss of generality, we assume that \(m\geq n\). Write \(T_{1}=q^{-(s_{1}+\frac{k-j}{2})},T_{2}=q^{-(s_{2}+\frac{j-k}{2})}\). For an \(m\)-tuple \(\mathbf{x}=(x_{1},\ldots,x_{m})\), denote \(|\mathbf{x}|=\sum_{i=1}^{m}x_{i}\). An \(m\)-tuple \(\mathbf{x}\in T^{+}(m)\) can be identified with a partition of \(|\mathbf{x}|\) and can be represented by a Young diagram, see [11, §4] for example. We can then write (3.13) as
\[\Psi(W,\xi_{\mathbf{s}};j)=\sum_{\begin{subarray}{c}\mathbf{x}\in T^{+}(m)\\ \mathbf{y}\in T^{+}(n)\end{subarray}}S_{(\mathbf{x},0,\mathbf{y}^{*})}( \alpha)S_{\mathbf{x}}(\beta^{1})S_{\mathbf{y}}(\beta^{2})T_{1}^{|\mathbf{x}|} T_{2}^{|\mathbf{y}|}. \tag{3.14}\]
On the other hand, we have
\[L(s_{1}+s_{2},\tau_{1}\times\widetilde{\tau}_{2})=\det(I-\beta^{1}\otimes \beta^{2}T_{1}T_{2})^{-1}=\sum_{e\geq 0}\mathrm{Tr}(\mathrm{Sym}^{e}(\beta^{1} \otimes\beta^{2}))(T_{1}T_{2})^{e}.\]
Thus we get that
\[L(s_{1}+s_{2},\tau_{1}\times\widetilde{\tau}_{2})\Psi(W,\xi_{ \mathbf{s}};j)=\sum_{\mathbf{x}\in T^{+}(m),\mathbf{y}\in T^{+}(n),e\geq 0} S_{(\mathbf{x},0,\mathbf{y}^{*})}(\alpha)S_{\mathbf{x}}(\beta^{1} )S_{\mathbf{y}}(\beta^{2})\] \[\cdot\mathrm{Tr}(\mathrm{Sym}^{e}(\beta^{1}\otimes\beta^{2}))T_{ 1}^{|\mathbf{x}|+e}T_{2}^{|\mathbf{y}|+e}. \tag{3.15}\]
Since
\[L(s_{1}+\frac{k-j}{2},\pi\times\tau_{1})=\sum_{c\geq 0}\mathrm{Tr}(\mathrm{Sym}^{c}(\alpha\otimes\beta^{1}))T_{1}^{c},\]
and
\[L(s_{2}+\frac{j-k}{2},\widetilde{\pi}\times\widetilde{\tau}_{2})=\sum_{d\geq 0 }\mathrm{Tr}(\mathrm{Sym}^{d}(\widetilde{\alpha}\otimes\beta^{2}))T_{2}^{d},\]
where \(\widetilde{\alpha}=\mathrm{diag}(\alpha_{1}^{-1},\ldots,\alpha_{l}^{-1})\) is the Satake parameter for \(\widetilde{\pi}\), we get that
\[L(s_{1}+\frac{k-j}{2},\pi\times\tau_{1})L(s_{2}+\frac{j-k}{2},\widetilde{\pi} \times\widetilde{\tau}_{2})=\sum_{c\geq 0,d\geq 0}\mathrm{Tr}(\mathrm{Sym}^{c}( \alpha\otimes\beta^{1}))\mathrm{Tr}(\mathrm{Sym}^{d}(\widetilde{\alpha} \otimes\beta^{2}))T_{1}^{c}T_{2}^{d}. \tag{3.16}\]
Comparing (3.15) and (3.16), in order to prove Proposition 3.8, it suffices to show
\[\mathrm{Tr}(\mathrm{Sym}^{c}(\alpha\otimes\beta^{1}))\mathrm{Tr }(\mathrm{Sym}^{d}(\widetilde{\alpha}\otimes\beta^{2}))=\sum_{e\geq 0}\sum_{ \begin{subarray}{c}\mathbf{x}\in T^{+}(m),\mathbf{y}\in T^{+}(n),e\geq 0\\ |\mathbf{x}|=c-e,|\mathbf{y}|=d-e\end{subarray}} S_{(\mathbf{x},0,\mathbf{y}^{*})}(\alpha)S_{\mathbf{x}}(\beta^{1})S_{ \mathbf{y}}(\beta^{2}) \tag{3.17}\] \[\cdot\mathrm{Tr}(\mathrm{Sym}^{e}(\beta^{1}\otimes\beta^{2})).\]
By [15, Proposition 2.4], we have
\[\mathrm{Tr}(\mathrm{Sym}^{e}(\beta^{1}\otimes\beta^{2}))=\sum_{\mathbf{z} \in T^{+}(n),|\mathbf{z}|=e}S_{(\mathbf{z},0_{m-n})}(\beta^{1})S_{\mathbf{z}} (\beta^{2}).\]
Here \(\mathbf{z}=(z_{1},\ldots,z_{n})\) can be identified with a partition of \(e=|\mathbf{z}|\) with at most \(n\)-parts (since \(m\geq n\) by our assumption) and \(S_{\mathbf{z}}\) (resp. \(S_{(\mathbf{z},0_{m-n})}\)) is the Schur polynomial defined by \(\mathbf{z}\) with \(n\) (resp. \(m\)) variables. Similarly,
\[\mathrm{Tr}(\mathrm{Sym}^{c}(\alpha\otimes\beta^{1}))=\sum_{\mathbf{u}\in T^{+}(m),|\mathbf{u}|=c}S_{(\mathbf{u},0_{l-m})}(\alpha)S_{\mathbf{u}}(\beta^{1}),\] \[\mathrm{Tr}(\mathrm{Sym}^{d}(\widetilde{\alpha}\otimes\beta^{2}))=\sum_{\mathbf{v}\in T^{+}(n),|\mathbf{v}|=d}S_{(\mathbf{v},0_{l-n})}(\widetilde{\alpha})S_{\mathbf{v}}(\beta^{2}).\]
A simple matrix calculation shows that
\[S_{(\mathbf{v},0_{l-n})}(\widetilde{\alpha})=S_{(0_{l-n},\mathbf{v}^{\ast})}( \alpha).\]
See also [11, Exercise 15.50] for a representation theoretic explanation of this formula. Thus the left hand side of (3.17) becomes
\[LHS=\sum_{\mathbf{u}\in T^{+}(m),|\mathbf{u}|=c}\sum_{\mathbf{v}\in T^{+}(n),| \mathbf{v}|=d}S_{(\mathbf{u},0_{l-m})}(\alpha)S_{(0_{l-n},\mathbf{v}^{\ast})} (\alpha)S_{\mathbf{u}}(\beta^{1})S_{\mathbf{v}}(\beta^{2}),\]
while the right side of (3.17) becomes
\[RHS=\sum_{\mathbf{z}\in T^{+}(n)}\sum_{\begin{subarray}{c}\mathbf{x}\in T^{+}( m),\mathbf{y}\in T^{+}(n),e\geq 0\\ |\mathbf{x}|=c-|\mathbf{z}|,|\mathbf{y}|=d-|\mathbf{z}|\end{subarray}}S_{( \mathbf{x},0,\mathbf{y}^{\ast})}(\alpha)S_{\mathbf{x}}(\beta^{1})S_{(\mathbf{ z},0)}(\beta^{1})S_{\mathbf{y}}(\beta^{2})S_{\mathbf{z}}(\beta^{2})\]
By the Littlewood-Richardson rule (see [11, (A.8)] or [10, §I.9]), we have
\[S_{\mathbf{x}}(\beta^{1})S_{\mathbf{z}}(\beta^{1}) =\sum_{\mathbf{u}\in T^{+}(m),|\mathbf{u}|=c}c^{\mathbf{u}}_{ \mathbf{x},\mathbf{z}}S_{\mathbf{u}}(\beta^{1}),\] \[S_{\mathbf{y}}(\beta^{2})S_{\mathbf{z}}(\beta^{2}) =\sum_{\mathbf{v}\in T^{+}(n),|\mathbf{v}|=d}c^{\mathbf{v}}_{ \mathbf{y},\mathbf{z}}S_{\mathbf{v}}(\beta^{2}),\]
where in the first equation, \(\mathbf{z}\) is identified with \((\mathbf{z},0_{m-n})\), a partition of \(e=|\mathbf{z}|\) with at most \(n\) parts, and \(c^{\mathbf{u}}_{\mathbf{x},\mathbf{z}},c^{\mathbf{v}}_{\mathbf{y},\mathbf{z}}\) are the Littlewood-Richardson coefficients as defined in [11, page 454] or [10, §I.9.2]. Thus
\[RHS=\sum_{\mathbf{u}\in T^{+}(m),|\mathbf{u}|=c}\sum_{\mathbf{v}\in T^{+}(n),|\mathbf{v}|=d}\sum_{\begin{subarray}{c}\mathbf{x}\in T^{+}(m),\mathbf{y}, \mathbf{z}\in T^{+}(n)\\ |\mathbf{x}|+|\mathbf{z}|=c,|\mathbf{y}|+|\mathbf{z}|=d\end{subarray}}c^{ \mathbf{u}}_{\mathbf{x},\mathbf{z}}c^{\mathbf{v}}_{\mathbf{y},\mathbf{z}}S_{( \mathbf{x},0,\mathbf{y}^{\ast})}(\alpha)S_{\mathbf{u}}(\beta^{1})S_{\mathbf{v} }(\beta^{2}).\]
Thus in order to prove (3.17) and hence Proposition 3.8, it suffices to prove that for any \(\mathbf{u}\in T^{+}(m),\mathbf{v}\in T^{+}(n),\) one has
\[S_{(\mathbf{u},0_{l-m})}(\alpha)S_{(0_{l-n},\mathbf{v}^{\ast})}(\alpha)=\sum_ {\begin{subarray}{c}\mathbf{x}\in T^{+}(m),\mathbf{y},\mathbf{z}\in T^{+}(n)\\ |\mathbf{x}|+|\mathbf{z}|=c,|\mathbf{y}|+|\mathbf{z}|=d\end{subarray}}c^{ \mathbf{u}}_{\mathbf{x},\mathbf{z}}c^{\mathbf{v}}_{\mathbf{y},\mathbf{z}}S_{( \mathbf{x},0,\mathbf{y}^{\ast})}(\alpha). \tag{3.18}\]
For \(\mathbf{v}=(v_{1},\ldots,v_{n})\in T^{+}(n),\) we write \(\widetilde{\mathbf{v}}=(v_{1},\ldots,v_{1},v_{1}-v_{n},\ldots,v_{1}-v_{2},0)\in T^{+}(l)\). Then \(S_{(0_{l-n},\mathbf{v}^{\ast})}(\alpha)=S_{\widetilde{\mathbf{v}}}(\alpha)D_{-v_{1}}(\alpha)\), where \(D_{-v_{1}}(\alpha)=\det^{-v_{1}}(\alpha)\) following the notation of [11, §15.5]. Thus using the Littlewood-Richardson rule again, we have
\[S_{(\mathbf{u},0_{l-m})}(\alpha)S_{(0_{l-n},\mathbf{v}^{\ast})}(\alpha)=D_{-v _{1}}(\alpha)\sum_{\lambda\in T^{+}(l),|\lambda|=|\widetilde{v}|+|\mathbf{u}|} c^{\lambda}_{\widetilde{\mathbf{v}},\mathbf{u}}S_{\lambda}(\alpha).\]
Write \(\lambda=(\lambda_{1},\ldots,\lambda_{l})\). By the definition of Littlewood-Richardson coefficients, if \(c^{\lambda}_{\widetilde{\mathbf{v}},\mathbf{u}}\neq 0,\) we must have \(\lambda_{m+1}=\cdots=\lambda_{l-n}=v_{1},\) which means that \(S_{\lambda}\cdot D_{-v_{1}}=S_{(\lambda_{1}-v_{1},\ldots,\lambda_{l}-v_{1})}\) must be of the form \(S_{(\mathbf{x},0_{l-m-n},\mathbf{y}^{\ast})}\) for \(\mathbf{x}\in T^{+}(m)\) and \(\mathbf{y}\in T^{+}(n)\). Thus we get
\[S_{(\mathbf{u},0_{l-m})}(\alpha)S_{(0_{l-n},\mathbf{v}^{\ast})}(\alpha)=\sum_ {\mathbf{x}\in T^{+}(m),\mathbf{y}\in T^{+}(n)}c^{\lambda}_{\widetilde{ \mathbf{v}},\mathbf{u}}S_{(\mathbf{x},0,\mathbf{y}^{\ast})},\]
where \(\lambda=(\lambda_{1},\ldots,\lambda_{l})=(\mathbf{x},0,\mathbf{y}^{\ast})+(v_{ 1},\ldots,v_{1}).\) Note that \(|\mathbf{u}|-|\mathbf{v}|=|\mathbf{x}|-|\mathbf{y}|\). Thus in order to prove (3.18), it suffices to show that for any fixed \(\mathbf{u},\mathbf{x}\in T^{+}(m)\) and \(\mathbf{v},\mathbf{y}\in T^{+}(n)\) with \(|\mathbf{u}|-|\mathbf{x}|=|\mathbf{v}|-|\mathbf{y}|\),
\[c^{\lambda}_{\widetilde{\mathbf{v}},\mathbf{u}}=\sum_{\mathbf{z}\in T^{+}(n)}c^{ \mathbf{u}}_{\mathbf{x},\mathbf{z}}c^{\mathbf{v}}_{\mathbf{y},\mathbf{z}}, \tag{3.19}\]
where \(\lambda=(\lambda_{1},\ldots,\lambda_{l})=(\mathbf{x},0,\mathbf{y}^{\ast})+(v_{1},\ldots,v_{1}).\) The formula (3.19) was proved by Professor T. Tao in a MathOverflow answer [10] using the hive model for Littlewood-Richardson coefficients introduced in [12]. A proof of (3.19) based on Tao's MathOverflow answer [10] will be reproduced in §3.4 after we introduce some necessary notations and tools.
**Remark 3.9**.: Here we give an example of (3.18). We take \(l=4,m=2,n=1\) and \(\mathbf{u}=(2,1),\mathbf{v}=(2)\). One can check that there are 3 choices of \(\mathbf{z}\), which are \(\mathbf{z}=(0),\mathbf{z}=(1),\mathbf{z}=(2)\), and correspondingly, there are 3 choices of \(\mathbf{y}\) given by \(\mathbf{y}=(2),\mathbf{y}=(1),\mathbf{y}=(0)\). When \(\mathbf{z}=(0)\), we must have \(\mathbf{x}=(2,1)\) and when \(\mathbf{z}=(2)\), we must have \(\mathbf{x}=(1)=(1,0)\). But when \(\mathbf{z}=(1)\), there are two choices of \(\mathbf{x}\), which are \(\mathbf{x}=(1,1)\) or \(\mathbf{x}=(2)=(2,0)\). One can check that in each case, \(c_{\mathbf{x},\mathbf{z}}^{\mathbf{u}}c_{\mathbf{y},\mathbf{z}}^{\mathbf{v}}=1\). Thus (3.18) becomes
\[S_{(2,1,0,0)}\cdot S_{(0,0,0,-2)}=S_{(2,1,0,-2)}+S_{(1,0,0,0)}+S_{(1,1,0,-1)}+S _{(2,0,0,-1)},\]
which can be checked directly using the Littlewood-Richardson rule by noting that \(S_{(0,0,0,-2)}=S_{(2,2,2,0)}\cdot D_{-2}\), where \(D_{-2}=\det^{-2}\).
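One can also check this identity numerically: the Python sketch below compares both sides at a random point, using an ad hoc helper `schur` that evaluates \(S_{\lambda}\) through the bialternant formula \(S_{\lambda}(x)=\det(x_{i}^{\lambda_{j}+n-j})/\det(x_{i}^{n-j})\), valid for weakly decreasing integer \(\lambda\) with possibly negative entries.

```python
import numpy as np

def schur(lam, x):
    # bialternant formula: S_lam(x_1,...,x_n) = det(x_i^(lam_j + n - j)) / det(x_i^(n - j));
    # negative parts of lam are allowed (the result is then a Laurent polynomial)
    n = len(x)
    num = np.array([[xi ** (lam[j] + n - 1 - j) for j in range(n)] for xi in x])
    den = np.array([[xi ** (n - 1 - j) for j in range(n)] for xi in x])
    return np.linalg.det(num) / np.linalg.det(den)

x = np.random.default_rng(0).uniform(0.5, 2.0, 4)   # random evaluation point

lhs = schur((2, 1, 0, 0), x) * schur((0, 0, 0, -2), x)
rhs = (schur((2, 1, 0, -2), x) + schur((1, 0, 0, 0), x)
       + schur((1, 1, 0, -1), x) + schur((2, 0, 0, -1), x))
print(np.isclose(lhs, rhs))   # expected: True
```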
### Proof of Tao's formula (3.19)
An integral _n-hive_ is an array of integers \(a_{ij}\) for \(0\leq i,j,i+j\leq n\) placed in the vertices of triangles of the following shape
which satisfies all of the following rhombus inequalities: for each rhombus of the following types
the sum of the two integers at the obtuse vertices must be greater than or equal to the sum of the two integers at the acute vertices.
**Theorem 3.10** (Knutson-Tao, [16]).: _Let \(\mathbf{x}=(x_{1},\ldots,x_{n}),\mathbf{y}=(y_{1},\ldots,y_{n}),\mathbf{z}=(z_ {1},\ldots,z_{n})\) be partitions with \(|\mathbf{z}|=|\mathbf{x}|+|\mathbf{y}|\), then \(c_{\mathbf{x},\mathbf{y}}^{\mathbf{z}}\) is the number of \(n\)-hives with boundary labels_
_Here the arrow and the number \(x_{i}\) (resp. \(y_{j},z_{k}\)) on the arrow indicate that the numbers increase by \(x_{i}\) (resp. \(y_{j},z_{k}\)) along the direction indicated by the arrow. One can normalize the above n-hive by assigning any integer to any fixed vertex._
We note that different normalizations give the same number of hives. The above theorem is proved in [16]. See also the appendix of [1] for a different proof given by W. Fulton.
Figure 1. hive
**Remark 3.11**.: We give a simple example which also appeared in [10]. We have \(c_{(2,1),(2,1)}^{(3,2,1)}=2\), which can be computed in the following way. There are exactly two 3-hives with boundary conditions given below,
\[\begin{array}{ccccccc}&&&3&&&\\ &&3&&5&&\\ &2&&x&&6&\\ 0&&3&&5&&6\end{array},\]
which are given by \(x=4,5\).
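The count can also be reproduced by brute force. The sketch below assumes one concrete coordinate convention for \(3\)-hives: vertices \((i,j)\) with \(i,j\geq 0\), \(i+j\leq 3\), bottom-left corner \((0,0)\), bottom-right corner \((3,0)\), top corner \((0,3)\), with boundary values given by the partial sums of \(\mathbf{z}=(3,2,1)\) (bottom edge), \(\mathbf{x}=(2,1,0)\) (left edge, from the bottom) and \(\mathbf{y}=(2,1,0)\) (right edge, from the top), which reproduces the numbers in the displayed array; the variable names and the search window are choices made for this example only.

```python
from itertools import product

n = 3
x = (2, 1, 0)   # left-edge increments, from the bottom-left corner upwards
y = (2, 1, 0)   # right-edge increments, from the top corner downwards
z = (3, 2, 1)   # bottom-edge increments, from the bottom-left corner rightwards

boundary = {}
for k in range(n + 1):
    boundary[(k, 0)] = sum(z[:k])               # bottom edge
    boundary[(0, k)] = sum(x[:k])                # left edge
    boundary[(k, n - k)] = sum(x) + sum(y[:k])   # right edge

interior = [(i, j) for i in range(1, n) for j in range(1, n) if i + j < n]

def is_hive(h):
    # check the three families of rhombus inequalities (obtuse sum >= acute sum)
    def ok(a, b, c, d):
        if any(p not in h for p in (a, b, c, d)):   # rhombus not fully inside the triangle
            return True
        return h[a] + h[b] >= h[c] + h[d]
    for i in range(n):
        for j in range(n):
            if not ok((i + 1, j), (i, j + 1), (i, j), (i + 1, j + 1)):
                return False
            if not ok((i + 1, j), (i + 1, j + 1), (i, j + 1), (i + 2, j)):
                return False
            if not ok((i, j + 1), (i + 1, j + 1), (i + 1, j), (i, j + 2)):
                return False
    return True

# the lone interior entry satisfies h(1,0)+h(2,1)-h(2,0) <= h(1,1) <= h(1,0)+h(0,1),
# so the window [0, sum(z)] below certainly contains every hive for these boundary values
count = 0
for vals in product(range(sum(z) + 1), repeat=len(interior)):
    h = dict(boundary)
    h.update(zip(interior, vals))
    count += is_hive(h)
print(count)   # expected: 2
```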
We temporarily call the following object an _anti-n-hive_: an array of integers placed at the vertices of a triangle of the same shape as in Figure 1 which satisfies the "reverse" rhombus inequalities: for each rhombus below
the sum of the two integers at the obtuse vertices must be less than or equal to the sum of the two integers at the acute vertices.
For any \(n\)-hive, if we switch the sign of the number at each vertex, we get an anti-\(n\)-hive. Note that this process changes the boundary conditions, which gives us the following direct corollary.
**Corollary 3.12**.: _Let \(\mathbf{x}=(x_{1},\ldots,x_{n}),\mathbf{y}=(y_{1},\ldots,y_{n}),\mathbf{z}=(z _{1},\ldots,z_{n})\) be partitions with \(|\mathbf{z}|=|\mathbf{x}|+|\mathbf{y}|\), then \(c_{\mathbf{x},\mathbf{y}}^{\mathbf{z}}\) is the number of anti-\(n\)-hives with boundary labels_
_Here the arrow and the number \(x_{i}\) (resp. \(y_{j},z_{k}\)) on the arrow indicate that the numbers increase by \(x_{i}\) (resp. \(y_{j},z_{k}\)) along the direction indicated by the arrow. One can normalize the above anti-\(n\)-hive by assigning any integer to any fixed vertex._
Now we can prove Tao's formula (3.19), which we restate below.
**Proposition 3.13**.: _Let \(l,m,n\) be non-negative integers with \(m+n+1\leq l\) and \(m\geq n\). Given \(\mathbf{x},\mathbf{u}\in T^{+}(m),\mathbf{y},\mathbf{v}\in T^{+}(n)\) with \(|\mathbf{u}|-|\mathbf{x}|=|\mathbf{v}|-|\mathbf{y}|\geq 0\), then_
\[c_{\widetilde{\mathbf{v}},\mathbf{u}}^{\lambda}=\sum_{\mathbf{z}\in T^{+}(n)}c_{\mathbf{x},\mathbf{z}}^{\mathbf{u}}c_{\mathbf{y},\mathbf{z}}^{\mathbf{v}}.\]
_Here \(\mathbf{u}=(u_{1},\ldots,u_{m}),\mathbf{x}=(x_{1},\ldots,x_{m}),\mathbf{y}=(y_{1},\ldots,y_{n}),\mathbf{v}=(v_{1},\ldots,v_{n})\), \(\mathbf{y}^{*}=(-y_{n},\ldots,-y_{2},-y_{1})\), \(\widetilde{\mathbf{v}}=(0_{l-n},\mathbf{v}^{*})+(v_{1},\ldots,v_{1})=(v_{1},\ldots,v_{1},v_{1}-v_{n},\ldots,v_{1}-v_{2},0)\in T^{+}(l)\), and \(\lambda=(\mathbf{x},0_{l-m-n},\mathbf{y}^{*})+(v_{1},\ldots,v_{1})\in T^{+}(l)\). Moreover, \(\mathbf{u}\) in \(c_{\widetilde{\mathbf{v}},\mathbf{u}}^{\lambda}\) is viewed as an element of \(T^{+}(l)\) in the obvious way, namely, \(\mathbf{u}=(\mathbf{u},0_{l-m})\)._
Proof.: By Theorem 3.10 and Corollary 3.12, one can see that \(c_{\tilde{\mathbf{v}},\mathbf{u}}^{\lambda}\) is the number of anti-\(l\)-hives with boundary conditions indicated below,
where \(v_{1}\) on the left side boundary and the bottom boundary means \((v_{1},\dots,v_{1})\in T^{+}(l)\). The two interior line segments are not important here. For each hive above, we assume that its vertex integers are given by \((a_{ij})_{0\leq i,j,i+j\leq l}\) placed as in Figure 1. Then \((a_{ij}-(i+j)v_{1})_{0\leq i,j,i+j\leq l}\) is also an anti-\(l\)-hive, which has the boundary conditions indicated in Figure 2 below. We also normalize the anti-\(l\)-hive so that the top vertex has value \(0\).
Thus \(c_{\tilde{\mathbf{v}},\mathbf{u}}^{\lambda}\) is the number of anti-\(l\)-hives with boundary conditions as in Figure 2. Using the reverse rhombus inequality, we can check that an anti-\(l\)-hive as above must vanish completely in the quadrilateral \(ABEF\) (including all of its sides) in Figure 3. Moreover, inside the trapezoid \(BCDE\), the values of the hive on each horizontal line are the same. In particular, this means that there exists a \(\mathbf{z}\in T^{+}(n)\) such that the boundary conditions on \(CB\) and \(DE\) are both given by \(\mathbf{z}^{*}\).
Figure 3.
Figure 2. boundary condition for anti-hives which represents \(c_{\tilde{\mathbf{v}},\mathbf{u}}^{\lambda}\)
Thus such a hive is uniquely determined by its values in the anti-hives \(BGC\) and \(FDH\), with the indicated boundary conditions as in Figure 3. Conversely, given anti-hives \(BGC\) and \(FDH\) with boundary conditions as in Figure 3, we get an anti-hive with the boundary condition as in Figure 2 using a reverse process. Finally, note that the number of anti-hives \(BGC\) is \(c^{\mathbf{v}}_{\mathbf{y},\mathbf{z}}\) and the number of anti-hives \(FDH\) is \(c^{\mathbf{u}}_{\mathbf{x},\mathbf{z}}\). Thus we get
\[c^{\lambda}_{\widetilde{\mathbf{v}},\mathbf{u}}=\sum_{\mathbf{z}\in T^{+}(n)}c^{\mathbf{u}}_{\mathbf{x},\mathbf{z}}c^{\mathbf{v}}_{\mathbf{y},\mathbf{z}}.\]
This concludes the proof.
## 4. A local converse theorem
In the rest of this paper, we assume that \(F\) is a non-archimedean local field. Let \(\mathcal{O}\) be the ring of integers of \(F\), \(\mathfrak{p}\) be the maximal ideal of \(\mathcal{O}\) and let \(\varpi\in\mathfrak{p}\) be a fixed uniformizer. The purpose of the rest of this paper is to prove the following
**Theorem 4.1**.: _Let \(l\) be a positive integer and let \(\pi_{1},\pi_{2}\) be two irreducible supercuspidal representations of \(\operatorname{GL}_{l}(F)\) with the same central character. If \(\Gamma(\mathbf{s},\pi_{1}\times(\tau_{1},\tau_{2}),\psi)=\Gamma(\mathbf{s}, \pi_{2}\times(\tau_{1},\tau_{2}),\psi)\) for all irreducible generic representations \(\tau_{1}\) (resp. \(\tau_{2}\)) of \(\operatorname{GL}_{m}(F)\) (resp. \(\operatorname{GL}_{n}(F)\)) with \(0\leq n\leq[l/2],0\leq m\leq[l/2]\), then \(\pi_{1}\cong\pi_{2}\)._
**Remark 4.2**.: If \(l=2r\) is even and \(m=n=r\), we have not defined the gamma factor \(\Gamma(\mathbf{s},\pi\times(\tau_{1},\tau_{2}),\psi)\) yet, because our local zeta integral (3.1) and hence our local gamma factor defined from that in Proposition 3.5 require \(m+n<l\). In the case \(l=2r\), \(m=n=r\), the corresponding local gamma factor used in Theorem 4.1 is the one defined from the local zeta integral of the unitary group \(\operatorname{U}_{E/F}(2r)\times\operatorname{Res}_{E/F}(\operatorname{GL}_{r})\) at a split place, see [1] and [13]. Actually, the properties of this gamma factor are well studied. In particular, it has been shown that it is the product of Jacquet-Piatetski-Shapiro-Shalika local gamma factors after normalization, see [13]. We will review its definition in §4.1.
**Remark 4.3**.: Note that if \(m=n=0\), then condition \(\Gamma(\mathbf{s},\pi_{1}\times(\tau_{1},\tau_{2}),\psi)=\Gamma(\mathbf{s}, \pi_{2}\times(\tau_{1},\tau_{2}),\psi)\) is empty. If \(m>0\) and \(n=0\), the corresponding gamma factor \(\Gamma(\mathbf{s},\pi_{1}\times(\tau_{1},\tau_{2}),\psi)\) is exactly a Jacquet-Piatetski-Shapiro-Shalika local gamma factor up to a shift, see Remark 3.6.
Here we recall Jacquet's local converse conjecture.
**Conjecture 4.4**.: _Let \(\pi_{1},\pi_{2}\) be two irreducible generic representations of \(\operatorname{GL}_{l}(F)\) with the same central character. If \(\gamma^{\operatorname{JPSS}}(s,\pi_{1}\times\tau,\psi)=\gamma^{\operatorname{ JPSS}}(s,\pi_{2}\times\tau,\psi)\) for any irreducible generic representation \(\tau\) of \(\operatorname{GL}_{m}(F)\) with \(1\leq m\leq[l/2]\), then \(\pi_{1}\cong\pi_{2}\)._
One can assume that \(\pi_{1},\pi_{2}\) are supercuspidal and remove the central character restriction after the work of [10]. The above conjecture was proved in [11] and [12] independently. In the next remark, we will explain that our Theorem 4.1 indeed gives a new proof of Conjecture 4.4 modulo a standard fact on our gamma factors which will be given in a sequel paper [11].
**Remark 4.5**.: We denote by \(\mathcal{C}(0)\) the condition that \(\pi_{1},\pi_{2}\) have the same central character, which is always assumed, and for \(t\geq 1\), we denote \(\mathcal{C}(t):=\mathcal{C}(t;\pi_{1},\pi_{2})\) the following condition for \(\pi_{1},\pi_{2}:\)
\[\Gamma(\mathbf{s},\pi_{1}\times(\tau_{1},\tau_{2}),\psi)=\Gamma(\mathbf{s}, \pi_{2}\times(\tau_{1},\tau_{2}),\psi),\]
_for any irreducible generic representation \(\tau_{1}\) (resp. \(\tau_{2}\)) of \(\operatorname{GL}_{m}(F)\) (resp. \(\operatorname{GL}_{n}(F)\)) with \(0\leq m,n\leq t\)._ To compare our result with Jacquet's local converse conjecture, we also denote by \(\mathcal{C}^{\prime}(t):=\mathcal{C}^{\prime}(t;\pi_{1},\pi_{2})\) the condition: \(\gamma^{\operatorname{JPSS}}(s,\pi_{1}\times\tau,\psi)=\gamma^{\operatorname{ JPSS}}(s,\pi_{2}\times\tau,\psi)\) for any irreducible generic representation \(\tau\) of \(\operatorname{GL}_{m}(F)\) with \(1\leq m\leq t\). The condition \(\mathcal{C}(t)\) is stronger than \(\mathcal{C}^{\prime}(t)\) by Remark 3.6 and Remark 4.3. Thus it seems that the result of Theorem 4.1 is weaker than the Jacquet's local converse conjecture as proved in [11, 12]. However, in a sequel paper [11], we will show that \(\Gamma(\mathbf{s},\pi\times(\tau_{1},\tau_{2}),\psi)\) is the product \(\gamma^{\operatorname{JPSS}}(s_{1}+\frac{l-m-n-1}{2},\pi\times\tau_{1},\psi) \gamma^{\operatorname{JPSS}}(s_{2}-\frac{l-m-n-1}{2},\widetilde{\pi}\times \widetilde{\tau}_{2},\psi)\) up to a normalizing factor which only depends on \(\tau_{1},\tau_{2}\). Note that \(\gamma^{\operatorname{JPSS}}(1-s,\widetilde{\pi}\times\widetilde{\tau},\psi) \gamma^{\operatorname{JPSS}}(s,\pi\times\tau,\psi)=1\). Thus the condition \(\mathcal{C}(t)\) is in fact equivalent to \(\mathcal{C}^{\prime}(t)\). So our proof of Theorem 4.1 gives a new proof of Jacquet's local converse conjecture.
The proof of Theorem 4.1 will be given in the next section. In the rest of this section, we introduce some necessary tools which will be used in the proof of Theorem 4.1.
### On the gamma factors for \(\operatorname{GL}_{2r}\times(\operatorname{GL}_{r},\operatorname{GL}_{r})\)
Recall that if \(m+n\leq l-1\), for generic representation \(\pi\) of \(\operatorname{GL}_{l}(F)\), \(\tau_{1}\) (resp. \(\tau_{2}\)) of \(\operatorname{GL}_{m}(F)\) (resp. \(\operatorname{GL}_{n}(F)\)), our local gamma factor \(\Gamma(\mathbf{s},\pi\times(\tau_{1},\tau_{2}),\psi)\) is defined by the local functional equation
\[\Psi(W,M_{w_{m,n}}(\xi_{\mathbf{s}});0)=\Gamma(\mathbf{s},\pi\times(\tau_{1}, \tau_{2}),\psi)\Psi(W,\xi_{\mathbf{s}};0),\]
for all \(W\in\mathcal{W}(\pi,\psi)\) and \(\xi_{\mathbf{s}}\in\mathcal{W}(\mathbf{s},(\tau_{1},\tau_{2}),\psi^{-1}).\) See Proposition 3.5. For \(W\in\mathcal{W}(\pi,\psi)\), the function \(\rho(\gamma_{m,n}^{-1})W\) is again an element of \(\mathcal{W}(\pi,\psi)\). Thus we have
\[\Psi(\rho(\gamma_{m,n}^{-1})W,M_{w_{m,n}}(\xi_{\mathbf{s}});0)=\Gamma(\mathbf{ s},\pi\times(\tau_{1},\tau_{2}),\psi)\Psi(\rho(\gamma_{m,n}^{-1})W,\xi_{ \mathbf{s}};0), \tag{4.1}\]
for all \(\xi_{\mathbf{s}}\in\mathcal{W}(\mathbf{s},(\tau_{1},\tau_{2}),\psi^{-1})\). Here \(\rho\) denotes the right translation and \(\gamma_{m,n}\) is the element in \(\operatorname{GL}_{l}\) as defined after (3.1). The local functional equation (4.1) is the one we will use to prove our local converse theorem.
As explained in Remark 4.2, we also need the local gamma factor \(\Gamma(\mathbf{s},\pi\times(\tau_{1},\tau_{2}),\psi)\) when \(l=2r\) and \(m=n=r\), which is not covered in the previous sections. This local gamma factor has been defined in [1] and studied in [14]. We recall the definition now.
We first endow \(F^{2r}\oplus F^{2r}\) with a symplectic structure \(\langle\,,\,\rangle\) defined by
\[\langle(u_{1},u_{2}),(v_{1},v_{2})\rangle=2(u_{1}J_{2r}v_{2}^{t}-v_{1}J_{2r}u_{2}^{t}),\]
where \(u_{i},v_{i}\in F^{2r}\) are viewed as row vectors. For a nontrivial additive character \(\psi\) of \(F\) and for a character \(\mu\) of \(F^{\times}\), we can consider the Weil representation \(\omega_{\psi^{-1},\mu,\mu^{-1}}\) of \(\operatorname{GL}_{2r}(F)\), see [14, §2.2]. Note that we use a slightly different normalization. The Weil representation \(\omega_{\psi^{-1},\mu,\mu^{-1}}\) can be realized on the space \(\mathcal{S}(F^{r}\times F^{r})\), the Bruhat-Schwartz functions on \(F^{2r}\). This is the Schrödinger model of the Weil representation. For example, we have the well-known formula
\[\left(\omega_{\psi^{-1},\mu,\mu^{-1}}\left(\begin{pmatrix}I_{r}&X\\ &I_{r}\end{pmatrix}\right)\phi\right)(x,y)=\psi(xXJ_{r}y^{t})\phi(x,y),X\in \operatorname{Mat}_{r\times r}(F).\]
In the following, we assume that \(\mu\) is understood and omit it from the notation.
Now let \(\pi\) be an irreducible generic representation of \(\operatorname{GL}_{2r}(F)\), \((\tau_{1},\tau_{2})\) be a pair of irreducible generic representations of \(\operatorname{GL}_{r}(F)\) and \(\mathbf{s}=(s_{1},s_{2})\) be a pair of complex numbers. For \(W\in\mathcal{W}(\pi,\psi),\xi_{\mathbf{s}}\in\mathcal{W}(\mathbf{s},(\tau_{1},\tau_{2}),\psi^{-1})\), and \(\phi\in\mathcal{S}(F^{2r})\), we consider the local zeta integral
\[\Psi(W,\xi_{\mathbf{s}},\phi)=\int_{N_{2r}(F)\backslash\operatorname{GL}_{2r}(F)}W(g)\xi_{\mathbf{s}}(g)(\omega_{\psi^{-1}}(g)\phi)(e_{r},e_{r})dg,\]
where \(e_{r}\in F^{r}\) is the vector \((0,0,\dots,0,1)\). There exists a meromorphic function \(\Gamma(\mathbf{s},\pi\times(\tau_{1},\tau_{2}),\mu,\psi)\) such that
\[\Psi(W,M_{w_{r,r}}(\xi_{\mathbf{s}}),\phi)=\Gamma(\mathbf{s},\pi\times(\tau_{1},\tau_{2}),\mu,\psi)\Psi(W,\xi_{\mathbf{s}},\phi) \tag{4.2}\]
for any \(W\in\mathcal{W}(\pi,\psi),\xi_{\mathbf{s}}\in\mathcal{W}(\mathbf{s},(\tau_{1},\tau_{2}),\psi^{-1})\) and \(\phi\in\mathcal{S}(F^{r}\times F^{r}).\) Note that in [1] and [14] there is only a single complex variable involved in the local zeta integral and the local gamma factor; here we still use the two-variable case.
### Howe vectors
Our strategy of the proof of Theorem 4.1 is along the lines of that given in [11] and [11]. One basic tool for us is the theory of partial Bessel functions associated with Howe vectors, as developed in [1]. Here we recall the basic construction. Let \(\psi\) be a fixed unramified additive character of \(F\) and we also view \(\psi\) as a character of the maximal unipotent subgroup \(N_{l}\subset\operatorname{GL}_{l}(F)\) in the usual way. For an integer \(i\geq 0\), we consider the open compact subgroup \(K^{i}_{\operatorname{GL}_{l}}:=I_{l}+\operatorname{Mat}_{l\times l}(\mathfrak{p}^{i})\) of \(\operatorname{GL}_{l}(F)\). Consider the character \(\theta_{i}\) of \(K^{i}_{\operatorname{GL}_{l}}\) defined by
\[\theta_{i}(k)=\psi(\varpi^{-2i}(\sum_{s=1}^{l-1}k_{s,s+1})),\quad k=(k_{st})_{ 1\leq s,t\leq l}\in K^{i}_{\operatorname{GL}_{l}}.\]
One can check that \(\theta_{i}\) is indeed a character of \(K^{i}_{\operatorname{GL}_{l}}\). Consider the element
\[d_{i}=\operatorname{diag}(\varpi^{-i(l-1)},\varpi^{-i(l-3)},\dots,\varpi^{i(l- 3)},\varpi^{i(l-1)}),\]
and \(H^{i}_{l}=d_{i}K^{i}d_{i}^{-1}\), which is still an open compact subgroup of \(\operatorname{GL}_{l}(F)\). One sees that \(H^{i}_{l}\) has the form
\[H^{i}_{l}=\begin{pmatrix}1+\mathfrak{p}^{i}&\mathfrak{p}^{-i}&\mathfrak{p}^{-3i} &\dots\\ \mathfrak{p}^{3i}&1+\mathfrak{p}^{i}&\mathfrak{p}^{-i}&\dots\\ \mathfrak{p}^{5i}&\mathfrak{p}^{3i}&1+\mathfrak{p}^{i}&\dots\\ \dots&\dots&\dots&\dots\end{pmatrix}.\]
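This shape is just the bookkeeping of the conjugation: the \((s,t)\) entry of \(k\in K^{i}_{\operatorname{GL}_{l}}\) is multiplied by \(\varpi^{2i(s-t)}\), so an off-diagonal entry (which lies in \(\mathfrak{p}^{i}\)) lands in \(\mathfrak{p}^{(1+2(s-t))i}\), while the diagonal stays in \(1+\mathfrak{p}^{i}\). A tiny Python sketch of this pattern for \(l=4\) (each integer \(c\) marks a slot lying in \(\mathfrak{p}^{ci}\)):

```python
l = 4
for s in range(1, l + 1):
    # None marks a diagonal slot (1 + p^i); an integer c marks a slot lying in p^{c*i}
    print([None if s == t else 1 + 2 * (s - t) for t in range(1, l + 1)])
# [None, -1, -3, -5]
# [3, None, -1, -3]
# [5, 3, None, -1]
# [7, 5, 3, None]
```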
We consider the character \(\psi_{i}\) of \(H^{i}_{l}\) defined by
\[\psi_{i}(h):=\theta_{i}(d_{i}^{-1}hd_{i}),\quad h\in H^{i}_{l}.\]
For a subgroup \(U\subset\operatorname{GL}_{l}(F)\), we denote \(U^{i}:=U\cap H^{i}_{l}\). For example, \(N^{i}_{l}\) denotes \(N_{l}\cap H^{i}_{l}\). We also usually drop \(l\) from the notation if \(l\) is understood. It is easy to see that \(\psi_{i}|_{N^{i}_{l}}=\psi|_{N^{i}_{l}}\).
Let \((\pi,V)\) be an irreducible generic representation of \(\operatorname{GL}_{l}(F)\) and for \(v\in V\), we consider
\[v_{i}=\frac{1}{\operatorname{vol}(N^{i}_{l})}\int_{N^{i}_{l}}\psi_{i}^{-1}(u) \pi(u)vdu.\]
If \(W\in\mathcal{W}(\pi,\psi)\) is the Whittaker function associated with \(v\), then we denote \(W_{i}=W_{v_{i}}\). Note that
\[W_{i}(u_{1}gu_{2})=\psi(u_{1})\psi_{i}(u_{2})W_{i}(g),\quad\forall g\in \operatorname{GL}_{l}(F),u_{1}\in N_{l},u_{2}\in N^{i}_{l}.\]
In fact, there exists a positive integer \(C(v)>0\) such that \(W_{i}\) satisfies the additional quasi-invariance property
\[W_{i}(ugh)=\psi(u)\psi_{i}(h)W_{i}(g) \tag{4.3}\]
for all \(u\in N_{l},g\in\operatorname{GL}_{l}(F),h\in H^{i}_{l}\) if \(i>C(v)\), see [1, Lemma 3.2]. According to the proof of [1, Lemma 3.2], one can take \(C(v)\) to be an integer such that \(v\) is fixed by \(\pi(K^{C(v)}_{\operatorname{GL}_{l}})\).
Let \(\omega\) be a character of \(F^{\times}\) and we consider the space \(C^{\infty}_{c}(\operatorname{GL}_{l}(F),\omega)\) consisting of smooth functions \(f\) on \(\operatorname{GL}_{l}(F)\) such that \(f\) is compactly supported modulo \(Z_{l}\), the center of \(\operatorname{GL}_{l}(F)\), and \(f(zg)=\omega(z)f(g)\) for all \(z\in Z_{l}\). If \(\pi\) is supercuspidal, let \(\mathcal{M}(\pi)\) be the space of matrix coefficients of \(\pi\). Then \(\mathcal{M}(\pi)\subset C^{\infty}_{c}(\operatorname{GL}_{l}(F),\omega_{\pi})\). For \(f\in\mathcal{M}(\pi)\), following [1, page 2089], we consider the function
\[W^{f}(g)=\int_{N_{l}}\psi^{-1}(u)f(ug)du.\]
Note that the integral is convergent by assumption and defines an element in \(\mathcal{W}(\pi,\psi)\). Moreover, for an appropriate choice of \(f\), we can assume that \(W^{f}(I_{l})=1\). See [1, page 2089-2090]. Thus we can consider \(W^{f}_{i}\). We also use the notation
\[\mathcal{B}_{i}(g,f)=W^{f}_{i}(g),\quad g\in\operatorname{GL}_{l}(F).\]
### Weyl elements which support Bessel functions
Let \(\Delta=\Delta(\operatorname{GL}_{l})\) be the set of simple roots of \(\operatorname{GL}_{l}(F)\). Then \(\Delta=\{\alpha_{k}:1\leq k\leq l-1\}\), where
\[\alpha_{k}(\operatorname{diag}(t_{1},\dots,t_{l}))=t_{k}/t_{k+1},\quad \operatorname{diag}(t_{1},\dots,t_{l})\in T_{l}(F).\]
Let \(\mathbf{W}=\mathbf{W}(\operatorname{GL}_{l})\) be the Weyl group of \(\operatorname{GL}_{l}(F)\). We sometimes identify elements of \(\mathbf{W}\) with permutation matrices in \(\operatorname{GL}_{l}(F)\). Denote by \(e\) the identity element in \(\mathbf{W}\), which is represented by \(I_{l}\in\operatorname{GL}_{l}(F)\). For \(w\in\mathbf{W}\), denote \(C(w)=BwB\), where \(B=B_{l}\) is the upper triangular subgroup of \(\operatorname{GL}_{l}(F)\). There is a Bruhat order on \(\mathbf{W}\), which is recalled as follows. For \(w_{1},w_{2}\in\mathbf{W}\), we have \(w_{1}\leq w_{2}\) (or \(w_{2}\geq w_{1}\)) if and only if \(C(w_{1})\subset\overline{C(w_{2})}\). For \(w\in\mathbf{W}\), we denote \(\Omega_{w}=\coprod_{w^{\prime}\geq w}C(w^{\prime})\). Then \(C(w)\) is closed in \(\Omega_{w}\) and \(\Omega_{w}\) is open in \(G\).
Let \(\operatorname{B}(\operatorname{GL}_{l})=\{w\in\mathbf{W}(\operatorname{GL}_{l}): \alpha\in\Delta,w\alpha>0\implies w\alpha\in\Delta\}\), which is the set of Weyl elements that can support partial Bessel functions.
Let \(w_{0}=J_{l}\in\operatorname{GL}_{l}(F)\), which represents the longest Weyl element of \(\operatorname{GL}_{l}(F)\). It is well-known that \(w\in\operatorname{B}(G)\) if and only if \(w_{0}w\) is the longest Weyl element of the Levi subgroup of a standard parabolic subgroup of \(\operatorname{GL}_{l}(F)\). For \(w\in\operatorname{B}(G)\), let \(P_{w}=M_{w}N_{w}\) be the corresponding parabolic subgroup such that \(w_{0}w=w_{0}^{M_{w}}\), where \(M_{w}\) is the Levi subgroup of \(P_{w}\) and \(w_{0}^{M_{w}}\) is the longest Weyl element of \(M_{w}\). Let \(\theta_{w}\) be the subset of \(\Delta\) which consists of all simple roots in \(M_{w}\). Then we have the relation
\[\theta_{w}=\{\alpha\in\Delta|w\alpha>0\}\subset\Delta.\]
The assignment \(w\mapsto\theta_{w}\) is a bijection between \(\mathrm{B}(G)\) and subsets of \(\Delta\). Moreover, it is known that the assignment \(w\mapsto\theta_{w}\) is order-reversing, i.e., \(w^{\prime}\leq w\) if and only if \(\theta_{w}\subset\theta_{w^{\prime}}\), see [2, Proposition 2.11]. For example, we have \(\theta_{w_{0}}=\emptyset\) and \(\theta_{e}=\Delta\).
Given a subset \(\theta\subset\Delta\), we will write the corresponding Weyl element in \(\mathrm{B}(\mathrm{GL}_{l})\) by \(w_{\theta}\). For an integer \(k\) with \(1\leq k\leq l-1\), denote
\[\overline{w}_{k}=\begin{pmatrix}&I_{l-k}\\ I_{k}&\end{pmatrix}.\]
**Lemma 4.6**.: _For every \(k\) with \(1\leq k\leq l-1\), we have \(\overline{w}_{k}=w_{\Delta-\{\alpha_{k}\}}\)._
Proof.: We have
\[w_{0}\overline{w}_{k}=\begin{pmatrix}J_{k}&\\ &J_{l-k}\end{pmatrix},\]
which is the longest Weyl element of the Levi subgroup
\[M_{\overline{w}_{k}}=\left\{\begin{pmatrix}a&\\ &b\end{pmatrix}:a\in\mathrm{GL}_{k}(F),b\in\mathrm{GL}_{l-k}(F)\right\}.\]
The set of simple roots in \(M_{\overline{w}_{k}}\) is \(\Delta-\{\alpha_{k}\}\). Thus we have \(\overline{w}_{k}\in\mathrm{B}(\mathrm{GL}_{l})\) and \(\theta_{\overline{w}_{k}}=\Delta-\{\alpha_{k}\}\).
Denote
\[\widetilde{w}_{n,m}=\begin{pmatrix}&&I_{n}\\ &I_{l-m-n}&\\ I_{m}&&\end{pmatrix}.\]
**Lemma 4.7**.: _For positive integers \(m,n\) with \(1\leq m+n\leq l-1\), we have \(\theta_{\widetilde{w}_{n,m}}=\Delta-\{\alpha_{m},\alpha_{l-n}\}\)._
Proof.: We have
\[w_{0}\widetilde{w}_{n,m}=\begin{pmatrix}J_{m}&&\\ &J_{l-m-n}&\\ &&J_{n}\end{pmatrix},\]
which is the longest Weyl element in the Levi subgroup
\[M_{\widetilde{w}_{n,m}}=\left\{\begin{pmatrix}a&&\\ &b&\\ &&c\end{pmatrix}:a\in\mathrm{GL}_{m},b\in\mathrm{GL}_{l-m-n},c\in\mathrm{GL}_{n}\right\}.\]
Thus \(\theta_{\widetilde{w}_{n,m}}=\Delta-\{\alpha_{m},\alpha_{l-n}\}\).
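Both lemmas can be checked mechanically from the permutation matrices: writing \(we_{j}=e_{\sigma(j)}\), one has \(w\alpha_{i}=e_{\sigma(i)}-e_{\sigma(i+1)}\), so \(\alpha_{i}\in\theta_{w}\) exactly when \(\sigma(i)<\sigma(i+1)\), and for \(w\in\mathrm{B}(\mathrm{GL}_{l})\) this forces \(\sigma(i+1)=\sigma(i)+1\). A small Python sketch for \(l=5\), \(k=2\) and \((n,m)=(1,2)\); the helper name `theta` is an ad hoc illustration.

```python
import numpy as np

def theta(w):
    # theta_w = { alpha_i : w(alpha_i) > 0 }; the assert checks that each such
    # w(alpha_i) is simple, i.e. that w lies in B(GL_l).  Here w e_j = e_{sigma(j)}.
    l = w.shape[0]
    sigma = [int(np.argmax(w[:, j])) + 1 for j in range(l)]
    positive = [i for i in range(1, l) if sigma[i - 1] < sigma[i]]
    assert all(sigma[i] == sigma[i - 1] + 1 for i in positive)
    return set(positive)

l, k = 5, 2
wbar_k = np.block([[np.zeros((l - k, k)), np.eye(l - k)],
                   [np.eye(k), np.zeros((k, l - k))]])
print(theta(wbar_k))   # {1, 3, 4} = Delta - {alpha_2}, as in Lemma 4.6

n, m = 1, 2
wt = np.block([[np.zeros((n, m)), np.zeros((n, l - m - n)), np.eye(n)],
               [np.zeros((l - m - n, m)), np.eye(l - m - n), np.zeros((l - m - n, n))],
               [np.eye(m), np.zeros((m, l - m - n)), np.zeros((m, n))]])
print(theta(wt))       # {1, 3} = Delta - {alpha_m, alpha_{l-n}}, as in Lemma 4.7
```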
Given \(w,w^{\prime}\in\mathrm{B}(\mathrm{GL}_{l})\) with \(w>w^{\prime}\), define (following Jacquet [1])
\[d_{B}(w,w^{\prime})=\max\left\{m|\text{ there exist }w^{\prime}_{i}\in B(G) \text{ with }w=w^{\prime}_{m}>w^{\prime}_{m-1}>\cdots>w^{\prime}_{0}=w^{\prime}\right\}.\]
The number \(d_{B}(w,w^{\prime})\) is called the Bessel distance of \(w,w^{\prime}\). By [2, Proposition 2.1] and Lemma 4.6, the set of elements in \(\mathrm{B}(G)\) which have Bessel distance \(1\) from the element \(e\in\mathrm{B}(G)\) is \(\{\overline{w}_{k},1\leq k\leq l-1\}\), i.e.,
\[\{w|d_{B}(w,e)=1\}=\{\overline{w}_{k}|1\leq k\leq l-1\}\,. \tag{4.4}\]
For \(w,w^{\prime}\in\mathbf{W}\) with \(w<w^{\prime}\), we denote by \([w,w^{\prime}]\) the closed Bruhat interval \(\{w^{\prime\prime}\in\mathbf{W}(\mathrm{GL}_{l})|w\leq w^{\prime\prime}\leq w ^{\prime}\}\).
### Cogdell-Shahidi-Tsai's theory on partial Bessel functions
In this subsection, we review certain basic properties of partial Bessel functions developed by Cogdell-Shahidi-Tsai recently in [17].
For \(w\in\mathrm{B}(\mathrm{GL}_{l})\), we denote
\[A_{w}=\left\{a\in T_{l}(F)|\alpha(a)=1\text{ for all }\alpha\in\theta_{w}\right\}. \tag{4.5}\]
The set \(A_{w}\) is in fact the center of \(M_{w}\).
**Theorem 4.8** (Cogdell-Shahidi-Tsai).: _Let \(\omega\) be a character of \(F^{\times}\)._
1. _Let_ \(w\in\mathbf{W}\)_,_ \(m>0\) _and_ \(f\in C_{c}^{\infty}(\Omega_{w},\omega)\)_. Suppose_ \(\mathcal{B}_{i}(wa,f)=0\) _for all_ \(a\in A_{w}\)_. Then there exists_ \(f_{0}\in C_{c}^{\infty}(\Omega_{w}-C(w),\omega)\)_, such that for sufficiently large_ \(i\) _depending only on_ \(f\)_, we have_ \(\mathcal{B}_{i}(g,f)=\mathcal{B}_{i}(g,f_{0})\) _for all_ \(g\in\mathrm{GL}_{l}(F)\)
2. _Let_ \(w\in\mathrm{B}(\mathrm{GL}_{l})\)_. Let_ \(\Omega_{w,0}\) _and_ \(\Omega_{w,1}\) _be_ \(N_{l}\times N_{l}\) _and_ \(T_{l}\)_-invariant open sets of_ \(\Omega_{w}\) _such that_ \(\Omega_{w,0}\subset\Omega_{w,1}\) _and_ \(\Omega_{w,1}-\Omega_{w,0}\) _is a union of Bruhat cells_ \(C(w^{\prime})\) _such that_ \(w^{\prime}\) _does not support a Bessel function, i.e.,_ \(w^{\prime}\notin\mathrm{B}(\mathrm{GL}_{l})\)_. Then for any_ \(f_{1}\in C_{c}^{\infty}(\Omega_{w,1},\omega)\) _there exists_ \(f_{0}\in C_{c}^{\infty}(\Omega_{w,0},\omega)\) _such that for all sufficiently large_ \(i\) _depending only on_ \(f_{1}\)_, we have_ \(\mathcal{B}_{i}(g,f_{0})=\mathcal{B}_{i}(g,f_{1})\)_, for all_ \(g\in\mathrm{GL}_{l}(F)\)_._
Proof.: Part (1) is [12, Lemma 5.13] and part (2) is [12, Lemma 5.14].
**Corollary 4.9**.: _Let \(f_{1},f_{2}\in C_{c}^{\infty}(\mathrm{GL}_{l}(F),\omega)\) with \(W^{f_{1}}(I_{l})=W^{f_{2}}(I_{l})=1\). Then there exist functions \(f_{\overline{w}_{k}}\in C_{c}^{\infty}(\Omega_{\overline{w}_{k}},\omega)\) for all \(k\) with \(1\leq k\leq l-1\) such that for sufficiently large \(i\) (depending only on \(f_{1},f_{2}\)) we have_
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{k=1}^{l-1}\mathcal{B}_ {i}(g,f_{\overline{w}_{k}}),\quad\forall g\in G.\]
This is essentially [12, Proposition 5.3], see [12, page 2115] for a similar identity. Almost identical proofs in similar situations are given in [16, Corollary 4.7] and [16, Corollary 2.7]. We omit the proof here and just remark that each term in the expansion on the right side comes from the Weyl elements which have Bessel distance \(1\) from the trivial Weyl element \(e\in\mathbf{W}(\mathrm{GL}_{l})\), namely the elements in the set (4.4).
### Construction of certain sections of induced representations
Let \(m,n\) be two positive integers and \(\tau_{1}\) (resp. \(\tau_{2}\)) be an irreducible generic representation of \(\mathrm{GL}_{m}(F)\) (resp. \(\mathrm{GL}_{n}(F)\)) and let \(\mathbf{s}=(s_{1},s_{2})\). Consider
\[N_{m,n}=\left\{u_{m,n}(x)=\begin{pmatrix}I_{m}&x\\ &I_{n}\end{pmatrix},x\in\mathrm{Mat}_{m\times n}\right\},\overline{N}_{m,n}= \left\{\overline{u}_{m,n}(x):=\begin{pmatrix}I_{m}&\\ x&I_{n}\end{pmatrix},x\in\mathrm{Mat}_{n\times m}\right\},\]
and
\[\overline{N}_{m,n}^{k}=\left\{\overline{u}_{m,n}(x)\left|\begin{pmatrix}I_{m}& \\ x&I_{l-m-n}&\\ &I_{n}\end{pmatrix}\in H_{l}^{k}\right.\right\}.\]
Here we identify \(N_{m,n}\) etc. with its \(F\)-rational points and recall that \(H_{l}^{k}\) is defined in Section 4.2.
Let \(D\) be a compact open subset of \(N_{m,n}\). For \(x\in D\) and a positive integer \(k\), we consider the set
\[S(x,k)=\left\{\overline{y}\in\overline{N}_{m,n}:\overline{y}x\in P_{m,n} \overline{N}_{m,n}^{k}\right\}.\]
**Lemma 4.10**.:
1. _For any positive integer_ \(c\)_, there exists a positive integer_ \(k_{1}=k_{1}(D,c)\) _such that for all_ \(k\geq k_{1},x\in D,\overline{y}\in S(x,k)\)_, we can write_ \[\overline{y}x=u_{m,n}(x_{1})\mathrm{diag}(a,b)\overline{u}_{m,n}(y_{1}),\] _with_ \(a\in K_{\mathrm{GL}_{m}}^{c},b\in K_{\mathrm{GL}_{n}}^{c}\)_. Here_ \(u_{m,n}(x_{1})\in N_{m,n},\overline{u}_{m,n}(y_{1})\in\overline{N}_{m,n}^{k}\)_. We recall that_ \(K_{\mathrm{GL}_{m}}^{c}=I_{m}+\mathrm{Mat}_{m\times m}(\mathfrak{p}^{c})\)_._
2. _There exists an integer_ \(k_{2}=k_{2}(D)\) _such that_ \(S(x,k)=\overline{N}_{m,n}^{k}\) _for all_ \(x\in D\) _and_ \(k\geq k_{2}\)_._
Proof.: This is an analogue of [16, Lemma 4.1], [16, Lemma 5.1] and the proof is also similar. We provide a sketch below. For \(x\in D\) and \(\overline{y}\in S(x,k)\), we assume that \(\overline{y}x=u_{m,n}(x_{1})\mathrm{diag}(a,b)\overline{u}_{m,n}(y_{1})\) for some \(a\in\mathrm{GL}_{m}(F),b\in\mathrm{GL}_{n}(F),x_{1}\in\mathrm{Mat}_{m\times n },y_{1}\in\mathrm{Mat}_{n\times m}\) with \(\overline{u}_{m,n}(y_{1})\in\overline{N}_{m,n}^{k}\). By abuse of notation, we also write \(\overline{y}=\overline{u}_{m,n}(y),x=u_{m,n}(x)\). Then from the equation
\[\overline{y}^{-1}u_{m,n}(x_{1})\mathrm{diag}(a,b)=x\overline{u}_{m,n}(-y_{1}),\]
we get
\[\begin{pmatrix}a&x_{1}b\\ -ya&(I_{n}-yx_{1})b\end{pmatrix}=\begin{pmatrix}I_{m}-xy_{1}&x\\ -y_{1}&I_{n}\end{pmatrix}. \tag{4.6}\]
We can solve that \(a=I_{m}-xy_{1}\) and \(b=I_{n}+y_{1}a^{-1}x\). Since when \(x\in D\), the entries of \(x\) are bounded, and the entries of \(y_{1}\) go to zero as \(k\to\infty\), we can take \(k\) large enough such that \(a=I_{m}-xy_{1}\in K_{\mathrm{GL}_{m}}^{c}\) and \(b=I_{n}+y_{1}a^{-1}x\in K_{\mathrm{GL}_{n}}^{c}\). This proves (1).
By (4.6), we have \(y=y_{1}a^{-1}=y_{1}(I_{m}-xy_{1})^{-1}=y_{1}(I_{m}+xy_{1}+(xy_{1})^{2}+\dots)\). Again, since each entry of \(x\) is bounded, we may take \(k\) large such that the entries of \(y_{1}(xy_{1})^{t}\) have large enough valuation so that \(\overline{u}_{m,n}(y_{1}(xy_{1})^{t})\in\overline{N}_{m,n}^{k}\) for all \(t\geq 0\). This shows that for \(k\) large, we have \(\overline{u}_{m,n}(y)\in\overline{N}_{m,n}^{k}\) and thus \(S(x,k)\subset\overline{N}_{m,n}^{k}\), since \(\overline{y}=\overline{u}_{m,n}(y)\) was arbitrary. See [15, Lemma 5.1] for a similar and more detailed argument.
Given \(x\in D\), we need to show \(\overline{N}_{m,n}^{k}\subset S(x,k)\) for \(k\) large. As above, we write \(x=u_{m,n}(x)\) by abuse of notation. We first assume that \(k\) is so large that if \(\overline{u}_{m,n}(y)\in\overline{N}_{m,n}^{k}\), then \(I_{n}+yx\) is invertible and \(I_{m}-x(I_{n}+yx)^{-1}y\) is also invertible. This can be done because \(x\) has bounded entries and \(y\) has small entries if \(\overline{u}_{m,n}(y)\in\overline{N}_{m,n}^{k}\) when \(k\) is large. Then we have
\[\overline{u}_{m,n}(y)u_{m,n}(x)=u_{m,n}(x_{1})\mathrm{diag}(a,b)\overline{u}_{ m,n}(y_{1}),\]
with \(b=I_{n}+yx\), \(a=I_{m}-xb^{-1}y\), \(x_{1}=xb^{-1}\) and \(y_{1}=(I_{n}+yx)^{-1}y\). In particular, \(\overline{u}_{m,n}(y)u_{m,n}(x)\in P_{m,n}\overline{N}_{m,n}\). To show \(\overline{u}_{m,n}(y)\in S(x,k)\) for \(k\) large, it suffices to show that one can choose \(k\) large so that the above \(\overline{u}_{m,n}(y_{1})\in\overline{N}_{m,n}^{k}\). Since \(y_{1}=(I_{n}+yx)^{-1}y\), with bounded entries in \(x\) and small entries in \(y\), the argument is the same as in the previous step. We are done.
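The block-matrix identities used above are easy to confirm numerically (over \(\mathbb{R}\), say, since only matrix algebra is involved); the sketch below checks \(\overline{u}_{m,n}(y)u_{m,n}(x)=u_{m,n}(x_{1})\mathrm{diag}(a,b)\overline{u}_{m,n}(y_{1})\) with \(b=I_{n}+yx\), \(y_{1}=b^{-1}y\), \(x_{1}=xb^{-1}\), \(a=I_{m}-xb^{-1}y\) for random \(x,y\).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2
x = rng.standard_normal((m, n))
y = rng.standard_normal((n, m)) * 1e-2   # small entries, mimicking an element of N-bar^k

def u(x):      # u_{m,n}(x)
    return np.block([[np.eye(m), x], [np.zeros((n, m)), np.eye(n)]])

def ubar(y):   # \bar u_{m,n}(y)
    return np.block([[np.eye(m), np.zeros((m, n))], [y, np.eye(n)]])

b = np.eye(n) + y @ x
y1 = np.linalg.solve(b, y)      # (I_n + yx)^{-1} y
x1 = x @ np.linalg.inv(b)       # x (I_n + yx)^{-1}
a = np.eye(m) - x1 @ b @ y1     # = I_m - x (I_n + yx)^{-1} y

lhs = ubar(y) @ u(x)
rhs = u(x1) @ np.block([[a, np.zeros((m, n))], [np.zeros((n, m)), b]]) @ ubar(y1)
print(np.allclose(lhs, rhs))    # expected: True
```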
Given \(v_{j}\in V_{\tau_{j}}\), the space of \(\tau_{j}\), for \(j=1,2\), we consider the following \(\tau_{1}\boxtimes\tau_{2}\)-valued function on \(\mathrm{GL}_{m+n}(F)\).
\[f_{\mathbf{s}}^{k,v_{1},v_{2}}(g)=\left\{\begin{array}{ll}|\det(a)|^{s_{1}+ \frac{n-1}{2}}|\det(b)|^{-s_{2}-\frac{m-1}{2}}\tau_{1}(a)v_{1}\boxtimes\tau_{ 2}(b)v_{2},&\text{ if }g=u_{m,n}(x)\mathrm{diag}(a,b)\overline{u}_{m,n}(y)\\ &\text{ with }\overline{u}_{m,n}(y)\in\overline{N}_{m,n}^{k},\\ 0,&\text{ otherwise.}\end{array}\right.\]
**Proposition 4.11**.: _For any \(v_{1},v_{2}\), there exists an integer \(k_{3}(v_{1},v_{2})\) such that \(f_{\mathbf{s}}^{k,v_{1},v_{2}}\) defines a section in \(\mathrm{I}(\mathbf{s},(\tau_{1},\tau_{2}))\) for any \(k\geq k_{3}(v_{1},v_{2})\)._
Proof.: This is an analogue of [15, Lemma 5.2] and we only give a sketch of the proof. We first take a positive integer \(c=c(v_{1},v_{2})\) such that \(v_{1}\) is fixed by \(K^{c}_{\mathrm{GL}_{m}}\) under the action of \(\tau_{1}\) and \(v_{2}\) is fixed by \(K^{c}_{\mathrm{GL}_{n}}\) under the action of \(\tau_{2}\). Now take
\[k_{3}(v_{1},v_{2})=\max\left\{c,k_{1}(K^{c}_{\mathrm{GL}_{m+n}}\cap N_{m,n},c), k_{2}(K^{c}_{\mathrm{GL}_{m+n}}\cap N_{m,n})\right\}.\]
For \(k\geq k_{3}(v_{1},v_{2})\), we need to check
\[f_{\mathbf{s}}^{k,v_{1},v_{2}}(u_{m,n}(x)\mathrm{diag}(a,b)g)=|\det(a)|^{s_{1}+ \frac{n-1}{2}}|\det(b)|^{-s_{2}-\frac{m-1}{2}}\tau_{1}(a)\boxtimes\tau_{2}(b) f_{\mathbf{s}}^{k,v_{1},v_{2}}(g), \tag{4.7}\]
for all \(x\in\mathrm{Mat}_{m\times n}(F)\), \(a\in\mathrm{GL}_{m}(F),b\in\mathrm{GL}_{n}(F),g\in\mathrm{GL}_{m+n}(F)\), and there exists an open compact subgroup \(K^{\prime}\subset\mathrm{GL}_{m+n}(F)\) such that
\[f_{\mathbf{s}}^{k,v_{1},v_{2}}(gh)=f_{\mathbf{s}}^{k,v_{1},v_{2}}(g),\forall g \in\mathrm{GL}_{m+n}(F),h\in K^{\prime}. \tag{4.8}\]
The first property (4.7) is from the definition and we only address the second one (4.8).
Take a positive integer \(t\geq k\) such that \(\overline{N}_{m,n}\cap K^{t}_{\mathrm{GL}_{m+n}}\subset\overline{N}_{m,n}^{k}\). We take \(K^{\prime}=K^{t}_{\mathrm{GL}_{m+n}}\) in (4.8). We have the decomposition
\[K^{t}_{\mathrm{GL}_{m+n}}=(K^{t}_{\mathrm{GL}_{m+n}}\cap N_{m,n})(K^{t}_{ \mathrm{GL}_{m+n}}\cap M_{m,n})(K^{t}_{\mathrm{GL}_{m+n}}\cap\overline{N}_{m,n }).\]
For \(h\in(K^{t}_{\mathrm{GL}_{m+n}}\cap\overline{N}_{m,n})\), we have \(f_{\mathbf{s}}^{k,v_{1},v_{2}}(gh)=f_{\mathbf{s}}^{k,v_{1},v_{2}}(g)\) since \(h\in\overline{N}_{m,n}^{k}\) by assumption on \(t\). For \(h\in(K^{t}_{\mathrm{GL}_{m+n}}\cap M_{m,n})\), we write \(h=\mathrm{diag}(a_{0},b_{0})\). We first notice that \(h^{-1}\overline{N}_{m,n}^{k}h\subset\overline{N}_{m,n}^{k}\), and thus \(f_{\mathbf{s}}^{k,v_{1},v_{2}}(g)=0\) if and only if \(f_{\mathbf{s}}^{k,v_{1},v_{2}}(gh)=0\). Next, we assume that \(g=u_{m,n}(x)\mathrm{diag}(a,b)\overline{u}_{m,n}(y)\) with \(\overline{u}_{m,n}(y)\in\overline{N}_{m,n}^{k}\). Then \(gh=u_{m,n}(x)\mathrm{diag}(aa_{0},bb_{0})\overline{u}_{m,n}(b_{0}^{-1}ya_{0})\). Thus
\[f_{\mathbf{s}}^{k,v_{1},v_{2}}(gh) =|\det(aa_{0})|^{s_{1}+\frac{n-1}{2}}|\det(bb_{0})|^{-s_{2}-\frac {m-1}{2}}\tau_{1}(aa_{0})v_{1}\boxtimes\tau_{2}(bb_{0})v_{2}\] \[=f_{\mathbf{s}}^{k,v_{1},v_{2}}(g),\]
where in the last step we used \(|\det(a_{0})|=|\det(b_{0})|=1\) and \(\tau_{1}(a_{0})v_{1}=v_{1},\tau_{2}(b_{0})v_{2}=v_{2}\) (because \(a_{0}\in K^{t}_{\mathrm{GL}_{m}}\subset K^{c}_{\mathrm{GL}_{m}}\) and \(b_{0}\in K^{t}_{\mathrm{GL}_{n}}\subset K^{c}_{\mathrm{GL}_{n}}\) by the assumption \(t\geq k\geq c\)). Finally, we take \(h\in K^{t}_{\mathrm{GL}_{m+n}}\cap N_{m,n}\), so that both \(h\) and \(h^{-1}\) lie in \((K^{t}_{\mathrm{GL}_{m+n}}\cap N_{m,n})\subset K^{c}_{\mathrm{GL}_{m+n}}\cap N_{m,n}.\) Thus by Lemma 4.10, we have \(S(h,k)=S(h^{-1},k)=\overline{N}_{m,n}^{k}.\) In particular, for \(\overline{u}_{m,n}(y)\in\overline{N}_{m,n}^{k}\), we have \(\overline{u}_{m,n}(y)h\in P_{m,n}\overline{N}_{m,n}^{k}\) and \(\overline{u}_{m,n}(y)h^{-1}\in P_{m,n}\overline{N}_{m,n}^{k}\). Thus \(f_{\mathbf{s}}^{k,v_{1},v_{2}}(g)=0\) if and only if \(f_{\mathbf{s}}^{k,v_{1},v_{2}}(gh)=0\). Moreover, by Lemma 4.10 (1), we can write \(\overline{u}_{m,n}(y)h=u_{m,n}(x_{1})\mathrm{diag}(a_{1},b_{1})\overline{u}_{m,n}(y_{1})\) with \(a_{1}\in K^{c}_{\mathrm{GL}_{m}},b_{1}\in K^{c}_{\mathrm{GL}_{n}}\). Thus for \(g=u_{m,n}(x)\mathrm{diag}(a,b)\overline{u}_{m,n}(y)\), we have
\[gh=u_{m,n}(x)\mathrm{diag}(a,b)\overline{u}_{m,n}(y)h=u_{m,n}(x+ax_{1}b^{-1}) \mathrm{diag}(aa_{1},bb_{1})\overline{u}_{m,n}(y_{1}).\]
From the definition, we see that \(f_{\mathbf{s}}^{k,v_{1},v_{2}}(gh)=f_{\mathbf{s}}^{k,v_{1},v_{2}}(g)\) because \(|\det(a_{1})|=|\det(b_{1})|=1\), \(\tau_{1}(a_{1})v_{1}=v_{1}\), and \(\tau_{2}(b_{1})v_{2}=v_{2}.\) This concludes the proof.
We also consider the action of the intertwining operator \(M_{w_{m,n}}\) on \(f_{\mathbf{s}}^{k,v_{1},v_{2}}\):
\[\widehat{f}_{1-\widehat{\mathbf{s}}}^{k,v_{1},v_{2}}(g):=M_{w_{m,n}}(f_{ \mathbf{s}}^{k,v_{1},v_{2}})(g)=\int_{N_{n,m}(F)}f_{\mathbf{s}}^{k,v_{1},v_{2} }(w_{m,n}ug)du.\]
**Lemma 4.12**.: _Let \(D\) be an open compact subset of \(N_{m,n}\). Then there is an integer \(k_{0}(D,v_{1},v_{2})\geq k_{3}(v_{1},v_{2})\) such that_
\[\widehat{f}_{1-\widehat{\mathbf{s}}}^{k,v_{1},v_{2}}(w_{m,n}^{-1}x)=\mathrm{vol}(\overline{N}_{m,n}^{k})v_{1}\boxtimes v_{2},\]
_for all \(x\in D\) and all \(k\geq k_{0}(D,v_{1},v_{2})\)._
Proof.: We take \(c\) to be a common conductor of \(v_{1}\) and \(v_{2}\) (namely, \(v_{1}\) is fixed by \(\tau_{1}(K^{c}_{\mathrm{GL}_{m}})\) and \(v_{2}\) is fixed by \(\tau_{2}(K^{c}_{\mathrm{GL}_{n}})\)) and we take \(k_{0}(D,v_{1},v_{2})=\max\left\{k_{3}(v_{1},v_{2}),k_{1}(D,c),k_{2}(D)\right\}\). Assume \(k\geq k_{0}(D,v_{1},v_{2})\). Then we have \(S(x,k)=\overline{N}_{m,n}^{k}\) by Lemma 4.10. By definition
\[\widehat{f}_{1-\widehat{\mathbf{s}}}^{k,v_{1},v_{2}}(w_{m,n}^{-1}x)=M_{w_{m,n}}(f_{\mathbf{s}}^{k,v_{1},v_{2}})(w_{m,n}^{-1}x)=\int_{N_{n,m}(F)}f_{\mathbf{s}}^{k,v_{1},v_{2}}(w_{m,n}uw_{m,n}^{-1}x)du.\]
For \(u\in N_{n,m}\), we have \(\overline{u}:=w_{m,n}uw_{m,n}^{-1}\in\overline{N}_{m,n}.\) By definition of \(f_{\mathbf{s}}^{k,v_{1},v_{2}}\), we have \(f_{\mathbf{s}}^{k,v_{1},v_{2}}(\overline{u}x)\neq 0\) if and only if \(\overline{u}x\in P_{m,n}\overline{N}_{m,n}^{k}\) if and only if \(\overline{u}\in S(x,k)=\overline{N}_{m,n}^{k}.\) Moreover, by Lemma 4.10 (1), we have
\[\overline{u}x=u_{m,n}(x_{1})\mathrm{diag}(a_{1},b_{1})\overline{u}_{m,n}(y_{ 1}),\]
with \(x_{1}\in\mathrm{Mat}_{m\times n}(F),\overline{u}_{m,n}(y_{1})\in\overline{N}_{m,n}^{k}\), \(a_{1}\in K^{c}_{\mathrm{GL}_{m}},b_{1}\in K^{c}_{\mathrm{GL}_{n}}\). By definition, we have
\[\widehat{f}_{1-\widehat{\mathbf{s}}}^{k,v_{1},v_{2}}(w_{m,n}^{-1}x)=\mathrm{vol}(\overline{N}_{m,n}^{k})v_{1}\boxtimes v_{2}.\]
This finishes the proof.
In the above lemma, notice that \(w_{m,n}^{-1}=w_{n,m}\). As we did in Subsection 3.1, we can consider the corresponding \(\mathbb{C}\)-valued functions: \(\xi_{\mathbf{s}}^{k,v_{1},v_{2}}=\xi_{f_{\mathbf{s}}^{k,v_{1},v_{2}}}\in\mathcal{W}(\mathbf{s},(\tau_{1},\tau_{2}),\psi^{-1})\) and \(\widetilde{\xi}_{1-\widehat{\mathbf{s}}}^{k,v_{1},v_{2}}=\xi_{\widehat{f}_{1-\widehat{\mathbf{s}}}^{k,v_{1},v_{2}}}\in\mathcal{W}(1-\widehat{\mathbf{s}},(\tau_{2},\tau_{1}),\psi^{-1})\). By Lemma 4.12, for \(x\in D\) and \(k\geq k_{0}(D,v_{1},v_{2})\), we have
\[\widetilde{\xi}_{1-\widehat{\mathbf{s}}}^{k,v_{1},v_{2}}(u_{n,m}(x _{1})\mathrm{diag}(b,a)w_{n,m}x)= \mathrm{vol}(\overline{N}_{m,n}^{k})|\det(b)|^{1-s_{2}+\frac{m-1}{2}}| \det(a)|^{-(1-s_{1})-\frac{n-1}{2}}\] \[W_{v_{1}}(a)W_{v_{2}}(b), \tag{4.9}\]
for \(x_{1}\in\mathrm{Mat}_{n\times m}(F),a\in\mathrm{GL}_{m}(F),b\in\mathrm{GL}_{n}(F).\) Here \(W_{v_{1}}(a)=\lambda_{1}(\tau_{1}(a)v_{1})\) for a fixed \(\lambda_{1}\in\mathrm{Hom}_{N_{m}}(\tau_{1},\psi^{-1})\) as in Subsection 3.1, and \(W_{v_{2}}\) is defined similarly. Notice that \(W_{v_{1}}\in\mathcal{W}(\tau_{1},\psi^{-1})\) and \(W_{v_{2}}\in\mathcal{W}(\tau_{2},\psi^{-1})\).
### A result of Jacquet-Shalika
**Proposition 4.13**.: _Let \(W^{\prime}\) be a smooth function on \(\mathrm{GL}_{n}(F)\) which satisfies \(W^{\prime}(ug)=\psi(u)W^{\prime}(g)\) for all \(u\in N_{n}\) and such that, for each \(m\), the set \(\{g\in\mathrm{GL}_{n}(F)|W^{\prime}(g)\neq 0,|\det(g)|=q^{m}\}\) is compact modulo \(U_{\mathrm{GL}_{n}}\). Assume that, for all irreducible generic representations \(\tau\) of \(\mathrm{GL}_{n}(F)\) and all Whittaker functions \(W\in\mathcal{W}(\tau,\psi^{-1})\), the following integral_
\[\int_{U_{\mathrm{GL}_{n}}\setminus\mathrm{GL}_{n}}W^{\prime}(g)W(g)|\det(g)|^{s-k }dg\]
_vanishes, where \(k\) is a fixed number. Then \(W^{\prime}\equiv 0\)._
This is a corollary of [13, Lemma 3.2]. See also [14, Corollary 2.1] or [1, Lemma 5.2] for a proof of the current version.
## 5. Proof of the local converse theorem
In this section, we prove Theorem 4.1. We fix our notations here. Consider two irreducible supercuspidal representations \(\pi_{1},\pi_{2}\) of \(\operatorname{GL}_{l}(F)\) with the same central character, say \(\omega\). We pick \(f_{j}\in\mathcal{M}(\pi_{j})\) (for \(j=1,2\)) such that \(W^{f_{j}}(I_{l})=1\).
**Theorem 5.1**.: _Let \(m\) be an integer with \(0\leq m\leq[l/2]\). The condition \(\mathcal{C}(m)\) implies that there exist functions \(f_{\overline{w}_{j}}\in C_{c}^{\infty}(\Omega_{\overline{w}_{j}},\omega)\) for each \(j\) with \(m+1\leq j\leq l-1-m\) such that,_
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m+1}^{l-m-1} \mathcal{B}_{i}(g,f_{\overline{w}_{j}}),\]
_for all \(i\gg 0\) depending only on \(f_{1},f_{2}\) and for all \(g\in\operatorname{GL}_{l}(F)\)._
We first show that Theorem 5.1 implies Theorem 4.1.
**Theorem 5.1** _implies Theorem 4.1._ By Theorem 5.1, the condition \(\mathcal{C}([l/2])\) implies that \(\mathcal{B}_{i}(g,f_{1})=\mathcal{B}_{i}(g,f_{2})\) for all \(g\in\operatorname{GL}_{l}(F)\) and for \(i\) large enough. This implies that \(W^{f_{1}}_{i}=W^{f_{2}}_{i}\) as functions on \(\operatorname{GL}_{l}(F)\), and thus the nonzero function \(W^{f_{1}}_{i}\) lies in \(\mathcal{W}(\pi_{1},\psi)\cap\mathcal{W}(\pi_{2},\psi)\). By the uniqueness of Whittaker models, we get that \(\pi_{1}\cong\pi_{2}\).
**Remark 5.2**.: Theorem 5.1 seems stronger than Theorem 4.1. It should be useful for the following question: given an integer \(t\) with \(t\leq[l/2]\), determine which irreducible supercuspidal representations \(\pi\) of \(\operatorname{GL}_{l}(F)\) are determined by \(\gamma(s,\pi\times\tau,\psi)\) for all irreducible generic representations \(\tau\) of \(\operatorname{GL}_{m}(F)\) with \(1\leq m\leq t\).
We prove Theorem 5.1 by induction. Note that the base case when \(m=0\) of Theorem 5.1 is just Corollary 4.9. Next, we assume the following
**Inductive Hypothesis 5.3**.: _We fix a positive integer \(m\) with \(m\leq[l/2]\). We assume that the condition \(\mathcal{C}(m-1)\) implies that there exist functions \(f_{\overline{w}_{j}}\in C_{c}^{\infty}(\Omega_{\overline{w}_{j}},\omega)\) for each \(j\) with \(m\leq j\leq l-m\) such that,_
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m}^{l-m}\mathcal{B} _{i}(g,f_{\overline{w}_{j}}), \tag{5.1}\]
_for all \(g\in\operatorname{GL}_{l}(F)\) and all \(i\gg 0\) depending only on \(f_{1},f_{2}\)._
Assuming the above inductive hypothesis, we will use another inductive argument to show that \(\mathcal{C}(m)\) implies that there exist functions \(f_{\overline{w}_{j}}\in C_{c}^{\infty}(\Omega_{\overline{w}_{j}},\omega)\) for each \(j\) with \(m+1\leq j\leq l-1-m\) such that,
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m+1}^{l-m-1} \mathcal{B}_{i}(g,f_{\overline{w}_{j}}), \tag{5.2}\]
for all \(i\gg 0\) depending only on \(f_{1},f_{2}\) and for all \(g\in\operatorname{GL}_{l}(F).\) Here the \(f_{\overline{w}_{j}}\) might be different from those obtained in the \((m-1)\)-th step (5.1), but we do not distinguish them notationally here.
To proceed using another induction argument, for an integer \(n\) with \(0\leq n\leq m\), we denote by \(\mathcal{C}(m,n)\) the following condition on \(\pi_{1},\pi_{2}\): \(\pi_{1},\pi_{2}\) _satisfy the condition \(\mathcal{C}(m-1)\) and the following condition_
\[\Gamma(\mathbf{s},\pi_{1}\times(\tau_{1},\tau_{2}),\psi)=\Gamma(\mathbf{s}, \pi_{2}\times(\tau_{1},\tau_{2}),\psi)\]
_for any irreducible generic representations \(\tau_{1}\) of \(\operatorname{GL}_{m}(F)\), \(\tau_{2}\) of \(\operatorname{GL}_{k}(F)\) with \(0\leq k\leq n\); and for any irreducible generic representations \(\tau_{2}\) of \(\operatorname{GL}_{m}(F)\), \(\tau_{1}\) of \(\operatorname{GL}_{k}(F)\) with \(0\leq k\leq n\)._
Notice that the condition \(\mathcal{C}(m,0)\) is stronger than \(\mathcal{C}(m-1)\) and the condition \(\mathcal{C}(m,m)\) is the same as \(\mathcal{C}(m)\). Let \(m\) be a positive integer with \(m\leq[l/2]\) and recall that if \(j\) is a positive integer such that \(m+j<l\), we have defined an element
\[\widetilde{w}_{j,m}=\begin{pmatrix}&&I_{j}\\ &I_{l-m-j}&\\ I_{m}&&\end{pmatrix}\]
in §4.3. Moreover, we know that \(\widetilde{w}_{j,m}\in\mathrm{B}(\mathrm{GL}_{l})\) and \(\theta_{\widetilde{w}_{j,m}}=\Delta-\{\alpha_{m},\alpha_{l-j}\}\) by Lemma 4.7.
**Theorem 5.4**.: _Let \(m\) be a positive integer with \(m\leq[l/2]\) and \(n\) be an integer with \(0\leq n\leq m\). Then the condition \(\mathcal{C}(m,n)\) implies that there exist functions_
* \(f_{\overline{w}_{j}}\in C^{\infty}_{c}(\Omega_{\overline{w}_{j}},\omega)\) _for each_ \(j\) _with_ \(m+1\leq j\leq l-m-1\)_;_
* \(f^{\prime}_{j,m}\in C^{\infty}_{c}(\Omega_{\widetilde{w}_{j,m}},\omega)\)_, for each_ \(j\) _with_ \(n+1\leq j\leq m\)_; and_
* \(f^{\prime\prime}_{m,j}\in C^{\infty}_{c}(\Omega_{\widetilde{w}_{m,j}},\omega)\)_, for each_ \(j\) _with_ \(n+1\leq j\leq m\)_,_
_such that_
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m+1}^{l-m-1} \mathcal{B}_{i}(g,f_{\overline{w}_{j}})+\sum_{j=n+1}^{m}\mathcal{B}_{i}(g,f^{ \prime}_{j,m})+\sum_{j=n+1}^{m}\mathcal{B}_{i}(g,f^{\prime\prime}_{m,j}), \tag{5.3}\]
_for all \(g\in\mathrm{GL}_{l}(F)\) and for all \(i\) large enough depending only on \(f_{1},f_{2}\)._
**Remark 5.5**.: If \(n=m-1\), then both \(f^{\prime}_{m,m}\) and \(f^{\prime\prime}_{m,m}\) are in \(C^{\infty}_{c}(\Omega_{\widetilde{w}_{m,m}},\omega)\) and we can absorb \(f^{\prime\prime}_{m,m}\) into \(f^{\prime}_{m,m}\). Thus the statement of Theorem 5.4 becomes: the condition \(\mathcal{C}(m,m-1)\) implies the expansion
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m+1}^{l-m-1} \mathcal{B}_{i}(g,f_{\overline{w}_{j}})+\mathcal{B}_{i}(g,f^{\prime}_{m,m}),\]
with certain \(f_{\overline{w}_{j}}\in C^{\infty}_{c}(\Omega_{\overline{w}_{j}},\omega)\) and \(f^{\prime}_{m,m}\in C^{\infty}_{c}(\Omega_{\widetilde{w}_{m,m}},\omega)\).
Note that by Theorem 5.4, the condition \(\mathcal{C}(m,m)=\mathcal{C}(m)\) implies that
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m+1}^{l-m-1} \mathcal{B}_{i}(g,f_{\overline{w}_{j}}),\]
which is exactly what we need to prove. Thus Theorem 5.4 implies Theorem 5.1 and hence Theorem 4.1. We will prove Theorem 5.4 in the rest of this section.
### Proof of the base case of Theorem 5.4
In this subsection, we prove the base case of Theorem 5.4, namely, the case when \(n=0\).
Let \(k\) be a positive integer with \(k<l\) and we consider the parabolic subgroup \(P_{k,l-k}\) of \(\mathrm{GL}_{l}\). A typical element of \(M_{k,l-k}\), the Levi of \(P_{k,l-k}\), is denoted by
\[\mathbf{t}_{k}(a,b):=\begin{pmatrix}a&\\ &b\end{pmatrix},a\in\mathrm{GL}_{k}(F),b\in\mathrm{GL}_{l-k}(F).\]
For \(y\in\mathrm{Mat}_{m\times(l-m-1)}(F)\), we denote
\[u_{1}(y)=\begin{pmatrix}I_{m}&&y\\ &1&&\\ &&I_{l-m-1}\end{pmatrix}.\]
**Lemma 5.6**.: _We fix the notations as in Inductive Hypothesis 5.3._
1. _We have_ \(\mathcal{B}_{i}(h,f_{\overline{w}_{j}})=0,\forall h\in P_{k,l-k}\)_. In particular, the inductive hypothesis (_5.1_) implies that_ \[\mathcal{B}_{i}(h,f_{1})=\mathcal{B}_{i}(h,f_{2}),\] _for all_ \(h\in P_{k,l-k}\) _and_ \(i\) _large._
2. _For positive integer_ \(j\) _with_ \(m+1\leq j\leq l-m\)_, we have_ \[\mathcal{B}_{i}(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m})u_{1}(y),f_{\overline{w} _{j}})=0,\forall a\in\operatorname{GL}_{m}(F),\forall y\in\operatorname{Mat}_{m \times(l-m-1)}(F).\] _In particular, the inductive hypothesis (_5.1_) implies that_ \[\mathcal{B}_{i}(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m})u_{1}(y ),f_{1})-\mathcal{B}_{i}(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m})u_{1}(y),f_{2})\] \[\qquad=\mathcal{B}_{i}(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m})u _{1}(y),f_{\overline{w}_{m}}),\] _for all_ \(a\in\operatorname{GL}_{m}(F),y\in\operatorname{Mat}_{m\times(l-m-1)}(F)\)_._
3. _For any_ \(a\in\operatorname{GL}_{m}(F)\)_, we can take_ \(i\) _large enough (which only depends on_ \(f_{\overline{w}_{m}}\)_, and hence only on_ \(f_{1},f_{2}\)_), such that_ \[\mathcal{B}_{i}(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m})u_{1}(y),f_{ \overline{w}_{m}})=\left\{\begin{array}{ll}\mathcal{B}_{i}(\overline{w}_{m} \mathbf{t}_{m}(a,I_{l-m}),f_{\overline{w}_{m}}),&\text{ if }u_{1}(y)\in H_{l}^{i},\\ 0,&\text{ otherwise.}\end{array}\right.\]
4. _For a fixed integer_ \(k\) _and_ \(i\)_, the set_ \(\left\{a\in N_{m}(F)\backslash\operatorname{GL}_{m}(F):\mathcal{B}_{i}( \overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m}))\neq 0,|a|=q^{k}\right\}\) _is compact._
Proof.: (1) Recall that
\[\mathcal{B}_{i}(g,f_{\overline{w}_{j}})=\frac{1}{\operatorname{vol}(N_{l}^{i} )}\int_{N_{l}^{i}}\int_{N_{l}}f_{\overline{w}_{j}}(u_{1}gu_{2})\psi^{-1}du_{2} du_{1}.\]
Since \(\operatorname{Supp}(f_{\overline{w}_{j}})\subset\Omega_{\overline{w}_{j}}\), it suffices to show that \(P_{k,l-k}\cap\Omega_{\overline{w}_{j}}=\emptyset\). Suppose that \(P_{k,l-k}\cap\Omega_{\overline{w}_{j}}\) is not empty, then their intersection must contain a Bruhat cell, namely, there exists a \(w\in\mathbf{W}\) such that \(w\geq\overline{w}_{j}\) and \(C(w)\subset P_{k,l-k}\). Since \(P_{k,l-k}\) is closed in \(\operatorname{GL}_{l}\), we get \(\overline{C(w)}\subset P_{k,l-k}\). The condition \(w\geq\overline{w}_{j}\) implies that \(C(\overline{w}_{j})\subset\overline{C(w)}\subset P_{k,l-k}\). In particular, we have \(\overline{w}_{j}\in P_{k,l-k}\). This is a contradiction.
(2) Consider the set
\[S =\left\{w\in\mathbf{W}:w=\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m }),\text{for some }a\in\operatorname{GL}_{m}\right\}\] \[=\left\{\overline{w}_{m}\mathbf{t}_{m}(w^{\prime},I_{l-m}):w^{ \prime}\in\mathbf{W}(\operatorname{GL}_{m})\right\}.\]
Here we do not distinguish a Weyl element from its representative. Denote \(w_{\max}^{m}=\overline{w}_{m}\text{diag}(J_{m},I_{l-m})=\begin{pmatrix}&I_{l-m}\\ J_{m}&\end{pmatrix}\). Since the Weyl elements of \(\operatorname{GL}_{m}\) form the Bruhat interval \([1,J_{m}]\), the set \(S\) is in fact the Bruhat interval \([\overline{w}_{m},w_{\max}^{m}]\). Since
\[\left\{\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m})u_{1}(y),a\in\operatorname{GL} _{m}(F),y\in\operatorname{Mat}_{m\times(l-m-1)}(F)\right\}\subset\cup_{w\in S} C(w),\]
it suffices to show that for any \(w\in S\), \(C(w)\cap\Omega_{\overline{w}_{j}}=\emptyset\) if \(m+1\leq j\leq l-m\). Suppose that \(C(w)\cap\Omega_{\overline{w}_{j}}\) is non-empty, then \(w\geq\overline{w}_{j}\). In particular, \(w_{\max}^{m}\geq\overline{w}_{j}\). Note that
\[w_{0}w_{\max}^{m}=\begin{pmatrix}I_{m}&\\ &J_{l-m}\end{pmatrix},\]
which is the longest Weyl element of the Levi subgroup
\[M_{w_{\max}^{m}}=\left\{\text{diag}(a_{1},\dots,a_{m},a):a_{i}\in\operatorname{ GL}_{1},a\in\operatorname{GL}_{l-m}\right\}.\]
Note that the set \(\theta_{w_{\max}^{m}}\) is the set of simple roots of \(M_{w_{\max}^{m}}\), which is \(\Delta-\left\{\alpha_{1},\dots,\alpha_{m}\right\}\). The condition \(w_{\max}^{m}\geq\overline{w}_{j}\) implies that \(\theta_{w_{\max}^{m}}\subset\theta_{\overline{w}_{j}}\), namely, \(\Delta-\left\{\alpha_{1},\dots,\alpha_{m}\right\}\subset\Delta-\left\{\alpha_{j}\right\}\). This is impossible because \(j>m\).
(3) This can be done using a root killing argument as in Lemma 2.6, or using a support argument as in [18, Lemma 6.3 (3)]. Since the proof is similar to (and easier than) that of [18, Lemma 6.3 (3)], we omit the details.
(4) This is an analogue of [18, Lemma 6.3 (4)] and the proof is similar. We omit the details.
Notice that if \(m>0,n=0\), we have defined a gamma factor \(\Gamma(\mathbf{s},\pi\times(\tau_{1},0),\psi)\) for an irreducible generic representation \(\tau_{1}\) of \(\operatorname{GL}_{m}(F)\), which is just a shift of Jacquet-Piatetski-Shapiro-Shalika's local gamma factor. Here we write a \(0\) in the second place of the pair \((\tau_{1},0)\) to emphasize that it is a pair of representations of \(\operatorname{GL}_{m}(F)\times\operatorname{GL}_{n}(F)\) with \(n=0\), even though \(\operatorname{GL}_{n}(F)\) is trivial when \(n=0\). See Remark 3.2 and Remark 3.6.
**Proposition 5.7**.: _The condition \(\mathcal{C}(m,0)\) implies that_
\[\mathcal{B}_{i}(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m}),f_{1})=\mathcal{B}_{i} (\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m}),f_{2}), \tag{5.4}\]
_and_
\[\mathcal{B}_{i}(\overline{w}_{l-m}\mathbf{t}_{l-m}(I_{l-m},a),f_{1})=\mathcal{B}_{i}(\overline{w}_{l-m}\mathbf{t}_{l-m}(I_{l-m},a),f_{2}) \tag{5.5}\]
_for all \(a\in\operatorname{GL}_{m}(F)\)._
This is roughly [10, Proposition 3.1]. Since the proof in [10] depends highly on the Kirillov model and our treatment depends on partial Bessel function, we give some details of the proof here.
Proof.: For any irreducible generic representation \(\tau_{1}\) of \(\operatorname{GL}_{m}(F)\) and any \(\xi_{\mathbf{s}}=W^{\prime}|\ |^{s-1/2}\) with \(W^{\prime}\in\mathcal{W}(\tau_{1},\overline{\psi})\), we can consider the integral \(\Psi(\rho(\gamma_{m,0}^{-1})\mathcal{B}_{i}^{f},\xi_{\mathbf{s}};0)\) for \(f=f_{1},f_{2}\), which is
\[\Psi(\mathcal{B}_{i}^{f},\xi_{\mathbf{s}};0)=\int_{N_{m}(F)\setminus\operatorname{GL}_{m}(F)}\mathcal{B}_{i}^{f}\left(\mathbf{t}_{m}(a,I_{l-m})\right)W^{\prime}(a)|\det(a)|^{s-1/2}da.\]
Here we notice that \(\gamma_{m,0}=I_{l}\). See also Remark 3.2. By inductive hypothesis 5.3 and Lemma 5.6 (1), we have
\[\mathcal{B}_{i}^{f_{1}}\left(\mathbf{t}_{m}(a,I_{l-m})\right)=\mathcal{B}_{i} ^{f_{2}}\left(\mathbf{t}_{m}(a,I_{l-m})\right).\]
Thus
\[\Psi(\mathcal{B}_{i}^{f_{1}},\xi_{\mathbf{s}};0)=\Psi(\mathcal{B}_{i}^{f_{2}},\xi_{\mathbf{s}};0).\]
By the assumption on local gamma factors and the local functional equation (4.1), we have
\[\Psi(\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}},M_{w_{m,n}}(\xi_{ \mathbf{s}});0)=0.\]
Plugging in the definitions, see (3.2) or Remark 3.2, we have
\[0= \int_{[\operatorname{GL}_{m}]}\int_{\operatorname{Mat}_{m\times(l-m-1)}}\left(\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}}\right)\left(\begin{pmatrix}1&&\\ &I_{l-m-1}&\\ &y&I_{m}\end{pmatrix}\begin{pmatrix}&I_{l-m}\\ I_{m}&\end{pmatrix}\begin{pmatrix}a&\\ &I_{l-m}\end{pmatrix}\right)\] \[\quad\quad\cdot W^{\prime}(a)|\det(a)|^{s-1/2}dyda\] \[= \int_{[\operatorname{GL}_{m}]}\int_{\operatorname{Mat}_{m\times(l-m-1)}}\left(\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}}\right)\left(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m})u_{1}(y)\right)W^{\prime}(a)|\det(a)|^{s^{*}}dyda,\]
where we identify an algebraic group over \(F\) with its \(F\)-rational points, \([\operatorname{GL}_{m}]\) is the abbreviation of \(N_{m}(F)\backslash\operatorname{GL}_{m}(F)\) and \(s^{*}=s-\frac{1}{2}+l-m-1\). By Lemma 5.6 (2) and (3), we get
\[\int_{N_{m}(F)\backslash\operatorname{GL}_{m}(F)}\left(\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}}\right)(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m}))W^{\prime}(a)|\det(a)|^{s^{*}}da=0.\]
Note that this is true for all irreducible representation \(\tau_{1}\) of \(\operatorname{GL}_{m}(F)\) and for all \(W^{\prime}\in\mathcal{W}(\tau_{1},\psi^{-1})\). Thus by Proposition 4.13 and Lemma 5.6 (4), we get that
\[\mathcal{B}_{i}(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m}),f_{1})=\mathcal{B}_ {i}(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m}),f_{2}).\]
To get the second assertion, we need to use the local gamma factor \(\Gamma(\mathbf{s},\pi\times(0,\tau_{2}),\psi)\) for a generic representation \(\tau_{2}\) of \(\operatorname{GL}_{m}(F)\). Here \(\mathbf{s}=s\) is a complex number used to twist \(\tau_{2}\). The calculation is almost identical to the above. In fact, if we take \(\xi_{\mathbf{s}}=W^{\prime}|\ |^{s-1/2}\) with \(W^{\prime}\in\mathcal{W}(\tau_{2},\psi^{-1})\), we can check that
\[\Psi(\rho(\gamma_{0,m}^{-1})\mathcal{B}_{i}^{f},\xi_{\mathbf{s}};0)=\int_{[ \operatorname{GL}_{m}]}\int\mathcal{B}_{i}^{f}\left(\begin{pmatrix}1&\\ &I_{l-m-1}&\\ &y&I_{m}\end{pmatrix}\mathbf{t}_{l-m}(I_{l-m},a)\right)W^{\prime}(a)|\det(a)|^{ s-1/2}dyda.\]
By Lemma 5.6 (1), we have \(\Psi(\rho(\gamma_{0,m}^{-1})\mathcal{B}_{i}^{f_{1}},\xi_{\mathbf{s}};0)=\Psi( \rho(\gamma_{0,m}^{-1})\mathcal{B}_{i}^{f_{2}},\xi_{\mathbf{s}};0)\). By the local functional equation (4.1), we get that
\[\Psi(\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}},M_{w_{m,n}}(\xi_{\mathbf{s }});0)=0.\]
By (3.2), the above equation becomes
\[\int_{[\operatorname{GL}_{m}]}(\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}}) (\overline{w}_{l-m}\mathbf{t}_{l-m}(I_{l-m},a))|a|^{s^{*}}da=0,\]
where \(s^{*}\) is a translation of \(s\) and its precise form is not important here. Then using Proposition 4.13 again, we get that
\[(\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}})(\overline{w}_{l-m}\mathbf{t}_ {l-m}(I_{l-m},a))=0,\forall a\in\operatorname{GL}_{m}(F).\]
This finishes the proof.
**Corollary 5.8**.: _Assume the condition \(\mathcal{C}(m,0)\). Then there exists_
* \(f_{\overline{w}_{j}}\in C_{c}^{\infty}(\Omega_{\overline{w}_{j}},\omega)\) _for each_ \(j\) _with_ \(m+1\leq j\leq l-m-1\)_;_
* \(f_{j,m}^{\prime}\in C_{c}^{\infty}(\Omega_{\overline{w}_{j,m}},\omega)\)_, for each_ \(j\) _with_ \(1\leq j\leq m\)_; and_
* \(f_{m,j}^{\prime\prime}\in C_{c}^{\infty}(\Omega_{\overline{w}_{m,j}},\omega)\)_, for each_ \(j\) _with_ \(1\leq j\leq m\)_,_
_such that_
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m+1}^{l-m-1} \mathcal{B}_{i}(g,f_{\overline{w}_{j}})+\sum_{j=1}^{m}\mathcal{B}_{i}(g,f_{j, m}^{\prime})+\sum_{j=1}^{m}\mathcal{B}_{i}(g,f_{m,j}^{\prime\prime}),\]
_for all \(g\in\operatorname{GL}_{l}(F)\) and for all \(i\) large enough depending only on \(f_{1},f_{2}\)._
Proof.: By Lemma 5.6 (2), inductive hypothesis (5.1) and (5.4), we get
\[\mathcal{B}_{i}(\overline{w}_{m}\mathbf{t}_{m}(a,I_{l-m}),f_{\overline{w}_{m} })=0. \tag{5.6}\]
As in the proof of Lemma 5.6 (2), we consider \(w_{\max}^{m}=\begin{pmatrix}&I_{l-m}\\ J_{m}&\end{pmatrix}\). Then for \(w\in[\overline{w}_{m},w_{\max}^{m}]\), we consider the set \(A_{w}\) as defined in (4.5). From \(w\leq w_{\max}^{m}\), we know that \(A_{w}\subset A_{w_{\max}^{m}}\), which consists of elements of the form \(\operatorname{diag}(a_{1},\dots,a_{m},aI_{l-m})\), for \(a_{j},a\in\operatorname{GL}_{1}(F)\). Moreover, we know that \(w=\overline{w}_{m}\mathbf{t}_{m}(w^{\prime},I_{l-m})\). Thus, for any \(a\in A_{w}\), we know that there exists an element \(z=zI_{l}\) in the center of \(\operatorname{GL}_{l}(F)\) and an element \(b\in\operatorname{GL}_{m}(F)\) such that \(wa=z\overline{w}_{m}\mathbf{t}_{m}(b,I_{l-m})\). Thus from (5.6), we get that
\[\mathcal{B}_{i}(wa,f_{\overline{w}_{m}})=0, \tag{5.7}\]
for all \(w\in[\overline{w}_{m},w_{\max}^{m}]\) and all \(a\in A_{w}\). Similarly, if we temporarily denote \(w_{\max}^{\prime}=\overline{w}_{l-m}\text{diag}(I_{l-m},J_{m})\), then from (5.5) we have
\[\mathcal{B}_{i}(wa,f_{\overline{w}_{l-m}})=0, \tag{5.8}\]
for all \(w\in\operatorname{B}(\operatorname{GL}_{l})\) with \(\overline{w}_{l-m}\leq w\leq w_{\max}^{\prime}\), and all \(a\in A_{w}\). The result in fact follows from (5.7), (5.8) and Theorem 4.8 directly. We give some details about this implication below.
By the proof of Lemma 5.6 and a simple calculation, we get that
\[\begin{array}{llll}\theta_{\overline{w}_{m}}&=\Delta-\left\{\alpha_{m} \right\},&\theta_{w_{\max}^{m}}&=\Delta-\left\{\alpha_{1},\dots,\alpha_{m} \right\},\\ \theta_{\overline{w}_{l-m}}&=\Delta-\left\{\alpha_{l-m}\right\},&\theta_{w_{ \max}^{\prime}}&=\Delta-\left\{\alpha_{l-m},\dots,\alpha_{l-1}\right\}.\end{array}\]
Denote
\[\Omega_{\overline{w}_{m}}^{\circ}=\bigcup_{\begin{subarray}{c}w\in \operatorname{B}(\operatorname{GL}_{l}),w\geq\overline{w}_{m}\\ d(w,\overline{w}_{m})=1\end{subarray}}\Omega_{w}.\]
By applying Theorem 4.8 and (5.7) to \(\overline{w}_{m}\), we get a function \(\overline{f}_{m}\in C_{c}^{\infty}(\Omega_{\overline{w}_{m}}^{\circ},\omega)\) such that, after increasing \(i\) if necessary, we have
\[\mathcal{B}_{i}(g,f_{\overline{w}_{m}})=\mathcal{B}_{i}(g,\overline{f}_{m}).\]
Note that the set \(\left\{w\in\operatorname{B}(\operatorname{GL}_{l}):w>\overline{w}_{m},d(w,\overline{w}_{m})=1\right\}=\left\{w_{\Delta-\left\{\alpha_{m},\alpha_{j}\right\}},1\leq j\leq l-1,j\neq m\right\}.\) By a partition of unity argument on \(\overline{f}_{m}\), for each \(j\neq m\) there exists a function \(f_{\Delta-\left\{\alpha_{j},\alpha_{m}\right\}}\in C_{c}^{\infty}(\Omega_{w_{\Delta-\left\{\alpha_{m},\alpha_{j}\right\}}},\omega)\) such that
\[\mathcal{B}_{i}(g,f_{\overline{w}_{m}})=\mathcal{B}_{i}(g,\overline{f}_{m})= \sum_{j\neq m}\mathcal{B}_{i}(g,f_{\Delta-\left\{\alpha_{j},\alpha_{m}\right\} }). \tag{5.9}\]
We consider \(j\) in \(3\) separate ranges. If \(m+1\leq j\leq l-m-1\), since \(w_{\Delta-\left\{\alpha_{m},\alpha_{j}\right\}}\geq\overline{w}_{j}\), \(f_{\Delta-\left\{\alpha_{j},\alpha_{m}\right\}}\) can be viewed as an element of \(C_{c}^{\infty}(\Omega_{\overline{w}_{j}},\omega)\) and thus can be absorbed into \(f_{\overline{w}_{j}}\) in (5.1). In other words, we can assume that \(f_{\Delta-\left\{\alpha_{j},\alpha_{m}\right\}}=0\) after replacing \(f_{\overline{w}_{j}}\) by \(f_{\overline{w}_{j}}+f_{\Delta-\left\{\alpha_{j},\alpha_{m}\right\}}\) in (5.1). If
\(l-1\geq j\geq l-m\), we have \(f_{\Delta-\{\alpha_{j},\alpha_{m}\}}\in C_{c}^{\infty}(\Omega_{\widetilde{w}_{l-j,m}},\omega)\). We write \(f_{\Delta-\{\alpha_{j},\alpha_{m}\}}\) as \(f^{\prime}_{\widetilde{w}_{l-j,m}}\). Thus (5.9) becomes
\[\mathcal{B}_{i}(g,f_{\overline{w}_{m}})=\mathcal{B}_{i}(g,\overline{f}_{m})= \sum_{j=1}^{m-1}\mathcal{B}_{i}(g,f_{\Delta-\{\alpha_{j},\alpha_{m}\}})+\sum_{ j=1}^{m}\mathcal{B}_{i}(g,f^{\prime}_{\widetilde{w}_{j,m}}). \tag{5.10}\]
If \(j<m\), then \(\overline{w}_{m}\leq w_{\Delta-\{\alpha_{m},\alpha_{j}\}}\leq w_{\max}^{m}\), and the formula (5.7) together with the above decomposition (5.9) of \(f_{\overline{w}_{m}}\) implies that
\[\mathcal{B}(wa,f_{\Delta-\{\alpha_{j},\alpha_{m}\}})=0,w=w_{\Delta-\{\alpha_{ m},\alpha_{j}\}},a\in A_{w}.\]
We then apply Theorem 4.8 to \(w=w_{\Delta-\{\alpha_{m},\alpha_{j}\}}\) and repeat the above process. We can get that for each \(k\) with \(k\neq j,m\), there exists a function \(f_{\Delta-\{\alpha_{j},\alpha_{k},\alpha_{m}\}}\in C_{c}^{\infty}(\Omega_{w_{ \Delta-\{\alpha_{j},\alpha_{k},\alpha_{m}\}}},\omega)\) such that
\[\mathcal{B}(g,f_{\Delta-\{\alpha_{j},\alpha_{m}\}})=\sum_{k\neq j,m}\mathcal{B }(g,f_{\Delta-\{\alpha_{j},\alpha_{k},\alpha_{m}\}}).\]
Similarly as above, if \(m+1\leq k\leq l-m-1\), we can assume that \(f_{\Delta-\{\alpha_{j},\alpha_{k},\alpha_{m}\}}=0\) after replacing \(f_{\overline{w}_{k}}\) in (5.1) by \(f_{\overline{w}_{k}}+f_{\Delta-\{\alpha_{j},\alpha_{k},\alpha_{m}\}}\). If \(l-1\geq k\geq l-m\), we have \(f_{\Delta-\{\alpha_{j},\alpha_{k},\alpha_{m}\}}\in C_{c}^{\infty}(\Omega_{ \widetilde{w}_{l-k,m}},\omega)\). We can thus absorb \(f_{\Delta-\{\alpha_{j},\alpha_{k},\alpha_{m}\}}\) to \(f^{\prime}_{\widetilde{w}_{l-k,m}}\) in (5.10) and assume that \(f_{\Delta-\{\alpha_{j},\alpha_{k},\alpha_{m}\}}=0\). Then (5.10) becomes
\[\mathcal{B}_{i}(g,f_{\overline{w}_{m}})=\mathcal{B}_{i}(g,\overline{f}_{m})=\sum_{1\leq j<k\leq m-1}\mathcal{B}_{i}(g,f_{\Delta-\{\alpha_{j},\alpha_{k},\alpha_{m}\}})+\sum_{j=1}^{m}\mathcal{B}_{i}(g,f^{\prime}_{\widetilde{w}_{j,m}}). \tag{5.11}\]
We continue to repeat the above process. Each time, we increase \(i\) if necessary, and replace \(f_{\overline{w}_{j}}\) for \(m+1\leq j\leq l-m-1\) in (5.1) and \(f^{\prime}_{\widetilde{w}_{j,m}}\) in (5.10) by new functions in the same corresponding spaces if necessary. After repeating the above process at most \(m\) times, we can get
\[\mathcal{B}_{i}(g,f_{\overline{w}_{m}})=\mathcal{B}_{i}(g,\overline{f}_{m})=\sum_{j=1}^{m}\mathcal{B}_{i}(g,f^{\prime}_{\widetilde{w}_{j,m}}),\quad f^{\prime}_{\widetilde{w}_{j,m}}\in C_{c}^{\infty}(\Omega_{\widetilde{w}_{j,m}},\omega). \tag{5.12}\]
Similarly, using (5.8) and Theorem 4.8, there exist functions \(f^{\prime\prime}_{\widetilde{w}_{m,j}}\in C_{c}^{\infty}(\Omega_{\widetilde{w}_{m,j}},\omega)\) such that
\[\mathcal{B}_{i}(g,f_{\overline{w}_{l-m}})=\sum_{j=1}^{m}\mathcal{B}_{i}(g,f^{ \prime\prime}_{\widetilde{w}_{m,j}}). \tag{5.13}\]
Now the result follows from the inductive hypothesis (5.1), equations (5.12) and (5.13).
### Proof of Theorem 5.4
Note that Corollary 5.8 gives the base case of Theorem 5.4. Given a positive integer \(n\) with \(1\leq n\leq m\), we assume that we have proved Theorem 5.4 for \(n-1\), namely, we assume the following
**Inductive Hypothesis 5.9**.: _The condition \(\mathcal{C}(m,n-1)\) implies that there exist functions_
* \(f_{\overline{w}_{j}}\in C_{c}^{\infty}(\Omega_{\overline{w}_{j}},\omega)\) _for each_ \(j\) _with_ \(m+1\leq j\leq l-m-1\)_;_
* \(f^{\prime}_{j,m}\in C_{c}^{\infty}(\Omega_{\widetilde{w}_{j,m}},\omega)\)_, for each_ \(j\) _with_ \(n\leq j\leq m\)_; and_
* \(f^{\prime\prime}_{m,j}\in C_{c}^{\infty}(\Omega_{\widetilde{w}_{m,j}},\omega)\)_, for each_ \(j\) _with_ \(n\leq j\leq m\)_,_
_such that_
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m+1}^{l-m-1} \mathcal{B}_{i}(g,f_{\overline{w}_{j}})+\sum_{j=n}^{m}\mathcal{B}_{i}(g,f^{ \prime}_{j,m})+\sum_{j=n}^{m}\mathcal{B}_{i}(g,f^{\prime\prime}_{m,j}), \tag{5.14}\]
_for all \(g\in\mathrm{GL}_{l}(F)\) and for all \(i\) large enough depending only on \(f_{1},f_{2}\). If \(n=m\), then we just absorb \(f^{\prime\prime}_{m,m}\) into \(f^{\prime}_{m,m}\) and write (5.14) as_
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m+1}^{l-m-1}\mathcal{ B}_{i}(g,f_{\overline{w}_{j}})+\mathcal{B}_{i}(g,f^{\prime}_{m,m}). \tag{5.15}\]
_See Remark 5.5._
We first prepare a lemma. For \(a\in\operatorname{GL}_{m}(F),b\in\operatorname{GL}_{n}(F)\), we denote
\[\mathbf{t}_{m,n}(a,b)=\operatorname{diag}(a,I_{l-m-n},b)\]
as before.
**Lemma 5.10**.: _We fix the notations as in the Inductive Hypothesis 5.9._
1. _For each_ \(k\) _with_ \(1\leq k\leq l-1\)_, then for_ \(i\) _large enough which only depends on_ \(f_{1},f_{2}\)_, and for any_ \(h\in P_{k,l-k}\)_, we have_ \[\mathcal{B}_{i}(h,f^{\prime}_{j,m})=0,\mathcal{B}_{i}(h,f^{\prime\prime}_{m,j })=0,\forall j,n\leq j\leq m.\]
2. _For any_ \(a\in\operatorname{GL}_{m}(F),b\in\operatorname{GL}_{n}(F),y\in\operatorname{Mat}_{m\times(l-m-1)}(F)\)_, we have_ \[\begin{array}{ll}\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}(y),f_{\overline{w}_{j}})&=0,\ \ m+1\leq j\leq l-m-1,\\ \mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}(y),f^{\prime}_{j,m})&=0,\ \ n<j\leq m,\\ \mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}(y),f^{\prime\prime}_{m,j})&=0,\ \ n\leq j\leq m,\ \mathrm{if}\ n<m.\end{array}\] _In particular, by (_5.14_), we have_ \[\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}(y),f_{1})-\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}(y),f_{2})=\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}(y),f^{\prime}_{n,m}).\]
3. _If_ \(u_{1}(y)\notin H^{i}_{l}\)_, we have_ \[\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}(y),f^{\prime}_{n,m})=0\] _for_ \(i\) _large enough depending only on_ \(f_{1},f_{2}\)_._
4. _For_ \(k_{1},k_{2}\in\mathbb{Z}\)_, the set_ \[\big{\{}(a,b)\in[\operatorname{GL}_{m}]\times[\operatorname{GL}_{n}]|\mathcal{ B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b),f^{\prime}_{n,m})\neq 0,| \det(a)|=q^{k_{1}},|\det(b)|=q^{k_{2}}\big{\}}\] _is compact. Here_ \([\operatorname{GL}_{m}]\) _stands for_ \(N_{m}(F)\backslash\operatorname{GL}_{m}(F)\)_._
This is an analogue of [21, Lemma 6.3].
Proof.: (1) The proof is the same as the proof of Lemma 5.6 (1) by noticing that \(\widetilde{w}_{m,j}\notin P_{k,l-k}\) and \(\widetilde{w}_{j,m}\notin P_{k,l-k}\).
(2) The proof is also similar to the proof of Lemma 5.6 (2) and we give some details here. We consider the set
\[S_{m,n} =\{w\in\mathbf{W}(\operatorname{GL}_{l}):w=\widetilde{w}_{n,m} \mathbf{t}_{m,n}(a,b),\ \text{for some}\ a\in\operatorname{GL}_{m},b\in\operatorname{GL}_{n}\}\] \[=\{\widetilde{w}_{n,m}\mathbf{t}_{m,n}(w,w^{\prime}),\ \text{for some}\ w\in\mathbf{W}( \operatorname{GL}_{m}),w^{\prime}\in\mathbf{W}(\operatorname{GL}_{n})\}\,.\]
Note that the Weyl elements in \(\operatorname{GL}_{m}\) (resp. \(\operatorname{GL}_{n}\)) form a Bruhat interval \([1,J_{m}]\) (resp. \([1,J_{n}]\)). Thus for any \(w\in S_{m,n}\) we have \(\widetilde{w}_{n,m}\leq w\leq\widetilde{w}_{\max}\), where
\[\widetilde{w}_{\max}=\widetilde{w}_{n,m}\mathbf{t}_{m,n}(J_{m},J_{n})=\begin{pmatrix}&&J_{n}\\ &I_{l-m-n}&\\ J_{m}&&\end{pmatrix}.\]
Notice that
\[\big{\{}\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}(y):a\in \operatorname{GL}_{m}(F),b\in\operatorname{GL}_{n}(F),y\in\operatorname{Mat}_{m \times(l-m-1)}\big{\}}\subset\cup_{w\in S_{m,n}}C(w).\]
We have
\[\theta_{\widetilde{w}_{\max}} =\Delta-\{\alpha_{1},\dots,\alpha_{m},\alpha_{l-n},\dots,\alpha_{l-1}\}\,,\] \[\theta_{\overline{w}_{j}} =\Delta-\{\alpha_{j}\}\,,\] \[\theta_{\widetilde{w}_{j,m}} =\Delta-\{\alpha_{m},\alpha_{l-j}\}\,,\] \[\theta_{\widetilde{w}_{m,j}} =\Delta-\{\alpha_{j},\alpha_{l-m}\}\,.\]
From these relations, we can see that \(C(\widetilde{w}_{\max})\cap\Omega_{\overline{w}_{j}}=\emptyset\), for all \(j\) with \(m+1\leq j\leq l-m-1\); \(C(\widetilde{w}_{\max})\cap\Omega_{\widetilde{w}_{j,m}}=\emptyset\), for all \(j\) with \(n<j\leq m\); and \(C(\widetilde{w}_{\max})\cap\Omega_{\widetilde{w}_{m,j}}=\emptyset\), for all \(j\) with \(n\leq j\leq m\) except the case \(n=j=m\). As in the proof of Lemma 5.6 (2), this gives the conclusion. The "in particular" part follows from the expansions (5.14) and (5.15) in Inductive Hypothesis 5.9.
(3) This is an analogue of [21, Lemma 6.3 (3)] and the proof is similar. We omit the details.
(4) This is an analogue of [21, Lemma 6.3 (4)]. We also omit the details here.
**Proposition 5.11**.: _Assume that \(1\leq n\leq m\leq[l/2]\) and \(m+n\leq l-1\). The condition \(\mathcal{C}(m,n)\) implies that_
\[\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b),f_{1})=\mathcal{B}_{i }(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b),f_{2}), \tag{5.16}\]
_and_
\[\mathcal{B}_{i}(\widetilde{w}_{m,n}\mathbf{t}_{n,m}(b,a),f_{1})=\mathcal{B}_{ i}(\widetilde{w}_{m,n}\mathbf{t}_{n,m}(b,a),f_{2}), \tag{5.17}\]
_for all \(a\in\mathrm{GL}_{m}(F),b\in\mathrm{GL}_{n}(F)\)._
Proof.: Given any irreducible generic representation \(\tau_{1}\) of \(\mathrm{GL}_{m}(F)\) and \(\tau_{2}\) of \(\mathrm{GL}_{n}(F)\), the assumption says that
\[\Gamma(\mathbf{s},\pi_{1}\times(\tau_{1},\tau_{2}),\psi)=\Gamma(\mathbf{s}, \pi_{2}\times(\tau_{1},\tau_{2}),\psi).\]
We use the local functional equation of the form in (4.1). We first compute
\[\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i}^{f},\xi_{\mathbf{s}}^{k,v_{1},v_{ 2}};0)\]
for the section \(\xi_{\mathbf{s}}^{k,v_{1},v_{2}}\) as defined in Subsection 4.5 and \(f=f_{1},f_{2}\). Here \(v_{j}\in\tau_{j}\) are arbitrary vectors and we take \(k\geq i\) large enough. We have
\[\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i}^{f},\xi_{\mathbf{s}}^{k,v_{1},v_{ 2}};0)=\int_{[\mathrm{GL}_{m+n}]}\int_{\overline{\mathcal{D}}^{0,m,n}} \mathcal{B}_{i}^{f}\left(\overline{u}\gamma_{m,n}\begin{pmatrix}h&&\\ &I_{l-m-n}\end{pmatrix}\gamma_{m,n}^{-1}\right)\xi_{\mathbf{s}}^{k,v_{1},v_{2}} (h)d\overline{u}dh.\]
Here \([\mathrm{GL}_{m+n}]\) stands for \(N_{m+n}(F)\backslash\mathrm{GL}_{m+n}(F)\) and we will use similar notation below. Since \(N_{m,n}M_{m,n}\overline{N}_{m,n}\) is dense in \(\mathrm{GL}_{m+n}(F)\), the above integral over \(N_{m+n}(F)\backslash\mathrm{GL}_{m+n}(F)\) can be replaced by \(N_{m+n}\backslash N_{m,n}M_{m,n}\overline{N}_{m,n}=(N_{m}\backslash\mathrm{GL} _{m}\times N_{n}\backslash\mathrm{GL}_{n})\overline{N}_{m,n}\), where an algebraic group is identified with its \(F\)-rational points. For \(h=\mathrm{diag}(a,b)\overline{u}_{m,n}(y_{2})\in(N_{m}\backslash\mathrm{GL}_{m }\times N_{n}\backslash\mathrm{GL}_{n})\overline{N}_{m,n}\) with \(y_{2}\in\mathrm{Mat}_{n\times m}\), we can take the Haar measure \(dh=|\det(a)|^{-n}|\det(b)|^{m}d\overline{v}dadb.\) A simple calculation on the conjugation by \(\gamma_{m,n}\) shows that
\[\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i}^{f},\xi_{\mathbf{s}}^{k,v_{1},v_{2}};0)= \int_{[\mathrm{GL}_{m}]\times[\mathrm{GL}_{n}]}\int_{\overline{N}_{m,n}}\int_{\overline{\mathcal{D}}^{0,m,n}}\mathcal{B}_{i}^{f}\left(\mathbf{t}_{m,n}(a,b)\begin{pmatrix}I_{m}&&&\\ &1&&\\ &&I_{l-m-n-1}&\\ y_{2}&&y_{1}&I_{n}\end{pmatrix}\right)\] \[\xi_{\mathbf{s}}^{k,v_{1},v_{2}}\left(\mathrm{diag}(a,b)\overline{u}_{m,n}(y_{2})\right)|\det(a)|^{-n}|\det(b)|^{l-n-1}dy_{2}dy_{1}dadb.\]
If \(\overline{u}_{m,n}(y_{2})\notin\overline{N}_{m,n}^{k}\), then \(\xi_{\mathbf{s}}\left(\mathrm{diag}(a,b)\overline{u}_{m,n}(y_{2})\right)=0\) by the definition of \(\xi_{\mathbf{s}}^{k,v_{1},v_{2}}\), see SS4.5. If \(\overline{u}_{m,n}(y_{2})\in\overline{N}_{m,n}^{k}\), then \(\begin{pmatrix}I_{m}&&\\ &I_{l-m-n}&\\ y_{2}&&I_{n}\end{pmatrix}\in\overline{N}_{l}\cap H_{l}^{i}\) because \(k\geq i\). See the definition of \(\overline{N}_{m,n}^{k}\) in SS4.5. By (4.3), we have
\[\mathcal{B}_{i}^{f}\left(\mathbf{t}_{m,n}(a,b)\begin{pmatrix}I_{m} &&&&\\ &1&&\\ &&I_{l-m-n-1}&\\ y_{2}&&y_{1}&I_{n}\end{pmatrix}\right)\] \[=\mathcal{B}_{i}^{f}\left(\mathbf{t}_{m,n}(a,b)\begin{pmatrix}I_{m }&&&&\\ &1&&\\ &&I_{l-m-n-1}&\\ &&y_{1}&I_{n}\end{pmatrix}\right).\]
Note that by the expansion (5.14), Lemma 5.6 (1) and Lemma 5.10 (1), we have
\[\mathcal{B}_{i}^{f_{1}}\left(\begin{pmatrix}a&&\\ &I_{l-m-n}&\\ &&b\end{pmatrix}\begin{pmatrix}I_{m}&&\\ &1&&\\ &&I_{l-m-n-1}&\\ &&y_{1}&I_{n}\end{pmatrix}\right)\] \[=\mathcal{B}_{i}^{f_{2}}\left(\begin{pmatrix}a&&\\ &I_{l-m-n}&\\ &&b\end{pmatrix}\begin{pmatrix}I_{m}&&\\ &1&&\\ &&I_{l-m-n-1}&\\ &&y_{1}&I_{n}\end{pmatrix}\right).\]
Thus we get
\[\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i}^{f_{1}},\xi_{i}^{k,v_{1},v_{2}};0) =\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i}^{f_{2}},\xi_{i}^{k,v_{1},v_{2}};0).\]
Then by the local functional equation (4.1) and the assumption on the local gamma factors, we have
\[\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i}^{f_{1}},\widetilde{\xi}_{1- \overline{\mathbf{s}}}^{k,v_{1},v_{2}};0)=\Psi(\rho(\gamma_{m,n}^{-1}) \mathcal{B}_{i}^{f_{2}},\widetilde{\xi}_{1-\overline{\mathbf{s}}}^{k,v_{1},v _{2}};0),\]
or
\[\Psi(\rho(\gamma_{m,n}^{-1})(\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}}),\widetilde{\xi}_{1-\overline{\mathbf{s}}}^{k,v_{1},v_{2}};0)=0. \tag{5.18}\]
Here \(\widetilde{\xi}_{1-\overline{\mathbf{s}}}^{k,v_{1},v_{2}}\) denotes \(M_{w_{m,n}}(\xi_{1-\overline{\mathbf{s}}}^{k,v_{1},v_{2}})\) as usual. In the following, we write \(\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}}\) as \(\mathcal{B}_{i}\) for simplicity. We have
\[\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i},\widetilde{\xi}_{1-\overline{\mathbf{s}}}^{k,v_{1},v_{2}};0)=\int_{[\mathrm{GL}_{m+n}]}\int_{\overline{U}^{0,n,m}}\mathcal{B}_{i}\left(\overline{u}\gamma_{n,m}\begin{pmatrix}h&&\\ &I_{l-m-n}\end{pmatrix}\gamma_{m,n}^{-1}\right)\widetilde{\xi}_{1-\overline{\mathbf{s}}}^{k,v_{1},v_{2}}(h)d\overline{u}dh.\]
Since \(N_{n+m}\backslash P_{n,m}w_{n,m}N_{m,n}\subset N_{n+m}\backslash\mathrm{GL}_ {n+m}\) is open and dense, we can replace the integral above over \(N_{n+m}\backslash\mathrm{GL}_{n+m}\) by \(N_{n+m}\backslash P_{n,m}w_{n,m}N_{m,n}\). If \(h=\mathrm{diag}(b,a)w_{n,m}u_{m,n}(x)\in N_{n+m}\backslash P_{n,m}w_{n,m}N_{m,n}\) with \(a\in\mathrm{GL}_{m},b\in\mathrm{GL}_{n},x\in\mathrm{Mat}_{m\times n}\), we can take the quotient measure to be
\[dh=|\det(b)|^{-m}|\det(a)|^{n}dxdadb.\]
Thus we have
\[\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i},\widetilde{\xi}_{1- \overline{\mathbf{s}}}^{c};0)= \int_{[\mathrm{GL}_{m}]\times[\mathrm{GL}_{m}]}\int_{U^{0,n,m}} \tag{5.19}\] \[\mathcal{B}_{i}\left(\overline{u}\gamma_{n,m}\begin{pmatrix}b&\\ a&\\ &I_{l-m-n}\end{pmatrix}\begin{pmatrix}I_{m}&x&\\ &I_{n}&\\ &&I_{l-m-n}\end{pmatrix}\gamma_{m,n}^{-1}\right)\] \[\widetilde{\xi}_{1-\overline{\mathbf{s}}}^{k,v_{1},v_{2}}( \mathrm{diag}(b,a)w_{n,m}u_{m,n}(x))|\det(b)|^{-m}|\det(a)|^{n}d\overline{u} dxdadb.\]
A matrix calculation shows that
\[\gamma_{n,m}\begin{pmatrix}b&&\\ a&&\\ &&I_{l-m-n}\end{pmatrix}\begin{pmatrix}I_{m}&x&\\ &I_{n}&\\ &&I_{l-m-n}\end{pmatrix}\gamma_{m,n}^{-1}\] \[=\begin{pmatrix}&&b\\ &I_{l-m-n}&\\ a&&ax\end{pmatrix}\] \[=\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}^{\prime}(ax),\]
where
\[u_{1}^{\prime}(ax):=\begin{pmatrix}I_{m}&&ax\\ &I_{l-m-n}&\\ &&I_{n}\end{pmatrix}.\]
On the other hand, for \(\overline{u}\in\overline{U}^{0,n,m}\), we can write
\[\overline{u}=\begin{pmatrix}I_{n+1}&&\\ &I_{l-m-n-1}&\\ &y&I_{m}\end{pmatrix},\text{ for }y\in\mathrm{Mat}_{m\times(l-m-n-1)}.\]
We have
\[\overline{u}\gamma_{n,m}\begin{pmatrix}b\\ a\\ \end{pmatrix}\begin{pmatrix}I_{m}&x\\ I_{n}\\ &&I_{l-m-n}\end{pmatrix}\gamma_{m,n}^{-1}\] \[=\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}((a^{-1}y,ax)),\]
where recall that
\[u_{1}((a^{-1}y,ax))=\begin{pmatrix}I_{m}&&a^{-1}y&ax\\ &1&\\ &&I_{l-m-n-1}&\\ &&I_{n}\end{pmatrix}.\]
After changing variables on \(x\) and \(y\), (5.19) becomes
\[\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i},\widehat{\xi}_{1- \bar{\mathbf{s}}}^{k,v_{1},v_{2}};0)= \int_{[\mathrm{GL}_{m}]\times[\mathrm{GL}_{n}]}\int_{y\in \mathrm{Mat}_{m\times(l-m-n-1)}}\int_{x\in\mathrm{Mat}_{m\times n}}\mathcal{B}_ {i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}((y,x)))\] \[\widehat{\xi}_{1-\bar{\mathbf{s}}}^{k,v_{1},v_{2}}(\mathrm{diag} (b,a)w_{n,m}u_{m,n}(x))|\det(b)|^{-m}|\det(a)|^{l-m-n-1}dydxdbda.\]
Set
\[D_{i}=\left\{(y,x)\in\mathrm{Mat}_{m\times(l-m-n-1)}\times\mathrm{Mat}_{m\times n }:u_{1}((y,x))\in H_{l}^{i}\cap N_{l}\right\},\]
as in Lemma 5.10 (3). By Lemma 5.10 (2) and (3), we have
\[\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}((y,x)))=0,\text { if }((y,x))\notin D_{i}.\]
If \((y,x)\in D_{i}\), by (4.3), we have
\[\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)u_{1}((y,x)))= \mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b)).\]
Moreover, by Subsection 4.5, in particular, (4.9), for \(k\geq k_{0}(D,v_{1},v_{2})\), we have
\[\widetilde{\xi}_{1-\bar{\mathbf{s}}}^{k,v_{1},v_{2}}(\mathrm{diag}(b,a)w_{n,m }u_{m,n}(x))=\mathrm{vol}(\overline{N}_{m,n}^{k})|\det(b)|^{1-s_{2}+\frac{m-1} {2}}|\det(a)|^{-(1-s_{1})-\frac{n-1}{2}}W_{v_{1}}(a)W_{v_{2}}(b).\]
Thus we get
\[\Psi(\rho(\gamma_{m,n}^{-1})\mathcal{B}_{i},\widetilde{\xi}_{1- \bar{\mathbf{s}}}^{k,v_{1},v_{2}};0)= \mathrm{vol}(D_{i})\mathrm{vol}(\overline{N}_{m,n}^{k})\int_{[ \mathrm{GL}_{m}]\times[\mathrm{GL}_{n}]}\mathcal{B}_{i}(\widetilde{w}_{n,m} \mathbf{t}_{m,n}(a,b))\] \[W_{v_{1}}(a)W_{v_{2}}(b)|\det(b)|^{s_{2}^{*}}|\det(a)|^{s_{1}^{*} }dbda,\]
where \(s_{2}^{*}=1-s_{2}-\frac{m+1}{2},s_{1}^{*}=-(1-s_{1})-\frac{n-1}{2}+l-m-n-1\). The explicit form of \(s_{1}^{*},s_{2}^{*}\) is not important here. By (5.18), we get
\[\int_{[\mathrm{GL}_{m}]\times[\mathrm{GL}_{n}]}\mathcal{B}_{i}(\widetilde{w}_ {n,m}\mathbf{t}_{m,n}(a,b))W_{v_{1}}(a)W_{v_{2}}(b)|\det(b)|^{s_{2}^{*}}|\det(a )|^{s_{1}^{*}}dbda=0,\]
Note that the above formula holds for every \(v_{1}\in\tau_{1},v_{2}\in\tau_{2}\). Thus by Proposition 4.13 and Lemma 5.10 (4), we get that
\[\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b))=0,\forall a\in \mathrm{GL}_{m}(F),b\in\mathrm{GL}_{n}(F).\]
This proves the first equation (5.16). The second equation (5.17) follows from the same proof by switching \(m\) and \(n\) and using the local gamma factor \(\gamma(\mathbf{s},\pi\times(\tau_{2},\tau_{1}),\psi)\) for an irreducible generic representation \(\tau_{1}\) of \(\mathrm{GL}_{m}(F)\) and \(\tau_{2}\) of \(\mathrm{GL}_{n}(F)\). This finishes the proof.
**Remark 5.12**.: If we further require that \(\pi\) is unitarizable1, by [10, Proposition 3.3], we have
Footnote 1: There is no harm to do so if our goal is to prove Jacquet’s local converse conjecture, see [11].
\[\overline{\mathcal{B}_{i}(g,f)}=\mathcal{B}_{i}(g^{*},f), \tag{5.20}\]
for \(f=f_{1},f_{2}\). Here \(g^{*}=J_{l}^{t}g^{-1}J_{l}\). The equation (5.17) can be deduced from (5.16) using (5.20) because \((\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b))^{*}=\widetilde{w}_{m,n}\mathbf{t}_ {n,m}(b^{*},a^{*})\). The formula (5.20) reflects a symmetry between \(\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b),f)\) and \(\mathcal{B}_{i}(\widetilde{w}_{m,n}\mathbf{t}_{n,m}(b,a),f)\). In our approach, this symmetry is reflected in the corresponding definition of local gamma factors: \(\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b),f)\) appeared naturally in \(\Gamma(\mathbf{s},\pi\times(\tau_{1},\tau_{2}),\psi)\), while \(\mathcal{B}_{i}(\widetilde{w}_{m,n}\mathbf{t}_{n,m}(b,a),f)\) appeared naturally in \(\Gamma(\mathbf{s},\pi\times(\tau_{2},\tau_{1}),\psi)\), where \(\tau_{1}\) (resp. \(\tau_{2}\)) is an irreducible generic representation of \(\mathrm{GL}_{m}(F)\) (resp. \(\mathrm{GL}_{n}(F)\)).
**Corollary 5.13**.: _Suppose that \(1\leq n\leq m\leq[l/2]\) and \(n+m\leq l-1\). Then the condition \(\mathcal{C}(m,n)\) implies that there exist functions_
* \(f_{\overline{w}_{j}}\in C_{c}^{\infty}(\Omega_{\overline{w}_{j}},\omega)\) _for each_ \(j\) _with_ \(m+1\leq j\leq l-m-1\)_;_
* \(f_{j,m}^{\prime}\in C_{c}^{\infty}(\Omega_{\widetilde{w}_{j,m}},\omega)\)_, for each_ \(j\) _with_ \(n+1\leq j\leq m\)_; and_
* \(f_{m,j}^{\prime\prime}\in C_{c}^{\infty}(\Omega_{\widetilde{w}_{m,j}},\omega)\)_, for each_ \(j\) _with_ \(n+1\leq j\leq m\)_,_
_such that_
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\sum_{j=m+1}^{l-m-1} \mathcal{B}_{i}(g,f_{\overline{w}_{j}})+\sum_{j=n+1}^{m}\mathcal{B}_{i}(g,f_{ j,m}^{\prime})+\sum_{j=n+1}^{m}\mathcal{B}_{i}(g,f_{m,j}^{\prime\prime}), \tag{5.21}\]
_for all \(g\in\operatorname{GL}_{l}(F)\) and for all \(i\) large enough depending only on \(f_{1},f_{2}\)._
Proof.: The proof is similar to the proof of Corollary 5.8 and is just simple application of Theorem 4.8. We give some details here. By Lemma 5.10 (2) and Proposition 5.11, the condition \(\mathcal{C}(m,n)\) implies that
\[\mathcal{B}_{i}(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(a,b),f_{n,m}^{\prime})=0, \tag{5.22}\]
for any \(a\in\operatorname{GL}_{m}(F),b\in\operatorname{GL}_{n}(F)\). As in the proof of Lemma 5.10 (2), we consider
\[\widetilde{w}_{\max}=\widetilde{w}_{n,m}\mathbf{t}_{m,n}(J_{m},J_{n})=\begin{pmatrix}&&J_{n}\\ &I_{l-m-n}&\\ J_{m}&&\end{pmatrix}.\]
From the description of \(\operatorname{B}(\operatorname{GL}_{l})\) in terms of subsets of \(\Delta\), we can check that any \(w\in\operatorname{B}(\operatorname{GL}_{l})\) with \(\widetilde{w}_{n,m}\leq w\leq\widetilde{w}_{\max}\) has the form \(\widetilde{w}_{n,m}\mathbf{t}_{m,n}(w_{1},w_{2})\) for certain \(w_{1}\in\mathbf{W}(\operatorname{GL}_{m}),w_{2}\in\mathbf{W}(\operatorname{ GL}_{n})\). Moreover, for any such \(w\), we have \(A_{w}\subset A_{\widetilde{w}_{\max}}\). From the definition (4.5), we see that any element \(t\in A_{w}\) has the form
\[z\mathbf{t}_{m,n}(t_{1},t_{2}),\]
with \(z=zI_{l}\) in the center of \(\operatorname{GL}_{l}(F)\), a diagonal element \(t_{1}\) in \(\operatorname{GL}_{m}\) and another diagonal element \(t_{2}\) in \(\operatorname{GL}_{n}\). Thus (5.22) implies that
\[\mathcal{B}_{i}(wt,f_{n,m}^{\prime})=0, \tag{5.23}\]
for all \(w\) with \(\widetilde{w}_{n,m}\leq w\leq w_{\max}\) and all \(t\in A_{w}\). If we denote \(w_{\max}^{\prime}=\widetilde{w}_{m,n}\mathbf{t}_{n,m}(J_{n},J_{m})\), then from (5.17), one can obtain that
\[\mathcal{B}_{i}(wt,f_{n,m}^{\prime})=0,\forall w\in[\widetilde{w}_{m,n},w_{ \max}^{\prime}],t\in A_{w}. \tag{5.24}\]
Similar as in the proof of Corollary 5.8, the result follows from Theorem 4.8, (5.23) and (5.24). Since this argument is almost identical to the proof of Corollary 5.8, we omit the details.
If \(l=2r+1\) is odd, we have completed the proof of Theorem 5.4 and hence Theorem 5.1 and Theorem 4.1. If \(l=2r\) is even, by Corollary 5.13, the condition \(\mathcal{C}(r,r-1)\) implies that
\[\mathcal{B}_{i}(g,f_{1})-\mathcal{B}_{i}(g,f_{2})=\mathcal{B}_{i}(g,f_{r,r}^{ \prime}), \tag{5.25}\]
for some \(f_{r,r}^{\prime}\in C_{c}^{\infty}(\Omega_{\widetilde{w}_{r,r}},\omega)\). We will show in SS5.3 that the condition \(\mathcal{C}(r,r)\) will force that we can take \(f_{r,r}^{\prime}=0\) after increasing \(i\) if necessary, which will finish the proof of Theorem 5.1 and hence Theorem 4.1 when \(l=2r\).
### Conclude the proof when \(l\) is even
In this final subsection, we assume that \(l=2r\) is even. Recall that for a character \(\mu\) of \(F^{\times}\), we have a Weil representation \(\omega_{\psi^{-1},\mu,\mu^{-1}}\) of \(\operatorname{GL}_{2r}(F)\), see SS4.1 or [10, SS2.2]. For a positive integer \(c\), we consider the function \(\phi^{c}\in\mathcal{S}(F^{r}\times F^{r})\) defined by
\[\phi^{c}(x,y)=\chi_{\mathfrak{p}^{(2r-1)c}}(x_{1})\dots\chi_{\mathfrak{p}^{3c}} (x_{r-1})\chi_{1+\mathfrak{p}^{c}}(x_{r})\chi_{\mathfrak{p}^{(2r-1)c}}(y_{1}) \dots\chi_{\mathfrak{p}^{3c}}(y_{r-1})\chi_{1+\mathfrak{p}^{c}}(y_{r}),\]
for \(x=(x_{1},x_{2},\dots,x_{r})\in F^{r},y=(y_{1},\dots,y_{r})\in F^{r}\). Here for a set \(A\subset F\), \(\chi_{A}\) denotes the characteristic function of \(A\).
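For instance, when \(r=2\) (so \(l=4\)), the function above reads

\[\phi^{c}(x,y)=\chi_{\mathfrak{p}^{3c}}(x_{1})\chi_{1+\mathfrak{p}^{c}}(x_{2})\chi_{\mathfrak{p}^{3c}}(y_{1})\chi_{1+\mathfrak{p}^{c}}(y_{2}),\]

so that, as \(c\) grows, \(\phi^{c}\) is supported in smaller and smaller neighborhoods of the point \((e_{r},e_{r})\).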
**Proposition 5.14**.: _The condition \(\mathcal{C}(r,r)\) implies that_
\[\mathcal{B}_{i}(w_{r,r}\mathbf{t}_{r,r}(a,b),f_{r,r}^{\prime})\omega_{\psi^{-1}}(w_{r,r})\phi^{c}(e_{r}b,e_{r}a^{*})=0,\]
_for any \(a,b\in\operatorname{GL}_{r}(F)\), and for large \(c>i\). Here \(a^{*}=J_{r}a^{-1}J_{r}\)._
Proof.: The calculation below is similar to the case given in [22, SS7]. We content ourselves with a sketch. The corresponding local zeta integrals and local functional equations were recalled in SS4.1. As in the calculation in Proposition 5.11, we have
\[\Psi(\mathcal{B}_{i}^{f_{1}},\xi_{\mathbf{s}}^{k,v_{1},v_{2}},\phi^{c})=\Psi( \mathcal{B}_{i}^{f_{2}},\xi_{\mathbf{s}}^{k,v_{1},v_{2}},\phi^{c}).\]
Thus by the assumption on local gamma factors, we have
\[\Psi(\mathcal{B}_{i}^{f_{1}},\widehat{\xi}_{\mathbf{s}}^{k,v_{1},v_{2}},\phi^{c})=\Psi(\mathcal{B}_{i}^{f_{2}},\widehat{\xi}_{\mathbf{s}}^{k,v_{1},v_{2}},\phi^{c}).\]
Again, we denote \(\mathcal{B}_{i}=\mathcal{B}_{i}^{f_{1}}-\mathcal{B}_{i}^{f_{2}}\) for simplicity and we get \(\Psi(\mathcal{B}_{i},\widehat{\xi}_{1-\widehat{\mathbf{s}}}^{k,v_{1},v_{2}}, \phi^{c})=0.\) On the other hand, by definition we have
\[\Psi(\mathcal{B}_{i},\widehat{\xi}_{1-\widehat{\mathbf{s}}}^{k,v _{1},v_{2}},\phi^{c}) =\int_{[\mathrm{GL}_{2r}]}\mathcal{B}_{i}(g)\omega_{\psi^{-1}}(g )\phi^{c}(e_{r},e_{r})\widehat{\xi}_{1-\mathbf{s}}^{k,v_{1},v_{2}}(g)dg\] \[=\int_{[\mathrm{GL}_{r}]\times[\mathrm{GL}_{r}]}\int_{N_{r,r}} \mathcal{B}_{i}(w_{r}\mathbf{t}_{r}(a,b)u_{r}(x))\omega_{\psi^{-1}}(w_{r} \mathbf{t}_{r}(a,b)u_{r}(x))\phi^{c}(e_{r},e_{r})\] \[\widehat{\xi}_{1-\mathbf{s}}^{k,v_{1},v_{2}}(w_{r}\mathbf{t}_{r}( a,b)u_{r}(x))|\det(a)|^{r}|\det(b)|^{-r}dxdadb.\]
Here for simplicity, we write \(\mathbf{t}_{r,r}(a,b)=\mathrm{diag}(a,b)\) as \(\mathbf{t}_{r}(a,b)\), \(w_{r,r}=\begin{pmatrix}&I_{r}\\ I_{r}&\end{pmatrix}\) as \(w_{r}\) and \(u_{r,r}(x)=\begin{pmatrix}I_{r}&x\\ &I_{r}\end{pmatrix}\) as \(u_{r}(x)\). By Lemma 5.10 (2) and (3), we have
\[\mathcal{B}_{i}(w_{r}\mathbf{t}_{r}(a,b)u_{r}(x))=0,\text{ if }u_{r}(x)\notin N _{r,r}\cap H_{2r}^{i}.\]
If \(u_{r}(x)\in N_{r,r}\cap H_{2r}^{i}\) and \(k\gg 0\), by (4.9), we still have
\[\widehat{\xi}_{1-\widehat{\mathbf{s}}}^{k,v_{1},v_{2}}(w_{r}\mathbf{t}_{r}(a,b)u_{r}(x))=\mathrm{vol}(\overline{N}_{r,r}^{k})|\det(b)|^{1-s_{2}+\frac{r-1} {2}}|\det(a)|^{-(1-s_{1})-\frac{r-1}{2}}W_{v_{1}}(a)W_{v_{2}}(b).\]
If \(c>i\), from the Weil representation formula [23, SS2.2], we can check that
\[\omega_{\psi^{-1}}(u_{r}(x))\phi^{c}=\psi^{-1}(x)\phi^{c},u_{r}(x)\in N_{r,r }\cap N_{2r}^{i},\]
see [22, Lemma 5.5] for a very similar calculation. Here \(\psi\) is viewed as a character of the maximal unipotent subgroup \(N_{l}\). Thus we get
\[\omega_{\psi^{-1}}(w_{r}\mathbf{t}_{r}(a,b)u_{r}(x))\phi^{c}(e_{r},e_{r})= \psi^{-1}(x)\mu(\det(ab))|\det(a)\det(b^{-1})|^{1/2}(\omega_{\psi^{-1}}(w_{r}) \phi^{c})(e_{r}b,e_{r}a^{*}),\]
see [23, SS2.2] for the corresponding Weil representation formulas. On the other hand, for \(u_{r}(x)\in N_{r,r}\cap H_{2r}^{i}\), by (4.3), we get that
\[\mathcal{B}_{i}(w_{r}\mathbf{t}_{r}(a,b)u_{r}(x))=\psi(x)\mathcal{B}_{i}(w_{r} \mathbf{t}_{r}(a,b)).\]
Combining the above calculations, we get that
\[\int_{[\mathrm{GL}_{r}]\times[\mathrm{GL}_{r}]}\mathcal{B}_{i}(w_{r}\mathbf{t}_{r}(a,b))\omega_{\psi^{-1}}(w_{r})\phi^{c}(e_{r}b,e_{r}a^{*})W_{v_{1}}(a)W_{v_{2}}(b)|\det(a)|^{s_{1}^{*}}|\det(b)|^{-s_{2}^{*}}dadb=0.\]
Here \(s_{1}^{*}\) and \(-s_{2}^{*}\) are certain translations of \(s_{1},-s_{2}\) respectively. Now the result follows from Proposition 4.13.
**Corollary 5.15**.: _The condition \(\mathcal{C}(r,r)\) implies that \(\mathcal{B}_{i}(g,f_{1})=\mathcal{B}_{i}(g,f_{2})\) for \(i\) large enough depending only on \(f_{1},f_{2}\)._
Proof.: The proof is along the same line of the proof of Corollary 5.13. Set
\[w_{\max}=w_{r,r}\mathbf{t}_{r,r}(J_{r},J_{r})=\begin{pmatrix}&J_{r}\\ J_{r}&\end{pmatrix},\]
which is indeed the longest Weyl element of \(\mathrm{GL}_{2r}\). For a Weyl element \(w\in\mathrm{B}(\mathrm{GL}_{2r})\) such that \(w_{r,r}\leq w\leq w_{\max}\), we can check that it has the form \(w_{r,r}\mathbf{t}_{r,r}(w_{1},w_{2})\) for some \(w_{1},w_{2}\in\mathbf{W}(\mathrm{GL}_{r})\). We claim that \(\mathcal{B}_{i}(tw,f_{r,r}^{\prime})=0\) for all \(t\in T_{2r}(F)\) and all \(w\) with \(w_{r,r}\leq w\leq w_{\max}\). We write \(t=\mathrm{diag}(a_{1},\dots,a_{2r})\in T_{2r}(F)\). Since \(\mathcal{B}_{i}(\cdot,f_{r,r}^{\prime})\) has a central character, we can assume that \(a_{r+1}=1\).
From \(w\geq w_{r,r}\), we have \(\theta_{w}\subset\theta_{w_{r,r}}=\Delta-\{\alpha_{r}\}\). In particular, we have \(\alpha_{r}\notin\theta_{w}\) and thus \(\beta:=-w(\alpha_{r})>0\). For a root \(\gamma\), we fix an embedding \(x_{\gamma}:F\to N_{2r}\) such that \(\mathrm{Im}(x_{\gamma})\) is the root
space of \(\gamma\). Pick \(y\in\mathfrak{p}^{(2\mathrm{ht}\beta+1)i}\), where \(\mathrm{ht}(\beta)\) denotes the height of \(\beta\). Then \(x_{-\beta}(y)\in H^{i}_{2r}\), see SS4.2. Moreover, we have
\[twx_{-\beta}(y)=x_{\alpha_{r}}(\alpha_{r}(t)y)tw.\]
By (4.3), we get that \(\mathcal{B}_{i}(twx_{-\beta}(y),f^{\prime}_{r,r})=\psi(\alpha_{r}(t)y) \mathcal{B}_{i}(tw,f^{\prime}_{r,r})\). Thus if \(\mathcal{B}_{i}(tw,f^{\prime}_{r,r})\neq 0\), we get that \(\alpha_{r}(t)y\in\mathcal{O}\) for any \(y\in\mathfrak{p}^{(2\mathrm{ht}\beta+1)i}\), which implies that \(a_{r}=\alpha_{r}(t)\in\mathfrak{p}^{-(2\mathrm{ht}\beta+1)i}\). If \(\alpha_{r}(t)\in\mathfrak{p}^{-(2\mathrm{ht}\beta+1)i}\), we write
\[tw=tw_{r,r}\mathbf{t}_{r,r}(w,w^{\prime})=w_{r,r}\mathbf{t}_{r,r}(t_{1}w,t_{ 2}w^{\prime}),\]
for some \(w,w^{\prime}\in\mathbf{W}(\mathrm{GL}_{r}).\) Here \(t_{2}=\mathrm{diag}(a_{1},\ldots,a_{r}),t_{1}=\mathrm{diag}(a_{r+1},\ldots,a_{ 2r})\). By Proposition 5.14, we get that
\[\mathcal{B}_{i}(tw,f^{\prime}_{r,r})\omega_{\psi^{-1}}(w_{r,r})\phi^{c}(e_{r} t_{2}w^{\prime},e_{r}t_{1}^{*}w^{*})=0. \tag{5.26}\]
Write \(v_{1}=e_{r}t_{2}w^{\prime}=[0,0,\ldots,0,a_{r}]w^{\prime}=[v_{11},\ldots,v_{1r}]\), where only one \(v_{1j}\) is nonzero, which is \(a_{r}\). Moreover, we write \(v_{2}=e_{r}t_{1}^{*}w^{*}=[0,\ldots,0,1]w^{*}=[v_{21},\ldots,v_{2r}]\), where only one entry \(v_{2j}\) is nonzero, which is \(1\). From the Weil representation formula, we can take \(c\) large enough such that \(\omega_{\psi^{-1}}(w_{r,r})\phi^{c}(e_{r}t_{2}w^{\prime},e_{r}t_{1}^{*}w^{*})\neq 0\), see [18, Lemma 5.5 (2)] for the detailed calculation in a similar situation. From (5.26), we get \(\mathcal{B}_{i}(tw,f^{\prime}_{r,r})=0\) for any \(t\in T_{2r}(F),w\in\mathrm{B}(\mathrm{GL}_{2r})\) with \(w_{r,r}\leq w\leq w_{\max}\). A direct application of Theorem 4.8 shows that \(\mathcal{B}_{i}(g,f^{\prime}_{r,r})=0\) after increasing \(i\) if necessary. This finishes the proof.
This finishes the proof of Theorem 5.4, and thus Theorem 5.1 and Theorem 4.1.
**Remark 5.16**.: Suppose that \(F\) is a finite field. Let \(l,m,n\) be non-negative integers with \(m+n<l\). Let \(\pi\) be an irreducible supercuspidal representation of \(\mathrm{GL}_{l}(F)\), \(\tau_{1},\tau_{2}\) be irreducible generic representations of \(\mathrm{GL}_{m}(F)\) and \(\mathrm{GL}_{n}(F)\) respectively. Then for \(W\in\mathcal{W}(\pi,\psi)\) and \(f\in\mathrm{Ind}_{P_{m,n}(F)}^{\mathrm{GL}_{m+n}(F)}(\tau_{1}\boxtimes\tau_{2})\), we can still define the local zeta integral \(\Psi(W,f)\) and local gamma factor \(\Gamma(\pi\times(\tau_{1},\tau_{2}),\psi)\) as in SS3. As in the \(p\)-adic case, modulo a normalization factor, this gamma factor should be the product of gamma factors \(\gamma(\pi\times\tau_{1},\psi)\) and \(\gamma(\widetilde{\pi}\times\widetilde{\tau}_{2},\psi)\), where these factors were developed in [10] by imitating the Jacquet-Piatetski-Shapiro-Shalika's theory [11]. A similar argument as we did in the last two sections can also give a new proof of the finite field analogue of Jacquet's local converse conjecture, which was originally proved in [14]. For classical groups and the exceptional group \(G_{2}\), the finite field analogue of local converse theorems were proved in [13], [12] and [13].
|
2309.16034 | Analytical Modelling of Raw Data for Flow-Guided In-body Nanoscale
Localization | Advancements in nanotechnology and material science are paving the way toward
nanoscale devices that combine sensing, computing, data and energy storage, and
wireless communication. In precision medicine, these nanodevices show promise
for disease diagnostics, treatment, and monitoring from within the patients'
bloodstreams. Assigning the location of a sensed biological event with the
event itself, which is the main proposition of flow-guided in-body nanoscale
localization, would be immensely beneficial from the perspective of precision
medicine. The nanoscale nature of the nanodevices and the challenging
environment that the bloodstream represents, result in current flow-guided
localization approaches being constrained in their communication and
energy-related capabilities. The communication and energy constraints of the
nanodevices result in different features of raw data for flow-guided
localization, in turn affecting its performance. An analytical modeling of the
effects of imperfect communication and constrained energy causing intermittent
operation of the nanodevices on the raw data produced by the nanodevices would
be beneficial. Hence, we propose an analytical model of raw data for
flow-guided localization, where the raw data is modeled as a function of
communication and energy-related capabilities of the nanodevice. We evaluate
the model by comparing its output with the one obtained through the utilization
of a simulator for objective evaluation of flow-guided localization, featuring
comparably higher level of realism. Our results across a number of scenarios
and heterogeneous performance metrics indicate high similarity between the
model and simulator-generated raw datasets. | Guillem Pascual, Filip Lemic, Carmen Delgado, Xavier Costa-Perez | 2023-09-27T21:26:01Z | http://arxiv.org/abs/2309.16034v2 | # Analytical Modelling of Raw Data for Flow-Guided In-body Nanoscale Localization
###### Abstract
Advancements in nanotechnology and material science are paving the way toward nanoscale devices that combine sensing, computing, data and energy storage, and wireless communication. In precision medicine, these nanodevices show promise for disease diagnostics, treatment, and monitoring from within the patients' bloodstream. Assigning the location of a sensed biological event with the event itself, which is the main proposition of flow-guided in-body nanoscale localization, would be immensely beneficial from the perspective of precision medicine. The nanoscale nature of the nanodevices and the challenging environment that the bloodstream represents, result in current flow-guided localization approaches being constrained in their communication and energy-related capabilities. The communication and energy constraints of the nanodevices result in different features of raw data for flow-guided localization, in turn affecting its performance. An analytical modeling of the effects of imperfect communication and constrained energy causing intermittent operations of the nanodevices on the raw data produced by the nanodevices would be beneficial. Hence, we propose an analytical model of raw data for flow-guided localization, where the raw data is modelled as a function of communication and energy-related capabilities of the nanodevice. We evaluate the model by comparing its output with the one obtained through the utilization of a simulator for objective evaluation of flow-guided localization, featuring comparably higher level of realism. Our results across a number of scenarios and heterogeneous performance metrics indicate high similarity between the model and simulator-generated raw datasets.
+
Footnote †: Corresponding Author.
## I Introduction
Contemporary nanotechnological advancements are creating an opportunity for the development of nanoscale devices that combine sensing, computing, and data and energy storage functionalities [1]. These nanodevices are envisaged to revolutionize a variety of applications in precision medicine [2]. Some applications involve deploying nanodevices in the patients' bloodstream, requiring their physical size to be comparable to that of red blood cells (i.e., up to 5 microns). Due to their minuscule dimensions, these nanodevices will rely on nanoscale components that harvest energy from environmental sources such as heartbeats or ultrasound-based power transfer [1]. As a consequence, these devices are anticipated to passively circulate within the bloodstream.
Advancements in novel materials, notably graphene and its derivatives [3], have created possibilities for nanoscale wireless communication at Terahertz (THz) frequencies (i.e., 0.1-10 THz) [4]. The inclusion of wireless communication capabilities enables two-way communication between nanodevices and the external world [5]. Nanodevices with integrated communication abilities are facilitating sensing-based applications such as oxygen sensing in the bloodstream for early cancer diagnosis and actuation-based applications like non-invasive targeted drug delivery for cancer treatment. Additionally, nanodevices with communication capabilities serve as a foundation for flow-guided localization within the bloodstream [4]. Flow-guided in-body localization could enable the association of a location with an event detected by a nanodevice, offering advantages such as non-invasiveness, early and precise diagnostics, and cost reduction [6, 7, 8].
Localization methods from [7, 8] operate as follows: in each passage through the heart, the nanodevices try to establish communication with an on-body anchor for transmitting their identifiers along with a binary indicator of whether an event was detected. The anchor utilizes this to calculate the elapsed time since the previous transmission, enabling the inference of a cardiovascular path that contains the detected event. At THz frequencies, challenges arise due to molecular absorption in the medium, leading to attenuation, distortion, and additional noise [5]. As a consequence, the ability of nanodevices to establish effective communication with the anchor is affected, as indicated in e.g., [9, 10]. This in turn results in erroneous data being eventually transmitted to the anchor, as the transmitted data would encapsulate multiple iterations through the bloodstream. Note that other nanocommunication approaches potentially suitable for in-body nanocommunication (e.g., molecular, magnetic, or ultrasound) are expected to feature similar communication unreliability due to e.g., significant path loss, high mobility, and energy or size constraints at the nanodevice level, as discussed in e.g., [11].
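To make the anchor-side use of the reported inter-transmission times concrete, the following minimal sketch ranks candidate regions by how well a reported time matches multiples of per-region travel times. It is only an illustrative baseline with made-up travel times; the systems in [7, 8] instead use machine learning models trained on such raw data.

```python
def likely_regions(t, travel_times, max_passes=3, top_k=3):
    """Rank candidate regions for a reported inter-transmission time t.

    travel_times[i] is the (illustrative) time needed to traverse region i.
    Compound iterations of up to `max_passes` passes through the same
    region are considered; mixed-region compounds are ignored here.
    """
    candidates = []
    for i, ti in enumerate(travel_times):
        for k in range(1, max_passes + 1):
            candidates.append((abs(t - k * ti), i, k))
    candidates.sort()
    return candidates[:top_k]

# Example: a report of 9.3 s against five hypothetical region travel times.
print(likely_regions(9.3, [3.1, 4.6, 5.2, 6.0, 7.4]))
```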
Localization systems from [6, 7, 8] leverage energy harvesting in the nanodevices based on the piezoelectric effect of Zinc Oxide (ZnO) nanowires. Despite its remarkable utility, this energy harvesting mechanism can result in the intermittent operation of the harvesting entities [12]. This operation is usually governed by a turn-off energy threshold that determines whether the nanodevice is activated and capable of detection. Consequently, the energy constraints impact the nanodevice's ability to detect events, therefore impeding its capacity to transmit correct information to the anchor.
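The effect of such a turn-off threshold can be illustrated with a minimal sketch of an intermittently powered nanodevice; all rates, capacities, and thresholds below are placeholders rather than values from [6, 7, 8] or [12], and the returned duty cycle is only a rough proxy for the detection probability discussed later.

```python
def detection_duty_cycle(sim_time_s=3600, dt_s=1.0,
                         harvest_j_per_s=6e-7, consume_j_per_s=1e-6,
                         capacity_j=8e-4, turnoff_j=1e-5, turnon_j=2e-5):
    """Toy on/off model of an energy-harvesting nanodevice.

    The device harvests continuously, consumes energy only while active,
    turns off below `turnoff_j`, and turns back on above `turnon_j`.
    Returns the fraction of time it is active (able to sense events).
    """
    energy, active, active_time = 0.0, False, 0.0
    for _ in range(int(sim_time_s / dt_s)):
        energy = min(capacity_j, energy + harvest_j_per_s * dt_s)
        if active:
            energy -= consume_j_per_s * dt_s
            if energy <= turnoff_j:
                active = False
            else:
                active_time += dt_s
        elif energy >= turnon_j:
            active = True
    return active_time / sim_time_s

print(detection_duty_cycle())
```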
In summary, it is well-known that flow-guided localization accuracy is strongly dependent on the transmission and energy-related capabilities at the nanodevice level [9, 10]. The main reason is that these processes introduce erroneous raw data for flow-guided localization in the form of either compound iteration times or erroneous event detection indicators. Hence, there is a need for modelling the raw data in a way that captures the effects of these two sources of stochasticity in flow-guided localization. Toward addressing this issue, we propose an analytical model of raw data for flow-guided in-body nanoscale localization. The model encapsulates nanodevice mobility, in-body communication, and energy constraints of current flow-guided localization systems.
We assess the utility of the proposed model by comparing its raw data output with the data generated under comparable conditions utilizing a state-of-the-art simulator for objective performance evaluation of flow-guided localization [9]. For a variety of relevant evaluation scenarios and heterogeneous performance metrics, we demonstrate that the raw data generated through the model is highly comparable to the corresponding raw data generated by the simulator, which features a significantly higher level of realism.
## II Flow-guided In-Body Nanoscale Localization Overview and Framework
Toward developing the analytical model for flow-guided localization, we consider a flow-guided localization framework and notations as depicted in Figure 1. The framework is adapted from [13], where the authors propose an analytical framework for more traditional fingerprinting-based indoor localization. Within this approach, a distinct feature of an environment is selected as the basis for creating the fingerprint. In the context of flow-guided localization, the environment is the entire bloodstream, modeled as a set of potential cardiovascular paths that a nanodevice might pass through in each of its iterations. The chosen signal feature is denoted as \(S\) and belongs to the feature space \(\mathbf{S}\). Consecutive observations of the signal feature, represented as \(S=(S_{1},...,S_{m})\in\mathbf{S}\), form a random vector that is linked to the location \(u\) through the conditional probability \(P_{S|u}\). Raw data for flow-guided localization corresponds to a fingerprint in fingerprinting-based localization, and is constructed based on these observations, capturing the unique characteristics of the signal feature at each respective location. The observed feature is then converted into raw data through a raw data-creating function.
The subsequent stage involves the creation of a training database through measurements of the signal feature \(S\) at various training locations. This database serves as a reference for the subsequent location estimation. To determine the location of a nanodevice at \(u\), a pattern matching function \(g\) is utilized. By comparing the acquired raw data with the instances stored in the training database, the pattern matching function \(g\) estimates the location based on the closest matching instances. The summarized flow-guided localization framework is:
* **Localization space \(\mathbf{R}\):** the cardiovascular system, possible detection regions \(\{R_{1},....,R_{r}\}\), each with a traveling time \(\{T_{1},...,T_{r}\}\).
* **Feature \(\mathbf{S}\) = Raw data** (X): Time between consecutive transmissions and event bit \((t,b)\).
* **Pattern matching function \(\mathbf{g}\):** Machine Learning (ML)-based flow-guided localization algorithm.
Based on the outlined framework, our aim is to derive analytically the conditional probabilities \(P_{S|u}\). The parameters that will be used are:
* **Probability of detection \(\mathbf{P_{det}}\)** corresponds to the probability of an event being detected. This parameter encapsulates the intermittent nature of a nanodevice due to energy harvesting, which might result in the nanodevice not detecting an event of interest because it was turned off, although it went through the path that contained the event. This results in an erroneous event bit \(b\).
* **Probability of transmission \(\mathbf{P_{trans}}\)** corresponds to the probability of successfully transmitting data to the on-body anchor in the vicinity of the heart. It incorporates factors such as having sufficient energy for communication, being in the range of the anchor, and self-interference between nanodevices. If the data is not communicated properly to the anchor, the iteration time will not be reset, leading to compound iterations.
## III Analytical Modelling of Raw Data for Flow-Guided In-body Nanoscale Localization
Based on the notations established previously, raw data for flow-guided localization corresponds to a tuple \(X=(t,b)\) with \(t\in(0,M]\) and \(b\in\{0,1\}\). \(M\) represents the total duration that the nanodevices spend in the bloodstream. The time elapsed between transmissions will be a composition of the travel times across different regions, accompanied by zero-mean Gaussian noise \(Q\). This noise factor accounts for variations in iteration times due to factors such as turbulent blood flow, short-term blood pressure variability, and similar biological factors. Hence, the iteration times can be modeled as a combination of deterministic travel times and random perturbations captured by the Gaussian noise. Thus, the raw data for flow-guided localization can be expressed as:
\[X=\{(n_{1}T_{1}+...+n_{r}T_{r}+Q,b)\mid n_{i}\in\mathbb{N},b\in\{0,1\}\}. \tag{1}\]
An important observation is that, depending on the region where the event is located, there exists a subset of raw data that can
\begin{table}
\begin{tabular}{l l l l} \hline
**Symbol** & **Description** & **Symbol** & **Description** \\ \hline \(R\) & Localization space & \(P_{det}\) & Detection probability \\ \(S\) & Feature & \(P_{trans}\) & Transmission probability \\ \(X\) & Raw data & \(T_{i}\) & Traveling time \\ \(g\) & Pattern matching function & \(R_{i}\) & Region \\ \hline \end{tabular}
\end{table} TABLE I: Overview of utilized symbols
Fig. 1: Flow-guided in-body nanoscale localization framework.
never occur. This subset corresponds to cases where the event has been detected (\(b=1\)) but the nanodevice has not passed through the region containing the event. Let \(X_{i}\) represent the set of possible raw data instances for an event in region \(R_{i}\); then the mentioned subset must be subtracted from the total set of fingerprints \(X\):
\[X_{i}=X\setminus\{(n_{1}T_{1}+...+n_{r}T_{r}+Q,1)\mid n_{i}=0\}. \tag{2}\]
We continue by modeling the sources of stochasticism in flow-guided localization:
* **Compound iterations:** As previously mentioned, if the nanodevice fails to communicate with the anchor, the iteration time will not reset, resulting in a longer measured iteration time.
* **False negatives:** When the nanodevice fails to detect the target, although it has passed through the affected region.
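The interplay of these two effects with the nanodevice mobility can be made concrete with a short Monte Carlo sketch of the generative process behind Eqs. (1) and (2): the nanodevice repeatedly enters a region drawn according to \(P_{R_{i}}\), accumulates the corresponding travel time plus Gaussian noise \(Q\), may miss the event with probability \(1-P_{det}\) when passing through the event region, and only reports (and resets) the accumulated iteration time with probability \(P_{trans}\). The function below is our own illustrative sketch, not the implementation used for the results in this paper, and all numerical values (noise level, administration time) are assumptions.

```python
import random

def sample_raw_data(T, P_R, event_region, P_det, P_trans, M=3600.0, noise_std=1.0):
    """Monte Carlo sampler of raw data tuples (t, b) for one nanodevice.

    T              -- traveling times T_i of the regions [s]
    P_R            -- probabilities P_Ri of entering each region
    event_region   -- index of the region containing the target event
    P_det, P_trans -- detection and transmission probabilities
    M              -- total time spent in the bloodstream [s]
    noise_std      -- std of the zero-mean Gaussian noise Q (assumed value)
    """
    raw, elapsed, t_since_tx, b = [], 0.0, 0.0, 0
    while elapsed < M:
        i = random.choices(range(len(T)), weights=P_R)[0]  # next cardiovascular path
        dt = T[i] + random.gauss(0.0, noise_std)           # travel time plus noise Q
        elapsed += dt
        t_since_tx += dt
        if i == event_region and random.random() < P_det:  # event may be missed (false negative)
            b = 1                                          # once detected, the bit does not change
        if random.random() < P_trans:                      # successful report to the anchor
            raw.append((t_since_tx, b))                    # one raw data tuple (t, b)
            t_since_tx, b = 0.0, 0                         # reset for the next iteration
        # otherwise the time keeps accumulating -> compound iteration
    return raw

# two-region toy setup with illustrative numbers
print(sample_raw_data(T=[60.0, 67.0], P_R=[0.49, 0.51],
                      event_region=0, P_det=0.7, P_trans=0.7, M=1200.0)[:5])
```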
Given that we are dealing with a binary event bit \(b\), we distinguish the cases where the event is detected and where it is not, and calculate the probabilities of interest as follows.
### _Case 1: Event Detected (\(b=1\))_
Suppose that the event to be detected is in region \(R_{j}\), leading to the following expressions:
\[P(X=(n_{1}T_{1}+...+n_{r}T_{r},1)\mid R_{j})=P(\chi_{(n,1)}\mid R_{j}), \tag{3}\]
where \(\chi_{(n,1)}\) refers to this particular raw data instance. There are multiple ways in which this raw data instance can be obtained taking into account that:
* There are multiple ways in which the nanodevice can travel this number of times through each cardiovascular path. This number is given by the number of permutations of the multiset \(\{P_{R_{1}}^{n_{1}},...,P_{R_{r}}^{n_{r}}\}\), where \(P_{R_{i}}\) is the probability of the nanodevice traveling through region \(R_{i}\). The number of permutations corresponds to the expression: \[\binom{n_{1}+...+n_{r}}{n_{1},...,n_{r}}:=\frac{(n_{1}+...+n_{r})!}{n_{1}!...n_{r}!}.\] (4) From that it follows: \[P(\chi_{(n,1)}\mid R_{j})\propto\binom{n_{1}+...+n_{r}}{n_{1},...,n_{r}}P_{R_{1}}^{n_{1}}...P_{R_{r}}^{n_{r}}.\] (5)
* The detection can occur in any iteration through \(R_{j}\) and once it is detected, the event bit will not change. Let \(P_{d_{1}}\) be the probability of detecting the event in iteration \(i\), then: \[P_{d_{i}}=(1-P_{det})^{i-1}P_{det}.\] (6)
It is important to consider that the communication was only successful during the last iteration and not in the previous ones. Otherwise, the time would have been reset when the communication was successful. Thus, a multiplicative factor is applied to account for this condition, defined as follows.
\[P_{t}:=(1-P_{trans})^{(n_{1}+...+n_{r}-1)}P_{trans}. \tag{7}\]
Finally, the total probability of having a certain raw data instance with event bit \(b=1\) is:
\[P(\chi_{(n,1)}\mid R_{j})=\binom{n_{1}+...+n_{r}}{n_{1},...,n_{r}}P_{R_{1}}^{ n_{1}}...P_{R_{r}}^{n_{r}}P_{t}\sum_{i=1}^{n_{j}}P_{d_{i}}. \tag{8}\]
### _Case 2: Event not Detected (\(b=0\))_
The probabilities for cases where no detection has occurred are the following, assuming an event located in region \(R_{j}\):
\[P(X=(n_{1}T_{1}+...+n_{r}T_{r},0)\mid R_{j})=P(\chi_{(n,0)}\mid R_{j}). \tag{9}\]
This expression is comparable to the former case, but here the event is not detected in any of the iterations through \(R_{j}\). This is accounted for by a multiplicative term, defined as \(P_{nd}\), corresponding to the probability of not detecting the event in any iteration in which it could have been detected:
\[P_{nd}=(1-P_{det})^{n_{j}}, \tag{10}\]
leading to expression:
\[P(\chi_{(n,0)}\mid R_{j})=\binom{n_{1}+...+n_{r}}{n_{1},...,n_{r}}P_{R_{1}}^{ n_{1}}...P_{R_{r}}^{n_{r}}P_{nd}P_{t}. \tag{11}\]
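Eqs. (8) and (11) translate directly into a small routine returning the probability of a given count vector \(n=(n_{1},...,n_{r})\) together with an event bit \(b\), conditioned on the event being located in \(R_{j}\). This is a minimal sketch with illustrative names; it assumes the per-region counts are supplied externally.

```python
from math import factorial

def prob_instance(n, b, j, P_R, P_det, P_trans):
    """Probability of the raw data instance chi_(n,b) given an event in region R_j.

    n -- tuple (n_1, ..., n_r) of iterations through each region
    b -- event bit (1 if detected, 0 otherwise)
    j -- index of the region containing the event
    """
    N = sum(n)
    if N == 0:
        return 0.0
    # multinomial coefficient of Eq. (4) times the region-visit probabilities of Eq. (5)
    multinom = factorial(N)
    for ni in n:
        multinom //= factorial(ni)
    p_visits = 1.0
    for ni, pi in zip(n, P_R):
        p_visits *= pi ** ni
    # transmission succeeded only in the very last iteration, Eq. (7)
    P_t = (1.0 - P_trans) ** (N - 1) * P_trans
    if b == 1:
        # detection in some iteration through R_j, Eqs. (6) and (8)
        P_d = sum((1.0 - P_det) ** (i - 1) * P_det for i in range(1, n[j] + 1))
        return multinom * p_visits * P_t * P_d
    # no detection in any of the n_j opportunities, Eqs. (10) and (11)
    return multinom * p_visits * P_t * (1.0 - P_det) ** n[j]

# one pass through R_1 and one through R_2, event in R_1, detected
print(prob_instance((1, 1), 1, 0, (0.49, 0.51), 0.7, 0.7))
```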
### _An Example_
As an example, two arbitrary detection regions \((R_{1},R_{2})\) are considered, with corresponding traveling times \((T_{1},T_{2})=(60,67)\) [sec] and probabilities \((P_{R_{1}},P_{R_{2}})=(0.49,0.51)\). The probability distribution can be computed using the probabilities (\(P_{R_{i}}\)) and traveling times (\(T_{i}\)) of each region, the detection (\(P_{det}\)) and transmission (\(P_{trans}\)) probabilities, the region containing the target event, and the duration of the administration of the nanodevices in the bloodstream. From these inputs, a recursive algorithm accounts for all possible combinations of iterations with different times and computes the probability of every case. Figure 2 shows the probability distribution assuming a target event in \(R_{1}\), with \(P_{det}=0.7\) and \(P_{trans}=0.7\). As visible, the highest probabilities correspond to the traveling times of each region; however, there are also compound iterations with certain probabilities, for example the ones corresponding to 120, 130, and 134 s. The probabilities become practically unnoticeable after three compound iterations, suggesting that the system converges after several iterations through the bloodstream.
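The recursion over combinations of iterations described above can be mimicked with a plain enumeration over count vectors whose total number of iterations stays below a cutoff, accumulating the per-instance probabilities of Eqs. (8) and (11) onto the corresponding iteration times (the Gaussian noise \(Q\) is omitted for clarity). The per-instance probability from the previous sketch is repeated in compact form so that the snippet runs on its own; the cutoff of six iterations is an assumption motivated by the convergence observed in Figure 2.

```python
from math import factorial
from itertools import product

def prob_instance(n, b, j, P_R, P_det, P_trans):
    # compact form of Eqs. (8) and (11), see the sketch after Eq. (11)
    N = sum(n)
    if N == 0:
        return 0.0
    p = factorial(N) * (1.0 - P_trans) ** (N - 1) * P_trans
    for ni, pi in zip(n, P_R):
        p = p / factorial(ni) * pi ** ni
    if b == 1:
        return p * sum((1 - P_det) ** (i - 1) * P_det for i in range(1, n[j] + 1))
    return p * (1 - P_det) ** n[j]

def distribution(T, P_R, j, P_det, P_trans, max_total=6):
    """Collect the probability mass of every (iteration time, event bit) pair by
    enumerating count vectors with at most max_total iterations (noise Q ignored)."""
    dist = {}
    for n in product(range(max_total + 1), repeat=len(T)):
        if not 0 < sum(n) <= max_total:
            continue
        t = sum(ni * Ti for ni, Ti in zip(n, T))
        for b in (0, 1):
            p = prob_instance(n, b, j, P_R, P_det, P_trans)
            if p > 0:
                dist[(t, b)] = dist.get((t, b), 0.0) + p
    return dist

# two-region example: T = (60, 67) s, event in R_1, P_det = P_trans = 0.7
dist = distribution((60, 67), (0.49, 0.51), 0, 0.7, 0.7)
for (t, b), p in sorted(dist.items())[:8]:
    print(f"t = {t:4d} s, b = {b}: P = {p:.4f}")
```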
Until now, we have considered a single nanodevice deployed in the bloodstream, although a number of applications envision the administration of a large number of such devices. The presence of more nanodevices leads to an augmented volume of data, which can be assessed by plotting accuracy measures against the sample size. It is anticipated that the observed frequencies will ultimately converge to the values of the probability distribution. To discern the rate of convergence, we can compute the Mean Squared Error (MSE) between the generated data frequencies and the probability distribution while varying the number of nanodevices. Example results are depicted in Figure 3, revealing that the MSE decreases rapidly with the number of nanonodes. This indicates that the distribution of raw data does not change significantly with an increase in the number of nanodevices, apart from increasing the frequency of data transmissions, and demonstrates the utility of the proposed model for modelling the raw data obtained by simultaneously utilizing more than one nanodevice.
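The convergence behind Figure 3 can be illustrated generically: drawing increasingly many samples from a fixed discrete distribution (standing in for the raw data contributed by an increasing number of nanodevices) and computing the MSE between the empirical frequencies and the distribution reproduces the rapid decay described above. The distribution below is an arbitrary stand-in rather than the one of Figure 2.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.35, 0.25, 0.15, 0.10, 0.08, 0.07])  # stand-in discrete raw data distribution

for n_nodes in (1, 4, 16, 64, 256):
    n_samples = 50 * n_nodes                         # more nanodevices -> more raw data tuples
    freq = rng.multinomial(n_samples, p) / n_samples
    mse = np.mean((freq - p) ** 2)
    print(f"{n_nodes:4d} nanodevices, {n_samples:5d} samples: MSE = {mse:.2e}")
```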
## IV Evaluation Setup and Results
### _Evaluation Setup_
For assessing the accuracy of the proposed model of raw data for flow-guided in-body nanoscale localization, we will compare the data yielded by the model with the corresponding one generated by the state-of-the-art simulator for objective performance benchmarking of flow-guided localization [9]. The simulator [9] utilized in this study features a significantly higher level of realism compared to the model. Specifically, it combines BloodVoyagerS (BVS) [14] for modeling the mobility of the nanodevices in the bloodstream and ns-3-based TeraSim [15] for THz-based nanoscale communication between the nanodevices and the outside world, accounting for the energy-related and other technological constraints (e.g., pulse-based modulation) at the nanodevice level.
The BVS simulator encompasses a comprehensive set of 94 vessels and organs, utilizing a coordinate system centered on the heart. All organs share an identical spatial depth, calibrated to a reference thickness of 4 cm, resembling the dimensions of a kidney. The simulator also assumes a predefined arrangement with arteries positioned anteriorly and veins posteriorly. The transitions between arteries and veins occur within organs, limbs, and the head, which jointly account for 24 regions in the body, as indicated in Table II. In the heart, the blood undergoes a transition from veins to arteries, signifying a shift from posterior to anterior flow. The simulator models the flow rate based on the relationship between pressure differences and flow resistance, yielding average blood velocities of 20 cm/sec in the aorta, 10 cm/sec in arteries, and 2-4 cm/sec in veins. Transitions between arteries and veins are simplified by assuming a constant velocity of 1 cm/sec.
TeraSim [15] is the pioneering simulation platform tailored for modeling THz (nano)communication networks. This platform accurately captures the unique capabilities of nanodevices and distinctive characteristics of THz signal propagation. TeraSim is integrated as a module within ns-3, a discrete-event network simulator, and it incorporates specialized physical and link layer solutions optimized for nanoscale THz communications. Specifically, at the physical layer, TeraSim implements pulse-based communication with omnidirectional antennas, catering to distances shorter than 1 m assuming a single transmission window of nearly 10 THz. At the link layer, TeraSim incorporates two well-established protocols, namely ALOHA and CSMA. Additionally, we have introduced in TeraSim a shared THz channel module that implements a frequency-selective channel simulating in-body wireless nanocommunication [16].
Our analytical model has been instantiated on the 24 regions as modeled by the BVS. Other simulation parameters such as the transmit power, receiver sensitivity, number of nanonodes, operating frequency, and communication bandwidth have been consistently parameterized across the model and simulator. For each of the considered evaluation scenarios, two sets of raw data are generated, one with a varying probability of transmission \(P_{trans}\) and ideal detection probability (i.e., \(P_{det}=1\)), and vice-versa (i.e., \(P_{trans}=1\)). These probabilities have been hard-coded in the simulator to assess the consistency of the raw data outputs between the model and the simulator. Additionally, a realistic scenario is considered in which both probabilities are set to non-ideal values to assess the capabilities of the model in capturing their joint effects on the raw data.
The following metrics are employed to assess the performance similarity across the raw datasets:
**Mann-Whitney (MW) test**: will be used to evaluate the similarity between the distributions of iteration times between the simulator and the model for body regions containing an event (i.e., for event bit \(b=1\)).
**Square difference between Empirical Cumulative Distribution Functions (ECDFs)**: will be generated for event bit values \(b=0\) for all body regions. The maximum vertical distance between ECDFs will be computed and averaged over all regions, and provided graphically utilizing regular boxplots, facilitating qualitative interpretation of the data.
**Kullback-Leibler (KL) divergence**: will be employed to compare the difference between the ratios of ones and zeros in each region for varying transmission and detection probabilities. This approach helps identify regions in which the fraction of event bits \(b=1\) is comparable across the simulator and the model. Regions through which the nanonodes pass in each of their iterations through the bloodstream (lungs and right heart) will be excluded from this metric.
Figure 2: Probability distribution for two arbitrary regions with different traveling times (60,67), with probabilities: \(P_{det}=0.7,P_{trans}=0.7\).
Figure 3: MSE of generated frequencies with respect to the probability distribution as a function of nanonodes.
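For reference, the three metrics can be evaluated with standard NumPy/SciPy routines, as in the sketch below; the arrays are synthetic stand-ins for per-region iteration times and event-bit ratios from the model and the simulator, and the 0.05 acceptance threshold for the MW test is an assumption.

```python
import numpy as np
from scipy.stats import mannwhitneyu, entropy

rng = np.random.default_rng(1)
# stand-ins for iteration times (event bit b = 1) of one region, simulator vs. model
t_sim = rng.normal(60.0, 2.0, size=500)
t_mod = rng.normal(60.3, 2.0, size=500)

# Mann-Whitney test: treat the distributions as similar if the p-value is >= 0.05
_, p_value = mannwhitneyu(t_sim, t_mod)
print("MW test accepted:", p_value >= 0.05)

# maximum vertical distance between the two ECDFs (b = 0 scenario)
grid = np.sort(np.concatenate([t_sim, t_mod]))
ecdf = lambda x, g: np.searchsorted(np.sort(x), g, side="right") / len(x)
print("max ECDF distance:", np.max(np.abs(ecdf(t_sim, grid) - ecdf(t_mod, grid))))

# KL divergence between the ratios of ones and zeros in one region
ratio_sim = np.array([0.62, 0.38])  # [fraction of b = 1, fraction of b = 0] from the simulator
ratio_mod = np.array([0.60, 0.40])  # the same ratios from the model
print("KL divergence:", entropy(ratio_sim, ratio_mod))
```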
### _Evaluation Results_
The MW test results presented in Table III demonstrate that the null hypothesis of both datasets being equally distributed is predominantly accepted across diverse regions, with a particularly strong performance observed for the ideal detection probability. Moreover, upon closer examination, a distinct pattern of higher variability emerges in the regions that do not meet the test's criteria, suggesting the potential influence of unspecified stochastic factors on the model's behavior in a small number of specific cases. Nonetheless, it is evident that the similarity between the model and the simulator distributions for event bit \(b=1\) is notably high across the considered scenarios.
For scenarios involving event bit \(b=0\), insights into the performance similarity between the model and the simulator are assessed utilizing ECDFs. The average maximum vertical distances between ECDFs are shown in Table IV. Particularly, a higher level of similarity between ECDFs is observed for the transmission probability \(P_{trans}=1\) compared to \(P_{det}=1\). This difference can be attributed to the presence of compound iterations when varying the transmission parameter (cf., Figure 2). It is worth noting that this discrepancy predominantly surfaces within the distributions' midsection, with the tails being nearly indistinguishable. An illustrative instance of this behavior is showcased in Figure 4. In summary, despite notable differences in a small number of specific cases, the focal point of dissimilarity is confined to a specific region within the distribution. This implies that, in select instances, the distribution is subtly shifted in a defined segment while otherwise featuring a significant level of similarity.
To further examine the distributions for event bit \(b=0\), the distributions of iteration times in different scenarios for the head region are shown in Figure 5. As visible, the distributions are highly comparable, apart from the cases in which \(P_{trans}\) equals 0.4 and 0.6. Despite the differences in the distributions depicted for these two cases, an underlying similarity in the data distribution is evident. Specifically, while the simulator-generated boxplots exhibit a substantial number of outliers, in the model-related boxplots these outliers fall within the third quartile. It is important to recognize that these outliers, although visually distinct, are not indicative of a fundamental divergence in the dataset's core characteristics. This can be verified for the 0.2 transmission probability, where the simulator boxplots are almost equal to the ones originating from the model because the number of outliers is smaller and the third quartile is thus extended. These outliers are the root cause of the difference between ECDFs, as seen in Figure 4.
The KL divergence results assessing the similarity between the event bit \(b=1\) ratio in the overall datasets obtained through the simulator and the model are presented in Figure 6. As depicted, the arrangement of the regions follows a descending order of \(P_{trans}\) probability. The results demonstrate an increase in the similarity between ratios as a function of ascending transmission probability for each region, in line with expectations. Conversely, a decrease in \(P_{trans}\) triggers an increase in the observed divergence. Consequently, the KL divergence reaches its minimum in regions with lower detection and higher transmission probability. Notably, the peaks in the KL divergence emerge within less probable regions
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{\(P_{trans}/P_{det}\)} & \multicolumn{2}{c}{Fraction of accepted Mann-Whitney tests.} \\ \cline{2-3} & \(P_{det}=1\) & \(P_{trans}=1\) \\ \hline
0.2 & 83\% & 92\% \\
0.4 & 88\% & 79\% \\
0.6 & 94\% & 75\% \\
0.8 & 83\% & 79\% \\ \hline \hline \end{tabular}
\end{table} TABLE III: Mann-Whitney test results for event bit \(b=1\).
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{\(P_{trans}/P_{det}\)} & \multicolumn{2}{c}{Mean of vertical distance between ECDFs} \\ \cline{2-3} & \(P_{det}=1\) & \(P_{trans}=1\) \\ \hline
0.2 & 0.11 & 0.081 \\
0.4 & 0.12 & 0.084 \\
0.6 & 0.1 & 0.083 \\
0.8 & 0.1 & 0.08 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: ECDF comparison for event bit \(b=0\).
Figure 4: ECDF comparison between model and simulator for \(P_{trans}=0.4\) (left) and \(P_{det}=0.4\) (right) in thorax.
Figure 5: Iteration times obtained with event bit \(b=1\) for the head region.
coupled with reduced transmission probabilities. These peaks can be attributed to the scarcity of detection of events in these regions. Despite the marginal discrepancies in the ratios, their diminutive nature accentuates the KL divergence, implying a more substantial difference in such cases. Importantly, the observed differences do not exceed 0.04, underlining a rather high level of similarity across scenarios.
Lastly, several scenarios with mixed detection and transmission probabilities are studied to assess the model's accuracy for close-to-real-life cases. Figure 7 depicts the outcomes for three distinct cases, and the results affirm comparable performance of the model and the simulator in realistic scenarios. However, the trends within the KL divergence are not entirely comparable due to the presence of a few outliers in the simulator-generated raw data, indicating the presence of smaller stochastic factors beyond the model's scope.
## V Conclusions
We have proposed an analytical model of raw data for flow-guided in-body nanoscale localization. Based on the nanodevices' communication and energy-related capabilities, the model outputs iteration times and event detection indicators that can be used by existing flow-guided localization approaches for localizing biological events. The model's output was compared with the equally-parameterized one from a simulator for flow-guided localization that features a higher level of realism, indicating a significant level of similarity.
Flow-guided localization will be deployed in the cardiovascular systems of different individuals with varying biologies. Such localization is usually based on machine learning, hence requiring significant training and corresponding data. This training data is hard to obtain individually, yet the tunable nature of the model could potentially be used for capturing the differences in the raw data across bloodstreams. We consider evaluating this aspect of the model as a part of our future work, where we envision its utilization in a comparable way as for the administration of anesthesia, i.e., based on physiological indicators such as age, sex, height, and weight.
For an individual patient, temporal differences in the raw data stream for flow-guided localization are to be expected due to, for example, activities that the individual performs, biological conditions (e.g., diseases), or environmental changes that the individual experiences (e.g., temperature, humidity). These will result in a change in the raw input data stream for flow-guided localization, in turn degrading its performance. As a part of our future work, we envision the adaptation of the model so that it can capture these slight changes in individual bloodstreams, which we envisage to be used for adapting flow-guided localization based on physiological indicators such as blood pressure or heart rate.
|
2302.14551 | Detecting and stabilizing measurement-induced symmetry-protected
topological phases in generalized cluster models | We study measurement-induced symmetry-protected topological (SPT) order in a
wide class of quantum random circuit models by combining calculations within
the stabilizer formalism with tensor network simulations. We construct a family
of quantum random circuits, generating the out-of-equilibrium version of all
generalized cluster models, and derive a set of non-local string order
parameters to distinguish different SPT phases. We apply this framework to
investigate a random circuit realization of the XZX cluster model, and use the
string order parameter to demonstrate that the phase diagram is stable against
extending the class of unitary gates in the circuit, from Clifford gates to
Haar unitaries. We then turn to the XZZX generalized cluster model, and
demonstrate the coexistence of SPT order and spontaneous symmetry breaking, by
relying on string order parameters and a connected correlation function. | Raúl Morral-Yepes, Frank Pollmann, Izabella Lovas | 2023-02-28T13:17:29Z | http://arxiv.org/abs/2302.14551v2 | Detecting and stabilizing measurement-induced symmetry-protected topological phases in generalized cluster models
###### Abstract
We study measurement-induced symmetry-protected topological (SPT) order in a wide class of quantum random circuit models by combining calculations within the stabilizer formalism with tensor network simulations. We construct a family of quantum random circuits, generating the out-of-equilibrium version of all generalized cluster models, and derive a set of non-local string order parameters to distinguish different SPT phases. We apply this framework to investigate a random circuit realization of the XZX cluster model, and use the string order parameter to demonstrate that the phase diagram is stable against extending the class of unitary gates in the circuit, from Clifford gates to Haar unitaries. We then turn to the XZZX generalized cluster model, and demonstrate the coexistence of SPT order and spontaneous symmetry breaking, by relying on string order parameters and a connected correlation function.
## I Introduction
Topology in quantum many-body systems has been at the forefront of condensed matter research in recent years [1]. Topological invariants allow us to classify the ground states of gapped local Hamiltonians into distinct phases [2; 3; 4; 5; 6], characterized by non-local order parameters, and displaying exotic properties, such as anyonic excitations or gapless edge states. A particularly rich phase diagram arises in the presence of symmetries, displaying various symmetry-protected topological (SPT) phases that cannot be characterized in terms of the spontaneous breaking of a global symmetry [7; 8]. Instead, they are characterized by entanglement patterns between subsystems [2; 9], captured by a topological entanglement entropy [10; 11], as well as non-local "string order" [12; 13].
Recently, the concept of SPT phases has been extended from equilibrium systems to non-equilibrium scenarios [14] in the context of measurement-induced entanglement transitions in quantum random circuits [15; 16; 17; 18; 19]. In these quantum circuits, the time evolution is governed by a competition between random unitary gates, spreading information and tending to scramble the system, and repeated local measurements, reducing entanglement. The interplay of these opposing effects leads to dynamical phase transitions between different stationary states: a highly entangled thermal state characterized by a volume law scaling of sub-system entanglement entropy, and non-thermal area law states [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42], with different types of measurements generating novel non-equilibrium phases of matter [43; 44; 45; 46; 47; 48]. In particular, in Ref. [14], the authors have demonstrated that the area law stationary state can also exhibit SPT order, similarly to the area entangled ground states of gapped Hamiltonians.
These recent advances raise exciting questions about the measurement-induced topological phases in quantum circuits. The SPT phase found in Ref. [14] emerged in a Clifford quantum random circuit model in the presence of a protecting \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry, and was detected through a topological entanglement entropy. Generalizing this construction to other types of topological phases, as well as finding and classifying the topological phases accessible in out-of-equilibrium systems can offer new insights into the properties of dynamical phase transitions. Another interesting aspect concerns the order parameter of the phase transition. Clifford random circuits have a special structure, allowing to simulate them efficiently by relying on the stabilizer formalism [49]. In this setting, the topological entanglement entropy is well-suited for detecting SPT order in the numerical calculations. However, the topological entanglement entropy is very difficult to access in experimental realizations. For this reason, it is important to identify other, more accessible order parameters. Another open question is the stability of the SPT phase in the wider class of Haar random circuits.
In this paper, we take the first steps towards answering these questions. We generalize the construction of Ref. [14] to generate the whole family of generalized cluster models [50; 51]. This extended set of random circuits hosts different types of SPT phases, as well as phases with simultaneous SPT order and spontaneous symmetry breaking (SSB). We also construct a set of non-local string order parameters, and demonstrate that they are capable of distinguishing the different phases realized by the circuits, thereby providing a convenient alternative to topological entanglement entropy that is more accessible both numerically and experimentally [52]. To this end, we analyze two members of the family of generalized cluster models in detail, by combining simulations in the stabilizer formalism with tensor network methods. First, we focus on the XZX model already examined in Ref. [14], and confirm that the string order parameter can be used the determine the full phase diagram. By relying on tensor network simulations, we also show that the phase diagram is remarkably stable against extending the class of unitaries from random Clifford to random Haar, provided that the protecting \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry is respected by the
gates. Secondly, we consider the so-called XZZX cluster model, and demonstrate that it hosts a phase with coexisting SPT order and SSB. We determine the full phase diagram of this model within the stabilizer formalism, by evaluating the string order parameters and a connected correlator capturing SSB.
The paper is organized as follows. We first discuss the general theoretical framework in Sec. II. Here we introduce a set of quantum random circuits, realizing an out-of-equilibrium version of the whole family of generalized cluster models, and we also construct string order parameters capturing the SPT order in the area law stationary states of these circuits. We then turn to the XZX cluster model in Sec. III. First, we focus on Clifford random circuits in Sec. III.1, and we validate the string order parameter proposed before, by using it to obtain the full phase diagram, and comparing it to predictions relying on entanglement entropies from Ref. [14]. We test the stability of this phase diagram by extending the class of random unitary gates from Clifford to Haar random gates in Sec. III.2. We then demonstrate the coexistence of SPT order with spontaneous symmetry breaking by studying the XZZX cluster model in Sec. IV. We summarize our main conclusions in Sec. V.
## II General framework
In this section, we construct a family of quantum random circuit models that will be the main focus of this paper. We first review the equilibrium definition of generalized cluster models. Then, relying on the insights gained from Ref. [14], we turn to the non-equilibrium scenario and formulate the random circuit models that realize their out-of-equilibrium counterparts. We argue that these models display various dynamical phases with SPT order and/or SSB, and we also construct a set of non-local string order parameters and SSB local order parameters allowing us to determine the full phase diagram.
The family of generalized cluster models in a one-dimensional spin chain [50; 51] is generated by Hamiltonians of the form
\[H_{\alpha}=-\sum_{n}X_{n}\underbrace{Z_{n+1}...Z_{n+\alpha-1}}_{\alpha-1}X_{n +\alpha},\quad\alpha\geq 1, \tag{1}\]
where \(X_{n}\), \(Y_{n}\), and \(Z_{n}\) denote the Pauli matrices at site \(n\), and \(\alpha\) is a positive integer parametrizing the members of the class. For a given \(\alpha\), the model is symmetric under a set of \(\alpha\) global symmetries,
\[G_{1}=\prod_{k}Z_{\alpha k+1},\,...,\,G_{\alpha}=\prod_{k}Z_{\alpha k+\alpha}. \tag{2}\]
Each of the symmetries \(G_{i}\) is a product of \(Z\) operators, distanced by \(\alpha-1\) sites in the chain.
All terms in the Hamiltonian \(H_{\alpha}\), the so-called cluster operators \(g_{n}^{\alpha}\), commute with each other, and thus the ground states are defined by the condition \(g_{n}^{\alpha}\left|\psi_{0}^{\alpha}\right\rangle=\left|\psi_{0}^{\alpha}\right\rangle\) for every \(n\). Such ground states realize different types of phases, with SSB and/or SPT order [50; 51]. For example, \(\alpha=1\) corresponds to the \(\mathbb{Z}_{2}\) symmetric Ising chain, displaying SSB. The \(\alpha=2\) case is the so-called cluster model that realizes an SPT phase, protected by \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry [53; 54; 55]. For \(\alpha=3\), Eq. (1) defines the \(XZZX\) cluster model with coexisting SSB and SPT orders [51; 55]. In general, every odd \(\alpha\) value is characterized by \(\mathbb{Z}_{2}\) symmetry breaking and \(\mathbb{Z}_{2}^{\times(\alpha-1)}\) SPT order (except for \(\alpha=1\)), while even integers yield pure SPT order protected by a \(\mathbb{Z}_{2}^{\times\alpha}\) symmetry.
In Ref. [14], Lavasani et al. showed that the SPT phase of the \(XZX\) cluster model, \(\alpha=2\), can be realized in a quantum random circuit by implementing properly designed measurements. Here we generalize this construction to induce by measurements the SPT phases that are realized by the generalized cluster models (1). To this end, we construct a set of random circuit models in the following way. We consider a chain of \(N\) qubits subject to open boundary conditions, with an initial state that can be an arbitrary eigenstate of the symmetry operators \(G_{1},...,G_{\alpha}\), e.g., the trivial product state \(\left|Z=1\right\rangle^{\otimes N}\). At each step, we update the state by applying a sequence of three different operations:
1. With probability \(p_{t}\), we measure a cluster operator \(X_{i}Z_{i+1}...Z_{i+\alpha-1}X_{i+\alpha}\), with \(i\in\{1,...,N-\alpha\}\) chosen randomly.
2. With probability \(p_{s}\), we measure a single-qubit operator \(Z_{i}\) on a random site \(i\) of the chain.
3. With probability \(p_{u}=1-p_{s}-p_{t}\), a random unitary preserving the protecting symmetries (2) and acting on \(\alpha+1\) neighboring qubits is sampled and applied at a random position. The length of these unitaries, \(\alpha+1\), was chosen as the shortest range such that the gates can create entanglement while preserving the symmetries of the model.
Figure 1: A particular realization of the cluster circuit model for (a) \(\alpha=2\) and (b) \(\alpha=3\), showing the first two time steps for system size \(N=6\). The boxes labelled as \(Z\), \(XZX\) and \(XZZX\) represent projective measurements, whereas the ones with label \(U\) denote random unitary gates preserving the symmetries (2).
The collection of \(N\) such operations form a single _time step_ in the time evolution of the system. Figure 1 illustrates this construction by showing the first two time steps in a particular realization of the circuit for \(\alpha=2\) (left) and \(\alpha=3\) (right). We denote the _circuit average_ of a quantity \(A\), measured in the steady state of the circuit, by \(\overline{A}\). We note that in this setting, time average is equivalent to ensemble average, therefore we can choose to perform the averaging over time steps and/or over circuit realizations.
This family of circuit models realizes three different phases for each \(\alpha\), depending on the dominant operation. For large enough probability \(p_{u}\), the unitary evolution dominates, resulting in a volume law phase with the entanglement entropy of subsystems scaling with their volume. In contrast, if measurements are applied at a sufficiently high rate, the stationary state is characterized by area law entanglement scaling. In particular, for large enough \(p_{s}\), the \(Z\) measurements tend to collapse the state to a trivial area law phase, whereas the measurements of cluster operators \(X_{i}Z_{i+1}...Z_{i+\alpha-1}X_{i+\alpha}\) may induce area law phases with symmetry-protected topological phases, and/or ordered phases with a local order parameter. For example, the case \(\alpha=1\) has been studied in Ref. [43], and was found to realize the out-of-equilibrium counterpart of the SSB order in the ground state of the Ising Hamiltonian for large enough \(p_{t}\), dubbed a spin glass phase. For \(\alpha=2\), Lavasani et al. demonstrated the emergence of an SPT phase [14].
Before investigating the phase diagram of the first few members of this family of random circuit models in detail, we comment on the order parameters that can distinguish different phases. One indicator that has been successfully applied in previous works is a topological entanglement entropy, readily accessible in numerical simulations for Clifford circuits, relying on the stabilizer formalism. However, in simulations of more general random circuits, for instance in the presence of general Haar unitary gates \(U\), as well as in possible experimental realizations, topological entanglement entropies are challenging to access. Therefore, we propose another way to distinguish the different phases, by extending the concept of string order parameters, designed to detect equilibrium SPT phases [12; 13], to this out-of-equilibrium scenario. In general, for a state that is invariant under a protecting symmetry \(G\) that can be expressed as the product of local unitary operators \(\Sigma_{i}\), \(G=\prod_{i}\Sigma_{i}\), a string order parameter corresponding to boundary operators \(O^{L/R}\) can be defined as follows [12; 13],
\[\mathcal{S}_{\Sigma}^{O^{L},O^{R}}=\lim_{|j-k|\rightarrow\infty}\left\langle \psi_{0}\Bigg{|}O^{L}(j)\left(\prod_{i=j+1}^{k-1}\Sigma_{i}\right)O^{R}(k) \Bigg{|}\psi_{0}\right\rangle. \tag{3}\]
These string order parameters allow to differentiate topologically distinct states, by choosing the operators \(O^{L/R}\) appropriately. Importantly, the bulk part of the string operator is constructed from the symmetry operators \(\Sigma_{i}\). Therefore, for a symmetric state, and for generic operators \(O^{L/R}\), the string order parameter takes a non-zero value and varies smoothly as a function of the parameters of the Hamiltonian. In order to distinguish topologically distinct phases, the boundary operators \(O^{L/R}\) have to be chosen carefully, such that the emerging SPT order gives rise to selection rules that ensure the vanishing of a string order in a certain phase [13]. This method thus allows to detect topological phases through the exact relation \(\mathcal{S}_{\Sigma}^{O^{L},O^{R}}\equiv 0\). Choosing various pairs \(O^{L/R}\), such that the resulting string orders \(\mathcal{S}_{\Sigma}^{O^{L},O^{R}}\) vanish in different phases, grants access to the full phase diagram. Convenient string order parameters for the generalized cluster models are given by
\[\mathcal{S}_{Z}^{1,1}(\alpha)=\lim_{|j-k|\rightarrow\infty}\left\langle\prod_{ i=j+1}^{k}Z_{\alpha i}\right\rangle, \tag{4}\]
vanishing in the ground state of the cluster Hamiltonian, as well as
\[\mathcal{S}_{Z}^{X,X}(\alpha)=\lim_{|j-k|\rightarrow\infty}\left\langle X_{ \alpha j}\left(\prod_{i=j+1}^{k}Z_{\alpha i-\alpha+1}...Z_{\alpha i-1}\right) X_{\alpha k}\right\rangle, \tag{5}\]
yielding zero for a trivial \(Z\) product state.
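As a concrete finite-size check of Eqs. (4) and (5) for \(\alpha=2\), the NumPy sketch below builds short string operators on a chain of \(N=8\) qubits and evaluates them on two reference states: the trivial product state \(|0...0\rangle\) and a state in which all bulk cluster operators \(X_{n}Z_{n+1}X_{n+2}\) equal \(+1\) (constructed here by conjugating the standard CZ cluster state with Hadamards on every site). The system size, string end points, and state constructions are illustrative choices of ours, not the ones used for the results reported below.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
N = 8  # chain length, sites labeled 1..N

def pauli_string(ops):
    """Tensor product of single-site operators; ops maps site (1-based) -> 2x2 matrix."""
    return reduce(np.kron, [ops.get(s, I2) for s in range(1, N + 1)])

def expval(psi, op):
    return np.real(np.vdot(psi, op @ psi))

# trivial product state |00...0>
psi_triv = np.zeros(2 ** N, dtype=complex)
psi_triv[0] = 1.0

# state with all bulk cluster operators X_n Z_{n+1} X_{n+2} = +1:
# Hadamard on every site, CZ on all neighboring pairs, Hadamard on every site again
H_all = reduce(np.kron, [H] * N)
psi_spt = H_all @ psi_triv
for i in range(1, N):                      # CZ on the pair (i, i+1)
    cz = np.ones(2 ** N, dtype=complex)
    for basis in range(2 ** N):
        if (basis >> (N - i)) & 1 and (basis >> (N - i - 1)) & 1:
            cz[basis] = -1.0
    psi_spt = cz * psi_spt                 # CZ is diagonal in the computational basis
psi_spt = H_all @ psi_spt

# finite-size versions of Eqs. (4) and (5) for alpha = 2, with ends at sites 2 and 6
S_11 = pauli_string({4: Z, 6: Z})                # Eq. (4): Z_4 Z_6
S_XX = pauli_string({2: X, 3: Z, 5: Z, 6: X})    # Eq. (5): X_2 Z_3 Z_5 X_6

for name, psi in [("trivial |0...0>", psi_triv), ("SPT (XZX) state", psi_spt)]:
    print(f"{name}: S_Z^(1,1) = {expval(psi, S_11):+.2f}, S_Z^(X,X) = {expval(psi, S_XX):+.2f}")
```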
Besides the SPT order, for odd \(\alpha\) values the ground state of Hamiltonian (1) displays SSB. This type of order can be detected through the connected correlators of a symmetry breaking local order parameter, defined as
\[M(\alpha)=X_{i}\underbrace{(Y_{i+1}X_{i+2})...(Y_{i+\alpha-2}X_{i+\alpha-1})} _{(\alpha+1)/2}. \tag{6}\]
We note that for \(\alpha=1\), \(M(\alpha)\) reduces to the Ising order parameter \(X_{i}\).
While the various order parameters defined in Eqs (4), (5) and (6) are well-suited for determining the phase diagram of the generalized cluster models in equilibrium, they are not directly applicable for the random circuit scenario. The reason for this is that for different realizations of the disordered circuit, the sign of string order parameters and correlation functions fluctuates randomly, yielding a vanishing circuit average in all non-trivial regions of the parameter space. Similarly, the time average of all of these quantities vanishes for any fixed circuit realizations. This property can be understood by noting that all the measured operators have eigenvalues \(\pm 1\), resulting in a randomly changing sign during the time evolution because of the probabilistic measurement outcomes and the repeatedly applied random unitary gates.
The vanishing of these circuit averages is also intimately related to the nature of the measurement-induced entanglement transition. This dynamical phase transition is unconventional in the sense that it relies on the
properties of individual quantum trajectories instead of the disorder averaged quantum state, and it can only be detected through quantities that are non-linear in the density operator of the system, e.g. through entanglement entropies [15; 17]. In contrast, the circuit average of the density matrix is a trivial infinite temperature density matrix. Therefore, the rich entanglement structure of individual quantum trajectories remains hidden at the level of the average density matrix, and consequently, at the level of disorder averaged operator averages, such as the string order parameters or connected correlators considered above.
This difficulty can be overcome by modifying the proposed order parameters in such a way that they become non-linear in the density matrix of the system. This can be easily achieved by considering the time and/or circuit average of the _absolute value_ of the string order parameters, \(\left|\overline{\mathcal{S}}\right|\), with a similar idea applied to the correlator of the local order parameters \(M(\alpha)\). We test these proposed order parameters below by benchmarking them against the behavior of the topological entanglement entropy for various circuit models, and demonstrate that they capture the full phase diagram correctly.
Before turning to the numerical simulations, we briefly comment on the special case of \(p_{u}=0\), resulting in models without unitary evolution. We find that circuits, consisting only of projective measurements, share certain universal properties for any value of \(\alpha\). In particular, they all display a phase transition between the trivial and the SPT and/or SSB area law phase for \(p_{s}=1/2\), i.e., where the rates of both types of measurements are equal. A proof of this statement, relying on a duality argument, is presented in Appendix A.
## III SPT phase in the XZX cluster circuit model
In this section, we revisit the SPT phase of the \(XZX\) cluster circuit model, by applying the framework presented in Sec. II. We first focus on Clifford circuits in Sec. III.1, already studied in Ref. [14]. In this special set of circuits both topological entanglement entropies and string order parameters are readily accessible by relying on the stabilizer formalism. We benchmark the string order parameters (4) and (5) by using them to detect the phase transitions, and comparing the phase diagram to the one obtained from entanglement entropies. We then turn to more general Haar random circuits in Sec. III.2, extending the class of random circuits compared to Ref. [14], and testing the stability of the phase diagram against allowing a wider set of unitary gates during the time evolution. Here the stabilizer formalism is no longer applicable. Instead, we rely on efficient tensor network methods to simulate the dynamics, and extract the string order parameters (4) and (5) in order to determine the full phase diagram. We find that the phase boundaries remain strikingly stable against extending the class of unitary gates, yielding a phase diagram that is essentially identical to the one obtained for Clifford circuits.
### Time evolution with Clifford unitary gates
In this section, we reproduce the phase diagram of Ref. [14], by relying on the string order parameters instead of a topological entanglement entropy. The special structure of Clifford unitary gates allows us to efficiently simulate large system sizes, up to \(N=1024\) qubits, by applying the stabilizer formalism. This method relies on representing the wave function of the system for a given circuit realization as the eigenstate of \(N\) linearly independent (under multiplication) commuting Pauli strings, the so-called stabilizers. Both Clifford unitaries and projective measurements preserve this structure by mapping the stabilizers to another set of \(N\) independent commuting Pauli strings, allowing one to simulate the time evolution efficiently [49]. The circuit averages are then obtained by performing the averaging over the stationary states of \(10^{3}\) random circuits. For each of these circuits, the qubits are initialized in the state \(\left|Z=-1\right\rangle^{\otimes N}\), and allowed to evolve for \(2N\) time steps (corresponding to \(2N^{2}\) operations) to reach the steady state. Then, an additional time averaging is performed by evolving the system for another \(10^{3}\) time steps, calculating the desired string expectation values after each of them, and taking the average of the obtained values.
For the finite systems considered here, we choose string operators of length \(N/2-1\), located in the middle of the chain, as depicted in Fig. 2a. We also make use of sublattice symmetry in the following way. The string illustrated in Fig. 2a, displaced by one site to the right, will lead to the same circuit average. Therefore, we can improve the convergence of disorder averaging by averaging our results for the original and shifted strings, for both string order parameters (4) and (5).
Before turning to the time evolution in the presence of unitary gates, we first comment on the special case of measurement-only dynamics, \(p_{u}=0\), a line in parameter space that is the same for Clifford and Haar random circuits. As shown in Fig. 2b, the string order parameters allow to distinguish the two different area law phases in the model. The SPT phase (purple) is characterized by \(\overline{|\mathcal{S}_{Z}^{X,X}|}>0\) and \(\overline{|\mathcal{S}_{Z}^{\mathbf{1},\mathbf{1}}|}=0\), while the trivial phase is signalled by \(\overline{|\mathcal{S}_{Z}^{X,X}|}=0\) and \(\overline{|\mathcal{S}_{Z}^{\mathbf{1},\mathbf{1}}|}>0.\) Based on duality arguments, this transition happens exactly at \(p_{s}=1/2\), in good agreement with our numerical results. We note that at the critical point \(p_{s}=1/2\), both string order parameters should vanish in the thermodynamic limit. Due to finite size effects, we find a small finite value instead.
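The measurement-only limit can be reproduced qualitatively with a brute-force state-vector simulation on a handful of qubits, which is enough to see the two string order parameters exchange roles on the two sides of \(p_{s}=1/2\). The sketch below (a single trajectory of \(N=6\) qubits with modest time averaging) is purely illustrative and is not a substitute for the stabilizer simulations at \(N=1024\) used for the quantitative results.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(2)
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
N = 6  # small chain, sites 1..N

def pauli(ops):
    return reduce(np.kron, [ops.get(s, I2) for s in range(1, N + 1)])

def measure(psi, P):
    """Projective measurement of a Pauli operator P; returns the collapsed state."""
    exp = np.real(np.vdot(psi, P @ psi))
    outcome = 1 if rng.random() < (1 + exp) / 2 else -1
    psi = (psi + outcome * (P @ psi)) / 2
    return psi / np.linalg.norm(psi)

# finite-size string operators of Eqs. (4) and (5) for alpha = 2 (shifted sublattice)
S_11 = pauli({2: Z, 4: Z})
S_XX = pauli({1: X, 2: Z, 4: Z, 5: X})

def run(p_s, steps=400):
    psi = np.zeros(2 ** N)
    psi[-1] = 1.0                               # |Z=-1>^N product state
    acc_11 = acc_XX = 0.0
    for step in range(steps):
        for _ in range(N):                      # N operations form one time step
            if rng.random() < p_s:
                i = rng.integers(1, N + 1)      # single-qubit Z measurement
                psi = measure(psi, pauli({i: Z}))
            else:
                i = rng.integers(1, N - 1)      # cluster operator X_i Z_{i+1} X_{i+2}
                psi = measure(psi, pauli({i: X, i + 1: Z, i + 2: X}))
        if step >= steps // 2:                  # time average after a burn-in period
            acc_11 += abs(np.vdot(psi, S_11 @ psi))
            acc_XX += abs(np.vdot(psi, S_XX @ psi))
    n_avg = steps - steps // 2
    return acc_11 / n_avg, acc_XX / n_avg

for p_s in (0.2, 0.5, 0.8):
    s11, sxx = run(p_s)
    print(f"p_s = {p_s}: |S_Z^(1,1)| = {s11:.2f}, |S_Z^(X,X)| = {sxx:.2f}")
```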
Next, we turn to the case involving random Clifford unitary gates, preserving the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry (2). Fig. 2c shows the behavior of the string order parameters as a function of the single-qubit measurement probability \(p_{s}\) for a fixed rate of unitary evolution, \(p_{u}=0.3\). We find that the string operators are still well suited for reconstructing the phase diagram. Besides the area law SPT (purple) and trivial (orange) phases already seen for measurement-only dynamics, the volume law phase (green) is clearly distinguished by the vanishing of both string order parameters.
Our results demonstrate that string order parameters provide an accessible alternative way to determine the phase boundaries. To account for finite size effects, we calculate the string order parameters \(\overline{|\mathcal{S}|}(N)\) for various system sizes \(N\). At the critical point, this quantity is expected to decay towards zero as a power law in \(N\), allowing us to implement an extrapolation to the thermodynamic limit, as discussed in Appendix B. The phase diagram determined with this method is shown in Fig. 4a, and is in perfect agreement with the results of Ref. [14].
### Phase diagram for Haar random unitary gates
In this section we check the stability of the phase diagram of the circuit model with respect to a broader class of unitary gates. To this end, we consider the time evolution in the presence of generic random Haar unitaries, with the only restriction that the gates still preserve the protecting symmetries (2). Thereby, we considerably extend the set of allowed gates in the circuit compared to the special Clifford gates considered earlier. In principle, relaxing this constraint on the structure of gates could lead to a more efficient entanglement generation during the time evolution, and might lead to a much broader region of volume law phase in parameter space. Therefore, we examine the sensitivity of phase boundaries towards such an extension of allowed quantum circuits. As we will demonstrate below, in this particular case the phase diagram remains strikingly stable, and our calculations
Figure 3: String order parameters in the presence of Haar random unitary gates in the \(\alpha=2\) cluster circuit model, plotted as a function of single spin measurement probability \(p_{s}\) across the phase boundaries, for fixed \(p_{u}=0.3\). These order parameters distinguish an SPT (purple) and a trivial (orange) area law phase, separated by a volume law phase (green). The vertical dashed lines show the numerically determined phase boundaries, with the grey shadow indicating the uncertainty. We used system size \(N=256\) and bond dimension \(\chi_{\rm max}=128\).
Figure 2: String order parameters in a Clifford circuit for \(\alpha=2\). (a) String operators (5) and (4) in a finite lattice, characterized by the same bulk operator (light shading), and distinguished by the different boundary operators (dark shading). (b) String order parameters for measurement-only dynamics \(p_{u}=0\), shown as a function of the probability of single qubit measurement \(p_{s}\) in the vicinity of the self-dual point \(p_{s}=0.5\). String order parameters distinguish two area law phases, one with SPT order (purple) and a trivial one (orange). The vertical line indicates the exact critical point of the phase transition, \(p_{s}=0.5\). (c) String order parameters in the presence of a finite rate of unitary gates, \(p_{u}=0.3\), shown as a function of \(p_{s}\) across the phase boundaries. Unitary gates are Clifford unitaries respecting the protecting \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry. String order parameters distinguish an SPT (purple) and a trivial (orange) area law phase, separated by a volume law region (green). Numerically determined phase boundaries are indicated by vertical dashed lines. In figures (b) and (c) we used system size \(N=1024\).
yield essentially identical results for Clifford and Haar random circuits.
For general Haar random gates, an efficient simulation of the circuit relying on the stabilizer formalism is no longer possible. Instead, we represent the wave function using Matrix Product States (MPS) [56; 57], and implement the Time-Evolving Block Decimation algorithm to calculate the dynamics [58]. This method is well-suited for studying the SPT and trivial area law phases [59], however, the MPS representation with finite bond dimension breaks down in the volume law phase. Nevertheless, we can still detect the volume law phase through the steadily increasing half-chain entanglement entropy, as well as through the vanishing of both string order parameters, upon increasing system size and bond dimension. We also note that measuring the topological entanglement entropy is much more demanding both in our MPS simulations and in an experimental setup, therefore, here we solely rely on more accessible string order parameters to determine the phase diagram.
In our MPS simulations we consider system sizes with up to \(N=256\) qubits, with maximal bond dimension \(\chi_{\text{max}}=128\) (a discussion about convergence with bond dimension can be found in Appendix C). We note that in the SPT phase it is important to choose a bond dimension \(\chi_{\text{max}}\) that is a power of 2, to avoid breaking the symmetry by truncating degenerate Schmidt values, see Appendix D. We obtain the circuit averages by generating \(10^{2}\) random circuits, with the qubits initialized in the state \(\left|Z=-1\right\rangle^{\otimes N}\). We also perform an additional time averaging over \(10^{3}\) time steps, after the system has reached the steady state.
The string order parameters are shown as a function of \(p_{s}\) in Fig. 3, for a fixed probability of unitary gates \(p_{u}=0.3\). For a small enough rate of \(Z\) measurements \(p_{s}\), our simulation yields string order parameters \(\overline{|\mathcal{S}_{Z}^{X,X}|}>0\) and \(\overline{|\mathcal{S}_{Z}^{1,1}|}=0\), confirming the presence of an SPT phase also for random Haar unitary gates (purple region). Similarly, \(\overline{|\mathcal{S}_{Z}^{X,X}|}=0\) and \(\overline{|\mathcal{S}_{Z}^{1,1}|}>0\) indicate a trivial area law phase (orange) for large enough \(p_{s}\). We can detect the volume law phase (green) by looking at the half-chain entanglement entropy and observing that it keeps increasing with system size and bond dimension. Also, numerical data shows that both string order parameters converge quickly to 0 with increasing bond dimension in this phase. We note that the position of phase boundaries remains unchanged with respect to the Clifford circuit within numerical precision (cf. to Fig. 2c, noting the different scale on the vertical axis).
Relying on the string order parameters, we can determine the full phase diagram for the Haar random circuit, by performing an extrapolation to the thermodynamic limit similar to the Clifford case (see Appendix B). The result is shown in Fig. 4b, where it is compared to the phase diagram of the Clifford circuit, Fig. 4a. The estimated numerical uncertainty of the phase boundaries is indicated by grey shading, with the Haar circuit results having a larger error due to the system size limitations of the MPS simulations. Nevertheless, the two phase diagrams are strikingly similar, suggesting that the restrictions on the structure of unitary gates in the Clifford circuits do not change the phase boundaries within numerical precision.
## IV Coexistence of SPT and SSB
Having validated our approach in the previous section, we now turn to other members of the family of generalized cluster models. In this section, we demonstrate that a coexisting SPT order and SSB can be realized in the area law phase of the circuit models for odd \(\alpha\) values. To this end, here we focus on the case with \(\alpha=3\), realizing the \(XZZX\) circuit model shown in Fig. 1b. The ground state of the corresponding generalized cluster model Hamiltonian (1) shows both SPT and SSB orders. As we demonstrate below, the out-of-equilibrium version displays two area law phases: one characterized by coexisting SPT and SSB orders and a trivial phase, separated by a volume law entangled phase. In this section we restrict our attention to Clifford circuits, with the unitary gates preserving the symmetries (2), allowing us to reach large system sizes by applying the stabilizer formalism. The measurement-only limit of this model has been recently studied in Ref. [44].
To differentiate the phases realized by this model, we use both the string order parameters of Eqs. (4) and (5), and the local order parameter of Eq. (6), yielding \(M_{i}=X_{i}Y_{i+1}X_{i+2}\) in this case. The SSB ordering can be detected through the correlators of this local order parameter,
\[\mathcal{C}_{M}=\lim_{|j-k|\rightarrow\infty}\left\langle M_{j}M_{k}\right\rangle. \tag{7}\]
Figure 4: Phase diagram determined from string order parameters, (a) with random Clifford unitary gates, (b) with random Haar unitaries, preserving the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry. Numerical uncertainty is indicated by grey shades. The phase diagram is strikingly stable against changing the set of allowed unitaries, and displays an SPT and a trivial area law phase, separated by a volume law region. Clifford results are consistent with the phase diagram obtained in Ref. [14], relying on entanglement entropies.
The finite size versions of the string order parameters, as well as the correlation function of Eq. (7) used for this model, are illustrated in Fig. 5a. The initial state of the qubits, as well as the procedure for obtaining the circuit averages of various quantities, is the same as the one applied for the XZX model.
We show the behavior of the circuit averaged string order parameters and the correlator of the local order parameter in Fig. 5b, for a fixed rate of unitaries \(p_{u}=0.1\), using system size \(N=768\). For a small rate of \(Z\) measurements \(p_{s}\), the string order parameter \(\mathcal{S}_{Z}^{1,1}\) vanishes, while the other string operator \(\mathcal{S}_{Z}^{X,X}\) and the correlator \(\mathcal{C}_{XYX}\) both take finite values. This indicates an area law phase characterized by the coexistence of SPT and SSB orders (magenta). We note that up to numerical precision, the SPT order and SSB vanish at the same critical point, thus we do not find an area law phase displaying solely SPT or SSB order in this model. As shown in Ref. [44] in the limit of measurement-only dynamics, an SPT phase without SSB order can be generated by adding symmetry breaking operations, such as \(Y\) measurements, to the circuit. For large single qubit measurement probability \(p_{s}\), we find a trivial area law phase (orange) with vanishing string order parameter \(\mathcal{S}_{Z}^{X,X}\) and correlator \(\mathcal{C}_{XYX}\), but a finite value for \(\mathcal{S}_{Z}^{1,1}\). As before, the two area law regions are separated by a volume law phase (green), where all order parameters become zero.
The full phase diagram of this model, extracted from the extrapolation to the thermodynamic limit of the three types of order parameters (as detailed in Appendix B), is shown in Fig. 5c. By comparing it to the phase diagram of the \(XZX\) model, Fig. 4, we observe that the volume law phase becomes more extended in the \(XZZX\) model, and the region with SPT and SSB orders decreases in size with respect to the SPT phase of the \(XZX\) circuit. This effect stems from the longer range of the random unitary gates in the \(XZZX\) circuits, leading to more efficient entanglement generation. Similarly to the \(XZX\) model, the numerical results, as well as the analytical arguments presented in Appendix A, suggest that the point \(p_{s}=p_{c}=1/2\) and \(p_{u}=0\) is a tricritical point.
## V Conclusion
We have studied the interplay of symmetry-protected topological phases and measurement-induced entanglement transition by introducing a class of quantum random circuit models, consisting of projective measurements and random unitary gates respecting a set of global symmetries. We showed that the circuits in this family display measurement-induced phase transitions between a thermal volume law phase and different non-thermal area law stationary states, and realize the out-of-equilibrium version of all generalized cluster states in their area law phase. Motivated by the string operators used to detect SPT order in equilibrium settings, we have constructed a set of non-equilibrium string order parameters, well suited for revealing SPT order in this class of circuit models, and accessible to numerical simulations and experimental realizations. We benchmarked
Figure 5: Phases realized by the \(XZZX\) (\(\alpha=3\)) circuit model. (a) String operators and connected correlator of the local order parameter in a finite lattice. The two string operators (top and middle) are characterized by a non-trivial bulk operator (light shading), and boundary operators (dark shading), whereas the local order parameter (bottom) is measured at two distant positions (dark shading) to capture SSB. (b) Order parameters across the two phase transitions versus the probability of single qubit measurement \(p_{s}\) at fixed rate of unitary gates \(p_{u}=0.1\), for system size \(N=768\). The two string operators and the correlator (multiplied by \(50\) for better visibility) reveal an area law phase with coexisting SPT and SSB order (magenta), as well as a trivial area law phase (orange), separated by a volume law phase (green). Vertical lines indicate the numerically determined phase boundaries. (c) Full phase diagram of the \(XZZX\) circuit model obtained via string order parameters.
our framework by studying the string order in the \(XZX\) circuit model, and comparing it to the behavior of the topological entanglement entropy in the special case of Clifford unitary gates. We then tested the stability of the phase diagram by relaxing the strong constraint on the structure of unitary gates, and allowing for a wider set of symmetry preserving Haar random unitaries. Relying on MPS simulations, we found that the phase boundaries are remarkably insensitive to extending the class of unitary gates, providing additional evidence that the rich structure observed in Clifford circuits also appears in more generic quantum circuit models. Finally, we demonstrated in the example of the XZZX circuit model that the out-of-equilibrium generalized cluster states can host simultaneous SPT order and SSB, similarly to their equilibrium Hamiltonian counterparts.
Our results pave the way for studying topological phases in a wider range of quantum circuit models. In particular, as we demonstrated in this work, the out-of-equilibrium string order parameters are accessible in MPS simulations, allowing us to study the phase diagram of generic Haar random circuits. One of the interesting open questions in this direction concerns the universality class of various entanglement transitions. Whether the phase transitions in Clifford and Haar random circuits belong to the same universality class is an exciting unresolved problem that could be addressed within the framework developed in this paper. Another interesting direction is characterizing all the possible ordered phases that can arise in the area law stationary states of random circuit models. The family of random circuits considered in this paper provides one example for a class of ordered phases; however, the construction could be extended to give a recipe for realizing other types of phases, such as true topological order in higher dimensional circuits.
**Acknowledgements**. This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement Nos. 851161 and 771537. F.P. acknowledges the support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2111-390814868. F.P.'s research is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. I.L. acknowledges support from the Gordon and Betty Moore Foundation through Grant GBMF8690 to UCSB and from the National Science Foundation under Grant No. NSF PHY-1748958.
**Data and materials availability.** Data analysis and simulation codes are available on Zenodo upon reasonable request [60].
## Appendix A Phase transition in measurement-only random circuits
In this appendix we focus on random circuits in the measurement-only regime, \(p_{u}=0\), displaying a single phase transition between two topologically distinct area law phases. Here we prove via a duality argument that the critical point, characterized by a logarithmic scaling of the entanglement entropy, is located at \(p_{s}=1/2\) for any value of \(\alpha\). Our reasoning generalizes the proof of Ref. [14], which located the critical point of the measurement-only \(XZX\) cluster circuit model, \(\alpha=2\), to all values of \(\alpha\).
Consider the circuit model for \(\alpha>0\) and periodic boundary conditions, with initial state \(|\psi_{0}\rangle=|0,...,0\rangle\), described by the generating set of stabilizers \(\{Z_{1},Z_{2},...,Z_{N-\alpha},G_{1},...,G_{\alpha}\}\). Here we make the symmetries of the model explicit, as stabilizers that remain unaltered throughout the evolution of the circuit. We denote the cluster operators by \(g_{i}=X_{i-\alpha/2}Z_{i-\alpha/2+1}...Z_{i+\alpha/2-1}X_{i+\alpha/2}\), with \(i\in\{1,...,N\}\) for \(\alpha\) even and \(i\in\{1/2,3/2,...,N-1/2\}\) for \(\alpha\) odd (note that all sums in indices are understood modulo \(N\)). At each step of the circuit we measure \(Z_{i}\) with probability \(p_{s}\) and \(g_{i}\) with probability \(1-p_{s}\). From the Gottesman-Knill theorem [49; 61], a stabilizer at any time step of the evolution can be written as a product of \(Z\) and \(g\) operators.
For each realization of the circuit model with probability \(p_{s}\) of \(Z\) measurement, we construct a dual version with probability \(1-p_{s}\) in the following way. For \(\alpha\) even, the dual circuit will be defined on the same lattice as the original one, whereas for \(\alpha\) odd we introduce a dual lattice, with lattice sites indexed by half integers \(i\in\{1/2,3/2,...,N-1/2\}\), and perform a mapping to this dual space. First, we set the initial state of the dual circuit to be the one fixed by the stabilizers \(\{g_{1},...,g_{N-\alpha},G_{1},...,G_{\alpha}\}\). Then, we substitute every \(Z_{i}\) measurement by a \(g_{i}\) measurement, and vice versa. Fig. 6 shows the duality transformation for a realization of the circuit at \(\alpha=2\) and \(\alpha=1\). The evolved state of the dual circuit is closely related to the state of the original circuit, as the following Lemma shows.
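As a piece of bookkeeping, the duality acts realization by realization: a sample of the circuit with \(Z\)-measurement probability \(p_{s}\) is mapped to a sample of the dual circuit with probability \(1-p_{s}\) simply by swapping the two measurement labels. The sketch below is our own illustration; the representation of a realization as a list of (operator, site) events is an assumption made for clarity.

```python
from fractions import Fraction

def dual_realization(events):
    """Map one realization of the measurement-only circuit to its dual:
    every Z_i measurement becomes a g_i measurement and vice versa.
    For odd alpha the g operators carry half-integer indices, so the dual
    operators naturally live on the dual lattice discussed in the text."""
    swap = {"Z": "g", "g": "Z"}
    return [(swap[op], i) for (op, i) in events]

# Example at alpha = 1, where g_{3/2} = X_1 X_2, g_{5/2} = X_2 X_3, ...
events = [("Z", 3), ("g", Fraction(3, 2)), ("Z", 1), ("g", Fraction(5, 2))]
print(dual_realization(events))
# [('g', 3), ('Z', Fraction(3, 2)), ('g', 1), ('Z', Fraction(5, 2))]
```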
**Lemma**. Let
\[s=\left(\prod_{i\in I}Z_{i}\right)\left(\prod_{j\in J}g_{j}\right), \tag{8}\]
be a stabilizer of the state of the original circuit at the updating step \(M\), where \(I\subset\{1,...,N\}\) and \(J\subset\{1,...,N\}\) for \(\alpha\) even and \(J\subset\{1/2,3/2,...,N-1/2\}\) for \(\alpha\) odd. Then, the state of the dual circuit at updating step \(M\) is stabilized by the operator
\[\tilde{s}=\left(\prod_{i\in I}g_{i}\right)\left(\prod_{j\in J}Z_{j}\right). \tag{9}\]
_Proof_. We prove the lemma by induction. By construction, it is true for the initial state. Suppose it is true at updating step \(M\), and now let us apply a new measurement. If the new measurement commutes with all stabilizers, then the state is unchanged, so the claim is true. Let us consider the case where the measured operator does not commute with all the stabilizers. For concreteness, suppose that we measure the operator \(Z_{i}\) in the original circuit (and thus \(g_{i}\) in the dual one). The measurement anticommutes with all the stabilizers that, when expressed as a product of \(g\) and \(Z\) operators, contain either \(g_{i+\alpha/2}\) or \(g_{i-\alpha/2}\) (but not both). We denote this set of stabilizers by \(\{s_{1},...,s_{n}\}\). When \(Z_{i}\) is measured, this set is updated to \(\{Z_{i},s_{1}s_{2},...,s_{1}s_{n}\}\), by virtue of the Gottesman-Knill theorem. In the dual circuit, \(g_{i}\) anticommutes with \(Z_{i+\alpha/2}\) and \(Z_{i-\alpha/2}\), so all the stabilizers anticommuting with \(g_{i}\) are \(\{\tilde{s}_{1},...,\tilde{s}_{n}\}\) due to the induction hypothesis. The updated state is stabilized by the operators \(\{g_{i},\tilde{s}_{1}\tilde{s}_{2},...,\tilde{s}_{1}\tilde{s}_{n}\}\), so for the modified stabilizers the claim is still true (\(s_{1}s_{j}\rightarrow\tilde{s}_{1}\tilde{s}_{j}\)). The same argument holds for the case where the measured stabilizer is \(g_{i}\). \(\square\)
To prove that the phase transition is located at \(p_{s}=1/2\), we apply a duality argument. Below we show that if the state of one of the circuits has area law entanglement at some time step, then so does its dual counterpart. Therefore, the critical point with logarithmic entanglement scaling must coincide with the self-dual point \(p_{s}=1/2\). Within the stabilizer formalism [49], the entanglement entropy of a stabilizer state \(\ket{\psi}\) for a region \(A\) of the chain is given by [62]
\[S_{A}(\ket{\psi})=n_{A}-\log_{2}|G_{A}|. \tag{10}\]
Here \(n_{A}\) is the number of qubits in \(A\) and \(|G_{A}|\) is the total number of stabilizers with support within \(A\), i.e., of stabilizers that act trivially on qubits outside of \(A\). Let \(\mathcal{S}=\{s_{1},...,s_{n}\}\) denote a generating set of the subgroup \(G_{A}\), with \(n=\log_{2}|G_{A}|\). Without loss of generality, we can assume that for every site \(i\in A\), there are at most two stabilizers in the generating set that start or end at \(i\) (i.e., for which the first or last non-trivial Pauli operator is at \(i\)). Each of the generators \(s_{i}\) can be written as a product of \(g\) and \(Z\) operators contained in \(A\) (for a contiguous region \(A\)).
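For very small systems, Eq. (10) can be evaluated by brute force, which makes the formula concrete. The snippet below is an illustration only (realistic system sizes require the usual GF(2) rank computations of the stabilizer formalism); it enumerates the full stabilizer group of a four-qubit \(\alpha=2\) cluster state with periodic boundaries and counts the elements supported inside a region \(A\).

```python
import itertools
import numpy as np

N = 4                      # periodic chain of 4 qubits
                           # alpha = 2 cluster state: g_i = X_{i-1} Z_i X_{i+1}

# Each Pauli string is stored as two length-N bit vectors (x, z); phases
# are irrelevant for the support count below.
def cluster_generator(i):
    x = np.zeros(N, dtype=int)
    z = np.zeros(N, dtype=int)
    x[(i - 1) % N] = x[(i + 1) % N] = 1
    z[i] = 1
    return x, z

gens = [cluster_generator(i) for i in range(N)]

def entropy(A):
    """S_A = n_A - log2 |G_A|, counting group elements supported inside A."""
    A = set(A)
    count = 0
    for subset in itertools.product([0, 1], repeat=N):
        x = np.zeros(N, dtype=int)
        z = np.zeros(N, dtype=int)
        for k, use in enumerate(subset):
            if use:
                x ^= gens[k][0]
                z ^= gens[k][1]
        support = {q for q in range(N) if x[q] or z[q]}
        if support <= A:
            count += 1
    return len(A) - np.log2(count)

print(entropy([0, 1]))     # -> 2.0 for a contiguous two-site region
```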
We now consider the set of operators \(\tilde{\mathcal{S}}=\{\tilde{s}_{1},...,\tilde{s}_{n}\}\), where \(Z\) operators are replaced by \(g\) and vice versa. By the previous Lemma, these operators are stabilizers of the state of the dual circuit, and one can easily check that they are linearly independent with respect to the product of Pauli strings (since the original set is linearly independent). We define the region \(\tilde{A}=A\) in the \(\alpha\) even case and \(\tilde{A}=\{i+1/2|i\in A\}\) in the \(\alpha\) odd case. In order to obtain an upper bound for the entanglement entropy of \(\tilde{A}\), we find a lower bound for the number of stabilizers \(\tilde{s}_{i}\) fully contained inside \(\tilde{A}\). We observe that the transformation \(Z_{i}\to g_{i}\) moves the first and last nontrivial operators by \(\alpha/2\) positions to the left and to the right, respectively. Therefore, \(\tilde{s}_{i}\) is still fully contained inside \(\tilde{A}\) if \(s_{i}\) does not have any non-trivial operator closer than \(\alpha/2\) to one of the edges of \(A\). Since there are at most 2 operators with initial or final non-trivial operator at each site inside \(A\), at least \(n-2\alpha\) of the string operators in \(\tilde{\mathcal{S}}\) will be contained in \(\tilde{A}\). Therefore \(\log_{2}|G_{\tilde{A}}|\geq\log_{2}|G_{A}|-2\alpha\). Comparing to equation (10) gives
\[S_{\tilde{A}}\left(\ket{\tilde{\psi}}\right)\leq S_{A}(\ket{\psi})+2\alpha, \tag{11}\]
proving that the state of the dual circuit \(\ket{\tilde{\psi}}\) indeed obeys area law entanglement scaling for any area law entangled state \(\ket{\psi}\) of the original circuit. This completes our proof that the critical point must be self-dual, \(p_{s}^{\rm crit}=1/2\).
## Appendix B Finite size corrections to string order parameters
To determine the critical lines using string order parameters, we examine the finite size corrections they acquire for various system sizes. This procedure allows us to extrapolate our results to the thermodynamic limit, and to determine the phase boundaries with higher numerical precision. In the thermodynamic limit, the area law phases are distinguished by the vanishing of exactly one of the string order parameters, while both of them become zero in the volume law phase, allowing us to extract the critical lines of the system. In more detail, we use the following numerical procedure. For a fixed \(p_{u}\), we calculate the relevant string order parameter for different measurement rates \(p_{s}\) close to the transition, for
Figure 6: Duality map between circuit realizations for (a) \(\alpha=2\) and (b) \(\alpha=1\). For \(\alpha\) even the mapping connects circuits defined on the same lattice, whereas for \(\alpha\) odd we introduce a dual space. In both cases, the initial state of the original circuit is given by the stabilizers \(\{Z_{1},Z_{2},...,Z_{N-\alpha},G_{1},...,G_{\alpha}\}\), and the initial state of the dual circuit is given by stabilizers \(\{g_{1},...,g_{N-\alpha},G_{1},...,G_{\alpha}\}\).
several system sizes \(N\). Then, for each \(p_{s}\), we fit the results with a power law function \(f(N)=cN^{-a}+b\). We expect that at the critical point the string order parameter approaches zero as \(cN^{-a}\); thus the finite size results can be well fitted with \(a>0\) and \(b=0\). Therefore, we identify the critical point with the parameter \(p_{s}\) yielding a good fit with \(b=0\). We note that away from the critical point, this fitting of finite size results breaks down, and far enough from the phase boundary the string order parameter might even converge to a constant non-monotonically with \(N\).
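A minimal version of this fitting step is sketched below, assuming SciPy is available; the scan over \(p_{s}\) and the comparison between the constrained fit (\(b=0\)) and the unconstrained form \(f(N)=cN^{-a}+b\) are left out for brevity, and the synthetic data only mimic the decay found near the critical point.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_quality(Ns, order_param):
    """Fit the finite-size string order parameter with c*N**(-a) and return
    the exponent a together with the residual of the fit.  At the critical
    point the constrained fit (b = 0) describes the data well; away from it,
    a finite offset b is needed and the residual grows."""
    f = lambda N, c, a: c * N ** (-a)
    (c, a), _ = curve_fit(f, Ns, order_param, p0=(1.0, 1.0))
    residual = np.sum((order_param - f(Ns, c, a)) ** 2)
    return a, residual

# Illustration with synthetic data decaying as N^{-1.24}, mimicking the
# behavior found at p_s = 0.27, p_u = 0.2 for the XZX Clifford circuit.
Ns = np.array([96, 192, 384, 768])
data = 0.8 * Ns ** (-1.24)
print(fit_quality(Ns, data))   # exponent close to 1.24, tiny residual
```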
We illustrate this procedure for the XZX Clifford circuit model in Fig. 7, where we show the finite size corrections to string order parameters at different single qubit measurement rates \(p_{s}\), for Clifford unitary gate rate \(p_{u}=0.2\). For \(\overline{|\mathcal{S}_{Z}^{X,\mathbf{X}}|}\), the best fit of the form \(cN^{-a}\) is obtained for \(p_{s}=0.27\), with \(a=1.24\), see Fig. 7a. This result is consistent with the critical \(p_{s}\) obtained based on entanglement entropies, see Ref. [14]. For slightly higher values of \(p_{s}\), the string order parameter converges to zero even more rapidly, while for \(p_{s}\) below the critical value, it remains finite in the thermodynamic limit. Similarly, Fig. 7b shows the results for \(\overline{|\mathcal{S}_{Z}^{\mathbf{1},\mathbf{1}}|}\), predicting a transition at \(p_{s}=0.38\), again consistent with the critical value found in Ref. [14]. Similar behavior is observed in the other cluster circuit models, both with Clifford and Haar random unitaries. We note that in the latter case, MPS simulations are limited to smaller system sizes, increasing the numerical uncertainty of the critical point.
## Appendix C Convergence of MPS simulations with bond dimension
In the presence of Haar random unitary gates, we rely on the time-evolving block decimation algorithm to study the time evolution. In this method, the state of the system is represented as an MPS with maximal bond dimension \(\chi_{\text{max}}\). Such an MPS captures the exact state in area law phases for large enough \(\chi_{\text{max}}\), while volume law phases require a bond dimension that diverges with increasing system size in the thermodynamic limit [56; 57]. In this appendix, we examine the convergence of our numerical results with \(\chi_{\text{max}}\) for the XZX Haar random circuit model in the different phases, by focusing on the half-chain entanglement entropy \(S_{N/2}\) at different points in phase space.
Figure 8 shows the averaged steady-state half-chain entanglement entropy in the trivial area law and volume law phases as a function of maximal bond dimension, using different system sizes. In the trivial area law phase, Fig. 8a, \(S_{N/2}\) reaches its steady state value at a bond dimension independent of system size, confirming the proper convergence of our MPS results. In contrast, in the volume law phase, Fig. 8b, the entanglement entropy increases logarithmically with bond dimension and reaches larger values for larger system sizes. Fully converged results can only be obtained for small system sizes, and the state of the system is not captured by the MPS ansatz in the thermodynamic limit. We note, however, that besides detecting the volume law phase via \(S_{N/2}\), we can also identify this phase from the string order parameters calculated at finite bond dimension, as we have demonstrated above.
Ensuring convergence with bond dimension is trickier in the SPT phase, since truncating the MPS to bond dimension \(\chi_{\text{max}}\) can slightly break the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry of the model. This may happen when the Schmidt decomposition of the state has degenerate singular values, an effect already well-known from the MPS representation of SPT phases in equilibrium [2]. For example, truncating the MPS to a sufficiently small odd \(\chi_{\text{max}}\) always breaks the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry for the following reason. The symmetry implies a 4-fold degenerate entanglement spectrum (see Appendix D); therefore, breaking the degeneracy also breaks the symmetry, affecting the
Figure 7: Finite size corrections to string order parameters for the XZX Clifford circuit model. We plot (a) \(\overline{|\mathcal{S}_{Z}^{X,\mathbf{X}}|}\) for various measurement rates \(p_{s}\) across the SPT to volume law transition and (b) \(\overline{|\mathcal{S}_{Z}^{\mathbf{1},\mathbf{1}}|}\) across the volume law to trivial area law transition, as a function of \(N^{-a}\), for rate of Clifford gates \(p_{u}=0.2\). The exponent \(a\) is obtained by linear regression at the \(p_{s}\) value that gives the best fit of the form \(cN^{-a}\), yielding (a) \(a=1.24\) and (b) \(a=0.86\). The dashed line indicates the linear regression extrapolated to \(N\to\infty\).
averaged long-time values of certain quantities. More generally, in the circuit model the entanglement spectrum can show even higher degeneracies at certain time steps, always in powers of 2. For this reason, it is convenient to always choose \(\chi_{\rm max}\) as a power of 2, leading to better conservation of the symmetry during the truncation step in the evolution, even for relatively small bond dimensions. Figs. 9a and b show the time evolution of the circuit-averaged symmetry operator \(G_{1}\) and of the half-chain entanglement entropy \(S_{N/2}\), respectively, at the point \(p_{u}=0.1\) and \(p_{s}=0.3\), belonging to the SPT phase, for various maximal bond dimensions with a fixed number of qubits \(N=100\). The smallest bond dimension, \(\chi_{\rm max}=16\), is not enough to preserve the symmetry during the time evolution, and we observe that the entanglement entropy is reduced upon breaking the symmetry. However, all of the higher bond dimensions preserve the symmetry, and yield the correct, converged steady state value for the half-chain entanglement entropy. We note that for bond dimensions that are not powers of 2, the symmetry can be broken for higher values of \(\chi_{\rm max}\), yielding an incorrect, reduced \(S_{N/2}\). We show the convergence of \(S_{N/2}\) with \(\chi_{\rm max}\), only using bond dimensions that are powers of 2, for various system sizes \(N\), in Fig. 9c [63]. Similarly to the trivial area law phase, we observe good convergence to the steady state value at a finite bond dimension independent of \(N\).
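A simple safeguard in an MPS code is to let the truncation respect Schmidt multiplets. The sketch below is our own illustration and is not the procedure used for the data in this paper, where \(\chi_{\text{max}}\) is simply chosen as a power of 2; it extends the cut past a degenerate group of singular values instead of splitting it.

```python
import numpy as np

def degeneracy_safe_chi(S, chi_max, rel_tol=1e-8):
    """Pick a truncation rank that does not split a degenerate Schmidt
    multiplet (which would break the Z2 x Z2 symmetry, see text): starting
    from chi_max, extend the cut until the last kept and the first discarded
    singular values differ.  S is assumed sorted in descending order."""
    chi = min(chi_max, len(S))
    while chi < len(S) and np.isclose(S[chi - 1], S[chi], rtol=rel_tol):
        chi += 1          # keep the whole multiplet, at a slightly larger chi
    return chi

S = np.array([0.5, 0.5, 0.5, 0.5, 0.1, 0.1])   # fourfold degenerate multiplet
print(degeneracy_safe_chi(S, chi_max=3))        # -> 4: the multiplet is kept whole
```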
We note that in contrast to Clifford random circuits, in the presence of Haar random unitary gates the position of the phase boundary between area law and volume law entanglement scaling can vary with the index of the Renyi entropy. In particular, it has been shown [28] that in a certain class of random quantum circuits, the measurement-induced phase transition between area
Figure 8: Half-chain entanglement entropy in the steady state of the XZX Haar random circuit as a function of the maximum bond dimension \(\chi_{\rm max}\), for different system sizes \(N\). (a) In a trivial area law phase, a finite \(\chi_{\rm max}\) independent of the system size is sufficient to ensure convergence. (b) In a volume law phase, the \(\chi_{\rm max}\) required to represent the state increases with system size.
Figure 9: Convergence in the SPT phase of XZX Haar random circuit. Time evolution of the circuit averaged (a) symmetry operator \(G_{1}\) and (b) half-chain entanglement entropy \(S_{N/2}\), for different bond dimensions in a system of size \(N=100\) in the SPT phase, at \(p_{u}=0.1\) and \(p_{s}=0.3\). Truncation can break the symmetry at low bond dimensions, leading to a reduced \(S_{N/2}\). (c) Half-chain entanglement entropy as a function of the maximum bond dimension \(\chi_{\rm max}\) for different system sizes \(N\) for the same point in the SPT phase.
law and volume law entanglement is located at different critical probabilities for Renyi entropies \(n=0\) and \(n\geq 1\), with \(p_{c}^{n=0}\geq p_{c}^{n\geq 1}\). In this case, for measurement probabilities \(p_{c}^{n\geq 1}\leq p\leq p_{c}^{n=0}\), one needs a bond dimension extensive in system size to exactly describe the state with an MPS, even though the von Neumann entanglement entropy follows an area law scaling. For such states, it can not be determined whether the MPS representation is able to efficiently approximate the exact state of the system, based on the entanglement scaling alone [64]. For the family of circuit models studied here, we only relied on the von Neumann entropy and string order parameters to detect the phases, and we have checked the convergence of both these quantities with bond dimension within the area law phases. Both methods indicated the same critical lines. Further study would be required to check the faithfulness of the MPS representation.
## Appendix D Entanglement spectrum of XZX Haar random circuit
The MPS representation of a state gives direct access to the Schmidt coefficients \(\lambda_{\alpha}\) for any partition of the qubit chain into subsystems \(A\) and \(B\). These Schmidt values \(\lambda_{\alpha}\) are defined through
\[\ket{\psi}_{AB}=\sum_{\alpha}\lambda_{\alpha}\ket{\psi_{\alpha}}_{A}\otimes \ket{\psi_{\alpha}}_{B}, \tag{20}\]
where \(\{\ket{\psi_{\alpha}}_{A}\}\) and \(\{\ket{\psi_{\alpha}}_{B}\}\) are orthonormal bases for subsystems \(A\) and \(B\), respectively. Therefore, we have access to the entanglement spectrum of the system, defined as \(-\ln\lambda_{\alpha}\), at different points of phase space. In this section, we focus on the entanglement spectrum of the half-chain. We find equivalent results for any other non-trivial partition of the system.
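For small systems the same quantities can be obtained directly from the dense state vector, which makes the definition explicit; in an MPS simulation the Schmidt values are instead read off from the canonical form. The following sketch is a standalone illustration for a four-qubit GHZ state.

```python
import numpy as np

def half_chain_entanglement_spectrum(psi, N):
    """Schmidt values and entanglement spectrum -ln(lambda) for the
    half-chain bipartition of an N-qubit pure state `psi` (length 2**N)."""
    theta = psi.reshape(2 ** (N // 2), 2 ** (N - N // 2))
    lam = np.linalg.svd(theta, compute_uv=False)
    lam = lam[lam > 1e-12]                 # discard numerically zero values
    return lam, -np.log(lam)

# Example: a 4-qubit GHZ state has two equal Schmidt values 1/sqrt(2).
N = 4
psi = np.zeros(2 ** N); psi[0] = psi[-1] = 1 / np.sqrt(2)
lam, spectrum = half_chain_entanglement_spectrum(psi, N)
print(lam)        # [0.7071..., 0.7071...]
print(spectrum)   # twofold degenerate level at ln(2)/2 ~ 0.3466
```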
In a non-equilibrium system, the entanglement spectrum changes with time. Therefore, we illustrate its structure in the steady state by showing the instantaneous entanglement spectrum at a few selected time steps. The results for the different phases of the model are displayed in Fig. 10.
In the SPT area law phase, displayed in Fig. 10a, we find that each Schmidt coefficient is 4-fold degenerate. A similar fourfold degeneracy of the entanglement spectrum in the SPT phase with \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry has been observed in equilibrium quantum matter [2], a property that translates to this non-equilibrium setting. In the trivial area law phase, the fourfold degeneracy is lifted, but the Schmidt values at the bottom, most relevant for the entanglement entropy, remain well separated from the rest, see Fig. 10b. Finally, in the volume law phase of Fig. 10c, we find a large number of Schmidt values at the bottom of the spectrum, a sign of a highly entangled state. In this case, choosing a larger \(\chi_{\rm max}\) would add relevant Schmidt values, significantly changing the value of the entanglement entropy, as was shown in Appendix C.
|
2305.00583 | The Art of the Fugue: Minimizing Interleaving in Collaborative Text
Editing | Most existing algorithms for replicated lists, which are widely used in
collaborative text editors, suffer from a problem: when two users concurrently
insert text at the same position in the document, the merged outcome may
interleave the inserted text passages, resulting in corrupted and potentially
unreadable text. The problem has gone unnoticed for decades, and it affects
both CRDTs and Operational Transformation. This paper defines maximal
non-interleaving, our new correctness property for replicated lists. We
introduce two related CRDT algorithms, Fugue and FugueMax, and prove that
FugueMax satisfies maximal non-interleaving. We also implement our algorithms
and demonstrate that Fugue offers performance comparable to state-of-the-art
CRDT libraries for text editing. | Matthew Weidner, Martin Kleppmann | 2023-04-30T21:27:34Z | http://arxiv.org/abs/2305.00583v2 | # The Art of the Fugue: Minimizing Interleaving in Collaborative Text Editing
###### Abstract
Existing algorithms for replicated lists, which are widely used in collaborative text editors, suffer from a problem: when two users concurrently insert text at the same position in the document, the merged outcome may interleave the inserted text passages, resulting in corrupted and potentially unreadable text. The problem has gone unnoticed for decades, and it affects both CRDTs and Operational Transformation. This paper presents Fugue, the first algorithm that guarantees maximal non-interleaving, our new correctness property for replicated lists. We present two variants of the Fugue algorithm, one based on a tree and the other based on a list, and prove that they are semantically equivalent. We also implement Fugue and demonstrate that it offers performance comparable to state-of-the-art CRDT libraries for text editing.
distributed data structures, replica consistency, collaborative text editing, Conflict-free Replicated Data Types (CRDTs), operational transformation
suffer from interleaving. The probability of interleaving occurring varies depending on the algorithm, but we are not aware of any existing algorithm that rules out interleaving entirely.
To address this situation, this paper makes the following contributions:
* We survey a selection of existing CRDT and OT algorithms for list replication and collaborative text editing, and highlight interleaving anomalies with all of them (Section 2.3).
* We demonstrate that the previous attempt to address the interleaving problem [16] is flawed: its definition of non-interleaving is impossible to satisfy, and the proposed algorithm in that paper does not converge (Section 2.4).
* We extend the formal specification of replicated lists by Attiya et al. [1] with a new property, which we call _maximal non-interleaving_ (Section 5.2). This definition is subtle: we show that an alternative, simpler definition is also impossible to satisfy (Section 5.1).
* We introduce _Fugue_, a CRDT algorithm for replicated lists that (to our knowledge) is the first algorithm to satisfy maximal non-interleaving (Section 3). We present two different formulations of Fugue, one based on trees and one based on lists, and we prove that they are semantically equivalent (Section 6).
* We provide an optimized open source implementation of Fugue, and show that it achieves memory, network, and CPU performance comparable to the state-of-the-art Yjs library on a realistic text-editing trace (Section 4).
## 2 Background and Related Work
In collaborative text editors, each user session (e.g., in a web browser) maintains a replica of the list of characters. On user input, the user's local replica of the document is updated by inserting or deleting characters in this list. Local edits are applied immediately, without waiting for network communication with any other nodes, in order to provide a responsive user experience independently of network latency. A user's edits are then asynchronously propagated via the network to collaborators' replicas, which integrate them into their local state. We can also generalize the model beyond text: instead of a list of characters, the replica could represent a list of other objects, such as items on a to-do list.
The expected behavior of such a replicated list was specified by Attiya et al. [1]; we summarize this specification in the proof of Theorem 1 in Section 3. Restated informally, it requires all replicas to converge to the same state; that state must reflect all edits made by users, and it must place the list elements (i.e., characters) in a valid order. The order is valid if list elements remain in the order in which a user inserted them; however, elements concurrently inserted on different replicas may be ordered arbitrarily.
Collaborative text editing originated with the work of Ellis and Gibbs [6], who also introduced Operational Transformation (OT) as a technique for resolving concurrent edits. This approach was formalized by Ressel et al. [25], and further developed by Sun et al. [30] and many other papers. The OT algorithm Jupiter [18] later became the basis for real-time collaboration in Google Docs [5]. Some OT algorithms, including Jupiter, assume a central server, while others allow more flexible network topologies.
Following bugs in several OT algorithms, which failed to converge in some situations [8, 21], Conflict-free Replicated Data Types (CRDTs) were developed as an alternative approach [28]. The first CRDT for a replicated list was WOOT [22], which was followed by Treedoc [24], Logoot [35], RGA [26], and several others. CRDTs do not assume a central server and therefore allow peer-to-peer operation. The algorithms differ in their performance characteristics, but all satisfy the strong list specification [1].
### The interleaving problem
Several replicated list CRDTs, including Treedoc [24], Logoot [35], and LSEQ [17], assign to each list element a unique identifier from a dense, totally ordered set. The sequence of list elements is then obtained by sorting the IDs in ascending order. To insert a new list element between two adjacent elements with IDs \(id_{1}\) and \(id_{2}\) respectively, the algorithm generates a new unique ID \(id_{3}\) such that \(id_{1}<id_{3}<id_{2}\), where \(<\) is the total order on identifiers.
Say another user concurrently inserts an element with ID \(id_{4}\) between the same pair of elements \((id_{1},id_{2})\) such that \(id_{1}<id_{4}<id_{2}\). The minimum requirement of \(id_{3}\neq id_{4}\) is easy to achieve (e.g., by including in each ID the unique name of the replica that generated it), but whether \(id_{3}<id_{4}\) or \(id_{3}>id_{4}\) is an arbitrary choice.
When two users concurrently insert several new elements in the same ID interval, the result is an effect illustrated in Figure 1. (The figure uses a rational number as each ID, whereas actual algorithms use a path through a tree, but the resulting behavior is similar.) The diagram shows the state of a text document containing a shopping list, initially containing the word "milk" and a newline character. User A inserts a line break and "eggs", while concurrently user B inserts a line break and "bread". The merged result contains "milk", a blank line, and then the word "ebregasd".
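The effect can be reproduced in a few lines. The sketch below is a deliberately simplified model: it allocates each character a rational-number ID by repeated midpoint subdivision, whereas Treedoc, Logoot, and LSEQ allocate tree paths or digit strings, but the qualitative outcome (concurrent runs of characters sorted into each other) is the same.

```python
from fractions import Fraction

def insert_text(doc, lo, hi, text, replica):
    """Insert `text` between positions lo and hi by repeated midpoint
    allocation of rational IDs (illustrative only; real algorithms allocate
    tree paths or digit strings, but the effect is similar)."""
    for ch in text:
        lo = (lo + hi) / 2
        doc.append((lo, replica, ch))

base = [(Fraction(i + 1), "init", ch) for i, ch in enumerate("milk\n")]
lo, hi = Fraction(5), Fraction(6)            # both users insert at the end

doc_a, doc_b = [], []
insert_text(doc_a, lo, hi, "\neggs", "A")    # user A, offline
insert_text(doc_b, lo, hi, "\nbread", "B")   # user B, concurrently

merged = sorted(base + doc_a + doc_b)        # merge = sort by (ID, replica)
print("".join(ch for _, _, ch in merged))
# prints "milk", then blank lines, then an interleaving of "eggs" and "bread"
```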
We argue that this behavior is obviously undesirable. Nevertheless, none of the affected papers even mention the issue, and although it had been informally known to people in the field for some time, it was not documented in the research literature until 2018 [15, 32]. Attiya et al.'s specification [1] allows interleaving like in Figure 1.
### Interleaving of forward and backward insertions
In some replicated list algorithms, whether interleaving can occur or not depends on the order in which the elements are inserted into the list. In the common case, when a user writes text, they insert characters in forward direction: that is, "bread" is inserted as the character sequence "b", "r", "e", "a", "d". However, not all writing is in forward direction: sometimes users hit backspace to fix a typo, or move their cursor to a different location in the document and continue typing there.
A particular editing pattern that is interesting from an interleaving point of view is insertion of list elements in backward direction. The extreme case of typing text in reverse
Figure 1: Interleaving when character positions are taken from the rational numbers \(\mathbb{Q}\).
order character by character (typing "bread" as "d", "a", "e", "r", "b") is unlikely to occur in practical text editing scenarios. However, a plausible scenario of backward insertion is illustrated in Figure 2. In this example, two users are working offline on a document. Each user appends a text passage to the document, moves the cursor back to the beginning of the passage they inserted, and then adds a heading for their new section. The insertion of the passage and the insertion of the heading occur in backward order.
When the two users in Figure 2 go back online and merge their changes, an algorithm that rules out interleaving of forward insertions but allows interleaving of backward insertions may place the headings and the text passages in a surprising order (for example, placing both headings before both text passages, perhaps in different orders). This behavior is less bad than the fine-grained character-by-character interleaving of Figure 1, but it is nevertheless not ideal. It would be preferable to keep all of each user's insertions as one contiguous string, regardless of the order in which the elements were inserted.
Another reason for avoiding interleaving of backward insertions is that OT/CRDT algorithms for replicated lists are not only used for text, but also for other multi-user applications with ordered sequences, such as the rows of a spreadsheet, or the items in a to-do list. With these applications, backward insertion is more likely to occur: for example, in a spreadsheet or to-do list where new rows/items are regularly inserted at the top, one at a time. If we can avoid both forward and backward interleaving, we also improve the behavior of these multi-user applications.
### Algorithms that exhibit interleaving
The interleaving problem was first noticed in CRDTs such as Logoot and LSEQ because they are particularly prone to the problem; experiments with implementations of these algorithms are easily able to trigger interleaving in practice [11]. However, when we started looking at the problem more closely, we found that interleaving is surprisingly prevalent among both OT and CRDT algorithms for collaborative text editing. Our findings are summarized in Table 1, and examples of each instance of interleaving are detailed in Appendix A.
Occurrence of interleaving is often nondeterministic, and the probability of exhibiting interleaving varies depending on the algorithm: for example, in some algorithms it depends on the exact order in which concurrently sent network messages are received, and in some it depends on random numbers generated as part of the algorithm. But we have not been able
Figure 2: Each user first types a section of text, then moves their cursor back to the start of the section, and adds a heading. When these edits are merged in an algorithm that allows interleaving of backward insertions, the merged result may place the headings and sections in an illogical order.
to find any published algorithm that rules out interleaving entirely.
In some algorithms, interleaving occurs only if multiple replicas participate in one of the concurrent editing sessions; this is indicated in the columns labeled "multi-replica". This can happen, for example, if a user starts some work on one device and then continues on another device (producing an editing session that spans two devices), while independently another user is working offline on the same document on a third device. It can also occur in systems with ephemeral replica IDs, such as a web application that generates a fresh replica ID every time its browser tab is refreshed.
In the cases marked \(\mathsf{O}\) in Table 1 we conjecture non-interleaving, but we have not proved it. Only in the cases marked \(\mathsf{O}\checkmark\) has non-interleaving been proven. RGA forward non-interleaving was proved by Kleppmann et al. [15], Yjs forward non-interleaving is proved in Appendix E of this paper, and our own algorithm Fugue is verified in Section 5.2.
### Previous attempt to ensure non-interleaving
Kleppmann et al. [16] previously identified the interleaving problem. That work has two serious flaws:
1. The definition of non-interleaving in that paper cannot be satisfied by any algorithm.
2. The CRDT algorithm proposed in that paper, which aims to be non-interleaving, is incorrect - it does not converge. Appendix A.3 gives an example found by Chandrassery [4].
That paper defines non-interleaving as follows (paraphrased):
Suppose two sets of list elements \(X\) and \(Y\) satisfy:
* All elements in \(X\) were inserted concurrently to all elements in \(Y\).
* The elements were inserted at the same location in the document, that is: after applying the insertions for \(X\cup Y\) and their causal predecessors, \(X\cup Y\) are contiguous in the list order.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Family & Algorithm & forward & forward & backward & backward \\ & & interleaving & interleaving & interleaving & interleaving \\ & & (one replica) & (multi-replica) & (one replica) & (multi-replica) \\ \hline OT & adOPTed [25] & \(\bullet\) & \(\bullet\) & \(\mathsf{O}\) & \(\bullet\) \\ & Jupiter [18] & \(\bullet\) & \(\bullet\) & \(\mathsf{O}\) & \(\mathsf{O}\) \\ & GOT [31] & \(\bullet\) & \(\bullet\) & \(\not\!\!1\) & \(\not\!\!1\) \\ & SOCT [29] & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) \\ & TTF [20] & \(\bullet\) & \(\bullet\) & \(\mathsf{O}\) & \(\mathsf{\bullet}\) \\ CRDT & WOOT [22] & \(\bullet\) & \(\bullet\) & \(\mathsf{O}\) & \(\mathsf{O}\) \\ & Logoot [35] & \(\bullet\) & \(\bullet\) & \(\mathsf{\bullet}\) & \(\mathsf{\bullet}\) \\ & LSEQ [17] & \(\bullet\) & \(\bullet\) & \(\mathsf{\bullet}\) & \(\mathsf{\bullet}\) \\ & Treedoc [24] & \(\bullet\) & \(\bullet\) & \(\mathsf{\bullet}\) & \(\mathsf{\bullet}\) \\ & RGA [26] & \(\mathsf{O}\checkmark\) & \(\mathsf{O}\checkmark\) & \(\mathsf{\bullet}\) & \(\mathsf{\bullet}\) \\ & Yjs [10] & \(\mathsf{O}\checkmark\) & \(\mathsf{O}\checkmark\) & \(\mathsf{O}\) & \(\mathsf{\bullet}\) \\ & Fugue (this work) & \(\mathsf{O}\checkmark\) & \(\mathsf{O}\checkmark\) & \(\mathsf{O}\checkmark\) & \(\mathsf{O}\checkmark\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Various algorithms’ susceptibility to interleaving anomalies. Key: \(\bullet=\) interleaving can occur; \(\mathsf{O}=\) we have not been able to find examples of interleaving; \(\mathsf{O}\checkmark=\) proven not to interleave; \(\not\!\!1=\) algorithm may incorrectly reorder characters. Examples of anomalies appear in Appendix A.
Then either \(X\) appears before \(Y\) or vice-versa. That is, either \(\forall x\in X,y\in Y.\,x<y\) or \(\forall x\in X,y\in Y.\,y<x\), where < is the order of elements in the final list.
To show that no replicated list algorithm can satisfy this definition, it is sufficient to give a counterexample. Starting from an empty list, suppose four replicas concurrently each insert one element. After applying these four insertions, the list state must be some ordering of these four elements; let the order be \(abcd\). Then \(X=\{a,c\}\) and \(Y=\{b,d\}\) satisfy the two hypotheses, but they are interleaved. Since this situation could arise with any algorithm, it cannot be prevented.
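The combinatorial core of this counterexample is easy to check mechanically; the small script below (our own illustration) verifies that for every possible ordering of the four concurrently inserted elements, the alternating partition is interleaved.

```python
from itertools import permutations

def interleaved(order, X, Y):
    """True iff neither all of X precedes all of Y nor vice versa."""
    pos = {e: i for i, e in enumerate(order)}
    x_before_y = all(pos[x] < pos[y] for x in X for y in Y)
    y_before_x = all(pos[y] < pos[x] for x in X for y in Y)
    return not (x_before_y or y_before_x)

# Four elements inserted concurrently at the same position: whatever order
# the algorithm chooses, the 1st/3rd vs 2nd/4th elements form two sets that
# satisfy the hypotheses of the definition, yet are interleaved.
assert all(interleaved(order, {order[0], order[2]}, {order[1], order[3]})
           for order in permutations("abcd"))
print("every ordering admits an interleaved partition")
```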
In Section 5.2 we give a new definition of non-interleaving that can be implemented.
## 3 The Fugue algorithm based on trees
We now introduce _Fugue_ (pronounced [fjuːɡ]), the first non-interleaving algorithm for replicated lists and collaborative text editing. It is named after a form of classical music in which several melodic lines are interwoven in a pleasing way. We evaluate Fugue implementations in Section 4, and we analyze the algorithm's non-interleaving properties in Section 5.
The internal structure of the Fugue algorithm in this section is a tree, so we also call it _Tree-Fugue_. In Section 6 we show that there is an alternative formulation of the algorithm based on lists, which we call _List-Fugue_. The two formulations have the same API and the same behavior, and differ only in their internal representation. We refer to both collectively as _Fugue_ when the internal structure is not important.
We describe Fugue as an operation-based CRDT, although it can easily be reformulated as a state-based CRDT. The external interface of Fugue is an ordered sequence of values, e.g., the characters in a text document. Since the same value may appear multiple times in a list, we use _element_ to refer to a unique instance of a value. Then the operations on the list are:
* insert\((i,x)\): Inserts a new element with value \(x\) at index \(i\), between the existing elements at indices \(i-1\) and \(i\). All later elements (index \(\geq i\)) shift to an incremented index.
* delete\((i)\): Deletes the element at index \(i\). All later elements (index \(\geq i+1\)) shift to a decremented index.
Note that we omit operations to mutate or move elements; these can be implemented by combining a replicated list with other CRDTs [13]. We also omit optimizations that compress consecutive runs of insertions or deletions; these can be added later without affecting the core algorithm. At a high level the algorithm works as follows:
**State.** The state of each replica is a tree in which each non-root node is labeled with a unique ID and a value. Each non-root node is marked as either a _left_ or _right_ child of its parent, but the tree is not necessarily binary: a parent can have multiple left children or right children, as illustrated in Figure 3. The tree does not need to be balanced.
Each non-root node in the tree corresponds to an element in the list (e.g., a character in the text document). The list order is given by the depth-first in-order traversal over this tree: first recursively traverse a node's left children, then visit the node's own value, then traverse its right children. _Same-side siblings_--nodes with the same parent and the same side--are traversed in lexicographic order of their IDs; the exact construction of IDs is not important.
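The state and the traversal can be sketched in a few lines of Python; the class layout below is our own and only mirrors the description above (and the pseudocode of Algorithm 1 later in this section), with `None` standing in for the root's missing value and for the tombstone marker \(\bot\).

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    id: Optional[Tuple[str, int]]          # (replicaID, counter); None for the root
    value: Optional[str]                   # None marks the root or a tombstone
    left: List["Node"] = field(default_factory=list)    # left children
    right: List["Node"] = field(default_factory=list)   # right children

def traverse(node: Node) -> List[str]:
    """Depth-first in-order traversal defining the list order: left children
    first, then the node's own value, then right children; same-side siblings
    are visited in lexicographic order of their IDs."""
    out: List[str] = []
    for child in sorted(node.left, key=lambda n: n.id):
        out += traverse(child)
    if node.value is not None:             # skip the root and tombstones
        out.append(node.value)
    for child in sorted(node.right, key=lambda n: n.id):
        out += traverse(child)
    return out

# Tiny example: root -> right child "a", which in turn has a right child "b".
root = Node(None, None)
a = Node(("r1", 0), "a"); root.right.append(a)
b = Node(("r1", 1), "b"); a.right.append(b)
print(traverse(root))      # ['a', 'b']
```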
**Insert.** To implement insert\((i,x)\), a replica creates a new node, labeled with a new unique ID and value \(x\), at an appropriate position in its local tree: if the element at index \(i-1\) has no right children, the new node becomes a right child of the element at index \(i-1\); otherwise, the new node is added as a left child of the next element. Figure 4 illustrates how
this choice is made, and Theorem 1 shows that this approach results in the desired behavior. The replica then uses a causal broadcast protocol to send the new node, its parent, and its side (left or right child) to other replicas, which add the node to their own local trees.
A replica will not create a new node where it already has a same-side sibling, i.e., it will try to keep the tree binary. However, multiple replicas may concurrently insert nodes at the same position, creating same-side siblings like a and b in Figure 3.
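Continuing the `Node` sketch above, the placement decision itself reduces to a two-line rule; this is a paraphrase of the insert generator in Algorithm 1, not a verbatim excerpt.

```python
def choose_parent(left_origin, right_origin):
    """Fugue's placement rule for a new element inserted between the adjacent
    elements leftOrigin and rightOrigin (see Figure 4): if leftOrigin has no
    right children, attach as its right child; otherwise rightOrigin has no
    left children (the two are adjacent in the traversal), so attach as a
    left child of rightOrigin."""
    if not left_origin.right:
        return left_origin, "R"      # case of Figure 4(a)
    return right_origin, "L"         # case of Figure 4(b)
```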
**Delete.** To implement \(\mathsf{delete}(i)\), a replica looks up the node at index \(i\) in the current list state, then causally broadcasts a message containing that element's ID. All replicas then replace that node's value with a special value \(\bot\), flagging it as deleted (i.e., making it a _tombstone_). Nodes with this value are skipped when computing the external list state (e.g., for the purpose of computing indices of list elements); however, their non-deleted descendants are still traversed normally, and a deleted node may still be used as a parent of a new node.
We cannot remove a deleted element's node entirely: it may be an ancestor to non-deleted nodes, including nodes inserted concurrently. In Section 4 we discuss ways of mitigating memory usage from tombstones.
**Pseudocode.** Algorithm 1 gives pseudocode for Tree-Fugue. Following the conventional notation for operation-based CRDTs [27], each operation is described in terms of a _generator_ and an _effector_. The generator is called to handle user input on the user's local replica, and it returns a message to be broadcast to the other replicas. Each replica, including the sender, applies the operation by passing this message to the corresponding effector; the sender does
Figure 4: Cases for inserting a new element between two existing, adjacent elements.
Figure 3: One possible Tree-Fugue structure for the list abcdef. Observe that \(\mathbf{a}\) and \(\mathbf{b}\) are both left children of \(\mathbf{c}\); they are sorted lexicographically by their elements’ IDs.
so atomically with the generator call. We assume that messages are received exactly once on each replica, and in causal order: if a replica received message \(m\) before generating message \(m^{\prime}\), then all replicas receive \(m\) before \(m^{\prime}\). This is a standard assumption for operation-based CRDTs [27], and is easily realized in practice using a causal broadcast protocol [2].
**Theorem 1.** Algorithm 1 satisfies the strong list specification of Attiya et al. [1].
**Proof.** For any execution, we must show that there is a total order < on all list elements (across all replicas), such that:
1. At any time, calling values() on a replica returns the list of values corresponding to all elements for which the replica received insert messages, minus the elements for which it received delete messages, in order <.
2. Suppose a replica's values() query yields values corresponding to elements \(\llbracket a_{0},a_{1},\ldots,a_{n-1}\rrbracket\) just before the insert generator \(\mathsf{insert}(i,x)\) is called. Then the inserted element \(e\) satisfies \(a_{0},a_{1},\ldots,a_{i-1}\) < \(e\) < \(a_{i},a_{i+1},\ldots,a_{n-1}\).
Let < be the total order given by the depth-first in-order traversal on the union of all replicas' local trees (with tombstone nodes overriding nodes with the originally inserted value). To show (a), note that by the causal order delivery assumption, a delete message is received after its corresponding insert message. Therefore, on any given replica, the set of tree nodes with \(\mathit{value}\neq\bot\) are those nodes that have been inserted but not deleted on that replica. These are exactly the nodes whose values are returned by values(), in the same order as < because the same traversal is used.
To show (b), note that _leftOrigin_ and _rightOrigin_ are consecutive elements in the tree traversal, and _leftOrigin_\(=a_{i-1}\), the non-tombstone node immediately preceding the insertion position. If _leftOrigin_ has no right children, inserting the new node as a right child of _leftOrigin_ makes the new node the immediate successor of _leftOrigin_ in the tree traversal. If _leftOrigin_ does have right children, _rightOrigin_ must be a descendant of _leftOrigin_, and _rightOrigin_ must have no left children (since otherwise _leftOrigin_ and _rightOrigin_ would not be consecutive), and therefore inserting the new node as a left child of _rightOrigin_ ensures the traversal visits the new child between _leftOrigin_ and _rightOrigin_. In either case, the newly inserted element appears between \(a_{i-1}\) and \(a_{i}\) in the tree traversal, as required.
## 4 Implementation and Evaluation
We implemented several variations of Fugue in TypeScript. Each is written as a custom CRDT for the Collabs library [34]; Collabs then provides causal order delivery and other utilities. All implementations are available as open-source software on GitHub.1 The variations are:
Footnote 1: [https://github.com/mweidner037/fugue](https://github.com/mweidner037/fugue)
* **Tree-Fugue**: An optimized implementation of Algorithm 1 in 1543 lines of code. It uses practical optimizations inspired by Yjs [9] and RGASplit [3]. In particular, it condenses sequentially-inserted tree nodes into a single "item" object instead of one object per node, and it uses Protocol Buffers to efficiently encode update messages and saved documents. Collabs v0.6.1 uses this implementation for its list CRDTs.
* **Tree-Fugue Simple**: A direct implementation of Algorithm 1 in 299 lines of code. It represents the state as a doubly-linked tree with one object per node, and it uses JSON encodings.
* **List-Fugue Simple**: A direct implementation of Algorithm 2 in 272 lines of code. It represents the state as a doubly-linked list with one object per element, and it uses JSON encodings.
```
types:
  RID, type of replica identifiers
  ID := (RID × ℕ) ∪ {null}, type of element IDs
  V, type of values
  ⊥, a marker for deleted nodes
  {L, R}, type of a child node's side (left or right)
  NODE := ID × (V ∪ {⊥}) × ID × {L, R}, tree node tuples (id, value, parent, side)

per-replica CRDT state:
  replicaID: the unique ID of this replica
  tree ⊆ NODE: a set of tree nodes, initially {root} where root = (null, ⊥, null, null)
  counter ∈ ℕ: a counter for generating element IDs, initially 0

query values(): V[]
  function traverse(nodeID): V[]
    values ← []
    node ← the unique node ∈ tree such that node.id = nodeID
    for child ∈ {(id, v, p, s) ∈ tree | p = nodeID ∧ s = L} ordered by id do
      values ← values + traverse(child.id)
    if node.value ≠ ⊥ then
      values ← values + [node.value]
    for child ∈ {(id, v, p, s) ∈ tree | p = nodeID ∧ s = R} ordered by id do
      values ← values + traverse(child.id)
    return values
  return traverse(null)

update insert
  generator(i, x)
    id ← (replicaID, counter); counter ← counter + 1
    leftOrigin ← node for (i-1)-th value in values(), or root if i = 0
    if ∄ id', v'. (id', v', leftOrigin.id, R) ∈ tree then
      node ← (id, x, leftOrigin.id, R)    // right child of leftOrigin; see Figure 4(a)
    else
      rightOrigin ← next node after leftOrigin in the tree traversal that includes tombstones
      node ← (id, x, rightOrigin.id, L)   // left child of rightOrigin; see Figure 4(b)
    return (insert, node)
  effector(insert, node)
    tree ← tree ∪ {node}

update delete
  generator(i)
    node ← node for i-th value in values()
    return (delete, node.id)
  effector(delete, id)
    node ← the unique node ∈ tree such that node.id = id
    node.value ← ⊥
```
**Algorithm 1** Pseudocode for the Tree-Fugue algorithm.
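The same logic is small enough to prototype directly. Below is a minimal single-replica Python sketch of Algorithm 1's state, traversal, and two insertion cases; the class and helper names (`Node`, `FugueTree`, `_next_with_tombstones`) are illustrative only, and the open-source TypeScript implementations listed above differ in data layout and optimizations.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Id = Optional[Tuple[str, int]]  # (replicaID, counter); None is reserved for the root

@dataclass
class Node:
    id: Id
    value: Optional[str]   # None marks the root or a tombstone
    parent: Id
    side: Optional[str]    # 'L' or 'R'; None for the root

class FugueTree:
    def __init__(self, replica_id: str):
        self.replica_id, self.counter = replica_id, 0
        self.root = Node(None, None, None, None)
        self.nodes = {None: self.root}   # id -> Node
        self.children = {}               # parent id -> [Node]

    def _side_children(self, parent_id: Id, side: str):
        kids = [n for n in self.children.get(parent_id, []) if n.side == side]
        return sorted(kids, key=lambda n: n.id)  # any consistent sibling order works

    def values(self, with_ids: bool = False):
        out = []
        def traverse(node: Node):
            for child in self._side_children(node.id, 'L'):
                traverse(child)
            if node.value is not None:
                out.append((node.id, node.value) if with_ids else node.value)
            for child in self._side_children(node.id, 'R'):
                traverse(child)
        traverse(self.root)
        return out

    def insert(self, i: int, x: str) -> None:
        new_id = (self.replica_id, self.counter)
        self.counter += 1
        visible = self.values(with_ids=True)
        left_origin = self.nodes[visible[i - 1][0]] if i > 0 else self.root
        if not self._side_children(left_origin.id, 'R'):
            node = Node(new_id, x, left_origin.id, 'R')     # case of Figure 3(a)
        else:
            right_origin = self._next_with_tombstones(left_origin)
            node = Node(new_id, x, right_origin.id, 'L')    # case of Figure 3(b)
        self.nodes[node.id] = node
        self.children.setdefault(node.parent, []).append(node)

    def _next_with_tombstones(self, node: Node) -> Node:
        # successor of `node` in the traversal that includes tombstones:
        # the leftmost descendant of its first right child
        nxt = self._side_children(node.id, 'R')[0]
        while self._side_children(nxt.id, 'L'):
            nxt = self._side_children(nxt.id, 'L')[0]
        return nxt

    def delete(self, i: int) -> None:
        self.nodes[self.values(with_ids=True)[i][0]].value = None  # tombstone
```

Replaying `insert(0, 'a')`, `insert(1, 'c')`, `insert(1, 'b')` on a fresh tree yields `values() == ['a', 'b', 'c']`, exercising both the right-child case (for 'a' and 'c') and the left-child case (for 'b').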
### Benchmarks
We evaluated our implementations using Jahns's crdt-benchmarks repository.2 All benchmarks were run on a Dell Latitude 7400 with a 1.90GHz Intel i7-8665U processor, 16 GiB of RAM, and Ubuntu 22.04.1. The JavaScript environment was Node.js v16.13.1. For each metric, we performed 5 warmup trials followed by 10 measured trials; tables show mean \(\pm\) standard deviation for the 10 measured trials.
Footnote 2: [https://github.com/monad/crdt-benchmarks/](https://github.com/monad/crdt-benchmarks/)
We also compared to existing implementations in the crdt-benchmarks repository:
**Automerge-Wasm**: (v0.1.6) is a Rust implementation of the Automerge library,3 compiled to WebAssembly for web-based collaborative apps. Its list CRDT is based on RGA [26].
**Yjs**: (v13.5.44) is a JavaScript library for web-based collaborative apps [10]. Its list CRDT is based on YATA [19] and it is known for its good performance [9].
**Y-Wasm**: (v0.12.2) is a Rust-to-WebAssembly variant of Yjs.
Footnote 3: [https://github.com/automerge/automerge](https://github.com/automerge/automerge)
Tables 2 and 3 show results from a benchmark that replays a real-world text-editing trace [12], in which every keystroke of the writing process for the LaTeX source of a 17-page paper [14] was captured. It consists of 182,315 single-character insert operations and 77,463 single-character delete operations, resulting in a final document size of 104,852 characters (not including tombstones). Each implementation processed the full trace sequentially on a single replica. Results for additional benchmarks, including microbenchmarks with concurrent operations, can be found in our code repository.
Table 2 considers the final saved document including CRDT metadata. In a typical collaborative app, this saved document would be saved (possibly on a server) at the end of each user session, and loaded at the start of the next session. Thus save size determines disk/network usage, while save/load time determines user-perceived save and startup latencies.
We see that Tree-Fugue is comparable to state-of-the-art Yjs on all three metrics, and the CRDT metadata is only 60% of the literal text's size. Tree-Fugue Simple and List-Fugue Simple are worse but still usable in practice. The large save sizes of these implementations are due to their inefficient JSON encodings; GZIP compresses the saved documents \(\approx 20\times\).
Table 3 shows performance metrics for live usage by a single user. Memory usage shows the increase in heap used4 from the start to the end of the trace and thus approximates each list CRDT's in-memory size. Network bytes/op shows the average size of the per-op
\begin{table}
\begin{tabular}{l r r r} \hline \hline Implementation & Save size (kB) & Save time (ms) & Load time (ms) \\ \hline Automerge-Wasm & \(142\pm 0\) & \(520\pm 40\) & \(3\),\(281\pm 171\) \\ Yjs & \(160\pm 0\) & \(20\pm 1\) & \(79\pm 8\) \\ Y-Wasm & \(160\pm 0\) & \(2\pm 0\) & \(13\pm 1\) \\ Tree-Fugue & \(168\pm 0\) & \(14\pm 1\) & \(11\pm 1\) \\ Tree-Fugue Simple & \(18\),\(726\pm 0\) & \(140\pm 6\) & \(362\pm 16\) \\ List-Fugue Simple & \(33\),\(751\pm 0\) & \(299\pm 1\) & \(389\pm 3\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Saved document metrics. The plain text (without CRDT metadata or tombstones) is 105 kB in size.
messages sent to remote collaborators. Ops/sec shows the average operation throughput; it reflects the time to process an op and encode a message for remote collaborators. For example, Tree-Fugue achieves 199,000 ops/sec, an average of 5 µs per operation.
We again see that Tree-Fugue is practical and comparable to Yjs. In particular, its memory usage is only a few MB--about 23 bytes per character, or 13 bytes per character including tombstones. This refutes a common criticism of CRDTs for collaborative text editing: namely, that they have too much per-character memory overhead [33]. The memory overhead is worse for Tree-Fugue Simple (619 bytes/char) and List-Fugue Simple (230 bytes/char), but the total is still well within modern memory limits. For all Fugue variants, the network usage and operation throughput are far from being bottlenecks, given that a typical user types at \(\approx 10\) chars/sec and a typical collaborative document has \(\ll 100\) simultaneous users.
Figures 4(a) and 4(b) show how save size and memory usage vary throughout the text editing trace. The size of the plain text (without CRDT metadata or tombstones) is given for comparison. Observe that both metrics track the plain text's size at a modest multiple, and save size even decreases when text is deleted, despite tombstones.
Finally, Table 4 shows selected metrics for the same text-editing trace but repeated 100
\begin{table}
\begin{tabular}{l r r r r} \hline Implementation & Save size (kB) & Save time (ms) & Load time (ms) & Memory usage (MB) \\ \hline Yjs & 15,989 \(\pm\) 0 & 374 \(\pm\) 26 & 2,461 \(\pm\) 664 & 288 \(\pm\) 17 \\ Tree-Fugue & 17,845 \(\pm\) 0 & 1,644 \(\pm\) 193 & 695 \(\pm\) 25 & 223 \(\pm\) 0 \\ \hline \end{tabular}
\end{table}
Table 4: Metrics for the real text trace repeated 100 times sequentially. The literal text has size 10,485 kB. We exclude implementations that use excessive time or memory.
Figure 5: Metrics as a function of progress through the real text trace.
\begin{table}
\begin{tabular}{l r r r} \hline Implementation & Memory usage (MB) & Network bytes/op & Ops/sec (1,000s) \\ \hline Automerge-Wasm & – & 224 \(\pm\) 0 & 65 \(\pm\) 2 \\ Yjs & 3.2 \(\pm\) 0.2 & 29 \(\pm\) 0 & 51 \(\pm\) 0 \\ Y-Wasm & – & 29 \(\pm\) 0 & 5 \(\pm\) 0 \\ Tree-Fugue & 2.4 \(\pm\) 0.0 & 39 \(\pm\) 0 & 199 \(\pm\) 5 \\ Tree-Fugue Simple & 64.9 \(\pm\) 0.0 & 145 \(\pm\) 0 & 19 \(\pm\) 1 \\ List-Fugue Simple & 24.1 \(\pm\) 0.0 & 178 \(\pm\) 0 & 4 \(\pm\) 0 \\ \hline \end{tabular}
\end{table}
Table 3: Metrics for replaying a character-by-character text editing trace with 260k operations.
times. The final document contains 10.5 million characters--far longer than any typical text document. Nonetheless, Tree-Fugue's performance remains tolerable: 18MB save size, less than 2 seconds to save or load, and 223MB memory usage. Additionally, average network usage and throughput (not shown) remain within a \(2\times\) factor of Table 3.
## 5 Non-Interleaving
We have proven that Tree-Fugue satisfies the strong list specification (Theorem 1). We now show that it also avoids the interleaving problem described in Section 2. Specifically, we prove that Tree-Fugue is _maximally non-interleaving_: it avoids interleaving of both forward and backward insertions, to the maximum extent possible. Intuitively, this holds because concurrent edits end up in different subtrees, which are traversed separately.
### Impossibility Result
We already saw that the definition of non-interleaving by Kleppmann et al. [16] is impossible to satisfy (Section 2.4). In this section we show that a second, seemingly reasonable definition of non-interleaving is also impossible to satisfy. This second definition is the conjunction of forward and backward non-interleaving, which we define below.
Throughout this section, fix a replicated list satisfying the strong list specification [1]. Let < be its (implicit global) total order on elements. In an execution using this replicated list, the _left origin_ of an element is the element directly preceding the insertion position at the time of insertion. That is, if the element was inserted by an \(\mathsf{insert}(i,x)\) call, then its left origin was at index \(i-1\) at the time of this call. If there was no such element (\(i=0\)), then its left origin is a special symbol _start_ such that _start_ < \(e\) for every element \(e\). This definition coincides with the _leftOrigin_ variable in Algorithm 1, except using _start_ instead of \(\mathsf{root}\).
**Definition 2** (Weak forward non-interleaving). Suppose distinct list elements \(a\) and \(b_{1},\ldots,b_{n}\) satisfy:
* \(a\) and \(b_{1}\) have the same left origin;
* \(b_{1},\ldots,b_{n}\) form a chain of left origins, i.e., the left origin of \(b_{j}\) is \(b_{j-1}\) for all \(j\geq 2\); and
* \(a\) was inserted concurrently to all \(b_{j}\).
Then an algorithm satisfies _weak forward non-interleaving_ if it guarantees that in the final list order, all \(b_{j}\) are on the same side of \(a\), i.e., either \(a\) < \(b_{1},\ldots,b_{n}\) or \(b_{1},\ldots,b_{n}\) < \(a\).
In the context of collaborative text editing, if one user types \(a\) while another user (or sequence of users) concurrently types \(b_{1}\ldots b_{n}\) at the same position, then this mandates that \(a\) is not interleaved with \(b_{1}\ldots b_{n}\). Instead, \(a\) must appear before or after the whole sequence.
Define the _left-origin tree_ to be the tree of list elements in which each element's parent is its left origin. Observe that the tree is rooted at _start_. When walking this tree, we always use the depth-first pre-order traversal: visit a node, then traverse its children in some order. This tree's definition is similar to causal trees [7] and timestamped insertion trees [1].
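The traversal view is easy to make concrete. The short Python sketch below (helper names are ours; the sibling `key` is the arbitrary consistent tie-break that the equivalence below leaves open) rebuilds the left-origin tree from recorded left origins and lists the elements in one pre-order traversal:

```python
def preorder(left_origin, key):
    """List elements in a depth-first pre-order traversal of the left-origin tree.

    left_origin: dict mapping each element to its left origin ('start' if none).
    key: any tie-break that orders siblings consistently across replicas.
    """
    children = {}
    for elt, parent in left_origin.items():
        children.setdefault(parent, []).append(elt)
    out = []
    def visit(node):
        if node != 'start':
            out.append(node)
        for child in sorted(children.get(node, []), key=key):
            visit(child)
    visit('start')
    return out

# Two users concurrently type "Hi" and "yo" at the start of an empty document.
origins = {'H': 'start', 'i': 'H', 'y': 'start', 'o': 'y'}
print(preorder(origins, key=ord))  # ['H', 'i', 'y', 'o']; the two runs are not interleaved
```

With the opposite tie-break the output is `['y', 'o', 'H', 'i']`; either way the two concurrent runs are never interleaved, which is the content of the equivalence below.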
**Proposition 3**.: _For any replicated list satisfying the strong list specification, the following statements are equivalent:_
1. The replicated list satisfies weak forward non-interleaving as per Definition 2.
2. The replicated list satisfies forward non-interleaving as defined by Kleppmann et al. [15, §4]: if two sequences \(a_{1}\)...\(a_{m}\) and \(b_{1}\)...\(b_{n}\) are inserted from left to right concurrently at the same position, then in the final list order, all \(b_{j}\) are on the same side of all \(a_{i}\).
3. _The list order_ < _is a depth-first pre-order traversal over the left-origin tree. In other words, the replicated list is semantically equivalent to a variant of RGA_ _[_26_]__, where the only allowed variation is the order of siblings in the tree._
We refer to any of these equivalent conditions as _forward non-interleaving_. The proof appears in Appendix B.
We analogously define _right origin_ as the element directly following the insertion position at the time of insertion, the special symbol _end_ if no following element exists, and _weak backward non-interleaving_ based on a chain of right origins. The right origin is a tombstone if the list element immediately following the left origin in the list is a tombstone, like the _rightOrigin_ variable in Algorithm 1; this choice simplifies the analysis but is not essential.
Weak backward non-interleaving is equivalent to the statement that the list order < _is a depth-first post-order traversal over the right-origin tree, which is defined analogously to the left-origin tree._
We refer to either condition as _backward non-interleaving_.
Kleppmann et al. [15] show that RGA [26] satisfies forward non-interleaving. It follows that a "reversed" version of RGA satisfies backward non-interleaving. It is then tempting to define general non-interleaving as the conjunction of forward and backward non-interleaving. Nonetheless, it turns out that this is impossible to achieve:
No replicated list algorithm satisfying the strong list specification satisfies both forward non-interleaving and backward non-interleaving.
We prove this in Appendix B, by giving an execution trace in which forward and backward non-interleaving mandate contradictory behaviors.
### Tree-Fugue is Maximally Non-Interleaving
We now present a new definition, _maximal non-interleaving_, which circumvents the previous section's impossibility results.
Since forward insertions are more common in typical text editing behavior, we begin by mandating forward non-interleaving. Proposition 3 then implies that < is a depth-first pre-order traversal over the left-origin tree. Our only remaining degree of freedom is in how we sort siblings within that tree, i.e., nodes with the same left origin. So, we mandate backward non-interleaving for those siblings, but not otherwise.
Formally, for a set of siblings \(S\) in the left-origin tree, define the rooted tree \(T_{R}|_{S}\) by:
* The nodes of \(T_{R}|_{S}\) are \(S\cup\{end\}\), and its root is \(end\).
* The parent of \(s\in S\) in \(T_{R}|_{S}\) is its right origin, unless that is not in \(S\), in which case \(s\)'s parent is \(end\).
In other words, \(T_{R}|_{S}\) is the graph restriction of the right-origin tree to \(S\), except with extra \(end\) parent relationships to ensure that it is a tree instead of a forest.
**Definition 6** (Maximal non-interleaving). A replicated list algorithm satisfies _maximal non-interleaving_ if:
* < _is a depth-first pre-order traversal over the left-origin tree; and_
* _for each set of siblings_ \(S\) _in the left-origin tree, the restriction of_ < _to_ \(S\) _is some depth-first post-order traversal of_ \(T_{R}|_{S}\) _(excluding end from the traversal)._
Observe that maximal non-interleaving almost completely determines <. The only degree of freedom is: for each set of siblings \(S\) in the left-origin tree, for each set of siblings \(S^{\prime}\) within \(T_{R}|_{S}\), choose an arbitrary total order on \(S^{\prime}\) that is consistent across replicas.
**Theorem 7**.: _Tree-Fugue (Algorithm 1) is maximally non-interleaving (Definition 6)._
The proof appears in Appendix C.
## 6 The Fugue Algorithm based on Lists
In this section, we describe List-Fugue, our second formulation of Fugue. Unlike Tree-Fugue, it sorts elements without help from an explicit tree structure. Instead, each replica's state is a list of elements, and a replica inserts a new element into its local list "at the correct location" using a for-loop. Nonetheless, we prove that List-Fugue is semantically equivalent to Tree-Fugue: they induce the same total order < on elements.
List-Fugue is based on Yjs's list CRDT implementation. In Appendix E, we perform the opposite list-to-tree conversion for Yjs. We use this conversion to give a new proof that Yjs's underlying algorithm is correct and to characterize its semantics and interleaving properties.
### Algorithm
As with Tree-Fugue, we describe List-Fugue as an operation-based CRDT, although it can easily be reformulated as a state-based CRDT. In particular, we assume that messages are received (effected) exactly once on each replica, and in causal order.
Algorithm 2 gives pseudocode. A replica's state is the list of elements it has received (including tombstones), with metadata: unique ID, left origin, and right origin. The insert generator broadcasts the new element together with its metadata. The insert effector computes _left_--the existing element immediately to the left of the new element--then inserts the new element after it, shifting all later elements to an incremented index. Deletions are handled using tombstones, like in Tree-Fugue (not shown).
The core of List-Fugue is lines 23-36, which compute _left_ for a newly received element. For any correct replicated list, we know that _elt_ must be inserted between _elt.leftOrigin_ and _elt.rightOrigin_; thus _left_ must be in the half-open interval [_elt.leftOrigin_, _elt.rightOrigin_). The insert effector starts with _left_ = _elt.leftOrigin_, then loops over the remainder of this range from left to right, occasionally updating _left_ to the current loop variable. Eventually, the loop ends and the last-set value of _left_ is used.
### Equivalence with Tree-Fugue
Algorithmically, List-Fugue bears little resemblance to Tree-Fugue. Although List-Fugue's for-loop is easier to implement than an explicit tree, it is not obvious what total order it enforces or that the order is even consistent across replicas.
In spite of this, we claim that List-Fugue is semantically equivalent to Tree-Fugue. This equivalence is why we consider both algorithms to be formulations of Fugue.
**Theorem 8**.: _List-Fugue is semantically equivalent to Tree-Fugue. That is, in any execution, every List-Fugue replica orders its list of elements according to Tree-Fugue's (implicit global) order < on elements._
The proof appears in Appendix D. Essentially, we relate the conditions in List-Fugue's for-loop to properties of the left- and right-origin tree traversals. We then show on a case-by-case basis that List-Fugue's decisions match those of Tree-Fugue.
**Corollary 9**.: _List-Fugue satisfies the strong list specification [1], and it is maximally non-interleaving._
```
per-replica CRDT state:
  replicaID ∈ RID: the unique ID of this replica
  start, end ∈ ID: special symbols used as in Section 5.1
  list: a list with elements (id ∈ ID, value ∈ V ∪ {⊥}, leftOrigin ∈ ID, rightOrigin ∈ ID), initially empty
  counter ∈ ℕ: a counter for generating element IDs, initially 0

query values(): V[]
  values ← []
  for elt ∈ list do
    if elt.value ≠ ⊥ then
      values ← values + [elt.value]
  return values

update insert
  generator(i, x)
    id ← (replicaID, counter); counter ← counter + 1
    leftOrigin ← node for (i−1)-th value in values(), or start if i = 0
    rightOrigin ← next node after leftOrigin in list including deleted nodes, or end if leftOrigin is the last element
    return (insert, (id, x, leftOrigin.id, rightOrigin.id))
  effector(insert, elt)
    function rightParent(p): ID
      if p.rightOrigin = end or p.rightOrigin.leftOrigin ≠ p.leftOrigin then return end
      else return p.rightOrigin
    left ← elt.leftOrigin
    scanning ← false
    for o in list from elt.leftOrigin to elt.rightOrigin, exclusive do
      if o.leftOrigin < elt.leftOrigin then break
```
## 7 Conclusion
Interleaving of concurrent insertions at the same position is an undesirable but largely ignored problem with many replicated list algorithms that are used for collaborative text editing. Indeed, all CRDT and OT algorithms that we surveyed exhibit interleaving anomalies. We also found that existing definitions of non-interleaving are impossible to satisfy.
In this paper, we proposed a new definition, maximal non-interleaving, and the Fugue list CRDT that satisfies it. We described two formulations of Fugue, Tree-Fugue and List-Fugue, and proved that they are semantically equivalent. Our optimized implementation of Tree-Fugue has performance comparable to the state-of-the-art Yjs library.
|
2309.12252 | Parallelizing non-linear sequential models over the sequence length | Sequential models, such as Recurrent Neural Networks and Neural Ordinary
Differential Equations, have long suffered from slow training due to their
inherent sequential nature. For many years this bottleneck has persisted, as
many thought sequential models could not be parallelized. We challenge this
long-held belief with our parallel algorithm that accelerates GPU evaluation of
sequential models by up to 3 orders of magnitude faster without compromising
output accuracy. The algorithm does not need any special structure in the
sequential models' architecture, making it applicable to a wide range of
architectures. Using our method, training sequential models can be more than 10
times faster than the common sequential method without any meaningful
difference in the training results. Leveraging this accelerated training, we
discovered the efficacy of the Gated Recurrent Unit in a long time series
classification problem with 17k time samples. By overcoming the training
bottleneck, our work serves as the first step to unlock the potential of
non-linear sequential models for long sequence problems. | Yi Heng Lim, Qi Zhu, Joshua Selfridge, Muhammad Firmansyah Kasim | 2023-09-21T16:52:34Z | http://arxiv.org/abs/2309.12252v3 | # Parallelizing non-linear sequential models over the sequence length
###### Abstract
Sequential models, such as Recurrent Neural Networks and Neural Ordinary Differential Equations, have long suffered from slow training due to their inherent sequential nature. For many years this bottleneck has persisted, as many thought sequential models could not be parallelized. We challenge this long-held belief with our parallel algorithm that accelerates GPU evaluation of sequential models by up to 3 orders of magnitude faster without compromising output accuracy. The algorithm does not need any special structure in the sequential models' architecture, making it applicable to a wide range of architectures. Using our method, training sequential models can be more than 10 times faster than the common sequential method without any meaningful difference in the training results. Leveraging this accelerated training, we discovered the efficacy of the Gated Recurrent Unit in a long time series classification problem with 17k time samples. By overcoming the training bottleneck, our work serves as the first step to unlock the potential of non-linear sequential models for long sequence problems.
## 1 Introduction
Parallelization is arguably a main workhorse in driving the rapid progress in deep learning over the past decade. Through specialized hardware accelerators such as GPU and TPU, matrix multiplications which are prevalent in deep learning can be evaluated swiftly, enabling rapid trial-and-error in research. Despite the widespread use of parallelization in deep learning, sequential models such as Recurrent Neural Networks (RNN) (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) and Neural Ordinary Differential Equations (NeuralODE) (Chen et al., 2018; Kidger et al., 2020) have not fully benefited from it due to their inherent need for serial evaluations over sequence lengths.
Serial evaluations have become the bottleneck in training sequential deep learning models. This bottleneck might have diverted research away from sequential models. For example, the attention mechanism (Bahdanau et al., 2014) and transformers (Vaswani et al., 2017) have dominated language modelling over RNNs in recent years, partly due to their ability to be trained in parallel (Hooker, 2021). Continuous Normalizing Flows (CNF) (Chen et al., 2018; Grathwohl et al., 2018), which used to utilize NeuralODE as their models, have been moving in a direction where training does not involve simulating the ODE (Lipman et al., 2022; Rozen et al., 2021; Ben-Hamu et al., 2022). More recent works (Orvieto et al., 2023;
Figure 1: Evaluating sequential models using (a) sequential method and (b) iterative method that is parallelizable.
Huang et al., 2022) have attempted to resurrect the sequential RNN, but they focus on linear recurrent layers that can be evaluated in parallel with prefix scan (Belloch, 1990; Martin and Cundy, 2018; Smith et al., 2022), leaving non-linear recurrent layers unparallelizable over their sequence length.
In this paper, we present an algorithm that can parallelize the evaluation and training of non-linear sequential models like RNN and NeuralODE without changing the output of the models beyond reasonable numerical precision. We do this by introducing a general framework to solve non-linear differential equations by restating them as fixed-point iteration problems with quadratic convergence, equivalent to Newton's method for root finding. The fixed-point iteration involves parallelizable operations and an inverse linear operator that can be evaluated in parallel even for sequential models like RNN and ODE (see Figure 1). As the convergence is quadratic, the number of fixed-point iterations can be quite small especially when the initial starting point is close to the converged solution. This is an appealing feature in training sequential models. As the model parameters are usually updated incrementally, the results from the previous training step can be used as the initial starting point. Most importantly, the proposed algorithm does not need a special structure of sequential models, removing the need to change the models' architecture to be able to take advantage of the parallelization.
## 2 Related works
There have been some attempts to parallelize the evaluation and training of sequential models, especially for RNNs. However, most (if not all) of them require special structures of recurrent layers. Lei et al. (2017) changed the matrix multiplication involving the states into an element-wise multiplication that enables parallelization. Luo et al. (2020) segment the sequence into several groups that are each evaluated by an RNN in parallel, and the interdependencies between groups are learned by a higher-level RNN. Huang et al. (2022), Orvieto et al. (2023), and Martin and Cundy (2018) remove the non-linear activation function in the recurrent layer to make it linear, so that it can be evaluated in parallel using a prefix scan (Belloch, 1990).
On the NeuralODE side, the parallelization effort mainly comes from the past works in parallelizing the ODE solver. One of the main idea is the multiple shooting method (Kiehl, 1994; Gander and Vandewalle, 2007; Chartier and Philippe, 1993; Bellen and Zennaro, 1989; Lions et al., 2001) where the time sequence is split into several segments then multiple ODE solvers are executed for each segment in parallel. The process is repeated iteratively until the solutions from all segments are matched. Although multiple ODE solvers can be executed in parallel, each ODE solver itself still requires sequential operation. The multiple shooting method has been adapted in training NeuralODE in Massaroli et al. (2021). The multi-grid idea is also applied to parallelize a special RNN unit and ResNet by converting them to an ODE (Gunther et al., 2020; Moon and Cyr, 2022). Another work, neural rough ODE (Morrill et al., 2021) has shown that it is possible to train NeuralODE for very long sequence by computing log-signature of the input signals over large time steps and enable the ODE solver to take a big step. The calculation of the log-signature can be done in parallel. However, solving the ODE still requires sequential operation even though the time step is larger than the initial time step.
Over the past few years, there has been increasing interest in using fixed-point finders (e.g., root-finding algorithms) in deep learning. DEQ (Bai et al., 2019, 2021) employs root-finding algorithms in evaluating neural networks with infinite identical layers. The concept of infinite layers paired with root-finders has been applied to a wide range of problems and has yielded impressive results (Bai et al., 2020; Liu et al., 2022; Huang et al., 2021). Nonetheless, these works do not address parallelizing sequential models.
Several recent works have also looked into parallelizing sequential models using fixed-point iterations or root finders. In the context of stochastic generative modelling, Shih et al. (2023) divides the time span into several regions and then employs Picard iteration that are parallelizable in solving the ODE. Wang and Ragni (2021) recast RNN evaluation as a fixed-point iteration and solve it by only doing a small number of iterations without checking the convergence. As a result, this approach might produce different results than evaluating RNN sequentially. Song et al. (2021) evaluates feedforward computations by solving non-linear equations using Jacobian or Gauss-Sidel iterations. Notably, the works mentioned above only use zeroth order fixed-point iterations that converge slower than our method and might not be able to converge at all if the mapping is not contracting.
## 3 DEER framework: non-linear differential equation as fixed point iteration
We will present the DEER framework: "non-linear Differential Equation as fixed point itERation" with quadratic convergence and show its relation to Newton's method. This framework can be applied to 1D differential equations, i.e., ODEs, as well as differential equations in higher dimensions, i.e. Partial Differential Equations (PDEs). The same framework can also be adopted to discrete difference equations to achieve the same convergence rate, which can be applied to RNN. With the framework, we can devise a parallel algorithm to evaluate RNN and ODE without significant changes to the results.
### DEER framework
Consider an output signal of interest \(\mathbf{y}(\mathbf{r})\in\mathbb{R}^{n}\) which consists of \(n\) signals on a \(d\)-dimensional space, where the coordinate is denoted as \(\mathbf{r}\in\mathbb{R}^{d}\). The output signal, \(\mathbf{y}(\mathbf{r})\), might depend on an input signal, \(\mathbf{x}(\mathbf{r})\), via some non-linear delayed differential equation (DE),
\[L[\mathbf{y}(\mathbf{r})]=\mathbf{f}\left(\mathbf{y}(\mathbf{r}-\mathbf{s}_{1} ),\mathbf{y}(\mathbf{r}-\mathbf{s}_{2}),...,\mathbf{y}(\mathbf{r}-\mathbf{s}_ {P}),\mathbf{x}(\mathbf{r}),\theta\right) \tag{1}\]
where \(L[\cdot]:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is the linear operator of the DE, and \(\mathbf{f}\) is the non-linear function that depends on values of \(\mathbf{y}\) at \(P\) different locations, external inputs \(\mathbf{x}\), and parameters \(\theta\). This form is general enough to capture various continuous differential equations such as ODE (with \(L[\cdot]=d/dt\) and \(\mathbf{r}=t\)), partial differential equations (PDEs), or even a discrete difference equations for RNN.
Now let's add \(\mathbf{G}_{p}(\mathbf{r})\mathbf{y}(\mathbf{r}-\mathbf{s}_{p})\) terms on the left and right hand side, where \(\mathbf{G}_{p}(\mathbf{r}):\mathbb{R}^{n\times n}\) is an \(n\)-by-\(n\) matrix that depends on the location \(\mathbf{r}\). The values of \(\mathbf{G}_{p}\) will be determined later. Equation 1 now becomes
\[L[\mathbf{y}(\mathbf{r})]+\sum_{p=1}^{P}\mathbf{G}_{p}(\mathbf{r}) \mathbf{y}(\mathbf{r}-\mathbf{s}_{p})=\mathbf{f}\left(\mathbf{y}(\mathbf{r}- \mathbf{s}_{1}),...,\mathbf{x}(\mathbf{r}),\theta\right)+\sum_{p=1}^{P} \mathbf{G}_{p}(\mathbf{r})\mathbf{y}(\mathbf{r}-\mathbf{s}_{p}) \tag{2}\] \[\mathbf{y}(\mathbf{r})=L_{\mathbf{G}}^{-1}\left[\mathbf{f}\left( \mathbf{y}(\mathbf{r}-\mathbf{s}_{1}),...,\mathbf{x}(\mathbf{r}),\theta \right)+\sum_{p=1}^{P}\mathbf{G}_{p}(\mathbf{r})\mathbf{y}(\mathbf{r}-\mathbf{ s}_{p})\right] \tag{3}\]
The left hand side of equation 2 is a linear equation with respect to \(\mathbf{y}\), which can be solved more easily than solving the non-linear equation in most cases. In equation 3, we introduce the notation \(L_{\mathbf{G}}^{-1}[\cdot]\) as a linear operator that solves the linear operator on the left-hand side of equation 2 with some given boundary conditions.
Equation 3 can be seen as a fixed-point iteration problem, i.e., given an initial guess \(\mathbf{y}^{(0)}(\mathbf{r})\), we can iteratively compute the right hand side of the equation until it converges. To analyze the convergence near the true solution, let's denote the value of \(\mathbf{y}\) at \(i\)-th iteration as \(\mathbf{y}^{(i)}(\mathbf{r})=\mathbf{y}^{*}(\mathbf{r})+\delta\mathbf{y}^{(i) }(\mathbf{r})\) with \(\mathbf{y}^{*}(\mathbf{r})\) as the true solution that satisfies equation 3. Putting \(\mathbf{y}^{(i)}\) into equation 3 to get \(\mathbf{y}^{(i+1)}\), and performing Taylor expansion up to the first order, we obtain
\[\delta\mathbf{y}^{(i+1)}(\mathbf{r})=L_{\mathbf{G}}^{-1}\left[\sum_{p=1}^{P} \left[\partial_{p}\mathbf{f}+\mathbf{G}_{p}(\mathbf{r})\right]\delta\mathbf{y }^{(i)}(\mathbf{r}-\mathbf{s}_{p})+O(\delta\mathbf{y}^{2})\right] \tag{4}\]
where \(\partial_{p}\mathbf{f}\) is the Jacobian matrix of \(\mathbf{f}\) with respect to its \(p\)-th parameters. From the equation above, the first order term of \(\delta\mathbf{y}^{(i+1)}\) can be made 0 by choosing
\[\mathbf{G}_{p}(\mathbf{r})=-\partial_{p}\mathbf{f}\left(\mathbf{y}(\mathbf{r}- \mathbf{s}_{1}),...,\mathbf{y}(\mathbf{r}-\mathbf{s}_{P}),\mathbf{x}(\mathbf{r }),\theta\right). \tag{5}\]
It shows that the fastest convergence around the solution can be achieved by choosing the matrix \(\mathbf{G}_{p}\) according to the equation above. It can also be shown that the iteration in equation 3 and equation 5 is equivalent to the realization of Newton's method in Banach space, therefore offering a quadratic convergence. The details of the relationship can be found in Appendix A.1 and the proof of quadratic convergence can be seen in Appendix A.3.
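As a concrete one-dimensional illustration, consider a scalar ODE \(dy/dt=f(y(t))\), so \(P=1\) and \(\mathbf{s}_{1}=0\). With \(G(t)=-f^{\prime}(y^{(i)}(t))\), one iteration of equation 3 solves the linear ODE

\[\frac{dy^{(i+1)}}{dt}(t)=f\left(y^{(i)}(t)\right)+f^{\prime}\left(y^{(i)}(t)\right)\left(y^{(i+1)}(t)-y^{(i)}(t)\right),\]

i.e., it integrates the first-order Taylor expansion of \(f\) around the current iterate, which is precisely the linearization performed by a Newton step.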
Iterative process in equation 3 involves evaluating the function \(\mathbf{f}\), its Jacobian, and matrix multiplications that can be parallelized in modern accelerators (such as GPU and TPU). If solving the linear
equation can be done in parallel, then the whole iteration process can take advantage of parallel computing. Another advantage of solving non-linear differential equations as a fixed point iteration problem in the deep learning context is that the solution from the previous training step can be used as the initial guess for the next training step if it fits in the memory. Better initial guess can lower the number of iterations required to find the solution of the non-linear DE.
#### 3.1.1 Derivatives
To utilize the framework above in deep learning context, we need to know how to calculate the forward and backward derivatives. For the forward derivative, we would like to know how much \(\mathbf{y}\) is perturbed (denoted as \(\delta\mathbf{y}\)) if the parameter \(\theta\) is slightly perturbed by \(\delta\theta\). By applying the Taylor series expansion to the first order to equation 1 and using \(\mathbf{G}_{p}(\mathbf{r})\) as in equation 5, we obtain
\[\delta\mathbf{y}= L_{\mathbf{G}}^{-1}\left[\frac{\partial\mathbf{f}}{\partial \theta}(\mathbf{y}(\mathbf{r}-\mathbf{s}_{1}),...,\mathbf{y}(\mathbf{r}- \mathbf{s}_{P}),\mathbf{x}(\mathbf{r}),\theta)\delta\theta\right]. \tag{6}\]
The equation above means that to compute \(\delta\mathbf{y}\), one can compute the forward derivative of the outputs of \(\mathbf{f}\) from \(\delta\theta\) (denoted as \(\delta\mathbf{f}\)), then execute the inverse linear operator \(L_{\mathbf{G}}^{-1}\) on \(\delta\mathbf{f}\).
For the backward derivative, the objective is to obtain the gradient of a loss function \(\mathcal{L}\) with respect to the parameter, \(\partial\mathcal{L}/\partial\theta\), given the gradient of the loss function to the output signal, \(\partial\mathcal{L}/\partial\mathbf{y}\). To obtain the backward derivative, we write
\[\frac{\partial\mathcal{L}}{\partial\theta}= \frac{\partial\mathcal{L}}{\partial\mathbf{y}}\frac{\partial \mathbf{y}}{\partial\theta}=\left(\left(\frac{\partial\mathcal{L}}{\partial \mathbf{y}}L_{\mathbf{G}}^{-1}\right)\frac{\partial\mathbf{f}}{\partial\theta} (\mathbf{y}(\mathbf{r}-\mathbf{s}_{1}),...,\mathbf{y}(\mathbf{r}-\mathbf{s}_{ P}),\mathbf{x}(\mathbf{r}),\theta)\right). \tag{7}\]
Note that the order of evaluation in computing the backward gradient should follow the brackets in the equation above. In contrast to equations 3 and 6, the linear operator \(L_{\mathbf{G}}^{-1}\) is operated to the left in the innermost bracket in equation 7. This is known as the dual operator of \(L_{\mathbf{G}}^{-1}\) and in practice can be evaluated by applying the vector-Jacobian product of the linear operator \(L_{\mathbf{G}}^{-1}[\cdot]\). The results of the dual operator in the innermost bracket are then the followed by vector-Jacobian product of the function \(\mathbf{f}\). The forward and backward derivatives for \(\mathbf{x}\) are similar to the expressions for \(\theta\): just substitute the differential with respect to \(\theta\) into the differential with respect to \(\mathbf{x}\).
The forward and backward gradient computations involve only one operation of \(L_{\mathbf{G}}^{-1}\), in contrast to the forward evaluation that requires multiple iterations of \(L_{\mathbf{G}}^{-1}\) evaluations. This allows the gradient computations to be evaluated more quickly than the forward evaluation. Moreover, the trade-off between memory and speed can be made in the gradient computations. If one wants to gain the speed, the matrix \(\mathbf{G}\) from the forward evaluation can be saved to be used in the gradient computation. Otherwise, the matrix \(\mathbf{G}\) can be recomputed in the gradient computation to save memory.
The gradient equations above apply even if the forward evaluation does not follow the algorithm in the previous subsection. For example, if equation 3 does not converge and forward evaluation is done differently (e.g., sequentially for RNN), the backward gradient can still be computed in parallel according to equation 7. This might still provide some acceleration during the training.
### Practical implementation
Equation 1 has a very general form that can capture ordinary differential equations (ODEs), most partial differential equations (PDEs), and even discrete difference equations. To apply DEER framework in equation 3 to a problem, there are several steps that need to be followed. The first step is to recast the problem into equation 1 to define the variable \(\mathbf{y}\), the linear operator \(L[\cdot]\), and the non-linear function \(\mathbf{f}(\cdot)\). The second step is to implement what we call the shifter function. The shifter function takes the whole discretized values of \(\mathbf{y}(\mathbf{r})\) and returns a list of the values of \(\mathbf{y}\) at the shifted positions, i.e. \(\mathbf{y}(\mathbf{r}-\mathbf{s}_{p})\) for \(p=\{1,...,P\}\). The shifter function might need some additional information such as the initial or boundary conditions. The output of the shifter function will be the input to the non-linear function. The next step, and usually the hardest step, is to implement the inverse operator \(L_{\mathbf{G}}^{-1}[\mathbf{h}]\) given the list of matrices \(\mathbf{G}_{p}\) and the vector values \(\mathbf{h}\) discretized at some points. The inverse operator \(L_{\mathbf{G}}^{-1}[\mathbf{h}]\) might also need the information on the boundary conditions.
Once everything is defined, iterating equation 3 can be implemented following algorithm 1 or following the code in appendix B.1. The Jacobian matrix in equation 5 can be calculated using automatic differentiation packages (Paszke et al., 2017; Frostig et al., 2018).
DEER framework can be applied to any differential or difference equations as long as the requirements in the algorithm 1 can be provided. This includes ordinary differential equations, discrete sequential models, and even partial differential equations (see Appendix A.4). To keep focused, we only present the application of DEER in parallelizing ODE and discrete sequential models.
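A minimal JAX sketch of the iteration in equation 3 for the common case \(P=1\) is given below; `f`, `shifter`, and `solve_lin` (the inverse operator \(L_{\mathbf{G}}^{-1}\)) are problem-specific placeholders, the names are illustrative rather than those of the released implementation, and the convergence test is simplified.

```python
import jax
import jax.numpy as jnp

def deer_solve(f, shifter, solve_lin, y_guess, x, theta, max_iter=100, tol=1e-7):
    """Fixed-point iteration of equation 3 for P = 1.

    f:         per-sample non-linear function f(y_shifted, x, theta) -> (n,)
    shifter:   maps the whole signal y of shape (L, n) to y(r - s_1), also (L, n)
    solve_lin: the inverse linear operator; solve_lin(G, h) solves
               L[y] + G y = h in parallel and returns y of shape (L, n)
    """
    f_batched = jax.vmap(f, in_axes=(0, 0, None))
    jac_batched = jax.vmap(jax.jacfwd(f, argnums=0), in_axes=(0, 0, None))
    y = y_guess
    for _ in range(max_iter):
        y_shift = shifter(y)
        G = -jac_batched(y_shift, x, theta)                    # equation 5, shape (L, n, n)
        h = f_batched(y_shift, x, theta) + jnp.einsum('lij,lj->li', G, y_shift)
        y_new = solve_lin(G, h)                                # equation 3
        if jnp.max(jnp.abs(y_new - y)) < tol:                  # simple convergence test
            return y_new
        y = y_new
    return y
```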
### Parallelizing ordinary differential equations (ODE)
An ODE typically takes the form of \(d\mathbf{y}/dt=\mathbf{f}(\mathbf{y}(t),\mathbf{x}(t),\theta)\) where the initial condition \(\mathbf{y}(0)\) is given. The ODE form above can be represented by equation 1 with \(\mathbf{r}=t\), \(L=d/dt\), \(P=1\), and \(\mathbf{s}_{1}=0\). This means that the operator \(L_{\mathbf{G}}^{-1}\) in ODE is equivalent to solving the linear equation below given the initial condition \(\mathbf{y}(0)\),
\[\frac{d\mathbf{y}}{dt}(t)+\mathbf{G}(t)\mathbf{y}(t)=\mathbf{z}(t) \iff\mathbf{y}(t)=L_{\mathbf{G}}^{-1}[\mathbf{z}(t)]. \tag{8}\]
Assuming that \(\mathbf{G}(t)\) and \(\mathbf{z}(t)\) are constants between \(t=t_{i}\) and \(t=t_{i+1}\) as \(\mathbf{G}_{i}\) and \(\mathbf{z}_{i}\) respectively, we can write the relations between \(\mathbf{y}_{i+1}=\mathbf{y}(t_{i+1})\) and \(\mathbf{y}_{i}=\mathbf{y}(t_{i})\) as
\[\mathbf{y}_{i+1}=\mathbf{\bar{G}}_{i}\mathbf{y}_{i}+\bar{\mathbf{z}}_{i} \tag{9}\]
with
\[\mathbf{\bar{G}}_{i}=\exp{(-\mathbf{G}_{i}\Delta_{i})}\quad\mathrm{and}\quad \bar{\mathbf{z}}_{i}=\mathbf{G}_{i}^{-1}(\mathbf{I}-\mathbf{\bar{G}}_{i}) \mathbf{z}_{i}, \tag{10}\]
where \(\Delta_{i}=t_{i+1}-t_{i}\), \(\mathbf{I}\) is the identity matrix, and \(\exp(\cdot)\) is the matrix exponential. Equation 9 can be evaluated using the parallel prefix scan algorithm as described in Blelloch (1990) and Smith et al. (2022). Specifically, first we define a pair of variables \(c_{i+1}=(\mathbf{\bar{G}}_{i}|\mathbf{\bar{z}}_{i})\) for every discrete time point \(t_{i}\), the initial values \(c_{0}=(\mathbf{I}|\mathbf{y}_{0})\), and an associative operator,
\[c_{i+1}\bullet c_{j+1}=(\mathbf{\bar{G}}_{j}\mathbf{\bar{G}}_{i}|\mathbf{\bar {G}}_{j}\mathbf{\bar{z}}_{i}+\bar{\mathbf{z}}_{j}). \tag{11}\]
Given the initial value of \(c_{0}\) and the associative operator above, we can run the associative scan in parallel to get the cumulative value of the operator above. The solution \(\mathbf{y}_{i}\) can be taken from the second element of the results of the parallel scan operator.
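As a minimal JAX sketch (illustrative code, not the released implementation), the parallel solve of equation 9 can absorb the initial condition into the first element and scan the associative operator of equation 11 with `jax.lax.associative_scan`:

```python
import jax
import jax.numpy as jnp

@jax.vmap  # the binary operator must act elementwise along the scan axis
def assoc_op(c1, c2):
    """Equation 11: compose the affine maps y -> G1 y + z1 (first) and y -> G2 y + z2."""
    (G1, z1), (G2, z2) = c1, c2
    return G2 @ G1, G2 @ z1 + z2

def solve_linear_recurrence(Gbar, zbar, y0):
    """Solve y_{i+1} = Gbar_i y_i + zbar_i (equation 9) for i = 0..L-1 in parallel.
    Gbar: (L, n, n), zbar: (L, n), y0: (n,); returns [y_1, ..., y_L] of shape (L, n)."""
    zbar = zbar.at[0].set(Gbar[0] @ y0 + zbar[0])      # fold in the initial condition
    _, ys = jax.lax.associative_scan(assoc_op, (Gbar, zbar))
    return ys

# sanity check against the sequential recurrence
key1, key2 = jax.random.split(jax.random.PRNGKey(0))
G = 0.1 * jax.random.normal(key1, (6, 3, 3))
z = jax.random.normal(key2, (6, 3))
y, ys_seq = jnp.zeros(3), []
for i in range(6):
    y = G[i] @ y + z[i]
    ys_seq.append(y)
assert jnp.allclose(solve_linear_recurrence(G, z, jnp.zeros(3)), jnp.stack(ys_seq), atol=1e-5)
```

The same routine is reused for the RNN case below.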
As an implementation note, we can reduce the discretization error by taking \(\mathbf{G}_{i}\) and \(\mathbf{z}_{i}\) as the mid-point values, i.e. \(\mathbf{G}_{i}=\frac{1}{2}[\mathbf{G}(t_{i})+\mathbf{G}(t_{i+1})]\) and \(\mathbf{z}_{i}=\frac{1}{2}[\mathbf{z}(t_{i})+\mathbf{z}(t_{i+1})]\). By taking the mid-point values, we obtain a third-order local truncation error \(O(\Delta_{i}^{3})\) instead of the second-order error incurred by using either the left or the right value, with only a small amount of additional computational expense. The details can be seen in Appendix A.5.
The materialization of the DEER framework for ODEs can be seen as a direct multiple shooting method (Chartier and Philippe, 1993; Massaroli et al., 2021) that splits the time horizon into multiple regions. However, in our case each region is infinitesimally small, so the Newton step follows equation 9 and becomes parallelizable. The details of the relationship between our method and the direct multiple shooting method can be seen in Appendix A.2.
### Parallelizing RNN
Recurrent Neural Network (RNN) can be seen as a discretization of ODE. Having the input signal at index \(i\) as \(\mathbf{x}_{i}\) and the previous states \(\mathbf{y}_{i-1}\), the current states can be written as \(\mathbf{y}_{i}=\mathbf{f}(\mathbf{y}_{i-1},\mathbf{x}_{i},\theta)\). This form can capture the common RNN units, such as LSTM (Hochreiter and Schmidhuber, 1997)
and GRU (Cho et al., 2014). Also, the form can be written as equation 1 with \(\mathbf{r}=i\), \(L[\mathbf{y}]=\mathbf{y}\), \(P=1\) and \(\mathbf{s}_{1}=1\). This means that the inverse linear operator can be calculated by solving the equation below, given the initial states \(\mathbf{y}_{0}\),
\[\mathbf{y}_{i}+\mathbf{G}_{i}\mathbf{y}_{i-1}=\mathbf{z}_{i}\iff\mathbf{y}_{1 \dots T}=L_{\mathbf{G}}^{-1}[\mathbf{z}_{1\dots T}]. \tag{12}\]
Solving the equation above is equivalent to solving equation 9 from the previous subsection. It means that it can be parallelized by using the parallel prefix scan with the defined associative operator in equation 11.
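Concretely, one such iteration for a recurrent cell \(\mathbf{y}_{i}=\mathbf{f}(\mathbf{y}_{i-1},\mathbf{x}_{i},\theta)\) can be sketched as follows, reusing `solve_linear_recurrence` from the previous sketch; `cell` stands for any step function such as a GRU cell, and the code is illustrative rather than the released implementation.

```python
import jax
import jax.numpy as jnp

def deer_rnn_step(cell, theta, x, y_guess, y0):
    """One DEER iteration for the recurrence y_i = cell(y_{i-1}, x_i, theta), i = 1..T.

    x: (T, m) inputs; y_guess: (T, n) current guess for y_1..y_T; y0: (n,) initial state.
    Equation 12 is equation 9 with Gbar_i = d cell / d y_{i-1} (evaluated at the guess)
    and zbar_i = cell(...) - Gbar_i y_{i-1}.
    """
    y_prev = jnp.concatenate([y0[None], y_guess[:-1]], axis=0)          # guesses for y_0..y_{T-1}
    f_val = jax.vmap(cell, in_axes=(0, 0, None))(y_prev, x, theta)      # (T, n)
    J = jax.vmap(jax.jacfwd(cell, argnums=0), in_axes=(0, 0, None))(y_prev, x, theta)  # (T, n, n)
    zbar = f_val - jnp.einsum('tij,tj->ti', J, y_prev)
    # solve_linear_recurrence is the parallel prefix-scan solver sketched above
    return solve_linear_recurrence(J, zbar, y0)
```

Repeating this update until `y_guess` stops changing (often only a handful of iterations when warm-started from the previous training step) reproduces the sequential evaluation up to numerical precision, assuming the iteration converges.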
### Complexity and limitations
The algorithms for solving non-linear ODEs and RNNs in parallel simply evaluate equation 3 repeatedly from an initial guess. However, the equation requires the explicit Jacobian matrix for each sequence element and each shift. If \(\mathbf{y}\) has \(n\) elements, \(L\) sampling points, and \(P\) shifted arguments, storing the Jacobian matrices requires \(O(n^{2}LP)\) memory. Also, the prefix scan for solving the linear differential equation requires multiplication of the matrices \(\mathbf{G}_{p}\) at different sampling points, which introduces \(O(n^{3}LP)\) time complexity. As the algorithm has \(O(n^{2})\) memory complexity and \(O(n^{3})\) time complexity in the number of dimensions, it offers significant acceleration for small \(n\).
As the proposed method is a realization of Newton's method in Banach space, it has the same limitations as Newton's method. If the starting point is sufficiently far from the solution, the iteration might not converge. This problem might be addressed by using modified Newton's method to achieve global convergence such as Nesterov and Polyak (2006) and Doikov and Nesterov (2023). However, we leave the use of globally-converged Newton's method as the future work.
Parallelizing ODE and RNN both require parallelizing the evaluation of equations 9 and 12. Those equations can be parallelized using parallel prefix scan of a custom associative operation in equation 11. This is relatively straightforward using JAX's jax.lax.associative_scan(Frostig et al., 2018) or TensorFlow's tfp.math.scan_associative(Abadi, 2016). However, as of the time of writing this paper, this operation cannot be implemented easily using PyTorch (Paszke et al., 2017), another popular deep learning framework. As parallelizing ODE and RNN cannot be done in PyTorch (for now), we used JAX for our experiments in this paper as well as flax and equinox (Kidger and Garcia, 2021) for the neural networks.
## 4 Experiments
### Performance benchmarking
The first test compares the speed of evaluating an RNN using the presented DEER method against the common sequential method. Specifically, we use an untrained Gated Recurrent Unit (GRU) cell from flax.linen (a JAX neural network framework) with 32-bit floating-point random inputs, a batch size of 16, various numbers of dimensions (i.e., \(n\)), and various sequence lengths. The initial guess for DEER is all zeros for all benchmark runs. The speed up on a V100 GPU obtained by the method presented in this paper is shown in Figure 2.
The figure shows that the largest speed up is achieved on long sequence lengths with small number of dimensions. With 1M sequence length, 16 batch size, and \(n=1\), the sequential evaluation required 8.7 s while DEER method only took 15 ms, which translates to a speed up of over 500. However, the speed up decreases as the number of dimensions increases, where the speed up is only about 25% with 64 dimensions (64 hidden elements in GRU). This is due to the explicit computation of Jacobian matrix and the matrix multiplication that scales to \(O(n^{3})\).
The speed up for forward + gradient calculations is even greater than the speed up for forward evaluations only. With the same set up, the speed up for 1M sequence length with 1 dimension could be more than 1000 times faster. This is because the backward gradient calculations require only one evaluation of \(L_{\mathbf{G}}^{-1}\) in equation 7 as opposed to multiple evaluations in forward calculations.
Table 2 in Appendix C.1 presents the tables of speed up for Figure 2 as well as speed up when using smaller batch sizes. Generally, the speed up increases with smaller batch size, where speed up of above 2600 can be achieved with batch size 2. This means that with more devices, greater speed up can be achieved by having a distributed data parallel, effectively reducing the batch size per device.
Figure 3 shows the comparison between the output of GRU evaluated using sequential vs DEER method. The figure is generated using randomly initialized and untrained 1 layer of GRU using the same seed for both methods. There are 32 hidden elements in the GRU layer. The input to GRU is a Gaussian-random tensor with 10k sequence length and 32 dimensions. All calculations were done in single precision floating point numbers. From figure 3, we see that the output of GRU evaluated using DEER method is almost identical to the output obtained using the sequential method. The small error in figure 3(b) is due to numerical precision limits of the single precision floating point.
Figure 3: (a) The comparison between the outputs of GRU evaluated with sequential method vs DEER method. The line for sequential method output is almost not visible because overlaid by the output of DEER method. Only the last 200 indices are shown for clarity. (b) The difference between the outputs of sequential and DEER method for the whole 10k sample length.
Figure 2: The speed up of GRU calculated using DEER method (this paper) vs commonly-used sequential method on a V100 GPU for (top) forward and (bottom) forward + gradient calculations. The missing data for large number of dimensions and sequence lengths is due to insufficient memory in the DEER method. The bar height represents the mean speed up over 5 different random seeds.
### Learning physical systems with NeuralODE
We test the capability of DEER method in training a deep neural network model using a simple case from physics. Given the positions and velocities as a function of time of a two-body system interacting with gravitational force, we trained Hamiltonian Neural Networks (HNN) (Greydanus et al., 2019) to match the data. Note that this set up is different from the original HNN paper where they train the network to match the velocity + acceleration, given position + velocity at every point in time. In this case, only positions and velocities as a function of time are given and the network needs to solve the ODE to make a prediction. We use 10,000 time points sampled uniformly, whereas other works in this area typically only use less than 1,000 sampling points for the training (Matsubara & Yaguchi, 2022; Chen et al., 2019). There are 8 states in this case: \(x,y,v_{x},v_{y}\) for each body. In order to speed up the training, we start from 20 time points at the beginning of the training and increase the number of time points by 20 every 50 training steps until it reaches 10k time points. We performed the training using ADAM optimizer (Kingma & Ba, 2014). The details of the setup can be seen in Appendix B.2.
The losses during the training using DEER method vs using RK45 (Atkinson, 1991) are shown in figure 4(a, b). We use the RK45 algorithm from JAX's experimental feature. From the figure, it can be seen that the training can be 11 times faster when using DEER method presented in this paper than using the ordinary ODE solver without significant difference in the validation losses between the two methods. To achieve 55k training steps, DEER method only spent 1 day + 6 hours, while training with RK45 required about 2 weeks of training. The small difference of the validation losses between the two methods potentially comes from different methods in solving the ODE as well as from the numerical precision issue as shown in Figure 3(b).
### Time-series classification with recurrent neural network (RNN)
Faster training enabled by DEER method allows us to use classical RNN architectures, such as Gated Recurrent Unit (GRU, Cho et al. (2014)), for problems with long time series. In this subsection, we trained a neural network that consists of GRUs to classify EigenWorms dataset (Brown et al., 2013) from UEA (Bagnall et al., 2018). Each entry in the dataset consists of 17,984 time samples. With the usual sequential method, the training would take a long time to complete due to the extremely
Figure 4: (Top) The validation losses of HNN with NeuralODE training using DEER method (shown in blue) vs RK45 method (in orange) as a function of (a) training hours and (b) training steps. (Bottom) The validation accuracy of RNN training using DEER method (blue) vs the sequential method (orange) as a function of (c) training hours and (d) training steps.
long sequence length, whereas the DEER method would save a significant amount of training time. The neural network consists of 5 layers of GRUs alternated with multi-layer perceptrons (MLPs) and layer norm (Ba et al., 2016). The network includes skip connections to improve the accuracy. The channel width of each GRU is 32. The details of the architecture and the training procedure can be found in Appendix B.3.
Figure 4(c)-(d) shows the comparison of validation accuracy during the training of the GRU network using DEER method vs the common sequential method. From the figure, we can see that the validation accuracy plot when using DEER is similar to the validation plot obtained with the sequential method. The difference in the validation accuracy might be due to numerical precision issue as shown in Figure 3(b) that is accumulated at long sequence. However, the training using DEER is up to 26 times faster than the training using the sequential method. What would have taken more than 2 days to train using the common sequential method, only takes 2 hours using DEER method.
The classification results of the test dataset from EigenWorms for various methods are shown in Table 1. As we can see from the table, the network with GRUs can be as competitive as more modern methods (Morrill et al., 2021; Rusch and Mishra, 2021; Rusch et al., 2021) for this long time series dataset. The long training time of GRUs using the sequential method could be the main factor hindering the trial-and-error process of exploring GRU architectures for this dataset with long sequences. However, our DEER method enables much faster training of GRUs, facilitating iterative experimentation to identify optimal GRU architectures for long time series datasets.
## 5 Conclusion
We introduced a method to parallelize the evaluation and training of inherently sequential models, such as ODE and RNN. Evaluations using our approach can be up to 3 orders of magnitude faster than traditional sequential methods when dealing with long sequences. When training sequential models with reasonable settings, our method can achieve over a 10-fold speed increase without significantly altering the results. However, there are drawbacks: our method exhibits cubic time complexity with respect to the number of dimensions and does not ensure global convergence. Despite its cubic complexity, our approach still facilitates acceleration on GPUs for a 64-dimensional variable with a batch size of 16. While our method successfully completed training for both a NeuralODE and an RNN in the provided cases, we acknowledge potential convergence challenges in different scenarios due to the lack of guaranteed global convergence. By having a technique to accelerate the training and evaluation of sequential models, we anticipate a hastened pace of research in this domain, potentially catalyzing the emergence of novel and interesting sequential models in the future.
## Reproducibility statement
The code required for reproducing the algorithm and results in this paper can be found in [https://github.com/machine-discovery/deer/](https://github.com/machine-discovery/deer/).
\begin{table}
\begin{tabular}{l|c} \hline \hline
**Model** & **Accuracy (\%)** \\ \hline ODE-RNN (folded), step: 128 & \(47.9\pm 5.3\) \\ NCDE, step: 4 & \(66.7\pm 11.8\) \\ NRDE (depth 3), step: 32 & \(75.2\pm 3.0\) \\ NRDE (depth 2), step: 128 & \(76.1\pm 5.9\) \\ NRDE (depth 2), step: 4 & \(\mathbf{83.8\pm 3.0}\) \\ UnICORNN (2 layers) & \(\mathbf{90.3\pm 3.0}\) \\ LEM & \(\mathbf{92.3\pm 1.8}\) \\ \hline GRU (from this paper) & \(\mathbf{82.1\pm 5.5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The classification accuracy of EigenWorms dataset for various methods, including folded ODE-RNN (Rubanova et al., 2019), Neural CDE (Kidger et al., 2020), Neural RDE (Morrill et al., 2021), UnICORNN (Rusch and Mishra, 2021), LEM (Rusch et al., 2021), and GRU. The mean and standard deviations of the accuracy were obtained from 3 times repetition with different seeds. The numbers of non-GRU methods were obtained from Morrill et al. (2021) and Rusch et al. (2021). |
2301.00074 | Matrix Multiplication: Verifying Strong Uniquely Solvable Puzzles | Cohn and Umans proposed a framework for developing fast matrix multiplication
algorithms based on the embedding computation in certain groups algebras. In
subsequent work with Kleinberg and Szegedy, they connected this to the search
for combinatorial objects called strong uniquely solvable puzzles (strong
USPs). We begin a systematic computer-aided search for these objects. We
develop and implement constraint-based algorithms build on reductions to
$\mathrm{SAT}$ and $\mathrm{IP}$ to verify that puzzles are strong USPs, and to
search for large strong USPs. We produce tight bounds on the maximum size of a
strong USP for width $k \le 5$, construct puzzles of small width that are
larger than previous work, and improve the upper bounds on strong USP size for
$k \le 12$. Although our work only deals with puzzles of small-constant width,
the strong USPs we find imply matrix multiplication algorithms that run in
$O(n^\omega)$ time with exponent $\omega \le 2.66$. While our algorithms do not
beat the fastest algorithms, our work provides evidence and, perhaps, a path to
finding families of strong USPs that imply matrix multiplication algorithms
that are more efficient than those currently known. | Matthew Anderson, Zongliang Ji, Anthony Yang Xu | 2022-12-30T23:53:51Z | http://arxiv.org/abs/2301.00074v1 | # Matrix Multiplication: Verifying Strong Uniquely Solvable Puzzles
###### Abstract
Cohn and Umans proposed a framework for developing fast matrix multiplication algorithms based on embedding the computation in certain group algebras [12]. In subsequent work with Kleinberg and Szegedy, they connected this to the search for combinatorial objects called strong uniquely solvable puzzles (strong USPs) [11]. We begin a systematic computer-aided search for these objects. We develop and implement constraint-based algorithms built on reductions to SAT and IP to verify that puzzles are strong USPs, and to search for large strong USPs. We produce tight bounds on the maximum size of a strong USP for width \(k\leq 5\), construct puzzles of small width that are larger than previous work, and improve the upper bounds on strong USP size for \(k\leq 12\). Although our work only deals with puzzles of small-constant width, the strong USPs we find imply matrix multiplication algorithms that run in \(O(n^{\omega})\) time with exponent \(\omega\leq 2.66\). While our algorithms do not beat the fastest algorithms, our work provides evidence and, perhaps, a path to finding families of strong USPs that imply matrix multiplication algorithms that are more efficient than those currently known.
Keywords: matrix multiplication · strong uniquely solvable puzzle · arithmetic complexity · integer programming · satisfiability · satisfiability benchmark · upper bounds · reduction · application
## 1 Introduction
An optimal algorithm for matrix multiplication remains elusive despite substantial effort. We focus on the square variant of the matrix multiplication problem, i.e., given two \(n\)-by-\(n\) matrices \(A\) and \(B\) over a field \(\mathcal{F}\), the goal is to compute the matrix product \(C=A\times B\). The outstanding open question is: How many field operations are required to compute \(C\)? The naive algorithm based on the definition of the matrix product, long thought to be optimal, runs in \(O(n^{3})\) time. The groundbreaking work of Strassen showed that it can be done in time \(O(n^{2.808})\)[30] using a divide-and-conquer approach. A long sequence of work concluding with Coppersmith and Winograd's algorithm (CW) reduced the running time
to \(O(n^{2.376})\)[26; 28; 31; 13]. Recent computer-aided refinements of CW by others reduced the exponent to \(\omega\leq 2.3728639\)[16; 32; 22].
#### Approach
Cohn and Umans [12] introduced a framework for developing faster algorithms for matrix multiplication by reducing this to a search for groups with subsets that satisfy an algebraic property called the _triple-product property_, which allows matrix multiplication to be embedded in the group algebra. Their approach takes inspiration from the \(O(n\log n)\) algorithm for multiplying degree-\(n\) univariate polynomials by embedding into the group algebra of the fast Fourier transform, c.f., e.g., [14; Chapter 30]. Subsequent work [11] elaborated on this idea and developed the notion of combinatorial objects called _strong uniquely solvable puzzles_ (strong USPs). These objects imply a group algebra embedding for matrix multiplication, and hence give a matrix multiplication algorithm as well.
A _width-\(k\)_ puzzle \(P\) is a subset of \(\{1,2,3\}^{k}\), and the cardinality of \(P\) is the puzzle's _size_. Each element of \(P\) is called a _row_ of \(P\), and each row consists of three _subrows_ that are elements of \(\{1,*\}^{k}\), \(\{2,*\}^{k}\), \(\{3,*\}^{k}\) respectively. Informally, a puzzle \(P\) is a _uniquely solvable puzzle_ (USP) if there is no way to permute the subrows of \(P\) to form a distinct puzzle \(P^{\prime}\) without cells with numbers overlapping. Figure 1 demonstrates a puzzle that is not a USP. A uniquely solvable puzzle is _strong_ if a tighter condition for non-overlapping holds (see Definition 3). For a fixed width \(k\), the larger the size of a strong USP, the faster the matrix multiplication algorithm it gives [11]. In fact, Cohn et al. show that there exists an infinite family of strong USPs that achieves \(\omega<2.48\).
We follow Cohn et al.'s program by developing: (i) **verification algorithms** and heuristics to determine whether a puzzle is a strong USP, (ii) **search algorithms** to find large strong USPs, (iii) **practical implementations** of these algorithms, and (iv) new **upper bounds** on the size of strong USPs. The most successful of our verification algorithms work by reducing the problem through 3D matching to the satisfiability (SAT) and integer programming (IP) problems that are then solved with existing tools. The algorithms we develop are not efficient--they run in worst-case exponential time in the natural parameters. However, the goal is to find a sufficiently large strong USP that would provide a faster matrix multiplication algorithm, and the resulting algorithm's running time is independent of the running time of our algorithms. The inefficiency of our algorithms limits the search space that we can feasibly examine.
Figure 1: The leftmost diagram is a width-4 size-5 puzzle \(P\). The middle three diagrams are the three sets of subrows of \(P\). The rightmost diagram is the puzzle \(P^{\prime}\) resulting from reordering the subrows of \(P\) as indicated by the arrows and then recombining them. Since \(P\) can be rearranged as \(P^{\prime}\neq P\) without overlap, \(P\) is not uniquely solvable.
#### Results
Our theoretical results and implementation produce new bounds on the size of the largest strong USP for small-width puzzles. For small-constant width, \(k\leq 12\), we beat the largest sizes of [11, Proposition 3.8]. Our lower bounds on maximum size are witnessed by strong USPs we found via search. For \(k\leq 5\) we give tight upper bounds determined by exhaustively searching all puzzles after modding out common symmetries. For \(k\leq 12\), we improve the upper bounds on the size of strong USPs. Although our current results do not beat [11] for unbounded \(k\), they give evidence that there may exist families of strong USPs that give matrix multiplication algorithms that are more efficient than those currently known. The best strong USPs we can produce imply matrix multiplication algorithms with \(\omega\leq 2.66\).
We also create a benchmark data set of SAT/UNSAT instances based on our reductions from strong-USP verification and examine the performance of solvers from the 2021 SAT Competition [6].
#### Related Work
For background on algorithms for the matrix multiplication problem, c.f., e.g., [9]. There are also a number of negative results known. Naively, the dimensions of the output matrix \(C\) imply that the problem requires at least \(\Omega(n^{2})\) time. Slightly better lower bounds are known in general and also for specialized models of computation, c.f., e.g., [29, 20]. There are also lower bounds known for a variety of algorithmic approaches to matrix multiplication. Ambainis et al. showed that the laser method cannot alone achieve an algorithm with \(\omega\leq 2.3078\)[4]. A recent breakthrough on arithmetic progressions in cap sets [15] combined with a conditional result on the Erdos-Szemeredi sunflower conjecture [3] imply that Cohn et al.'s strong USP approach cannot achieve \(\omega<2+\epsilon\) for some \(\epsilon>0\)[10]. Subsequent work has generalized this barrier [1, 2] to a larger class of algorithmic techniques. Despite this, we are unaware of a concrete lower bound on \(\epsilon\) implied by these negative results. There remains a substantial gap in our understanding between what has been achieved by the positive refinements of LeGall, Williams, and Stothers, and the impossibility of showing \(\omega=2\) using the strong USP approach.
Recently Fawzi et al. showed how reinforcement learning techniques can be used to develop new matrix multiplication algorithms [17]. Their work produces matrix multiplication algorithms with \(\omega<2.77\), which is faster than Strassen's
original algorithm (\(\omega<2.81\)), but far from the refinements of Coppersmith-Winograd (\(\omega<2.372\)) or the results achieved in this work.
#### Organization
Section 2 begins with the formal definition of a strong USP and the Cohn-Umans framework. Sections 3 & 4, respectively, discuss our algorithms and heuristics for verifying that and searching for a puzzle that is a strong USP. Section 5 describes several upper bounds on the size of strong USPs. Sections 6 & 7 discuss our implementation and experimental results.
## 2 Preliminaries
For an integer \(k\), we use \([k]\) to denote the set \(\{1,2,\ldots,k\}\). For a set \(Q\), \(\operatorname{Sym}_{Q}\) denotes the symmetric group on the elements of \(Q\), i.e., the group of permutations acting on \(Q\). Cohn et al. introduced the idea of a _puzzle_[11].
Definition 1 (Puzzle): For \(s,k\in\mathbb{N}\), an \((s,k)\)-puzzle is a subset \(P\subseteq[3]^{k}\) with \(|P|=s\). We call \(s\) the _size_ of \(P\), and \(k\) the _width_ of \(P\).
We say that an \((s,k)\)-puzzle has \(s\) rows and \(k\) columns. The columns of a puzzle are inherently ordered and indexed by \([k]\). The rows of a puzzle have no inherent ordering; however, it is often convenient to assume that they are ordered and indexed by the set of natural numbers \([s]\).
Cohn et al. establish a particular combinatorial property of puzzles that allows one to derive group algebras that matrix multiplication can be efficiently embedded into. Such puzzles are called _strong uniquely solvable puzzles_. However, to give some intuition we first explain a simpler version of the property called _uniquely solvable puzzles_.
Definition 2 (Uniquely Solvable Puzzle (USP)): An \((s,k)\)-puzzle \(P\) is _uniquely solvable_ if for all \(\pi_{1},\pi_{2},\pi_{3}\in\operatorname{Sym}_{P}\): Either (i) \(\pi_{1}=\pi_{2}=\pi_{3}\), or (ii) there exists \(r\in P\) and \(c\in[k]\) such that at least two of the following hold: \((\pi_{1}(r))_{c}=1\), \((\pi_{2}(r))_{c}=2\), \((\pi_{3}(r))_{c}=3\).
Informally, a puzzle is **not** uniquely solvable if each row of the puzzle can be broken into ones, twos, and threes pieces and then the rows can be reassembled in a different way so that each new row is a combination of a ones, a twos, and a threes piece with exactly one element of [3] in each column. Observe that uniquely solvable puzzles can have at most \(2^{k}\) rows because each ones piece, twos piece, and threes piece must be unique, as otherwise the duplicate pieces can be swapped making the puzzle not uniquely solvable.
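To make the row/subrow terminology concrete, here is a tiny illustrative Python sketch (our own, not code from the paper) that stores a puzzle as a list of tuples over \(\{1,2,3\}\) and extracts each subrow as the set of columns holding a given element; the example puzzle \(\{12,23\}\) is the width-2 strong USP mentioned in Section 4.

```
def subrows(row):
    # the e-subrow of a row: the set of columns holding element e
    return {e: frozenset(c for c, x in enumerate(row) if x == e) for e in (1, 2, 3)}

P = [(1, 2), (2, 3)]   # the width-2 strong USP {12, 23}
for r in P:
    print(r, subrows(r))
```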
The definition of _strong_ uniquely solvable puzzle is below; it is nearly the same except that it requires that there be a collision on a column between exactly two pieces, not two or more pieces like in the original definition.
Definition 3 (Strong USP (SUSP)): An \((s,k)\)-puzzle \(P\) is _strong uniquely solvable_ if for all \(\pi_{1},\pi_{2},\pi_{3}\in\operatorname{Sym}_{P}\): Either (i) \(\pi_{1}=\pi_{2}=\pi_{3}\), or (ii) there exists \(r\in P\) and \(c\in[k]\) such that exactly two of the following hold: \((\pi_{1}(r))_{c}=1\), \((\pi_{2}(r))_{c}=2\), \((\pi_{3}(r))_{c}=3\).
Finally, Cohn et al. defined a strengthening of SUSP which requires that every triple of rows witness the necessary overlap.
Definition 4 (Local SUSP): A local strong uniquely solvable puzzle is an \((s,k)\)-puzzle where for each triple of rows \(u,v,w\in P\) with \(u,v,w\) not all equal, there exists \(c\in[k]\) such that \((u_{c},v_{c},w_{c})\) is an element of
\[\mathcal{L}=\{(1,2,1),(1,2,2),(1,1,3),(1,3,3),(2,2,3),(3,2,3)\}.\]
Every SUSP \(P\) corresponds to a much larger local SUSP \(P^{\prime}\), which, informally, is the result of concatenating and duplicating the rows of \(P\) to explicitly demonstrate the \(\forall\pi_{1},\pi_{2},\pi_{3}\) part of Definition 3.
Proposition 1 ([11, Proposition 6.3]): _Let \(P\) be a \((s,k)\)-SUSP, then there is a local \((s!,s\cdot k)\)-SUSP \(P^{\prime}\)._
Note that in all of these definitions, local, strong, and unique solvability are invariant to the ordering of the rows of the puzzle, because \(P\) is a set--we use this fact implicitly.
Cohn et al. show the following connection between the existence of strong USPs and upper bounds on the exponent of matrix multiplication \(\omega\).
Lemma 1 ([11, Corollary 3.6]): _Let \(\epsilon>0\), if there is a strong uniquely solvable \((s,k)\)-puzzle, there is an algorithm for multiplying \(n\)-by-\(n\) matrices in time \(O(n^{\omega+\epsilon})\) where_
\[\omega\leq\min_{m\in\mathbb{N}_{\geq 3}}\left(\frac{3\log m}{\log(m-1)}-\frac{3 \log s!}{s\cdot k\log(m-1)}\right).\]
This result motivates the search for large strong USPs that would result in faster algorithms for matrix multiplication. In the same article, the authors also demonstrate the existence of an infinite family of strong uniquely solvable puzzles, for width \(k\) divisible by three, that achieves a non-trivial bound on \(\omega\).
Lemma 2 ([11, Proposition 3.8]): _There is an infinite family of strong uniquely solvable puzzles that achieves \(\omega<2.48\)._
Finally, they conjecture that strong uniquely solvable puzzles provide a route to achieving quadratic-time matrix multiplication. Unfortunately, as mentioned in the introduction, this conjecture was shown to be false.
Lemma 3 ([10]): _Strong uniquely solvable puzzles cannot show \(\omega<2+\epsilon\), for some \(\epsilon>0\)._
That said, there remains hope that the uniquely solvable puzzle approach could beat the refinements of Coppersmith-Winograd even if it cannot reach \(\omega=2\).
```
Input: An \((s,k)\)-puzzle \(P\).
Output: YES, if \(P\) is a strong USP and NO otherwise.
1: function VerifyBruteForce(\(P\))
2:   for \(\pi_{2}\in\mathrm{Sym}_{P}\) do
3:     for \(\pi_{3}\in\mathrm{Sym}_{P}\) do
4:       if \(\pi_{2}\neq 1\vee\pi_{3}\neq 1\) then
5:         \(found=false\).
6:         for \(r\in P\) do
7:           for \(i\in[k]\) do
8:             if \(\delta_{r_{i},1}+\delta_{(\pi_{2}(r))_{i},2}+\delta_{(\pi_{3}(r))_{i},3}=2\) then \(found=true\).
9:         if not \(found\) then return NO.
10: return YES.
```
**Algorithm 1** : Brute Force Verification
## 3 Verifying Strong USPs
The core focus of this article is the problem of verifying strong USPs, i.e., given an \((s,k)\)-puzzle \(P\), output YES if \(P\) is a strong USP, and NO otherwise. In this section we discuss the design of algorithms to solve this computational problem as a function of the natural parameters \(s\) and \(k\).
All of the exact algorithms we develop in this section have worst-case exponential running time. However, asymptotic worst-case running time is not the metric we are truly interested in. Rather we are interested in the practical performance of our algorithms and their capability for locating new large strong USPs. The algorithm that we ultimately develop is a hybrid of a number of simpler algorithms and heuristics.
We begin by discussing a naive brute force algorithm based on the definition of strong USP (Subsection 3.1), see how it motivates a reduction to the 3D matching problem (Subsection 3.2), and then how we might formulate a reduction to the satisfiability and integer programming problems (Subsections 3.4 & 3.5). We then describe several verification heuristics based on properties of strong USPs (Subsection 3.6) and combine them with the verification algorithms to produce a hybrid algorithm Verify (Subsection 3.7). As we discuss in Subsection 7.2, our hybrid algorithm is quickly able to check whether a given puzzle is a strong USP and aid in the search for strong USPs.
### Brute Force
The obvious algorithm for verification comes directly from the definition of a strong USP. Informally, we consider all ways of permuting the twos and threes pieces relative to the ones pieces and check whether the non-overlapping condition of Definition 3 is met. A formal description of the algorithm is found in Algorithm 1.
The ones in Line 4 of Algorithm 1 denote the identity in \(\mathrm{Sym}_{P}\), and \(\delta_{a,b}\) is the Kronecker delta function which is one if \(a=b\) and zero otherwise. Observe that Algorithm 1 does not refer to the \(\pi_{1}\) of Definition 3. This is because the strong USP property is invariant to permutations of the rows and so \(\pi_{1}\) can be thought of as an arbitrary phase. Hence, we fix \(\pi_{1}=1\) to simplify the algorithm. Seeing that \(|\mathrm{Sym}_{P}|=s!\), we conclude that the algorithm runs in time \(O((s!)^{2}\cdot s\cdot k\cdot\mathrm{poly}(s))\) where the last factor accounts for the operations on permutations of \(s\) elements. The dominant term in the running time is the contribution from iterating over all pairs of permutations. Finally, notice that if \(P\) is a strong USP, then the algorithm runs in time \(\Theta((s!)^{2}\cdot s\cdot k\cdot\mathrm{poly}(s))\), and that if \(P\) is not a strong USP the algorithm terminates early. The algorithm's poor performance made it unusable in our implementation, however, its simplicity and direct connection to the definition made its implementation a valuable sanity check against later more elaborate algorithms (and it served as effective onboarding to the undergraduate students collaborating on this project).
Although Algorithm 1 performs poorly, examining the structure of a seemingly trivial optimization leads to substantially more effective algorithms. Consider the following function on triples of rows \(a,b,c\in P\colon f(a,b,c)=\vee_{i\in[k]}(\delta_{a_{i},1}+\delta_{b_{i},2}+\delta_{c_{i},3}=2)\). We can replace the innermost loop in Lines 7 & 8 of Algorithm 1 with the statement \(found=found\lor f(r,\pi_{2}(r),\pi_{3}(r))\). Observe that \(f\) depends neither on \(P\), \(r\), nor the permutations, and that Algorithm 1 no longer depends directly on \(k\). To slightly speed up Algorithm 1 we can precompute and cache \(f\) before the algorithm starts and then look up values as the algorithm runs. We precompute \(f\) specialized to the rows in the puzzle \(P\), and call it \(f_{P}\).
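As an illustration of the definition (not the paper's C++ implementation), a minimal Python sketch of the brute-force check with \(\pi_{1}\) fixed to the identity, mirroring Algorithm 1; the function name `is_strong_usp` is our own.

```
from itertools import permutations

def is_strong_usp(P):
    """Exponential-time check of Definition 3; P is a list of rows over {1,2,3}."""
    if not P:
        return True
    s, k = len(P), len(P[0])
    identity = tuple(range(s))
    for p2 in permutations(range(s)):
        for p3 in permutations(range(s)):
            if p2 == identity and p3 == identity:
                continue  # pi_1 = pi_2 = pi_3 = identity is allowed by Definition 3
            # look for a row and column where exactly two of the conditions hold
            witnessed = any(
                (P[i][c] == 1) + (P[p2[i]][c] == 2) + (P[p3[i]][c] == 3) == 2
                for i in range(s) for c in range(k)
            )
            if not witnessed:
                return False
    return True
```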
### Strong USP Verification to 3D Matching
It turns out to be more useful to work with \(f_{P}\) than with \(P\). It is convenient to think of \(f_{P}\) as a function \(f_{P}:P\times P\times P\to\{0,1\}\) that is the complement of the characteristic function of the relations of a tripartite hypergraph \(H_{P}=\langle P\sqcup P\sqcup P,\bar{f}_{P}\rangle\) where the vertex set is the disjoint union of three copies of \(P\) and \(f_{P}\) indicates the edges that are not present in \(H_{P}\).
Let \(H=\langle P\sqcup P\sqcup P,E\subseteq P^{3}\rangle\) be a tripartite 3-hypergraph. We say \(H\) has a _3D matching_ (3DM) iff there exists a subset \(M\subseteq E\) with \(|M|=|P|\) and for all distinct edges \(e_{1},e_{2}\in M\), \(e_{1}\) and \(e_{2}\) are _vertex disjoint_, i.e., \(e_{1}\cap e_{2}=\emptyset\). Determining whether a hypergraph has a 3D matching is a well-known NP-complete problem (c.f., e.g., [18]). We say that a 3D matching is _non-trivial_ if it is not the set \(\{(r,r,r)\mid r\in P\}\). Figure 2 demonstrates a 3-hypergraph with a non-trivial 3D matching.
The existence of non-trivial 3D matchings in \(H_{P}\) is directly tied to whether \(P\) is a strong USP.
Lemma 4: _A puzzle \(P\) is a strong USP iff \(H_{P}\) has no non-trivial 3D matching._
Proof: We first argue the reverse. Suppose that \(H_{P}\) has a non-trivial 3D matching \(M\). We show that \(P\) is not a strong USP by using \(M\) to construct permutations \(\pi_{1},\pi_{2},\pi_{3}\in\operatorname{Sym}_{P}\) that witness this. Let \(\pi_{1}\) be the identity permutation. For each \(r\in P\), define \(\pi_{2}(r)=q\) where \((r,q,*)\in M\). Note that \(q\) is well defined and unique because \(M\) is a 3D matching and so has vertex disjoint edges. Similarly define \(\pi_{3}(r)=q\) where \((r,*,q)\in M\). Observe that by construction
\[M=\{(\pi_{1}(r),\pi_{2}(r),\pi_{3}(r))\mid r\in P\}.\]
Since \(M\) is a matching of \(H_{P}\), \(M\subseteq\bar{f}_{P}\). Because \(M\) is a non-trivial matching, at least one edge \((a,b,c)\in M\) has either \(a\neq b\), \(a\neq c\), or \(b\neq c\). This implies, respectively, that as constructed \(\pi_{1}\neq\pi_{2}\), \(\pi_{1}\neq\pi_{3}\), or \(\pi_{2}\neq\pi_{3}\). In each case we have determined that \(\pi_{1}\), \(\pi_{2}\), and \(\pi_{3}\) are not all identical. Thus we have determined permutations such that for all \(r\in P\), \(f(\pi_{1}(r),\pi_{2}(r),\pi_{3}(r))=0\). This violates Condition (ii) of Definition 3, hence \(P\) is not a strong USP.
The forward direction is symmetric. Suppose that \(P\) is not a strong USP. We show that \(H_{P}\) has a 3D matching. For \(P\) not to be a strong USP there must exist \(\pi_{1},\pi_{2},\pi_{3}\in\operatorname{Sym}_{P}\) not all identical such that Condition (ii) of Definition 3 fails. Define \(e(r)=(\pi_{1}(r),\pi_{2}(r),\pi_{3}(r))\) and \(M=\{e(r)\mid r\in P\}\). Since Condition (ii) fails, we have that \(f_{P}(e(r))=false\) for all \(r\in P\). This means that for all \(r\in P\), \(e(r)\in\bar{f}_{P}\) and hence \(M\subseteq\bar{f}_{P}\). Since \(\pi_{1}\) is a permutation, \(|M|=|P|\). Observe that \(M\) is non-trivial because not all of the permutations are identical and there must be some \(r\in P\) with \(e(r)\) having non-identical coordinates. Thus \(M\) is a non-trivial 3D matching.
As a consequence of Definition 3, strong-USP verification is in \(\mathsf{coNP}\). Note that although 3D matching is an \(\mathsf{NP}\)-complete problem, Lemma 4 does not immediately imply that verification of strong USPs is \(\mathsf{coNP}\)-complete because \(H_{P}\) is not an arbitrary hypergraph. It remains open whether strong-USP verification is \(\mathsf{coNP}\)-complete. Lemma 4 implies that to verify \(P\) is a strong USP it suffices to determine whether \(H_{P}\) has a non-trivial 3D matching. In the subsequent subsections we examine algorithms for the later problem. We can, in retrospect, view Algorithm 1 as an algorithm for solving 3D matching.
We note that the parameters \(s\) and \(k\) are not fully independent. First, \(s\leq 3^{k}\) because the maximum number of rows in a puzzle of width \(k\) is \(|[3]^{k}|=3^{k}\). Second, we eliminate the dependence on \(k\) entirely by transforming an \((s,k)\)-puzzle
into a 3D matching instance on the vertex set \([s]^{3}\). However, this transformation is not without cost, because the size of \(H_{P}\) is a function of the cube of \(s\) rather than linear in the size of the puzzle \(s\cdot k\).
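To tie Lemma 4 to something executable, the following Python sketch (illustrative only; the names are ours) builds the edge set of \(H_{P}\) over row indices and then backtracks to look for a non-trivial 3D matching; by Lemma 4, \(P\) is a strong USP exactly when the search fails.

```
def puzzle_edges(P):
    # (a, b, c) is an edge of H_P when no column has exactly two of
    # P[a][col] = 1, P[b][col] = 2, P[c][col] = 3
    s, k = len(P), len(P[0])
    E = set()
    for a in range(s):
        for b in range(s):
            for c in range(s):
                collide = any(
                    (P[a][col] == 1) + (P[b][col] == 2) + (P[c][col] == 3) == 2
                    for col in range(k)
                )
                if not collide:
                    E.add((a, b, c))
    return E

def has_nontrivial_3dm(E, s):
    # backtracking over first-coordinate vertices 0, 1, ..., s-1
    def extend(i, used_b, used_c, trivial):
        if i == s:
            return not trivial  # reject the all-diagonal (trivial) matching
        for (a, b, c) in E:
            if a != i or b in used_b or c in used_c:
                continue
            if extend(i + 1, used_b | {b}, used_c | {c}, trivial and a == b == c):
                return True
        return False
    return extend(0, frozenset(), frozenset(), True)

# P is a strong USP iff no non-trivial matching exists (Lemma 4):
# is_susp = not has_nontrivial_3dm(puzzle_edges(P), len(P))
```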
### Dynamic Programming
The realization that the verification of strong USPs is a specialization of 3D matching leads to a dynamic programming algorithm for verification that runs in linear-exponential time \(O(2^{2s}\mathrm{poly}(s)+\mathrm{poly}(s,k))\). The reduction allows us to replace the permutations from \(\mathrm{Sym}_{P}\) with subsets of \(P\) and effectively reduce the cost of the outer loops of Algorithm 1 from \(s!=\Theta(2^{s\log s})\) to \(2^{s}\).
```
Input: An \((s,k)\)-puzzle \(P\).
Output: YES, if \(P\) is a strong USP and NO otherwise.
1: function VerifyDynamicProgramming(\(P\))
2:   Let \(T=\emptyset\).
3:   Construct 3D matching instance \(H_{P}\).
4:   function SearchHalf\((\ell,Q,\ell_{Q},R,\ell_{R},\delta,t)\)
5:     if \(\ell=t\) then
6:       if \(\delta=1\) then  \(\triangleright\) Forward Base Case
7:         Insert \((Q,R)\) into \(T\).
8:         return \(false\).
9:       else  \(\triangleright\) Reverse Base Case
10:        if \((P-Q,P-R)\in T\) then
11:          return \(true\).
12:        else
13:          return \(false\).
14:    \(res=false\).
15:    for \(\ell^{\prime}_{Q}=\ell_{Q}+1\) to \(s\) do
16:      for \(\ell^{\prime}_{R}=\ell_{R}+1\) to \(s\) do
17:        if \((p_{\ell},p_{\ell^{\prime}_{Q}},p_{\ell^{\prime}_{R}})\in H_{P}\wedge\neg res\) then
18:          \(res=\textsc{SearchHalf}(\ell+\delta,Q\cup\{p_{\ell^{\prime}_{Q}}\},\ell^{\prime}_{Q},R\cup\{p_{\ell^{\prime}_{R}}\},\ell^{\prime}_{R},\delta,t)\).
19:    return \(res\).
20:  SearchHalf\((1,\emptyset,0,\emptyset,0,1,\lfloor s/2\rfloor+1)\).
21:  return SearchHalf\((s,\emptyset,0,\emptyset,0,-1,\lfloor s/2\rfloor)\).
```
**Algorithm 2** : Bidirectional Dynamic Programming Verification
Algorithm 2 describes a recursive bidirectional dynamic programming algorithm for strong-USP verification that uses the 3D matching instance. The algorithm consists of two phases. Let \(t=\lfloor s/2\rfloor\). The first phase determines all possible sets \(Q,R\subseteq P\) with \(|Q|=|R|=t\) such that there is a 3D matching \(M_{1}\) of \(H_{P}\) when restricted to the vertices \(\{p_{1},p_{2},\ldots,p_{t}\}\sqcup Q\sqcup R\). The sets \(Q,R\) satisfying the requirement are stored in a table \(T\) during the first phase on Line 7. The second phase determines all possible sets \(Q,R\subseteq P\) with \(|Q|=|R|=s-t\)
such that there is a 3D matching \(M_{2}\) of \(H_{P}\) when restricted to the vertices \(\{p_{t+1},p_{t+2},\ldots,p_{s}\}\sqcup Q\sqcup R\). For each pair \((Q,R)\) the algorithm considers in the second phase, it checks whether \((P-Q,P-R)\) was inserted into \(T\) during the first phase. If the pair is present, it means that there is a 3D matching of \(H_{P}\) which is \(M=M_{1}\cup M_{2}\). This works because, by Line 10, \(M_{1}\) and \(M_{2}\) are partial 3D matchings on \(\{p_{1},\ldots,p_{t}\}\sqcup(P-R)\sqcup(P-Q)\) and \(\{p_{t+1},\ldots p_{s}\}\sqcup R\sqcup Q\), respectively, which implies that \(M_{1}\) and \(M_{2}\) are vertex disjoint. The first phase always returns \(false\), which is ignored, and the second phase returns whether a complete matching could be found, and, hence, by Lemma 4, whether \(P\) is a strong USP.
The running time of this algorithm is dominated by the number of pairs of sets \((Q,R)\) it examines. Observe that rows of \(P\) are considered in order in Lines 15 & 16. Further, the algorithm tracks the index of the last elements added to \(Q\) and \(R\) in \(\ell_{Q}\) and \(\ell_{R}\), respectively. The algorithm only adds new elements to \(Q\) or \(R\) that have higher indexes than ones previously added. Altogether this implies that each pair of sets \((Q,R)\) is only considered at most once during a phase. Since \(Q,R\subseteq P\), there are at most \(\sum_{i=0}^{t}\binom{s}{i}\cdot\binom{s}{i}\leq(\sum_{i=0}^{t}\binom{s}{i})^{2} \leq(2^{s})^{2}=4^{s}\) pairs \((Q,R)\). This means that SearchHalf is called at most \(4^{s}\) times during each phase. Hence the running time of the algorithm is \(O(4^{s}\cdot s^{2}\cdot\mathrm{poly}(s)+T_{3DM}(s,k))\) where the \(s^{2}\) factor comes from the inner loops, \(\mathrm{poly}(s)\) time to manipulate the sets and track the contents of \(T\) as a hash table, and \(T_{3DM}(s,k)\) accounts for the time to construct \(H_{P}\). The memory requirements of Algorithm 2 are similarly high--the first phase uses \(O(4^{s}\cdot s)\) bits to store \(T\).
Note that Algorithm 2 does not terminate early on puzzles \(P\) that are strong USPs, because it must search through all pairs before determining that none can be found. The algorithm could be modified to allow early termination when \(P\) is not a strong USP by causing the second phase of search to immediately return in Line 18 once the first 3D matching witness has been located. However, this still requires the first phase to run to completion. A remedy for this would be to run both phases in parallel and have them check against each other. We chose not to because it would substantially complicate the implementation and would be unlikely to ultimately improve the performance of our combined algorithms.
For comparison, more advanced techniques like those of Bjorklund et al. can achieve a better asymptotic time of \(O(2^{s}\mathrm{poly}(s))\)[8]. We chose not to implement their algorithm, because we judged that it would not substantially increase the domain for which verification was possible.
### 3D Matching to Satisfiability
By Lemma 4, one can determine whether a puzzle \(P\) is a strong USP by constructing the graph \(H_{P}\) and deciding whether it has a non-trivial 3D matching. Here we reduce our 3D matching problem to the satisfiability (SAT) problem on conjunctive normal form (CNF) formulas and then use a state-of-the-art SAT solver to resolve the reduced problem. To perform the reduction, we convert the graph \(H_{P}\) into a CNF formula \(\Psi_{P}\), a depth-2 formula that is the AND of
ORs of Boolean literals. We construct \(\Psi_{P}\) so that \(\Psi_{P}\) is satisfiable iff \(H_{P}\) has a non-trivial 3D matching.
Let \(H_{P}=\langle V=P\sqcup P\sqcup P,E\subseteq P^{3}\rangle\) be the 3D matching instance associated with the puzzle \(P\). Our goal is to determine whether there is a non-trivial 3D matching \(M\subseteq E\). A naive reduction would be to have variables \(M_{u,v,w}\) indicating inclusion of each edge \((u,v,w)\in P^{3}\) in the matching. This results in a formula \(\Psi_{P}\) with \(s^{3}\) variables and size \(\Theta(s^{5})\) because including an edge \(e\in P^{3}\) excludes the \(\Theta(s^{2})\) edges \(e^{\prime}\) with \(e\cap e^{\prime}\neq\emptyset\). To decrease the size of \(\Psi_{P}\) we instead use sets of variables to indicate which vertices in the second and third part of \(V\) are matched with each vertex in the first part. In particular we have Boolean variables \(M_{u,v}^{1}\) and \(M_{u,w}^{2}\) for all \(u,v,w\in P\), and these variables map to assignments in the naive scheme in the following way: \(M_{u,v}^{1}\wedge M_{u,w}^{2}\Leftrightarrow M_{u,v,w}\).
We now write our CNF formula for 3D matching. First, we have clauses that prevent non-edges from being in the matching:
\[\Psi_{P}^{\text{non-edge}}=\bigwedge_{(u,v,w)\in\overline{E}}(\neg M_{u,v}^{1} \vee\neg M_{u,w}^{2}). \tag{1}\]
Second, we add clauses requiring that every vertex in \(H_{P}\) is matched by some edge:
\[\begin{split}\Psi_{P}^{\geq 1}=&\left(\bigwedge_{u\in P} \left(\vee_{v\in P}\ M_{u,v}^{1}\right)\wedge(\vee_{w\in P}\ M_{u,w}^{2}) \right)\\ &\wedge\left(\bigwedge_{v\in P}(\vee_{u\in P}\ M_{u,v}^{1}) \right)\wedge\left(\bigwedge_{w\in P}(\vee_{u\in P}\ M_{u,w}^{2})\right).\end{split} \tag{2}\]
Third, we require that each vertex be matched with at most one edge and so have clauses that exclude matching edges that overlap on one or two coordinates.
\[\Psi_{P}^{\leq 1}=\bigwedge_{i\in\{1,2\}}\ \bigwedge_{\begin{subarray}{c}(u,v),(u^{\prime},v^{\prime})\in P^{2}\\ (u,v)\neq(u^{\prime},v^{\prime}),\ u=u^{\prime}\vee v=v^{\prime}\end{subarray}}\left(\neg M_{u,v}^{i}\vee\neg M_{u^{\prime},v^{\prime}}^{i}\right). \tag{3}\]
Fourth, we exclude the trivial 3D matching by requiring that at least one of the diagonal edges not be used: \(\Psi_{P}^{\text{non-trivial}}=\bigvee_{u\in P}\neg M_{u,u}^{1}\vee\neg M_{u,u}^{2}\). Finally, we AND these into the overall CNF formula: \(\Psi_{P}=\Psi_{P}^{\text{non-edge}}\wedge\Psi_{P}^{\leq 1}\wedge\Psi_{P}^{\geq 1}\wedge\Psi_{P}^{\text{non-trivial}}\). The CNF formula \(\Psi_{P}\) has size \(\Theta(s^{3})\) and \(2s^{2}\) variables, and is a factor of \(s^{2}\) smaller than the naive approach. Thus we reduce 3D matching to satisfiability by converting the instance \(H_{P}\) into the CNF formula \(\Psi_{P}\).
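For concreteness, a Python sketch of the clause construction (DIMACS-style signed integers; the variable numbering and function names are our own choices, and the edge set is assumed to be given over row indices as in the earlier sketch):

```
def susp_to_cnf(s, edges):
    # variables: M1(u, v) pairs u with v in the second part,
    #            M2(u, w) pairs u with w in the third part
    M1 = lambda u, v: 1 + u * s + v
    M2 = lambda u, w: 1 + s * s + u * s + w
    clauses = []
    # non-edges of H_P may not be used
    for u in range(s):
        for v in range(s):
            for w in range(s):
                if (u, v, w) not in edges:
                    clauses.append([-M1(u, v), -M2(u, w)])
    # every vertex is matched at least once
    for u in range(s):
        clauses.append([M1(u, v) for v in range(s)])
        clauses.append([M2(u, w) for w in range(s)])
    for v in range(s):
        clauses.append([M1(u, v) for u in range(s)])
    for w in range(s):
        clauses.append([M2(u, w) for u in range(s)])
    # no vertex is matched more than once: forbid two pairs sharing a coordinate
    for M in (M1, M2):
        for u in range(s):
            for v in range(s):
                for up in range(s):
                    for vp in range(s):
                        if (u, v) < (up, vp) and (u == up or v == vp):
                            clauses.append([-M(u, v), -M(up, vp)])
    # exclude the trivial diagonal matching
    clauses.append([-M1(u, u) for u in range(s)] + [-M2(u, u) for u in range(s)])
    return clauses
```

The resulting clause list can be handed to any CNF solver; unsatisfiability certifies that the puzzle is a strong USP.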
### 3D Matching to Integer Programming
In parallel to the previous subsection, we use the connection between verification of strong USPs and 3D matching to reduce the former to integer programming, another well-known NP-complete problem (c.f., e.g., [21]) and then apply
a state-of-the-art solver to resolve it. Again, let \(H_{P}=\langle V,E\rangle\) be the 3D matching instance associated with \(P\). We construct an integer program \(Q_{P}\) over \(\{0,1\}\) that is infeasible iff \(P\) is a strong USP. Here the reduction is simpler than the previous one because linear constraints naturally capture matching.
We use \(M_{u,v,w}\) to denote a variable with values in \(\{0,1\}\) to indicate whether the edge \((u,v,w)\in P^{3}\) is present in the matching. To ensure that \(M\) is a subset of \(E\) we add the following edge constraints to \(Q_{P}\): \(\forall u,v,w\in P,\forall(u,v,w)\not\in E,M_{u,v,w}=0\). We also require that each vertex in each of the three parts of the graph is incident to exactly one edge in \(M\). This is captured by the following vertex constraints in \(Q_{P}\): \(\forall w\in P,\sum_{u,v\in P}M_{u,v,w}=\sum_{u,v\in P}M_{u,w,v}=\sum_{u,v\in P }M_{w,u,v}=1\). Lastly, since we need that the 3D matching be non-trivial we add the constraint: \(\sum_{u\in P}M_{u,u,u}<|P|\).
To check whether \(P\) is a strong USP we determine whether \(Q_{P}\) is not feasible, i.e., that no assignment to the variables \(M\) satisfies all constraints. We note that the reduction from 3D matching to IP is polynomial time, that there are \(s^{3}\) variables in \(Q_{P}\), and that the total size of the constraints is \(s^{3}\cdot\Theta(1)+3s\cdot\Theta(s^{2})+1\cdot\Theta(s^{3})=\Theta(s^{3})\), similar to the size of \(\Psi_{P}\) in the SAT reduction.
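Since the implementation solves this program with Gurobi, a hedged sketch using the Gurobi Python interface is shown below; the modeling calls are standard `gurobipy` usage, but the code itself is our illustration rather than the paper's.

```
import gurobipy as gp
from gurobipy import GRB

def is_strong_usp_via_ip(s, edges):
    m = gp.Model("susp-3dm")
    m.Params.OutputFlag = 0
    M = m.addVars(s, s, s, vtype=GRB.BINARY, name="M")
    # only edges of H_P may enter the matching
    for u in range(s):
        for v in range(s):
            for w in range(s):
                if (u, v, w) not in edges:
                    m.addConstr(M[u, v, w] == 0)
    # every vertex of each part is covered exactly once
    for x in range(s):
        m.addConstr(gp.quicksum(M[x, v, w] for v in range(s) for w in range(s)) == 1)
        m.addConstr(gp.quicksum(M[u, x, w] for u in range(s) for w in range(s)) == 1)
        m.addConstr(gp.quicksum(M[u, v, x] for u in range(s) for v in range(s)) == 1)
    # forbid the trivial diagonal matching
    m.addConstr(gp.quicksum(M[u, u, u] for u in range(s)) <= s - 1)
    m.optimize()
    return m.Status == GRB.INFEASIBLE  # infeasible iff P is a strong USP
```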
### Heuristics
Although the exact algorithms presented in the previous sections make substantial improvements over the brute force approach, the resulting performance remains impractical. To resolve this, we also develop several fast verification heuristics that may produce the non-definitive answer MAYBE in place of YES or NO. Then, to verify a puzzle \(P\) we run this battery of fast heuristics and return early if any of the heuristics produce a definitive YES or NO. When all of the heuristics result in MAYBE, we then run one of the slower exact algorithms that were previously discussed. The heuristics have different forms, but all rely on the structural properties of strong uniquely solvable puzzles.
#### Downward Closure
The simplest heuristic we consider is based on the fact that strong USPs are downward closed.
Lemma 5: _If \(P\) is a strong USP, then so is every subpuzzle \(P^{\prime}\subseteq P\)._
Proof: Let \(P\) be a strong USP and \(P^{\prime}\subseteq P\). By Definition 3, for every \((\pi_{1},\pi_{2},\pi_{3})\in\mathrm{Sym}_{P}^{3}\) not all identity, there exist \(r\in P\) and \(i\in[k]\) such that exactly two of the following hold: \((\pi_{1}(r))_{i}=1\), \((\pi_{2}(r))_{i}=2\), \((\pi_{3}(r))_{i}=3\). Consider restricting the permutations to those that fix the elements of \(P\backslash P^{\prime}\). For these permutations it must be the case that \(r\in P^{\prime}\) because otherwise \(r\in P\backslash P^{\prime}\) and there is exactly one \(j\in[3]\) for which \((\pi_{j}(r))_{i}=j\) holds. Thus we can drop the elements of \(P\backslash P^{\prime}\) and conclude that for every tuple of permutations in \(\mathrm{Sym}_{P^{\prime}}\) the conditions of Definition 3 hold for \(P^{\prime}\), and hence that \(P^{\prime}\) is a strong USP.
This leads to a polynomial-time heuristic that can determine that a puzzle is not a strong USP. Informally, the algorithm takes an \((s,k)\)-puzzle \(P\) and \(s^{\prime}\leq s\)
and verifies that all subsets \(P^{\prime}\subseteq P\) with size \(|P^{\prime}|=s^{\prime}\) are strong USPs. If any subset \(P^{\prime}\) is not a strong USP, the heuristic returns NO, and otherwise it returns MAYBE. For completeness, this algorithm is described in Algorithm 3.
```
Input: An \((s,k)\)-puzzle \(P\), and size \(s^{\prime}\leq s\).
Output: NO, if \(P\) has a set of \(s^{\prime}\) rows that do not form a strong USP, and MAYBE otherwise.
1: function HeuristicDownwardClosed(\(P,s^{\prime}\))
2:   for \(P^{\prime}\subseteq P\), \(|P^{\prime}|=s^{\prime}\) do
3:     if \(P^{\prime}\) is not a strong USP then return NO.
4:   return MAYBE.
```
**Algorithm 3** : Downward-Closure Heuristic
This algorithm runs in time \(O(\binom{s}{s^{\prime}}\cdot T(s^{\prime},k))\) where \(T(s^{\prime},k)\) is the runtime for verifying an \((s^{\prime},k)\)-puzzle. In practice we did not apply this heuristic for \(s^{\prime}\) larger than 3. When \(s^{\prime}\) is some constant \(d\), the running time becomes \(O(s^{d}\cdot T(d,k))=O(s^{d}k)\) using the brute force algorithm (Algorithm 1) for verification of the puzzle \(P^{\prime}\).
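A direct Python sketch of the downward-closure heuristic, reusing the brute-force verifier `is_strong_usp` sketched in Subsection 3.1 for the small subpuzzles:

```
from itertools import combinations

def heuristic_downward_closed(P, sp):
    # rule out P when some size-sp subpuzzle fails to be a strong USP (Lemma 5)
    for sub in combinations(P, sp):
        if not is_strong_usp(list(sub)):
            return "NO"
    return "MAYBE"
```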
#### Unique Pieces
Every strong uniquely solvable puzzle is a uniquely solvable puzzle. A necessary condition for a puzzle to be a USP is that for each element in [3], the collection of subrows contains no duplicates.
Lemma 6 (Implicit in [11]): _If \(P\) is a USP, then for all \(e\in[3]\), and distinct rows \(r_{1},r_{2}\in P\), there is a column \(c\in[k]\) where one of the rows \(r_{1}\) or \(r_{2}\) has an \(e\) and the other one does not._
Proof: Suppose, for the sake of contradiction, that this is not the case, and distinct rows \(r_{1},r_{2}\in P\) have \(e\) in exactly the same columns for some \(e\in[3]\). We show that \(P\) is not a USP. Choose \(\pi_{e}=(r_{1}r_{2})\), i.e., the permutation that transposes the subrows for \(e\) in rows \(r_{1}\) and \(r_{2}\). Choose the other two permutations for the elements of \([3]\backslash\{e\}\) to be the identity. Since the permutations are not all the identity, the second half of Definition 2 applies. However, the puzzle that results from the permutations is identical to \(P\) and for all \(c\in[k]\) and each row \(r\in P\) there exists exactly one \(i\in[3]\) where \((\pi_{i}(r))_{c}=i\). Hence the definition of uniquely solvable is not satisfied and we have a contradiction.
Note that the reverse direction of Lemma 6 does not hold. The puzzle in Figure 1 is an example of this: It is not uniquely solvable, but the subrows for each element are distinct.
We can make Lemma 6 effective via a linear-time heuristic capable of ruling out puzzles that are not (strong) USPs. Although straightforward, for completeness we formalize our approach in Algorithm 4. When the sets are implemented as hash tables, the expected running time of this algorithm is \(O(s\cdot k)\) time, which is linear in the size of the puzzle \(P\). An alternative worst-case \(O(s\cdot k)\) time implementation uses radix sort to sort the characteristic sequences of the subrows as binary numbers and then scans adjacent rows to detect duplication.
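A minimal hash-based Python sketch of the unique pieces check (again illustrative only; the set of columns holding \(e\) plays the role of the characteristic sequence):

```
def heuristic_unique_pieces(P):
    # Lemma 6: two rows sharing an identical e-subrow rule out (strong) unique solvability
    for e in (1, 2, 3):
        seen = set()
        for row in P:
            piece = frozenset(c for c, x in enumerate(row) if x == e)
            if piece in seen:
                return "NO"
            seen.add(piece)
    return "MAYBE"
```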
The unique pieces heuristic is equivalent to the downward-closure heuristic for subpuzzles of size two.
Lemma 7: _Let \(P\) be an \((s,k)\)-puzzle, then HeuristicUniquePieces\((P)=\textsc{HeuristicDownwardClosed}(P,2)\)._
Proof: We show both directions.
Suppose that \(P\) fails the unique pieces heuristic for, w.l.o.g., \(e=1\). Then there are distinct rows \(r_{1},r_{2}\in P\) where the cells that contain \(1\) are all in the same columns. This means we can swap those ones subrows without causing overlap or changing the puzzle. This implies that \(P^{\prime}=\{r_{1},r_{2}\}\) is not a (strong) USP. Since \(|P^{\prime}|=2\) and \(P^{\prime}\subseteq P\), the downward closure heuristic for \(s^{\prime}=2\) will also conclude that \(P\) is not a (strong) USP.
Suppose that \(P\) fails the downward-closure heuristic for \(s^{\prime}=2\). Then there is a pair of distinct rows \(r_{1},r_{2}\in P\) for which \(P^{\prime}=\{r_{1},r_{2}\}\) is not a strong USP. Suppose there are no columns where \(r_{1}\) and \(r_{2}\) differ; then the subrows of \(r_{1}\), \(r_{2}\) are the same for all elements, and so \(P\) fails the unique pieces heuristic. For the other case, suppose there is at least one column \(c\in[k]\) where \(r_{1}\) and \(r_{2}\) differ. W.l.o.g., let that column be \(((r_{1})_{c},(r_{2})_{c})=(1,2)\). Because \(P^{\prime}\) is not a USP and this column is \((1,2)\), there can be no other columns from the set \(\{(1,3),(2,3),(3,2),(3,1)\}\), as otherwise they would form a USP with the column \((1,2)\). This means the only columns that \(P^{\prime}\) contains are from the set \(\{(1,2),(2,1),(1,1),(2,2),(3,3)\}\). Therefore, the columns which contain \(2\) must match and the subrows for \(2\) in \(r_{1}\) and \(r_{2}\) are identical. Thus, \(P^{\prime}\), and so \(P\), fails the unique pieces heuristic.
A corollary of this proof is that for size-two puzzles, every USP is also a strong USP.
Corollary 1: _Let \(P\) be a \((2,k)\)-puzzle, if \(P\) is a uniquely solvable puzzle, then \(P\) is a strong uniquely solvable puzzle._
Since the unique pieces heuristic is equivalent to the downward-closure heuristic for \(s^{\prime}=2\) and the running time of unique pieces is linear in the puzzle size, \(O(s\cdot k)\), and the running time of downward closed is \(O(s^{2}\cdot k)\), we use the unique pieces heuristic in place of downward closed for \(s^{\prime}=2\).
#### Greedy
This heuristic attempts to take advantage of Lemma 4 and greedily searches for a 3D matching of the instance \(H_{P}\). The heuristic proceeds iteratively, determining the vertex of the first part of the 3D matching instance with the fewest remaining edges and randomly selecting an edge of that vertex to put into the 3D matching. If the heuristic successfully constructs a 3D matching it returns NO indicating that the input puzzle \(P\) is not a strong USP. If the heuristic reaches a point where prior commitments have made the matching infeasible, the heuristic starts again from scratch. This process is repeated some number of times before it gives up and returns MAYBE. In our implementation we use \(s^{2}\) attempts because it is similar to the running time of the reductions and it empirically reduced the number of instances requiring full verification in the domain of puzzles with \(k=6,7,8\) while not increasing the running time by too much. The greedy heuristic is formalized in Algorithm 5.
The array \(cts\) is used to store the number of edges \(cts[u]\) that remain associated with vertex \(u\) along the first coordinate. Much of the algorithm is devoted to maintaining this invariant. The sets \(U,V,W\) store the vertices along the three coordinates, respectively, that have already been incorporated into the partial 3D matching. Like in Algorithm 2 we do not store the matching itself, only the vertices involved. The break at Line 10 triggers when the partial 3D matching is a dead end and cannot be extended into a full 3D matching. The condition of Line 23 is true when a full 3D matching has been constructed and causes the algorithm to return that \(P\) is not a strong USP.
The running time of this algorithm is \(O(s^{3}t+T_{3DM}(s,k))\), where \(T_{3DM}(s,k)\) is the time required to construct 3D matching instances from \((s,k)\)-puzzles. This algorithm has the potential to be considerably slower than the downward-closure heuristic, and in practice we set \(t=s^{2}\). However, the main loop can terminate early at Line 10 when it fails to extend the 3D matching; this permits the expected time to be much less than the worst case. For a puzzle \(P\) that is a strong USP, the heuristic takes the full \(\Omega(s^{3}t+T_{3DM}(s,k))\) time.
Compared to the downward-closure and unique pieces heuristics this heuristic is much less efficient. As a result we only run it when the other heuristics have failed. See Subsection 7.2 for a comparison of the effectiveness of these heuristics in our experiments.
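A rough Python sketch of the greedy heuristic over the edge set of \(H_{P}\); the \(s^{2}\) restart count follows the discussion above, while the data structures and names are our own simplification of Algorithm 5.

```
import random

def heuristic_greedy(edges, s, attempts=None):
    attempts = attempts if attempts is not None else s * s
    for _ in range(attempts):
        used_b, used_c, matched = set(), set(), []
        remaining = set(range(s))
        while remaining:
            def options(a):
                return [(a, b, c) for (aa, b, c) in edges
                        if aa == a and b not in used_b and c not in used_c]
            # commit the most constrained first-part vertex to a random available edge
            a = min(remaining, key=lambda x: len(options(x)))
            opts = options(a)
            if not opts:
                break  # dead end; restart from scratch
            a, b, c = random.choice(opts)
            matched.append((a, b, c))
            used_b.add(b); used_c.add(c); remaining.discard(a)
        if len(matched) == s and any(a != b or b != c for (a, b, c) in matched):
            return "NO"    # a non-trivial 3D matching was found
    return "MAYBE"
```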
### Hybrid Algorithm
Our final verification algorithm (Algorithm 6) is a hybrid of several exact algorithms and heuristics. The size thresholds for which algorithm and heuristic to apply were determined experimentally for small \(k\) and are focused on the values where our strong USP search algorithms are tractable, \(k\leq 6\), or nearly tractable, \(k\leq 8\). We decide to run both of the reductions to SAT and IP in parallel because it is not clear which algorithm performs better in general. Since verification halts when either algorithm completes, the wasted effort is within a factor of two of what the better algorithm could have done alone. We also chose to do this because we experimentally observed that there were many instances that one of the algorithms struggled with that the other did not--this resulted in a hybrid algorithm that outperformed the individual exact algorithms on average. We show in Subsection 7.2 that our hybrid algorithm and heuristics perform well in practice at quickly verifying strong USPs for small width \(k\). Further, Subsection 7.3 contains a discussion of the relative performance of the SAT and IP approaches on different instance types from our benchmark experiments.
```
Input: An \((s,k)\)-puzzle \(P\).
Output: YES, if \(P\) is a strong USP, and NO otherwise.
1: function Verify(\(P\))
2:   if \(s\leq 2\) then return VerifyBruteForce(\(P\)).
3:   Return result if HeuristicUniquePieces(\(P\)) is not MAYBE.
4:   if \(s\leq 7\) then return VerifyDynamicProgramming(\(P\)).
5:   Return result if HeuristicDownwardClosed(\(P,3\)) is not MAYBE.
6:   Return result if HeuristicGreedy(\(P\)) is not MAYBE.
7:   Run VerifySAT(\(P\)) and VerifyIP(\(P\)) in parallel and return the first result.
```
**Algorithm 6** : Hybrid Verification
## 4 Searching for Strong USPs
With a practical verification algorithm in hand, we consider the problem of searching for large strong USPs. Because the set of strong USPs is downward closed, a natural search strategy is: Start with the empty set and repeatedly consider adding rows while maintaining the strong-USP property. However, while this strategy will lead to a maximal-size strong USP, it is not guaranteed to produce a maximum-size strong USP. This is because the set of strong USPs does not form a matroid, rather it is only an independence system (c.f., e.g., [25]).
In particular, (i) the empty puzzle is a strong USP and (ii) the set of strong USPs is downward closed by Lemma 5. The final property required to be a matroid, the augmentation property, requires that for every pair of strong USPs \(P_{1},P_{2}\) with \(|P_{1}|\leq|P_{2}|\) there is a row of \(r\in P_{2}\backslash P_{1}\) such that \(P_{1}\cup\{r\}\) is also a strong USP. For a simple counterexample consider the strong USPs \(P_{1}=\{32\}\) and \(P_{2}=\{12,23\}\). Using Lemma 6, we see that neither \(P_{1}\cup\{12\}=\{12,32\}\) nor \(P_{1}\cup\{23\}=\{23,32\}\) are strong USPs, and hence the augmentation property fails. One consequence is that naive greedy algorithms will likely be ineffective for finding maximum-size strong USPs. Furthermore, we do not currently know of an efficient algorithm that can take a strong USP \(P\) and determine a row \(r\) such that \(P\cup\{r\}\) is a strong USP.
Despite that, we have had some success in applying general-purpose tree-search techniques with pruning based on the symmetries of strong USPs together with our practical verification algorithm to construct maximum-size strong USPs for small \(k\).
### Puzzle Symmetry
Since puzzles are defined as sets of rows, the ordering of the rows of a puzzle \(P\) does not affect the USP property. Similarly, but slightly less obviously, the USP property is invariant to reordering the columns of the puzzle, because the required existential condition \(\exists c\in[k]\text{ st.\,}(...)\) from Definition 3 is independent
of the ordering of the columns. Lastly, the alphabet [3] typically used to represent the elements of a puzzle is completely arbitrary; any set of three distinct values would suffice. These values are not interpreted mathematically, aside from their convenience in expressing the SUSP definition concisely. This logic can be formalized into the following lemma.
Lemma 8: _Let \(\rho\in\operatorname{Sym}_{[k]},\delta\in\operatorname{Sym}_{[3]}\). A \((s,k)\)-puzzle \(P\) is a strong USP iff \(\{(\delta(r_{\rho(c)}))_{c\in[k]}\mid r\in P\}\) is a strong USP._
Proof: Follows immediately from Definition 1 and Definition 3.
This lemma implies that the SUSP property is invariant with respect to these kinds of puzzle transformations. We call two puzzles \(P,P^{\prime}\) that are related in this way _isomorphic_, and use the notation \(P\cong P^{\prime}\) to denote this. The relation \(\cong\) is an equivalence relation, because permutations are invertible, and so it partitions the set of puzzles into equivalence classes.
This notion of isomorphism is naturally related to the same notion in graphs. For each \((s,k)\)-puzzle \(P\) we can define a colored, undirected graph \(G_{P}\). This graph consists of vertices that are partitioned into four sets of different colors: \(V=\{row_{r}\}_{r\in[s]}\sqcup\{col_{c}\}_{c\in[k]}\sqcup\{e_{i}\}_{i\in[3]}\sqcup\{v_{r,c}\}_{(r,c)\in[s]\times[k]}\). There are \(s+k+3+s\cdot k\) vertices in \(G_{P}\). The first three parts are vertices representing the rows and columns of \(P\), and the elements of [3], respectively, and the fourth part consists of vertices for each of the \(s\cdot k\) cells in \(P\). The edge relation of \(G_{P}\) is straightforward: Each vertex \(v_{r,c}\) is connected to three vertices corresponding to the row, column, and element that the cell indexed \((r,c)\) contains in \(P\). In particular, the three edges attached to \(v_{r,c}\) are \((v_{r,c},row_{r}),(v_{r,c},col_{c}),(v_{r,c},e_{P(r,c)})\). In total, \(G_{P}\) has \(3\cdot s\cdot k\) edges. Because the vertex sets for rows, columns, and elements are each uniquely colored and each cell of \(P\) is connected to vertices representing its row, column, and element, the automorphisms of \(G_{P}\) are in 1-1 correspondence to the automorphisms of \(P\) under permutations of rows, columns, and elements. This implies that for two \((s,k)\)-puzzles \(P,P^{\prime}\), if \(G_{P}\cong G_{P^{\prime}}\) then there exist permutations of the rows, columns, and elements of \(P\) which result in \(P^{\prime}\). Further by Lemma 8, if \(G_{P}\cong G_{P^{\prime}}\), then \(P\cong P^{\prime}\), and \(P\) is an SUSP iff \(P^{\prime}\) is an SUSP.
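A small Python sketch of the construction of \(G_{P}\) as a colored adjacency structure (in the implementation this graph is handed to Nauty for canonization; the representation below is only for illustration):

```
def puzzle_graph(P):
    s, k = len(P), len(P[0])
    # vertex -> color; four color classes: row, col, elt, cell
    colors = {("row", r): "row" for r in range(s)}
    colors.update({("col", c): "col" for c in range(k)})
    colors.update({("elt", e): "elt" for e in (1, 2, 3)})
    colors.update({("cell", r, c): "cell" for r in range(s) for c in range(k)})
    edges = []
    for r in range(s):
        for c in range(k):
            edges += [(("cell", r, c), ("row", r)),
                      (("cell", r, c), ("col", c)),
                      (("cell", r, c), ("elt", P[r][c]))]
    return colors, edges   # s + k + 3 + s*k vertices and 3*s*k edges
```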
### Symmetry-Pruned Tree Search
A natural way to search for strong USPs is based on breadth-first search and uses the fact that strong USPs are downward closed (Lemma 5): To find the largest possible width-\(k\) strong USP, (i) start with all possible first rows, i.e., the \(3^{k}\) \((1,k)\)-puzzles, (ii) attempt to extend the resulting puzzles with all possible rows keeping only the puzzles that are strong USPs and which are not isomorphic to the strong USPs that have been seen before to form the new search frontier, and (iii) repeat Step (ii) until the search frontier is empty.
To ensure the algorithm does not revisit isomorphic puzzles, we use canonical graph representations \([G_{p}]\) of the puzzle graphs \(G_{P}\). A canonical graph representation is a binary encoding of a graph with the property that for any two graphs \(G_{1},G_{2}\), \([G_{1}]=[G_{2}]\) iff \(G_{1}\cong G_{2}\) (c.f., e.g., [24]). As the search algorithm runs we
record the set \(I\) of canonical graph representations \([G_{P}]\) of each distinct puzzle \(P\) that has been added to the search frontier. Each time a puzzle \(P^{\prime}\) is considered for being added to the search frontier we first check whether its canonical graph representation \([G_{P^{\prime}}]\in I\); if it is, we do not add \(P^{\prime}\) to the frontier. The use of canonical representations of puzzles dramatically shrinks the search space by searching from \([P]\) rather than every \(P^{\prime}\cong P\) and by not allowing duplicates of \([P]\) to be enqueued. This algorithm SP-BFS is formalized in Algorithm 7.
```
Input: An integer \(k\geq 0\).
Output: The number \(b\), which is the size of the largest width-\(k\) strong USP.
1: function SP-BFS(\(k\))
2:   Let \(Q\) be an empty queue.
3:   Let \(I\) be an empty set.
4:   Let \(b=0\).
5:   enqueue(\(Q,\emptyset\)).
6:   while \(Q\) is not empty do
7:     \(P=\textsc{dequeue}(Q)\).
8:     for \(r\in[3]^{k}\backslash P\) do
9:       Let \(P^{\prime}=P\cup\{r\}\).
10:      if \(\textsc{Verify}(P^{\prime})\) and \([G_{P^{\prime}}]\not\in I\) then
11:        enqueue(\(Q,P^{\prime}\)).
12:        \(I=I\cup\{[G_{P^{\prime}}]\}\).
13:        \(b=|P^{\prime}|\).
14: return \(b\).
```
**Algorithm 7** : Symmetry-Pruned Breadth-First Search
We argue the correctness of this algorithm.
Lemma 9: _For \(k\in\mathbb{N}\), SP-BFS(\(k\)) returns the maximum integer \(s\) for which there exists an \((s,k)\)-SUSP._
Proof: Ignoring the pruning that \(I\) performs for a moment, it is routine to argue that SP-BFS behaves like a generic breadth-first search algorithm over the tree of all strong USPs. This is because of the downward-closure property of strong USPs (Lemma 5), which makes any strong USP \(P\) reachable from the trivial strong USP \(\emptyset\) using a series of row inclusions. SP-BFS(\(k\)) results in an exhaustive search of all strong USPs of width \(k\) and returns the maximum size \(b\) of such SUSPs.
We argue that, when the pruning based on \(I\) is taken into account, SP-BFS(\(k\)) enqueues exactly one element of each equivalence class of puzzles that are SUSPs. Then, as a consequence of Lemma 8, the algorithm must explore every equivalence class of width-\(k\) SUSPs. Hence, it explores an equivalence class with SUSPs of maximum size and subsequently returns that size, which is the expected output.
To complete the argument and show that the symmetry-pruned search covers the entire search space of equivalence classes, suppose, for the sake of contradiction, that there is some smallest \(s\) such that there is an \((s,k)\)-puzzle \(P\) that does not have its equivalence class \([P]\) searched. We know that \(s>1\), because the algorithm starts by considering all possible \((1,k)\)-puzzles. Let \(P^{\prime}\) be the \((s-1,k)\)-puzzle created from \(P\) by removing one of its rows \(r\); \(P^{\prime}\) has at least one row because \(s>1\). By hypothesis, the equivalence class of \([P^{\prime}]\) has been visited by SP-BFS because \(P^{\prime}\)'s size is \(s-1<s\). Consider \([P]\) and remove the row that corresponded to \(r\) to form \([P]^{\prime}\). It must be the case that \([P^{\prime}]\cong[P]^{\prime}\). This isomorphism extends to \([P]\) in that there must be a row \(r^{\prime}\) such that \(([P^{\prime}]\cup\{r^{\prime}\})\cong[P]\), where \(r^{\prime}\) replaces the row removed from \([P]\). Therefore, since \([P^{\prime}]\) is searched, the algorithm must consider all possible rows to extend by, including \(r^{\prime}\). This means that the equivalence class of \([P]\) is searched, contradicting our assumption. Therefore every equivalence class of SUSPs is searched by SP-BFS.
This approach reduces the size of the search space, improving both the running time of the search and the space required to keep track of the frontier puzzles. The worst case running time of SP-BFS is \(O(3^{k}\cdot\#EQUIV(k)\cdot(T_{\textsc{Verify}}(s_{k}+1,k)+T_{\textsc{Canonize}}(s_{k},k)))\), where \(\#EQUIV(k)\) is the number of equivalence classes of strong USPs of width \(k\), \(T_{\textsc{Verify}}(s_{k}+1,k)\) is the time to verify the maximum-size \((s_{k}+1,k)\)-puzzles examined by the algorithm, and \(T_{\textsc{Canonize}}(s_{k},k)\) is the time to compute the canonical graph representation of each puzzle \(P\) considered by the algorithm (assuming \(T_{\textsc{Verify}}\) and \(T_{\textsc{Canonize}}\) are monotone in their parameters).
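For the small widths considered here, a canonical representative of a puzzle's equivalence class can even be computed by brute force over the \(k!\cdot 3!\) column and element permutations of Lemma 8, with rows sorted; the following Python sketch is only a stand-in for the Nauty-based canonization used in the implementation.

```
from itertools import permutations

def canonical_form(P):
    # minimize the row-sorted puzzle over all column and element permutations
    k = len(P[0]) if P else 0
    best = None
    for col_perm in permutations(range(k)):
        for elt_perm in permutations((1, 2, 3)):
            relabel = {1: elt_perm[0], 2: elt_perm[1], 3: elt_perm[2]}
            Q = tuple(sorted(tuple(relabel[row[c]] for c in col_perm) for row in P))
            if best is None or Q < best:
                best = Q
    return best   # two puzzles are isomorphic iff their canonical forms are equal
```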
See Subsection 7.1 for the experimental results of running SP-BFS and a discussion of implementation issues.
## 5 Upper Bounds
Although the main focus of this research line is to construct sufficiently large strong USPs that would imply faster matrix multiplication algorithms, our techniques and approach can also be applied to search for tighter upper bounds on the size of strong USPs. We describe several SUSP-size upper bounds in this section.
\(\omega\) Bound. Prior work explicitly discusses bounds on the capacity of infinite families of USP (c.f., [11, Lemma 3.2, Theorem 3.3]). Since every SUSP is a USP, these bounds also apply to SUSPs and can be restated to apply to individual puzzles. The first bound, which we denote as the "\(\omega\) bound", results from (i) Lemma 1, which is monotone non-increasing for fixed \(k\), and (ii) the fact that \(\omega\geq 2\). To compute this bound we evaluate the inequality of Lemma 1 on increasingly large \(s\) until just before the consequence implies \(\omega<2\), which is in contradiction with \(\omega\geq 2\).
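A short Python sketch of how this bound can be evaluated from Lemma 1 (the cap on the range of \(m\) scanned is our own assumption; in practice the minimum occurs at small \(m\)):

```
from math import log, factorial, inf

def omega_from_susp(s, k, m_max=1000):
    # evaluate the bound of Lemma 1 for an (s, k)-strong USP
    best = inf
    for m in range(3, m_max + 1):
        val = 3 * log(m) / log(m - 1) - 3 * log(factorial(s)) / (s * k * log(m - 1))
        best = min(best, val)
    return best

def omega_size_bound(k):
    # the "omega bound": the largest s whose Lemma 1 value is still >= 2
    s = 1
    while omega_from_susp(s + 1, k) >= 2:
        s += 1
    return s
```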
Unique Pieces Bound. The second bound, which we denote as the "unique pieces bound", follows directly from Lemma 6. Since that lemma requires that each row of a (strong) USP have a unique ones, twos, and threes piece, the total number of rows in a strong USP cannot be more than \(2^{k}\).
USP Bound. The third bound, which we denote as the "USP bound", results from the proof of [11, Lemma 3.2]. Although not spelled out in that article, the proof relies on the following subclaim that directly bounds \(s\) as a function of \(k\).
Proposition 2: _Let \(P\) be a \((s,k)\)-USP, then_
\[s\leq\sum_{c_{1}=0}^{k}\sum_{c_{2}=0}^{k-c_{1}}\min\left(\binom{k}{c_{1}}, \binom{k}{c_{2}},\binom{k}{k-(c_{1}+c_{2})}\right)=O\left(k^{2}\cdot\left( \frac{3}{2^{2/3}}\right)^{k}\right).\]
Note that the USP bound is asymptotically tighter than the unique pieces bound as \(\frac{3}{2^{2/3}}\approx 1.8899<2\).
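The sum in Proposition 2 is straightforward to evaluate directly; a short Python sketch:

```
from math import comb

def usp_size_bound(k):
    # the "USP bound" of Proposition 2
    return sum(min(comb(k, c1), comb(k, c2), comb(k, k - (c1 + c2)))
               for c1 in range(k + 1) for c2 in range(k - c1 + 1))
```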
Clique Bound. The fourth bound, which we denote as the "clique bound", results from the fact that SUSPs are downward closed (Lemma 5). In particular, if \(P\) is an SUSP, then every \(P^{\prime}\subseteq P\) with \(2\) rows must also be an SUSP. Fix \(k\in\mathbb{N}\) and consider a graph \(G_{k}\) whose vertices correspond to the possible rows of a width-\(k\) puzzle, i.e., strings in \([3]^{k}\), and where there is an edge between \(r_{1},r_{2}\in[3]^{k}\) if \(\{r_{1},r_{2}\}\) is an SUSP. Observe that by downward closure, each \((s,k)\)-SUSP corresponds to a clique of size \(s\) in \(G_{k}\). This approach naturally generalizes from the Clique problem to the \(h\)-HypergraphClique problem where the graph \(G_{k}^{h}\) consists of the same \(3^{k}\) vertices as \(G_{k}=G_{k}^{2}\), but instead has the arity-\(h\) edges \(\{r_{1},r_{2},\ldots,r_{h}\}\) which are \((h,k)\)-SUSPs.
Proposition 3: _Let \(P\) be an \((s,k)\)-SUSP and \(2\leq h\leq s\). Then for_
\[G_{k}^{h}=\langle V=[3]^{k},E=\{P^{\prime}\subseteq V\mid P^{\prime}\text{ is a strong USP and }|P^{\prime}|=h\}\rangle,\]
\((G_{k}^{h},s)\in h\)_-HypergraphClique._
Therefore, the size of a maximum hypergraph clique in \(G_{k}^{h}\) is an upper bound on the size of width-\(k\) SUSPs. We use "clique bound" to denote the specific instantiation of this bound for \(h=2\).
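For very small widths the clique bound can be computed directly from this definition. The sketch below (ours; the bound reported later is instead computed by converting \(G_{k}\) into a mixed integer program solved with Gurobi) builds \(G_{k}\) from a brute-force pairwise strong-USP check that restates Definition 3 as we understand it, so it should be read as illustrative rather than as a drop-in replacement for the paper's verifier.

```python
from itertools import combinations, permutations, product

def is_strong_usp(rows):
    """Brute-force strong-USP test, feasible only for very small puzzles.

    Restatement of the defining condition (illustrative): for every triple of
    row permutations that are not all identical, some cell (r, c) must satisfy
    exactly two of  pi1(r)[c] == 1,  pi2(r)[c] == 2,  pi3(r)[c] == 3.
    """
    idx, k = range(len(rows)), len(rows[0])
    for p1, p2, p3 in product(permutations(idx), repeat=3):
        if p1 == p2 == p3:
            continue
        witnessed = any(
            (rows[p1[r]][c] == 1) + (rows[p2[r]][c] == 2) + (rows[p3[r]][c] == 3) == 2
            for r in idx for c in range(k))
        if not witnessed:
            return False
    return True

def clique_bound(k):
    """Maximum clique size of G_k, an upper bound on width-k SUSP size (tiny k only)."""
    verts = list(product((1, 2, 3), repeat=k))
    edges = {(u, v) for u, v in combinations(verts, 2) if is_strong_usp([u, v])}
    best = 1
    for size in range(2, len(verts) + 1):
        if any(all(pair in edges for pair in combinations(cand, 2))
               for cand in combinations(verts, size)):
            best = size
        else:
            break
    return best
```

For \(k=1\) and \(k=2\) this should reproduce the clique-bound entries of Table 2, provided the predicate matches the paper's definition of a strong USP.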
Exhaustive Bound. For the fifth bound, which we denote as the "exhaustive bound", we consider the results of Algorithm 7 when run in the domain of \(k\) where the full search space can be feasibly explored. Because these bounds are based on exhaustive search they are inherently tight.
Downward-Closure Bound. The final bound we consider follows from the downward-closure property of SUSPs.
Proposition 4: _Let \(P\) be an \((s,k)\)-SUSP with \(k>1\), then there exists an \((\lceil\frac{s}{3}\rceil,k-1)\)-SUSP._
Proof: Fix any \(c\in[k]\) and consider the \(c^{th}\) column of \(P\); then, by averaging, there must be an element \(e\in[3]\) that appears at least \(\lceil\frac{s}{3}\rceil\) times in that column. Let \(P^{\prime}\subset P\) be the subpuzzle of \(P\) whose rows have \(e\) in the \(c^{th}\) column. \(P^{\prime}\) is a strong USP, because \(P\) is a strong USP and strong USPs are downward closed (Lemma 5). Form \(P^{\prime\prime}\) by removing the \(c^{th}\) column of \(P^{\prime}\). \(P^{\prime\prime}\) is a strong USP, because \(P^{\prime}\) is a strong USP and the strong-USP property is invariant to addition or removal of constant columns. By construction, \(P^{\prime\prime}\) is a \((\lceil\frac{s}{3}\rceil,k-1)\)-SUSP.
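The reduction used in this proof is completely mechanical; a small sketch (ours) of a single step is given below, where the input is assumed to be a strong USP so that the output inherits strongness as argued above.

```python
from collections import Counter

def shrink(puzzle, c=0):
    """One step of Proposition 4: keep the rows carrying the majority symbol in
    column c, then delete that column, turning an (s, k)-SUSP into a
    (>= ceil(s/3), k-1)-SUSP."""
    e, _ = Counter(row[c] for row in puzzle).most_common(1)[0]
    return [row[:c] + row[c + 1:] for row in puzzle if row[c] == e]

# Illustration on an arbitrary width-3 puzzle (the guarantee only applies when
# the input really is a strong USP):
print(shrink([(1, 2, 3), (2, 1, 3), (3, 1, 2)]))
```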
This bound is not as independently applicable as the others, but it can lift upper bounds of \(s\leq u\) at \(k\) to \(s\leq 3u\) at \(k+1\).
See Subsection 7.1 for the results of evaluating the above bounds for small width and a discussion of issues involved in concretely calculating them.
## 6 Implementation
We implemented our verification algorithms, heuristics, and search algorithms, along with various utilities and appropriate data structures to represent underlying information such as puzzles, in C++. The source code for our implementation is available under an MIT License at [https://bitbucket.org/paraphase/matmult](https://bitbucket.org/paraphase/matmult).
We use a number of external libraries with subroutines that are key to the functioning of our algorithms. Our IP-based verifier and Clique bound calculator both use the commercial, closed-source mixed-integer programming solver Gurobi to solve the integer programs produced by our reductions [19]. Our SAT-based verifier uses, by default, the kissat-sc2021-sat solver from the 2021 SAT Competition by A. Biere, M. Fleury, and M. Heisinger [6, page 10]. Note that the conference version of this article used the MapleCOMSPS solver--see Subsection 7.3 for a discussion of solver benchmarks, comparisons, and choice. We implemented Algorithm 7 using our hybrid verifier, and the graph automorphism library Nauty [24] as a subroutine to perform the required graph canonization on \(G_{P}\). The original versions of our SP-BFS implementation targeted a high-performance computing cluster environment, because our brute force and dynamic programming implementations were not efficient enough. Subsequent improvements to our verification algorithms made this unnecessary. Despite this, our SP-BFS implementation is still in MPI and uses a MapReduce framework [27] to maintain a distributed search frontier.
Our code base also contains multiple implementations of depth-first-search-inspired algorithms for locating strong USPs. These algorithms use our hybrid verification implementation and puzzle symmetry pruning technique discussed in Section 4. For brevity and to keep this article focused on strong-USP verification, we elect not to discuss these algorithms and defer them to a subsequent article. That said, some of the concrete puzzles we found and report in the next section were generated by such algorithms. These puzzles once found were experimentally verified as strong USPs using the techniques discussed in detail in Section 3.
## 7 Experimental Results
Our experimental results come in several flavors for small-constant width \(k\): (i) constructive lower bounds on the maximum size of width-\(k\) strong USPs witnessed by found puzzles, (ii) upper bounds on the maximum size of width-\(k\) strong USPs, (iii) the number of SUSPs and SUSP equivalence classes for width \(k\), (iv) experimental data comparing the run times of our verification algorithms and distinguishing likelihood of our heuristics, and (v) a benchmark data set of SAT/UNSAT instances that we use to compare the effectiveness of competitive SAT solvers as subroutines for the SAT-based part of our verifier.
All of the results in this section were produced by running our algorithm implementations on the same Ubuntu 20.04 PC with a 3.00 GHz Intel Core i9-10980XE CPU and 128 GB of RAM.
The construction of [11] gives an infinite family of strong USPs that achieves \(\omega<2.48\) as \(k\) goes to infinity, which is stronger than our results are directly able to achieve.
New Upper Bounds. Table 2 summarizes the results of evaluating the bounds from Section 5 for puzzles of width \(k\leq 12\). The calculations were routine except for the clique bound, which required constructing \(G_{k}\), converting it into a mixed integer program, and solving that program using Gurobi [19]. This was feasible on our test system up to \(k=11\). We also experimented with calculating the upper bounds for the 3-HypergraphClique bound, but found it infeasible to compute for \(k\geq 5\) and so have omitted the results. The final row of the table contains the best upper bounds we achieved, including applying the downward-closure bound to lift adjacent bounds at \(k=6\) and \(k=12\). These upper bounds are stronger than those immediately implied by [11].
Observe that exhaustive search produced the best and tightest bounds, and that the clique bound is considerably stronger than the unique pieces, USP, and \(\omega\) bounds. The unique pieces bound appears to be stronger than the USP bound, but we know that this is an artifact of the small value of \(k\). As \(k\) increases, the USP bound will become tighter than the unique pieces bound. Based on the processing time we spent on \(k=6\), we conjecture that \(s=14\) is tight for \(k=6\) and that our lower bounds for \(k>6\) are not. Our results suggest there is considerable room for improvement in the construction of strong USPs, and that it is possible that there exist large puzzles for \(k=7,8,9\) that would beat [11]'s constructions and perhaps come close to the Coppersmith-Winograd refinements. That said, it seems that new insights into the SUSP search problem are required to proceed for \(k>6\).
Counting Strong USP. Table 3 shows the number of strong USPs and equivalence classes of SUSP exhaustively calculated using SP-BFS with and without symmetry pruning. Observe that the number of strong USPs is many orders of magnitude more than the number of equivalence classes of strong USPs, even for \((3,3)\)-SUSPs. Exhaustive search became infeasible even with puzzle symmetry pruning for \(k\geq 6\) as the memory usage of Algorithm 7 for storing the search frontier exceeds the 128GB available on our test system.

Figure 3: Representative maximal-size strong USPs found for width \(k=1,2,\ldots,6\).
### Algorithm Performance
To measure the performance of our verification algorithms and heuristics we ran them on 10,000 random puzzles at each point on a sweep through parameter space for widths \(k=5\ldots 12\) and sizes \(s=1\ldots 60\). We chose to test performance via random sampling because we do not have access to a large set of solved instances. This domain coincides with the frontier of our search space, and we tuned the parameters of the heuristics and algorithms in the hybrid algorithm to perform well in this domain. We did not deeply investigate performance characteristics outside of this domain. In Figures 4, 5, & 6 we plot results, for brevity, that are representative of the parameter space only for \(k\in\{6,9\}\).
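The sampling procedure for the benchmark is not spelled out in this excerpt; the sketch below (ours) draws a random \((s,k)\)-puzzle as \(s\) distinct rows chosen uniformly from \([3]^{k}\), which is one natural way to generate the instances described above.

```python
import random

def random_puzzle(s, k, rng=random):
    """Sample s distinct rows over {1, 2, 3} to form a random (s, k)-puzzle.

    Assumes rows are drawn uniformly without repetition; the sampling used in
    the paper's experiments may differ in detail.
    """
    assert s <= 3 ** k, "no (s, k)-puzzle with distinct rows exists"
    rows = set()
    while len(rows) < s:
        rows.add(tuple(rng.choice((1, 2, 3)) for _ in range(k)))
    return sorted(rows)
```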
Running Time. Figure 4 shows the average running times of our verification algorithms in seconds. The brute force and dynamic programming algorithms perform poorly except for very small size, \(s\leq 8\), and their curves loosely match the exponential-time bounds we expect. The plots for the two reduction-based algorithms (SAT and IP) behave similarly to each other. They are slower than brute force and dynamic programming for small values of \(s\), and their behavior for large \(s\) is quite a bit faster. We speculate that the former is due to the cost of constructing the reduced instance and overhead of the third party tools. Further observe that the SAT reduction handily beats the IP reduction on large size for \(k=6\), but as \(k\) increases, the gap decreases. We also note that across the settings of \(k\) the IP reduction has effectively the same running time and is independent of \(k\). This is likely because the size of the IP instance depends only on \(s\). The hybrid algorithm generally performs best or close to best at small values of \(s\) and is clearly faster for large values of \(s\). Notice that it matches the dynamic programming algorithm closely for small values of \(s\) and then diverges when the reduction-based algorithms and heuristics are activated at larger \(s\). Observe that the hybrid algorithm is effectively constant time for large \(s\), though the size for which this happens increases as a function of \(k\). We expect this is because the density of strong USPs decreases rapidly with \(s\), and that the randomly selected puzzles are likely far from satisfying Definition 3 and, hence, they are quickly rejected by the unique pieces heuristics. Further evidence of this is that the running time of the hybrid algorithm converges to the running time of the unique pieces heuristic for large \(k\).

| Bound | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(\omega\) | 3 | 7 | 15 | 31 | 62 | 120 | 230 | 438 | 831 | 1,575 | 2,890 | 5,637 |
| Unique | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 |
| USP | 3 | 6 | 12 | 24 | 45 | 87 | 168 | 312 | 597 | 1,140 | 2,112 | 4,023 |
| Clique | **1** | 3 | 5 | 9 | 17 | 30 | 55 | 105 | 186 | 348 | 654 | |
| Exhaustive | **1** | **2** | **3** | **5** | **8** | | | | | | | |
| Best | **1** | **2** | **3** | **5** | **8** | 24 | 55 | 105 | 186 | 348 | 654 | 1,962 |

Table 2: Upper bounds on the size of SUSPs for widths \(k\leq 12\). Bold font indicates the bound is tight, and blanks indicate the calculation for this puzzle width was infeasible.
Heuristic Effectiveness. Figure 5 shows the probability that each individual heuristic distinguishes a random puzzle in our benchmark. Observe that the distinguishing power of the downward closure heuristic for \(s^{\prime}=2\) and of the unique pieces heuristic coincide, demonstrating experimental consistency with Lemma 7. Further, and for the same reason, the downward closure heuristic for \(s^{\prime}=3\) has at least as high a distinguishing likelihood as the unique pieces heuristic. In the plots, these three heuristics achieve almost \(100\%\) probability of distinguishing random puzzles by size \(s=30\). The greedy heuristic performs less well than the others and gets substantially worse as \(k\) increases. We do not plot the running times of the heuristics here, but they behave as expected by the earlier analysis. As we noted earlier, unique pieces is linear time in the size of the puzzle and the fastest of the heuristics. Figure 4 shows how the running times of the hybrid algorithm and the unique pieces heuristic converge, as essentially all of the large random puzzles examined by the benchmark are verified as non-SUSPs by this heuristic.
Variation in Running Time. Finally, we look at the variation in the running times of the hybrid algorithm in Figure 6. For small \(s\), the running time distribution is far from a normal distribution: the average is far above the median and middle 50% of running times. This effect becomes even more pronounced as \(k\) increases. However, we find that as \(s\) increases, the median running time converges with the median running time of the unique pieces heuristic, and then for larger \(s\), the average running time converges as well. This is a consequence of the hybrid algorithm having to run the orders of magnitude slower reduction-based algorithms when the fast heuristics fail to resolve the instance. Although not plotted here, we found that the range of the distribution of running times for the SAT-based verifier was larger than for the IP-based verifier, even though the IP-based verifier was slower on average.

| \(s\) | \(k=1\) | \(k=2\) | \(k=3\) | \(k=4\) | \(k=5\) | \(k=6\) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | **1** / 3 | **2** / 9 | **3** / 27 | **4** / 81 | **5** / 243 | **7** / 729 |
| 2 | | **2** / 24 | **9** / 408 | **33** / 4,848 | **91** / 50,160 | **229** / 486,024 |
| 3 | | | **9** / 1,800 | **240** / 182,304 | **2,429** / 8,361,000 | **16,971** / 291,347,280 |
| 4 | | | | **728** / 2,445,120 | **59,149** / 992,377,400 | **1,611,648** / ? |
| 5 | | | | **190** / 3,248,640 | **707,029** / ? | ? / ? |
| 6 | | | | | **2,337,715** / ? | ? / ? |
| 7 | | | | | **1,359,649** / ? | ? / ? |
| 8 | | | | | **89,196** / ? | ? / ? |
| 9 | | | | | | ? / ? |

Table 3: Number of equivalence classes (bold face, left) versus total number of encoded SUSPs (normal face, right) by \((s,k)\)-puzzle dimensions. Computed using Algorithm 7. Empty cells indicate that the number of SUSPs and equivalence classes is zero. ?'s indicate unknown values that were infeasible to compute.
Overall, our hybrid verification algorithm performs reasonably well in practice on random instances, despite reductions through NP-complete problems.
### Choice of SAT Solver
In the conference version of this article we examined only one SAT solver for use in our implementation, MapleCOMSPS, a conflict-driven solver that uses a learning rate branching heuristic, and that was a top performer at the 2016 SAT Competition [7, 23, 5]. In this article we create a set of benchmark satisfiability instances, using the SUSP verification reduction on a variety of puzzles (recall Subsection 3.4), and examine the performance of 35 solvers submitted to the main track of the 2021 SAT Competition [6].

Figure 4: Log plots of the average running times for verifying 10,000 random \((s,k)\)-puzzles for each \(s\in[50],k\in\{6,9\}\). The plots describe the behavior of five verification algorithms: brute force (BF), dynamic programming (DP), reduction to satisfiability (SAT), reduction to integer programming (IP), and our hybrid algorithm (Hybrid). The running time of the unique pieces heuristic is also included.
We select benchmark instances consisting of \((s,k)\)-puzzle with sizes from the set
\[\{(2,2),(3,3),(5,4),(8,5),(14,6),(21,7),(30,8),(42,9)\}.\]
We choose these sizes because we want positive and negative instances, and these sizes represent the largest strong USPs of each width we have been able to locate through search. For each size we created ten puzzles that are strong USPs and ten puzzles that are not. To create the ten non-SUSPs we randomly generated puzzles of that size and verified they were not strong USPs. To create the ten strong USPs for each size we used the results of our search algorithms. Then we ran all of the puzzles through our SAT reduction to create .dimacs files for each instance. Note that the SUSPs correspond to UNSAT instances and non-SUSPs correspond to SAT instances. In total there are 160 instances in this benchmark. We then ran each of the 35 solvers on each of the 160 instance files and checked the output of each run against the expected result. For each trial, we record the user CPU time reported by the Linux time command, or a timeout if the program runs more than 5000 seconds without halting (mimicking the rules of the real SAT competition). For comparison, we also run the MapleCOMSPS solver (from the earlier version of this article), our MIP-based verifier (recall Subsection 3.5) and our final hybrid verification algorithm on the same set of benchmark puzzles.

Figure 5: Plots of the likelihood that each of the heuristics produces a definitive result on 10,000 random \((s,k)\)-puzzles for each size \(s\in[50]\) and width \(k\in\{6,9\}\). Here “row pairs” is HeuristicDownwardClosed\((P,2)\) and “row triples” is HeuristicDownwardClosed\((P,3)\). The row pairs points are plotted, but are hard to see, because the unique pieces points coincide with them.
To compare the results of each solver we calculate the maximum time to complete each instance across all of the runs, which is \(5000\) seconds if a run timed out, and then divide by that maximum time to normalize all of the running times to the interval \([0,1]\). We calculate a benchmark score for each solver by summing their relative running times across all instances. Table 4 contains the benchmark scores for each solver.
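In code, this scoring rule is a column-wise normalization followed by a row sum; a minimal sketch (ours):

```python
import numpy as np

def benchmark_scores(times, timeout=5000.0):
    """Benchmark scores from a (num_solvers x num_instances) matrix of CPU times.

    Timed-out runs are recorded as `timeout`; each instance (column) is divided
    by its maximum completion time, so every relative time lies in [0, 1], and a
    solver's score is the sum of its relative times (lower is better).
    """
    times = np.minimum(np.asarray(times, dtype=float), timeout)
    return (times / times.max(axis=0)).sum(axis=1)
```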
MapleCOMSPS, the solver we used in the conference version of this article, performs similarly to the best scoring solvers from the \(2021\) competition. The recorded timeouts across all solvers come almost exclusively from the UNSAT instances derived from \((30,8)\)-SUSPs and \((42,9)\)-SUSPs. The Gurobi-based verifier performs substantially worse than the best performing satisfiability solvers on SAT instances (non-SUSPs), but dramatically better on UNSAT instances (SUSPs).
Figure 7 shows the performance of the Gurobi-based verifier against the five solvers with the best SAT scores. In this plot the instance completion times for each solver are sorted in increasing order, so that curves further to the left are better. If this were not a log-plot, the area to the left of the curve would be proportional to the benchmark scores from Table 4. Observe that for SAT instances, the SAT solvers, including MapleCOMSPS, follow similar trajectories. Gurobi performs an order of magnitude worse across all SAT instances. The hybrid algorithm, although plotted, is not visible because of how effective the heuristics are at identifying random SAT (non-SUSP) instances. For UNSAT instances, the situation is different. Gurobi performs relatively more slowly for small, easier instances, but substantially better than the SAT solvers for larger, harder instances. The performance of the solvers on easier UNSAT instances is more varied than the corresponding case for SAT instances, but this does not translate into much of a difference in benchmark score because the magnitude of the relative completion time is low.

Figure 6: Log box plots of the distribution of the running times of the hybrid verification algorithm on 10,000 random \((s,k)\)-puzzles for each \(s\in[50],k\in\{6,9\}\). The blue circles denote the average running times of the hybrid algorithm. The dark blue blocks indicate the median times. The thick vertical lines indicate the middle \(50\%\) of times, and the thin vertical lines indicate the full range of running times at each \(s\).
For UNSAT instances, the benchmark score is dominated by the number of timeouts, each of which effectively adds one to the score. Indeed, the plots for the SAT solvers cut off between instance numbers 60 and 70, because the remaining instances cause timeouts. Finally, notice that the hybrid algorithm outperforms the others for small UNSAT instances: these are instances of the sort where the brute force and bi-directional search algorithms are applied. For larger instances the hybrid algorithm tracks an order of magnitude worse than the Gurobi-based verifier. This is because our algorithm is tuned to encounter many more SAT instances (non-SUSPs) than UNSAT instances (SUSPs). Further, because the one-sided heuristics rule out SAT instances quickly in practice, on UNSAT instances the hybrid algorithm runs these heuristics first, but then has to fall back on the Gurobi-based verifier, causing some overhead.
Ultimately, the results of these benchmarking experiments suggest that there is not a substantial difference between using the 2016 MapleCOMSPS and the best solvers from the 2021 competition. Even so, we choose kissat-sc2021-sat as the default solver in our implementation, because it performed the best on our benchmark of SAT instances. Using our current approach, Gurobi is essential to the feasible verification of SUSPs.
The benchmark instances and puzzles, and the entirety of the raw timing data can be found in our repository3.
Footnote 3: [https://bitbucket.org/paraphase/matmult/src/main/data_set/](https://bitbucket.org/paraphase/matmult/src/main/data_set/)
## 8 Conclusions
We initiated the first study of the verification of strong USPs and developed practical software for both verifying and searching for them. We give tight results on the maximum size of width-\(k\) strong USPs for \(k\leq 5\) and improved upper and lower bounds on maximum strong-USP size for \(k\leq 12\). We prove a number of properties of strong USPs related to their verification and search. We also produce a new set of benchmark instances for SAT solvers.
Although our results do not produce a new upper bound on the running time of matrix multiplication, they demonstrate there is promise in this approach. There are a number of open questions. Is strong-USP verification coNP-complete? What is the maximum strong-USP capacity? Is there a way to bridge the apparent gap between the values of \(\omega\) implied by single SUSPs and the values implied by infinite families of SUSPs? What are tight bounds on maximum-size strong USPs for \(k\geq 6\) and do these bounds lead to asymptotically faster algorithms for matrix multiplication?
The main bottleneck in our work is the size of the search space--new insights seem to be required to substantially reduce it. Are there subclasses of strong USPs that can be more effectively searched? Are there search strategies that would be more effective on this space?
## Acknowledgments
The authors thank the anonymous reviewers for their detailed and thoughtful suggestions for improving this work.
The second and third authors thank Union College for the Undergraduate Summer Research Fellowships funding their work. The first author thanks the many undergraduate students that have contributed in some form to this project over the years, including: Jonathan Kimber, Akriti Dhasmana, Jingyu Yao, Kyle Doney, Quoc An, Harper Lyon, Zachary Dubinsky, Talha Mushtaq, Jing Chin, Diep Vu, Hung Duong, Vu Le, Siddhant Deka, Baibhav Barwal, Aavasna Rupakheti.
|
2309.12246 | Bistable boundary conditions implying cusps | We consider generic families of gradient-like dynamical systems with a
parameter space $P$ which is a 2-dimensional simply connected domain. We prove
that if over the boundary of $P$ there is a S or Z shaped bifurcation graph
containing two opposing fold bifurcation points while over the rest of the
boundary there are no other bifurcation points then there is an odd number of
cusps in the interior of $P$. | David A Rand, Meritxell Saez | 2023-09-21T16:46:00Z | http://arxiv.org/abs/2309.12246v1 | # Bistable boundary conditions implying cusps
###### Abstract
We consider generic families of gradient-like dynamical systems with a parameter space \(P\) which is a 2-dimensional simply connected domain. We prove that if over the boundary of \(P\) there is an S or Z shaped bifurcation graph containing two opposing fold bifurcation points, while over the rest of the boundary there are no other bifurcation points, then there is an odd number of cusps in the interior of \(P\).
One of the most ubiquitous observations in applied dynamical systems and many areas of application is the S (or Z) shaped bifurcation graph of the sort shown in red and purple in Fig. 1A which shows how the bifurcating restpoints vary with a parameter. Such a 1-dimensional bifurcation graph can be found in almost any discussion of bistability and is often discussed in a context where there is more than one control parameter. When the parameter space is a 2-dimensional simply connected domain \(P\) it is often the case that over its boundary there is such a S or Z shaped bifurcation curve while over the rest of the boundary there is just a single equilibrium point. It has been assumed (e.g. in [1] and [2]) that under reasonable and generic conditions there must then be at least one cusp in \(P\). In fact, this was a key point of contention during the controversy about catastrophe theory in the 70s when it was claimed in [3] that it is not true even under any reasonable dynamical hypotheses. It is therefore remarkable that this claim has not been clarified except in the special case where the phase space is 1-dimensional and the system is gradient [4]. We prove that under widely applicable generic conditions the result is true without any conditions on the finite dimensionality of the phase space or on the number of equilibria present. It is also not necessary to assume the system is gradient.
A key point is that although well-known local bifurcation results imply that for gradient-like systems (precisely defined below) if there is a codimension-2 bifurcation
point in \(P\) then it must be a cusp point, this still leaves the task of showing that there must be such a point. It is necessary to provide an extension of the local results (i.e. about germs) to a global result (i.e. about systems). Catastrophe theory and local bifurcation theory provide many powerful results which are critical for applications but many other applications need such an extension. The ideas needed to prove this result have much more general utility and we will return to them in a later paper. Key amongst these are the fold approximating curves (defined in Appendix 2) that we construct and the use of certain bundles over curves in the catastrophe manifold whose fibres are dynamical objects such as center manifolds.
## 1 Main result
We consider gradient-like parameterised families of dynamical systems. Such a family consists of smooth dynamical systems (flows) depending smoothly on parameters which vary in a region \(P\) of \(\mathbb{R}^{c}\) with a piecewise smooth boundary \(\partial P\). These families are of the form \(\dot{x}=X_{\theta}(x)=X(x,\theta)\) where \(x\in M\) and \(\theta\in P\). The gradient-like condition is just that the only non-wandering points of the system are equilibria. The use of the term gradient-like is justified by the fact that when they are structurally stable these systems are equilibrium-only Morse-Smale [5] and therefore they admit smooth potential functions [6, 7]. They are not necessarily gradient systems but away from the equilibria they behave like them and some local surgery of the equilibria turns them into gradient systems [8].

Figure 1: A. Over the boundary \(\partial P\) of the parameter space there are no bifurcation points except the two folds in the S/Z curve over \(\partial P\) (red and purple curve). The folds are opposed in the sense defined in the text, a concept that formalises the notion of an S/Z curve. Theorem 1 asserts that in this case there are an odd number of cusps in \(P\). The figure shows the simplest case, where there is one. The coloured folded surface is the catastrophe manifold for this example. B. This shows diagrammatically one of the constructions used in the proof of the main theorem, i.e. how the curve \(\Gamma_{A}\) (red) is closed using the curve \(\Gamma\) (purple) to obtain \(\Gamma_{A}^{\prime}\).
We assume that the phase space \(M\) is an \(n\)-dimensional disk (i.e. diffeomorphic to \(\{x\in\mathbb{R}^{n}:||x||<1\}\)) and that the flow is always inwardly transverse to its smooth, topologically spherical boundary \(\partial M\). We call such dynamical systems _compact_. Henceforth it will be assumed that all our systems are compact.
In generic 2-parameter families of gradient-like dynamical systems the bifurcation set consists of a finite number of curves \(C\) each of which is smooth except at a finite number of cusp points [9]. The points where the curve is smooth are called _fold points_. These points \(\theta\in P\) are characterised by the following (e.g. [9], Chap. 1, Sect. 3 and Chap. 2, Sect. 5.7): There is an invariant 1-dimensional smooth center manifold \(W^{c}(x)\) through the bifurcating restpoint \(x\) in phase space and the system on this submanifold may be transformed into the family
\[\dot{x}=\pm x^{2}+a(\theta)x^{3}+\theta_{1} \tag{1}\]
via a smooth change of coordinates with the fold point at \(\theta_{1}=0\). From equation (1), at a fold point there is a definite direction of flow on the center manifold and this induces an orientation on the center manifold and its tangent space which we call the _fold orientation_.
At a cusp point there is also a 1-dimensional smooth center manifold \(W^{c}(x)\) through the bifurcating restpoint \(x\) and the system may be similarly transformed to
\[\dot{x}=\pm x^{3}+a(\theta)x^{5}+\theta_{1}x+\theta_{2} \tag{2}\]
with the cusp point at \(\theta_{1}=0,\theta_{2}=0\). Cusps come in two forms: _standard_ and _dual_. At a standard cusp point two attractors collide with a single saddle while at a dual one two saddles collide with an attractor. These two cases correspond to the choice of \(+\) or \(-\) in equation (2): \(-\) for standard and \(+\) for dual.
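A quick numerical illustration (ours) of the standard cusp: dropping the higher-order \(a(\theta)x^{5}\) term of equation (2) and counting the real roots of \(-x^{3}+\theta_{1}x+\theta_{2}=0\) over a grid of parameters shows one equilibrium outside, and three equilibria (two attractors and a saddle) inside, the region \(4\theta_{1}^{3}>27\theta_{2}^{2}\), whose boundary consists of the two fold curves meeting at the cusp point \((0,0)\).

```python
import numpy as np

def num_equilibria(theta1, theta2, tol=1e-9):
    """Number of real equilibria of x' = -x^3 + theta1*x + theta2
    (standard cusp normal form with the higher-order term neglected)."""
    roots = np.roots([-1.0, 0.0, theta1, theta2])
    return int(np.sum(np.abs(roots.imag) < tol))

for theta1 in (-0.5, 0.5):
    counts = [num_equilibria(theta1, t2) for t2 in np.linspace(-0.3, 0.3, 13)]
    print(theta1, counts)  # three equilibria only appear for theta1 > 0 and small |theta2|
```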
For a generic family the systems \(X_{\theta}\) corresponding to parameters \(\theta\) that are not in the bifurcation set satisfy the Morse-Smale (MS) conditions [10, 6, 11] and the network of attractors and index 1 saddles (i.e. those with a 1-dimensional unstable manifold) has a nicely characterised structure ([10]). In particular, for each such saddle \(x\) the unstable manifold \(W^{u}(x)\) links the saddle to either one or two attractors. The first case is not relevant to the results we pursue and so we will always assume that each such saddle is linked to two attractors.
An important part of our analysis is a study of the structure of the _catastrophe manifold_\(\mathcal{M}\) of the parameterised family \(X_{\theta}\) which is defined by
\[\mathcal{M}=\{(x,\theta):x\text{ is a restpoint of }X_{\theta}\}\subset M\times P\]
and the associated map \(\chi:\mathcal{M}\to\mathbb{R}^{c}\) defined by the projection \(\mathbf{x}=(x,\theta)\mapsto\theta\). Generically, (see [9], I Sect. 1.4), \(\mathcal{M}\) is a \(c\)-dimensional submanifold of \(\mathbb{R}^{n}\times\mathbb{R}^{c}\) and the subset \(\mathcal{S}_{\chi}\) of singularities of \(\chi\) (i.e. the set of points \(\mathbf{x}\in\mathcal{M}\) where the derivative
of \(\chi\) does not have maximal rank) is such that its image \(\mathcal{B}_{\chi}=\chi(\mathcal{S}_{\chi})\) is the set of local bifurcation points in \(P\). An example is shown in Fig. 1A. In a generic family, the set of points \((x,\theta)\) where \(x\) is non-hyperbolic consists of 1-dimensional submanifolds of \(\mathcal{M}\) and equals \(\mathcal{S}_{\chi}\). Since \(\mathcal{M}\) is a surface, \(\mathcal{S}_{\chi}\) consists of disjoint circles and open curves. We call these _fold circles_ and _open fold curves_ in \(\mathcal{M}\) respectively and we call the images under \(\chi\)_bifurcation curves_ in \(P\). If \(\mathbf{x}=(x,\theta)\) in \(\mathcal{M}\) is a fold or a cusp then since it causes no significant confusion we respectively call both \(\mathbf{x}\) and \(x\) fold or cusp points.
Center manifolds will play a key role in our considerations. Our use of the term _center manifold_ will be a little more general than usual as normally it is discussed when the system is at a bifurcation and we will want to use it away from bifurcations when it can be justified e.g. at saddle points. For example, we want to be able to associate a center manifold to an attractor that is close to undergoing a fold bifurcation and we note that an index 1 saddle has a center manifold and it agrees with its unstable manifold. Also we need to consider the smoothness of the variation in the center manifold as parameters are changed. Details are given in Appendix 1.
Suppose that the parameter space is the square \(P=\{(\theta_{1},\theta_{2}):|\theta_{i}|\leq 1\}\) but with the boundary \(\partial P\) smoothed in a very small subset of each corner and consider the subset \(\partial P_{0}\) where \(\theta_{1}=1\). We consider a part of the catastrophe manifold sitting over the segment \(\partial P_{0}\) and assume that this is a smooth curve \(\mathcal{M}_{S/Z}\) with just two fold points on it that are _opposing_ by which we mean:
1. \(\mathcal{M}_{S/Z}\) contains just two fold points \(\mathbf{x}_{i}\), \(i=1,2\) and \(\mathcal{M}_{S/Z}\setminus\{\mathbf{x}_{1},\mathbf{x}_{2}\}\) has three connected components two of which consist of attractors and the other consisting of index 1 saddles. We call the latter the _saddle curve_.
2. the two fold points are _opposed_ in the following sense. Put an orientation on the 1-dimensional center manifold of one of the saddles in the saddle curve and extend this orientation continuously to all the points \(\mathbf{x}\) in the saddle curve. Then _the folds are opposed_ if the fold orientation of one of the fold points \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) agrees with that of the saddles close to it while the other disagrees with those close to it. Clearly, this does not depend on the choice of the orientation of the center manifolds.
Finally, we suppose that the boundary of \(P\) contains no other bifurcation points.
**Theorem 1**: _Under the above conditions and assuming genericity, there are an odd number of cusps in \(P\)._
**Notes.** 1. It is not necessary to assume a bound on the total number of restpoints. 2. Although it is assumed that the two fold points \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are the only bifurcations near the boundary, other bifurcations can be allowed away from the boundary. 3. There are examples of systems satisfying the hypotheses of the theorem with any positive odd number of cusps. These are provided by the \(A_{2k}\) catastrophes [12]. 4. For problems involving function optimisation just apply the theorem to the gradient system of the function with respect to some Riemannian metric. 5. The condition in (i) above
ensures that at least one of the cusps is a standard cusp. If instead one assumes that of the three connected components two consist of index 1 saddles and the other consists of attractors then the conclusion of the theorem holds and at least one of the cusps is a dual cusp.
**History.** Questions about when the hypotheses (or a subset of them) imply a similar result as Theorem 1 were at the heart of the controversy surrounding catastrophe theory in the 70s. For example, this is a key point of Smale's argument in [13] as well as [3]. Zeeman told one of us that he has heard that it had been proved by another person but we can find no record of this and Zeeman did not mention it in his 1983 paper [14] discussing Smale's criticism. Poston in [1] cites a paper by Zeeman apparently proving something similar to our result but we can find no record of this and no mention of it in Zeeman's papers [15]. Our interest was awakened when tackling some related problems that arose in a study of cell differentiation in the early embryo [16, 17, 18].
**Proof of Theorem 1.** Let \({\cal M}_{0}\) be the connected component of \({\cal M}\) that contains \({\cal M}_{S/Z}\) and \(\chi_{0}=\chi|_{{\cal M}_{0}}\).
Since the two folds \({\bf x}_{i}=(x_{i},\theta_{i})\), \(i=1,2\), in \({\cal M}_{S/Z}\) are generic there is a fold curve crossing \(\partial P_{0}\) transversally at the first fold point \(\theta_{1}\). This curve must leave \(P\) and therefore must do this at the second fold point \(\theta_{2}\) as there is no other fold point on the boundary. Let \(C\) denote the lift via \(\chi\) of this fold curve to the catastrophe manifold \({\cal M}\). This is a smooth curve.
Then, \({\cal S}_{0}={\cal S}_{\chi}\cap{\cal M}_{0}\) consists of \(C\) and possibly some disjoint fold circles. There are no other open fold curves in \({\cal M}_{0}\) as otherwise there would be other fold points on \(\partial P\).
Consider the surface \({\cal M}_{\varepsilon}^{*}\) given by \(\mu_{1}=x^{3}-x\), \(0\leq\mu_{2}<\varepsilon\) in \((x,\mu_{1},\mu_{2})\)-space. Let \(\chi^{*}\) denote the restriction to \({\cal M}_{\varepsilon}^{*}\) of the projection \((x,\mu_{1},\mu_{2})\to(\mu_{1},\mu_{2})\). In this argument we will repeatedly use the fact that there is a neighbourhood \(N\) of \({\cal M}_{S/Z}\) in \({\cal M}\) and diffeomorphisms \(\varphi\) of \(N\) into \({\cal M}_{\varepsilon}^{*}\) and \(\eta\) of \(P\) into \(\mu_{1},\mu_{2}\) space such that \(\chi^{*}\circ\varphi=\eta\circ\chi\). We call this the local triviality of \(\chi\) near \({\cal M}_{S/Z}\) and it is explained further in Appendix 3.
Clearly, there is an annular neighbourhood \(N_{\partial P}\) of \(\chi_{0}^{-1}(\partial P)\) in \({\cal M}_{0}\) such that the only fold points in \(N_{\partial P}\) are two connected open fold segments \(S_{1}\) and \(S_{2}\) in \(C\cap N_{\partial P}\). These arcs separate \(N_{\partial P}\) into a component \(N_{\partial P,S}\) consisting of saddles and a component \(N_{\partial P,A}\) consisting of attractors.
If \({\bf y}_{1}\) and \({\bf y}_{2}\) are two points in \({\cal M}_{0}\) then \({\bf y}_{1}\) and \({\bf y}_{2}\) can be connected by a smooth arc that enters \(N_{\partial P}\) and is transversal to \(S_{1}\) and \(S_{2}\). The parity of the number of intersections is independent of such an arc. Consider the equivalence relation: \({\bf y}_{1}\sim{\bf y}_{2}\) iff such curves connecting them have even parity and note that the equivalence classes are connected open sets and therefore there are just two of them. We deduce that \(C\) separates \({\cal M}_{0}\) into two connected components. We label these two components \({\cal M}_{S}^{\prime}\) and \({\cal M}_{A}^{\prime}\) according to whether the arcs first enter \(N_{\partial P,S}\) or \(N_{\partial P,A}\). Now consider the subset \({\cal M}_{S}\) (resp. \({\cal M}_{A}\)) of points in \({\cal M}_{S}^{\prime}\) (resp. \({\cal M}_{A}^{\prime}\)) that can be connected to \(N_{\partial P}\) as above
by an arc that does not contain any fold points. Then all points in the subset have the same type and hence are all saddles (in \({\cal M}_{S}\)) or all attractors (in \({\cal M}_{A}\)). It follows that each fold circle in \({\cal M}^{\prime}_{S}\) (resp. \({\cal M}^{\prime}_{A}\)) separates \({\cal M}^{\prime}_{S}\) (resp. \({\cal M}^{\prime}_{A}\) ) into two components one of which contains \({\cal M}_{S}\) (resp. \({\cal M}_{A}\)). The other component is called the interior of the fold circle.
Since there are no singularities of \(\chi\) in \({\cal M}_{S}\) and \({\cal M}_{A}\), there can be no handles in either \({\cal M}_{S}\) or \({\cal M}_{A}\) and therefore, by the classification of surfaces (e.g. [19]) they are homeomorphic to the 2-sphere with a number of (closed) disks removed. The number is 1 plus the number of fold circles in the component since this is the number of boundary components.
Now consider the circle \(\bar{C}\) made up of \(C\) and the saddle curve. This is smooth everywhere except the two fold points on \({\cal M}_{S/Z}\) where there is a corner. Every point \({\bf x}=(x,\theta)\) on this circle has a well-defined center manifold and we let \(\ell({\bf x})\) denote the tangent to this manifold at \(x\). Consider the line bundle \({\cal B}_{\bar{C}}\) over \(\bar{C}\) whose fibres are the \(\ell({\bf x})\). The key step in our proof is to show that \({\cal B}_{\bar{C}}\) is trivial and hence a cylinder.
This will prove the theorem for the following reason. As discussed above, at a fold point, the center manifold and its tangent space \(\ell({\bf x})\) have a well defined orientation and moreover, in the neighbourhood of a fold point \({\bf x}\in C\), \(\ell({\bf x})\) varies smoothly with \({\bf x}\). Using this smoothness we have that the fold direction is locally consistent on any segment of a fold curve in \(P\) which contains no cusp points. On the other hand, it switches at cusp points as can be verified by looking at the normal form (2). Thus, we have that the number of cusp points on a generic fold curve \(C\) equals the number of such switches in \(C\). Now it is easy to see that since the number of such switches is generically finite then, if \({\cal B}_{\bar{C}}\) is a cylinder, there must be an even number of switches.
Put the fold orientation on the fold point at \({\bf x}_{1}\) and continue this along the saddle curve. Then there will be a switch in orientation at the other fold point \({\bf x}_{2}\) because the two fold points \({\bf x}_{1}\) and \({\bf x}_{2}\) are opposed by assumption. Since the total number of switches on \({\cal B}_{\bar{C}}\) must be even then the number on \(C\setminus\{{\bf x}_{1},{\bf x}_{2}\}\) is odd. Thus there are an odd number of cusps on \(C\) and hence also in \(P\).
The rest of the proof is concerned with showing that \({\cal B}_{\bar{C}}\) is a cylinder. Our proof of this depends crucially on the existence of fold approximating curves and their definition which can be found in Appendix 2. Consider a pair of such approximating curves \(\gamma_{S}\) and \(\gamma_{A}\) of \(B_{C}=\chi(C)\) in \(P\) with \(\gamma_{S}\) (resp. \(\gamma_{A}\)) having a lift \(\Gamma_{S}\) (resp. \(\Gamma_{A}\)) to \({\cal M}\) that is contained in \({\cal M}_{S}\) (resp. \({\cal M}_{A}\)). If the approximations are close enough to \(C\) the lifts will not intersect any fold curves in \({\cal M}\).
Using the discussion in Appendix 2 we construct a closed curve \(\gamma^{\prime}_{A}\) in \(P\) and a lift of it, \(\Gamma^{\prime}_{A}\), in \({\cal M}\), that are arbitrarily \({\cal C}^{r}\)-close to \(\chi(\bar{C})\) and \(\bar{C}\) respectively in the following way: We choose two points \({\bf x}^{\prime}_{1}\) and \({\bf x}^{\prime}_{2}\) in \(\Gamma_{A}\) that are close to the fold points \({\bf x}_{1}\) and \({\bf x}_{2}\) respectively. Clearly, if these are close enough we can connect them by a \({\cal C}^{r}\) curve \(\Gamma\) in \({\cal M}\) that joins \({\bf x}^{\prime}_{1}\) to \({\bf x}_{1}\), passes along the saddle curve to \({\bf x}_{2}\) and then joins this to \({\bf x}^{\prime}_{2}\) (see Fig. 1B). Moreover, we can ensure that \(\chi\) is injective on \(\Gamma\) and that \(\chi(\Gamma)\) does not intersect \(\chi(\Gamma_{A})\) except at its endpoints. Then \(\Gamma^{\prime}_{A}\) is made up of that part of \(\Gamma_{A}\) between
\({\bf x}^{\prime}_{1}=(x^{\prime}_{1},\theta_{1})\) and \({\bf x}^{\prime}_{2}=(x^{\prime}_{2},\theta_{2})\) and \(\Gamma\), while \(\gamma^{\prime}_{A}\) is made up of that part of \(\gamma_{A}\) between \(\theta_{1}\) and \(\theta_{2}\) and \(\gamma=\chi(\Gamma)\).
Traverse \(\gamma^{\prime}_{A}\) starting at a point \(\theta_{S}\) in \(\gamma\) and ending there and consider the lift via \(\chi\) which starts at the point in \(\Gamma\) that projects to \(\theta_{S}\). The lifted points are in \(\Gamma^{\prime}_{A}\). Thus when \(\theta\) returns to \(\theta_{S}\), since \(\chi(\Gamma)\) does not intersect \(\chi(\Gamma_{A})\) except at its endpoints and since \(\chi|_{\Gamma}\) is injective, it follows that the final lifted point is the start point. Hence, \(\Gamma^{\prime}_{A}\) is a closed curve and if we start from any point on \(\gamma^{\prime}_{A}\) that is in \(\gamma_{A}\) and fully traverse \(\gamma^{\prime}_{A}\) the lift \((x,\theta)\) returns to its start point. Consequently, the attractor \(x\) at the start and end are equal. It follows that if we put an orientation on the center manifold of \(x\) and follow it as \(\theta\) and the lift traverse \(\gamma^{\prime}_{A}\) and \(\Gamma^{\prime}_{A}\) respectively it returns to the same orientation. This implies that \({\cal B}_{\Gamma^{\prime}_{A}}\) is a cylinder. But \(\gamma^{\prime}_{A}\) and \(\Gamma^{\prime}_{A}\) can be taken arbitrarily \({\cal C}^{r}\)-close to \(\chi(\bar{C})\) and \(\bar{C}\) and therefore by Lemma 1 it follows that \({\cal B}_{\bar{C}}\) is a cylinder. \(\blacksquare\)
## Appendix 1: Center manifolds and the center manifold bundle.
### Center manifolds and smoothness
For relevant information about center manifolds see [20] Sect. 5A. In particular note that by Theorem 5A.3 of [20], if \(W^{c}\) is a center manifold through a restpoint \(x\) and \(W\) is a backward invariant set containing \(x\) then, near \(x\), \(W\) is contained in \(W^{c}\). Thus, for example, if the unstable manifold of a saddle is asymptotic to a fold point, then close to the fold point it is in the center manifold. Center manifolds are not necessarily unique but their tangent space is. We will use this fact below.
We now consider what we call _pseudo-hyperbolic_ restpoints \(x\). At such restpoints \(x\) there is \(a>b>0\) such that the Jacobian of the vector field at \(x\) has eigenvalues \(\lambda\) that either have their real part \(\leq-a\) or \(\geq-b\). Pseudo-hyperbolic index 1 saddles and attractors have 1-dimensional center manifolds \(W^{c}(x)\) that vary smoothly with parameters (Sect. 5[20], especially Theorems 5.1, 5.5 and 5A.1). If \(\varphi^{t}\) is the flow, this manifold is characterised by the fact that \(z\in W^{c}(x)\iff||\varphi^{-t}(z)-x||/e^{ct}\to 0\) as \(t\to\infty\) for any \(c\) with \(a>c>b\). There is a complementary submanifold \(W^{ss}(x)\) transversal to \(W^{c}(x)\) at \(x\) characterised by \(z\in W^{ss}(x)\iff||\varphi^{t}(z)-x||/e^{-ct}\to 0\) as \(t\to\infty\) for such a \(c\). This we call the _strong stable manifold_. Note that our use of the term _center manifold_ is a little more general than usual as in that case one commonly takes only \(b=0\).
Index 1 saddles are always pseudo-hyperbolic and attractors are if they are close to having a fold bifurcation. For an index 1 saddle, part of the unstable manifold containing the saddle can be taken for a center manifold.
According to Theorem 5.1 of [20], \(W^{c}(x)\) has \({\cal C}^{r}\) dependence upon parameters provided \(e^{jb-a}<1\) for \(1\leq j\leq r\). Thus the center manifold for saddles is always smooth and that for attractors is smooth provided they are close enough to having a fold bifurcation. The latter point is true because the closer an attractor is to being a fold, the closer one can take \(b\) to zero.
### Approximations and the CM bundle
Suppose we have a nonsingular curve \(\gamma(t)\), \(0<t<T\) in either \(P\) or \(\mathcal{M}\) together with a tubular neighbourhood \(N\) of \(\gamma\) and consider another \(\mathcal{C}^{r}\) curve \(\tilde{\gamma}\) that passes through \(N\). By definition of a tubular neighbourhood there is a retraction \(\pi:N\to\gamma\) making \((\pi,N,\gamma)\) a vector bundle whose zero section is the inclusion \(\gamma\to N\). We say that \(\tilde{\gamma}\) is \(\varepsilon-\mathcal{C}^{r}\)-close to \(\gamma\) in \(N\) if the absolute value of the derivatives of \(\tilde{\gamma}(t)\) wrt \(t\) of order \(0,\ldots,r\) are within distance \(\varepsilon\) of those of \(\pi(\tilde{\gamma})(t)\).
When \(\gamma\subset\mathcal{M}\), we say that \(\gamma\) has center manifolds if at each point \(\mathbf{x}=(x,\theta)\) of \(\gamma\) there is a \(\mathcal{C}^{1}\) center manifold at \(x\) in \(M\) and these vary \(\mathcal{C}^{2}\)-smoothly with \(\mathbf{x}\in\gamma\). We shall be especially interested in the line bundle \(\mathcal{B}_{\gamma}\) over such a curve \(\gamma\) whose fibre at \(\mathbf{x}=(x,\theta)\) is the tangent space \(\ell(\mathbf{x})\) to the center manifold at \(x\). By the above discussion, if \(\tilde{\gamma}\) is a curve in \(\mathcal{M}\) that is sufficiently \(\varepsilon-\mathcal{C}^{2}\)-close to \(\gamma\) then the center manifolds for the restpoints at \(\tilde{\gamma}(t)\) and \(\pi(\tilde{\gamma}(t))\) vary in a \(\mathcal{C}^{2}\) fashion and their difference \(d(\ell(\pi(\tilde{\gamma}(t))),\ell(\tilde{\gamma}(t)))\) is \(O(\varepsilon)\) with the constant of proportionality independent of \(t\) if the curve is compact. Here \(d(\ell(\mathbf{x}),\ell(\mathbf{x}^{\prime}))=\min||e-e^{\prime}||\) where the minimum is over all unit vectors \(e\in\ell(\mathbf{x})\), \(e^{\prime}\in\ell(\mathbf{x}^{\prime})\). Therefore, we have the following lemma.
**Lemma 1**: _If \(\gamma\) and \(\tilde{\gamma}\) are closed curves as above and \(\varepsilon>0\) is sufficiently small, \(\mathcal{B}_{\gamma}\) and \(\mathcal{B}_{\gamma^{\prime}}\) are both trivial bundles or they are both topologically Mobius bands._
## Appendix 2: Approximating curves.
The pair of equations \(u=x^{2}\), \(v=y\) is the normal form for \(\chi\) near a fold (see Theorem 15A, [21]). Therefore, if \(\chi:\mathcal{M}\to P\) is the mapping under consideration and \(\mathbf{x}\in\mathcal{M}\) is a fold point, there is a neighbourhood \(U\) of \(\mathbf{x}\) in \(\mathcal{M}\) and \(V\) of \(\chi(\mathbf{x})\) in \(P\) and a smooth curve \(\gamma\) in \(V\) such that the lift via \(\chi\) of \(\gamma\) to \(U\) is a smooth curve \(\Gamma\) that is the set of fold points in \(U\). The curves \(\Gamma\) and \(\gamma\) separate \(U\) and \(V\) respectively into two connected components, and one of the two components of \(V\) does not intersect \(\chi(U)\).
A normal form for the catastrophe manifold for the standard cusp bifurcation is given by the equation \(x^{3}-\theta_{1}x-\theta_{2}=0\). Therefore, the map \((\theta_{1},x)\mapsto(x,\theta_{1},\theta_{2}=-\theta_{1}x+x^{3})\) from \(\mathbb{R}^{2}\) to \(\mathcal{M}\) parameterises \(\mathcal{M}\) in terms of \(x\) and \(\theta_{1}\). Thus, in this parameterisation \(\chi\) is given by \((\theta_{1},x)\mapsto(\theta_{1},\theta_{2}=-\theta_{1}x+x^{3})\) and this is singular when \(\theta_{1}=3x^{2}\) which defines a smooth curve \(C\) in \(\mathcal{M}\). The bifurcation set \(B_{C}\) is its image under \(\chi\), which is the set of points given by \(\theta_{1}=3x^{2},\theta_{2}=-2x^{3}\) i.e. \(4\theta_{1}^{3}=27\theta_{2}^{2}\). The dual cusp (\(+\)) case is entirely analogous.
Any curve in \(\mathcal{M}\) that is \(\mathcal{C}^{r}\)-close to \(C\), \(r>1\) is of the form \(\theta_{1}=3x^{2}+\varphi(x)\) and the image under \(\chi\) therefore has the parametric form
\[\theta_{1}=3x^{2}+\varphi(x),\quad\theta_{2}=-2x^{3}-x\varphi(x). \tag{3}\]
Consequently, if \(\varphi(x)\) is of constant sign the form of the image curves is as shown in Fig. 2A. In particular, if \(\varphi>0\) then this curve is smooth and loops around the cusp (blue curve in Fig. 2A) and if \(\varphi<0\) the curve has no self intersections and stays inside
the cusp (red curve). We call these respectively _cusp looping curves_ and _cusp nudging curves_ for the cusp. Conversely, any curve with the parametric form (3) lifts via \(\chi\) to a curve that is \(\mathcal{C}^{2}\)-close to \(C\), if \(\varphi\) is \(\mathcal{C}^{2}\)-small and furthermore it lies to one side of \(C\).
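Equation (3) makes these two families of curves easy to generate numerically; a direct transcription (ours) with constant \(\varphi\equiv\pm\varepsilon\) is:

```python
import numpy as np

def image_curve(phi, xs):
    """Image under chi of the curve theta1 = 3x^2 + phi(x), following equation (3)."""
    xs = np.asarray(xs, dtype=float)
    p = phi(xs)
    return 3 * xs**2 + p, -2 * xs**3 - xs * p   # (theta1(x), theta2(x))

xs = np.linspace(-0.6, 0.6, 121)
looping = image_curve(lambda x: 0.05 + 0 * x, xs)   # phi > 0: smooth loop around the cusp
nudging = image_curve(lambda x: -0.05 + 0 * x, xs)  # phi < 0: stays inside the cusp
```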
**Lemma 2**: _If \(C\) is a generic fold curve, \(N\) a tubular neighbourhood of \(C\), \(\mathbf{x}\) is a cusp point on \(C\) and \(\varepsilon>0\) then there is a cusp looping curve \(\gamma_{\ell}\) and a cusp nudging curve \(\gamma_{n}\) for \(\mathbf{x}\) that lift via \(\chi\) to curves \(\Gamma_{\ell}\) and \(\Gamma_{n}\) that are \(\varepsilon-\mathcal{C}^{2}\)-close to \(C\) in \(N\). If the cusp is standard, \(\Gamma_{\ell}\) will be a curve of attractors and \(\Gamma_{n}\) a curve of saddles, and vice-versa for a dual cusp._
**Proof.** We change coordinates to put the cusp in normal form as above and then the result follows from the discussion above. \(\blacksquare\)
**Theorem 2**: _Given \(\varepsilon>0\) and a fold curve \(C\) in \(\mathcal{M}\) there is a pair of \(\mathcal{C}^{2}\)-\(\varepsilon\)-approximating curves \(\gamma_{S}\) and \(\gamma_{A}\) in \(P\) with the following property: \(\gamma_{S}\) (resp. \(\gamma_{A}\)) lifts via \(\chi\) to a \(\mathcal{C}^{r}\) curve \(\Gamma_{S}\) (resp. \(\Gamma_{A}\)) of saddles (resp. attractors) in \(\mathcal{M}\) that is \(\varepsilon-\mathcal{C}^{2}\)-close to \(C\). If \(C\) is a fold circle then \(\gamma_{S}\) and \(\gamma_{A}\) can be taken to be closed curves. In this case taking the lift via \(\chi\) of the curve traversed \(r\) times produces a closed curve in \(\mathcal{M}\) that is \(\varepsilon^{\prime}-\mathcal{C}^{2}\)-close to \(C\) where \(\varepsilon^{\prime}\) is \(O(\varepsilon)\). The value of \(r\) for \(\gamma_{S}\) and \(\gamma_{A}\) may be different. The curves \(\gamma_{S}\) and \(\gamma_{A}\) can be chosen so that they agree outside any neighbourhood of any cusp loops._

Figure 2: Schematic of neighbourhoods and approximating curves.
**Proof.** The key part of the proof is contained in Lemma 2. \(B_{C}=\chi(C)\) will contain a possibly empty set of finitely many cusps \(\mathbf{c}_{1},\ldots,\mathbf{c}_{m}\), \(\mathbf{c}_{i}=(c_{i},\theta_{i})\). This labelling can be chosen so that there are no cusps on \(C\) between \(\mathbf{c}_{i}\) and \(\mathbf{c}_{i^{\prime}}\) where \(i^{\prime}\) denotes \(i+1\) if \(C\) is open and \(i+1\,\mathrm{mod}\,m\) if \(C\) is a fold circle. Let \(C^{i}\) denote this segment and \(B^{i}_{C}\) denote \(\chi(C^{i})\). Let \(N\) be a thin tubular neighbourhood of \(C\) in \(\mathcal{M}\) and \(N_{i}\) be a thin tubular neighbourhood of \(B^{i}_{C}\) that satisfies \(\chi^{-1}(N_{i})\subset N\).
We construct \(\gamma_{A}\) and \(\Gamma_{A}\); \(\gamma_{S}\) and \(\Gamma_{S}\) are constructed analogously. We consider the arc \(C^{i}\) of fold points between \(\mathbf{c}_{i}\) and \(\mathbf{c}_{i^{\prime}}\). By Lemma 2 we can find a cusp looping curve around each standard cusp and a nudging curve at each dual cusp which are in \(N_{i}\) and which have lifts that are sufficiently close to \(C\). It is then straightforward to join these by a curve inside each \(N_{i}\) that lifts to a curve that is \(\mathcal{C}^{2}\)-close to \(C\). In this way we construct the curves. To see that traversing the curve multiple times eventually gives a closed curve, keep repeating the above process, each time taking for the start point of the lift to \(\mathcal{M}\) the endpoint of the previous lift. \(\blacksquare\)
## Appendix 3: Local triviality of \(\chi\) near \(\mathcal{M}_{S/Z}\)
Consider a system with state variable \(x\) and parameters \(\mu_{1}\) and \(\mu_{2}\). We consider the surface \(\mathcal{M}^{*}_{\varepsilon}\) given by \(\mu_{1}=x^{3}-x\), \(0\leq\mu_{2}<\varepsilon\) in \((x,\mu_{1},\mu_{2})\)-space. Let \(\chi^{*}\) denote the restriction to \(\mathcal{M}^{*}_{\varepsilon}\) of the projection \((x,\mu_{1},\mu_{2})\rightarrow(\mu_{1},\mu_{2})\).
**Lemma 3**: _There is a diffeomorphism \(\varphi\) from a neighbourhood of \(\mathcal{M}_{S/Z}\) in \(\mathcal{M}\) to \(\mathcal{M}^{*}_{\varepsilon}\) and a diffeomorphism \(\eta\) between the two parameter spaces such that \(\chi^{*}\circ\varphi=\eta\circ\chi\)._
**Proof.** Consider a thin tubular neighbourhood \(N\) of \(\mathcal{M}_{S/Z}\) in \(\mathcal{M}\). Then provided \(N\) is thin enough, \(N\setminus C\) has three connected components which are discs. Moreover, there are two neighbourhoods \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\) respectively containing the two connected components of \(N\cap C\) such that on \(\mathcal{U}_{i}\), \(i=1,2\), \(\chi\) has the normal form \((u_{i},v_{i})=(\pm x_{i}^{2},y_{i})\) in some coordinate system \((x_{i},y_{i})\). The choice of sign will be different at the two fold curves in \(N\cap C\) since the fold points \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are assumed to be opposed.
Let \(D\) be the range \(\chi(N)\) together with a smooth structure compatible with the two sets of coordinates \((u_{i},v_{i})\), \(i=1,2\). Then there is a diffeomorphism of \(\mathcal{U}_{1}\cup\mathcal{U}_{2}\) into two neighbourhoods of the fold curves in \(\mathcal{M}^{*}_{\varepsilon}\) such that the diagram below commutes on \(\mathcal{U}_{1}\cup\mathcal{U}_{2}\).
\[\begin{array}{ccc}\mathcal{U}_{1}\cup\mathcal{U}_{2}\subset N&\stackrel{\varphi}{\longrightarrow}&\mathcal{M}^{*}_{\varepsilon}\\ \chi\downarrow&&\downarrow\chi^{*}\\ D&\stackrel{\eta}{\longrightarrow}&(\mu_{1},\mu_{2})\text{-space}\end{array}\]
Now we can extend the diffeomorphisms to \(\mathcal{M}_{0}\) and \(D\) using the fact that, outside of \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\), the restriction of \(\chi\) to any one of the connected components of \(N\setminus C\) is injective.
|
2309.15529 | Missing-modality Enabled Multi-modal Fusion Architecture for Medical
Data | Fusing multi-modal data can improve the performance of deep learning models.
However, missing modalities are common for medical data due to patients'
specificity, which is detrimental to the performance of multi-modal models in
applications. Therefore, it is critical to adapt the models to missing
modalities. This study aimed to develop an efficient multi-modal fusion
architecture for medical data that was robust to missing modalities and further
improved the performance on disease diagnosis. X-ray chest radiographs for the
image modality, radiology reports for the text modality, and structured value
data for the tabular data modality were fused in this study. Each modality pair
was fused with a Transformer-based bi-modal fusion module, and the three
bi-modal fusion modules were then combined into a tri-modal fusion framework.
Additionally, multivariate loss functions were introduced into the training
process to improve model's robustness to missing modalities in the inference
process. Finally, we designed comparison and ablation experiments for
validating the effectiveness of the fusion, the robustness to missing
modalities and the enhancements from each key component. Experiments were
conducted on MIMIC-IV, MIMIC-CXR with the 14-label disease diagnosis task.
Areas under the receiver operating characteristic curve (AUROC), the area under
the precision-recall curve (AUPRC) were used to evaluate models' performance.
The experimental results demonstrated that our proposed multi-modal fusion
architecture effectively fused three modalities and showed strong robustness to
missing modalities. This method is hopeful to be scaled to more modalities to
enhance the clinical practicality of the model. | Muyu Wang, Shiyu Fan, Yichen Li, Hui Chen | 2023-09-27T09:46:07Z | http://arxiv.org/abs/2309.15529v1 | # Missing-modality Enabled Multi-modal Fusion Architecture for Medical Data
## Abstract
Fusing multi-modal data can improve the performance of deep learning models. However, missing modalities are common for medical data due to patients' specificity, which is detrimental to the performance of multi-modal models in applications. Therefore, it is critical to adapt the models to missing modalities. This study aimed to develop an efficient multi-modal fusion architecture for medical data that was robust to missing modalities and further improved the performance on disease diagnosis.
X-ray chest radiographs for the image modality, radiology reports for the text modality, and structured value data for the tabular data modality were fused in this study. Each modality pair was fused with a Transformer-based bi-modal fusion module, and the three bi-modal fusion modules were then combined into a tri-modal fusion framework. Additionally, multivariate loss functions were introduced into the training process to improve the model's robustness to missing modalities in the inference process. Finally, we designed comparison and ablation experiments for validating the effectiveness of the fusion, the robustness to missing modalities and the enhancements from each key component. Experiments were conducted on MIMIC-IV and MIMIC-CXR with the 14-label disease diagnosis task. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) were used to evaluate the models' performance. The experimental results demonstrated that our proposed multi-modal fusion architecture effectively fused three modalities and showed strong robustness to missing modalities. This method can potentially be scaled to more modalities to enhance the clinical practicality of the model.
Index Terms: multi-modal fusion, Transformer, missing modalities, deep learning, disease classification.
## 1 Introduction
The volume and variety of medical data have grown rapidly in recent years, providing a source of tremendous amounts of data for the development of deep learning models and laying the foundation for clinical decision support systems and precision medicine[1; 2]. Medical data are presented in a variety of modalities, such as well-organized tabular data (e.g., demographics and laboratory results), free-style texts (e.g., radiology reports and progress notes), images (e.g., X-rays and magnetic resonance imaging [MRI]), signals (e.g., electrocardiogram and electroencephalogram), and videos such as endoscopy. It has been shown that multi-modal data can improve the performance of deep learning models[3]. Deep learning models built on multi-modal medical data can have high diagnostic performance, which can help reduce medical costs and alleviate the shortage of clinical experts[4; 5; 6; 7].
Most medical multi-modal fusion studies have focused on two modalities[8], such as the fusion of chest radiographs and tabular data for cardiomegaly diagnosis[9], the fusion of chest radiographs and radiology reports for multi-label classification of chest diseases[10], the fusion of MRI and tabular data for dementia diagnosis[11], and the fusion of physiological time series and clinical notes for early prediction of sepsis[12]. Few studies have fused three medical modalities, such as pathology images, medical record text and tabular pathology features for disease diagnosis[13]. Although bi-modal fusion (BiMF) models were relatively easy to build, they did not follow the practice of clinicians, who make diagnostic decisions using all possible modalities of patient data.
When more medical modalities are considered for fusion, missing modalities are inevitable in the real-world clinical application scenarios and become a critical issue that is not conducive to model application. Therefore, deep learning methods, such as autoencoder[14] and generative adversarial network (GAN)[15; 16] have been developed for the generation and imputation of the missing modality based on the feature extraction from the original modal data. These methods may not be suitable for all types of medical modalities and require massive sample data for training[17].
In this study, we proposed a Transformer-based tri-modal fusion (TriMF) architecture, including three feature embedding networks for each individual modality and a multi-modal fusion framework with a multivariate loss function. This architecture was adopted for the fusion of chest radiographs,
corresponding radiology reports, and tabular data for the diagnosis of thoracic diseases. We aimed to enhance the effect of multi-modal fusion while improving the model's robustness to missing modalities.
## 2 Method
In this section, we introduced the proposed multi-modal fusion architecture, including feature embedding networks for multi-modalities, a TriMF framework and multivariate loss functions. We designed a series of experiments to evaluate the fusion performance by applying the proposed multi-modal fusion models to a 14-label classification task.
### Datasets
We used the open access MIMIC-IV and MIMIC-CXR datasets[18-20]. Patients' demographics and laboratory tests were maintained as records in the MIMIC-IV dataset, and the digital chest radiographs, the associated radiology reports and CheXpert labels were maintained in the MIMIC-CXR dataset. Five demographic characteristics (gender, age, source of admission, insurance, and ethnicity) and 46 high-frequency (\(>50\%\)) laboratory tests were extracted from the MIMIC-IV dataset to form a tabular modality. Anteroposterior chest radiographs extracted from the MIMIC-CXR dataset were considered as an image modality. Free-text radiology reports for these radiographs were extracted simultaneously. The "Findings" section of a radiology report provides an objective description of imaging features and was therefore selected as the text modality for fusion. In addition, patients (samples) in the dataset were assigned one of the 14 disease labels, which were used in the classification task of the study. We excluded samples without the complete three modalities or disease labels and finally obtained 23,421 samples as our experimental dataset. More details about the demographic characteristics, laboratory tests, and disease labels of the study samples are listed in Tables S1, S2, and S3, respectively.
### Multi-modal Fusion Framework
Data from the three individual modalities were first separately embedded into a low-dimensional real number space by pre-trained models and simple neural networks, which were considered to help avoid over-fitting. Instead of fine-tuning these embedding modules to obtain fixed representations, we trained them together with the multi-modal fusion modules in order to build an end-to-end model (Figure 1).
#### 2.2.1 Feature embedding for each modality
_Feature embedding for images_
Chest radiographs of various sizes were first resized to a uniform size of 224*224 pixels for ease of embedding. The pre-trained Densenet-121 network[21] was used to extract features from these images. A total of 2048 feature maps of 7*7 dimension were extracted from the layer closest to the last pooling layer. Each feature map was flattened into a 49-dimensional vector, and we finally obtained a 49*2048 dimensional embedding representation of each chest radiograph image.
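As a schematic illustration (not the authors' released code), this feature-extraction step can be sketched in PyTorch as follows; the exact layer tapped is an assumption, and torchvision's DenseNet-121 yields 1024-channel 7*7 feature maps, so the channel count of the resulting tokens may differ from the 2048 reported above.

```python
import torch
import torchvision

# Schematic sketch of the image-embedding step (not the authors' code).
# Pre-trained weights would be loaded in practice (e.g. weights="DEFAULT");
# weights=None keeps the example self-contained.
backbone = torchvision.models.densenet121(weights=None).features
backbone.eval()

def embed_image(batch):
    """batch: (B, 3, 224, 224) float tensor -> (B, 49, C) token sequence."""
    with torch.no_grad():
        fmaps = backbone(batch)                  # (B, C, 7, 7) feature maps
    return fmaps.flatten(2).transpose(1, 2)      # one token per spatial position

tokens = embed_image(torch.randn(1, 3, 224, 224))
print(tokens.shape)                              # (1, 49, 1024) with this backbone
```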
_Feature embedding for text_
The length of the "Findings" section of all radiology reports ranged from 10 to 280 words (Figure S1). The optimal text length for embedding was determined by trial and error to be 150 words, while the vast majority of "Findings" (23269 of 23421 samples, 99.35%) were less than 150 words. Therefore, each report was truncated to 150 words before being fused. Due to its good performance in embedding medical text, PubMedBERT[22], a well pre-trained model using PubMed as the corpus, was used for text embedding in the current study. Each word was embedded into a 768-dimensional
Figure 1: The proposed TriMF architecture for medical data.
vector, and the final embedding dimension of each report was 150*768.
_Feature embedding for tabular data_
We built a shallow neural network with one input neuron and 256 output neurons to embed each tabular feature into a 256-dimensional vector. When the 5 demographic characteristics and 46 laboratory tests for a patient (sample) were treated as a 51*1 vector, the tabular data were finally encoded as embeddings of 51*256 dimension.
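A minimal sketch of this tabular embedding is given below; whether the 1-to-256 network is shared across the 51 features or built per feature is not stated above, and the sketch assumes it is shared.

```python
import torch
import torch.nn as nn

# Minimal sketch of the tabular embedding (not the authors' code): each of the
# 51 scalar features is passed through a small network with one input neuron
# and 256 output neurons, giving a 51 x 256 token sequence per patient.
class TabularEmbedding(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(1, dim)            # one input neuron, 256 outputs

    def forward(self, x):                        # x: (B, 51) demographic + lab values
        return self.proj(x.unsqueeze(-1))        # (B, 51, 256)

emb = TabularEmbedding()
print(emb(torch.randn(4, 51)).shape)             # torch.Size([4, 51, 256])
```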
#### 2.2.2 The BiMF module
The central idea behind the proposed TriMF framework was first to fuse two modalities each by a BiMF module and then to fuse the three BiMF representations together. The BiMF module contained a stacked fusion block and a low-rank multi-modal fusion structure, as shown in Figure 2.
The BiMF module contained two types of multi-head attention units, the self-attention (SA) unit and the co-attention (CA) unit. The SA unit consisted of a feed-forward layer with two fully-connected layers with GeLU activation and a multi-head attention layer[23]. Taking the embedding vector of a
Figure 2: The BiMF module based on self-attention (SA), co-attention (CA), and low-rank multi-modal fusion (LMF) components. The combination of SA and CA components was responsible for the information interaction between two modalities. They were stacked for six layers alternately in this study. The LMF component fused two [cls] tokens from the BiMF encoder into a fusion vector of arbitrary dimension.
modality as input, the multi-head attention layer could learn the relationship between tokens within a modality. Furthermore, residual connection and layer normalization were applied to facilitate optimization. The CA unit was composed of two symmetric multi-head attention layers, which took the embedding vectors of two modalities as input, one as queries and the other as keys and values. The symmetric structure helped to learn the pairwise relationship between two modalities. The fusion block, which was the combination of two SA units and one CA unit, was stacked for six layers, resulting in a deep BiMF encoder.
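The SA and CA units and the stacked fusion block can be sketched as below. This is an illustrative reconstruction rather than the authors' implementation: the head count, feed-forward width, and the assumption that all modality tokens have already been projected to a common 256-dimensional space are ours.

```python
import torch
import torch.nn as nn

class SAUnit(nn.Module):
    """Self-attention unit: multi-head attention + two-layer GELU feed-forward."""
    def __init__(self, dim=256, heads=8, ff=1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, ff), nn.GELU(), nn.Linear(ff, dim))
        self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        x = self.n1(x + self.attn(x, x, x)[0])     # self-attention + residual
        return self.n2(x + self.ff(x))             # feed-forward + residual

class CAUnit(nn.Module):
    """Two symmetric cross-attention layers: each modality queries the other."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.a2b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b2a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.na, self.nb = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, a, b):
        a_new = self.na(a + self.a2b(a, b, b)[0])  # a as queries, b as keys/values
        b_new = self.nb(b + self.b2a(b, a, a)[0])  # b as queries, a as keys/values
        return a_new, b_new

class BiMFEncoder(nn.Module):
    """Fusion block (SA + SA + CA) stacked for n_layers layers."""
    def __init__(self, dim=256, n_layers=6):
        super().__init__()
        self.sa_a = nn.ModuleList([SAUnit(dim) for _ in range(n_layers)])
        self.sa_b = nn.ModuleList([SAUnit(dim) for _ in range(n_layers)])
        self.ca = nn.ModuleList([CAUnit(dim) for _ in range(n_layers)])

    def forward(self, a, b):
        for sa_a, sa_b, ca in zip(self.sa_a, self.sa_b, self.ca):
            a, b = sa_a(a), sa_b(b)
            a, b = ca(a, b)
        return a, b

# Example: fuse 49 image tokens with 150 text tokens (plus one [cls] token each),
# both assumed to be already projected to 256 dimensions.
enc = BiMFEncoder()
a, b = enc(torch.randn(2, 50, 256), torch.randn(2, 151, 256))
print(a.shape, b.shape)
```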
For the encoder input, we further added learnable position embeddings and inserted an extra learnable classification token ([cls] token) at the beginning of the embedding sequence of each modality. At this point, we had obtained two fusion features from one BiMF encoder, where the two final hidden vectors of the [cls] tokens represented the modalities that would be fused by an LMF structure[24]. LMF was an effective fusion method improved from the tensor fusion network[25] by decomposing the weights in the tensor fusion model into low-rank factors. The fusion vector of modalities A and B was calculated as:
\[h=\left(\sum_{i=1}^{r}w_{A}^{(i)}\cdot z_{A}\right)\left(\sum_{i=1}^{r}w_{B}^{(i)}\cdot z_{B}\right) \tag{1}\]
where \(r\) (here 128) was the rank of the decomposition tensor. \(\{w_{A}^{(i)},w_{B}^{(i)}\}_{i=1}^{r}\) were the corresponding low-rank factors of modalities A and B, and \(z_{A}\) and \(z_{B}\) were the final hidden vectors of the [cls] tokens from the two modalities, respectively. The dimension of each fusion vector from the above BiMF module could be set to be either the same or different for each modality pair. In this study, the fusion vectors were set to have the same dimension of 256.
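A minimal sketch of the LMF step of Eq. (1) follows. It assumes, as in the original LMF formulation, that the two rank-summed projections are combined element-wise, and the rank (128) and dimensions (256) follow the values given above.

```python
import torch
import torch.nn as nn

# Minimal sketch of low-rank multi-modal fusion (LMF) per Eq. (1), assuming an
# element-wise product of the two rank-summed projections (Liu et al., 2018).
class LMF(nn.Module):
    def __init__(self, dim_a=256, dim_b=256, dim_out=256, rank=128):
        super().__init__()
        # One low-rank factor per rank index and modality: (rank, dim_in, dim_out)
        self.w_a = nn.Parameter(torch.randn(rank, dim_a, dim_out) * 0.02)
        self.w_b = nn.Parameter(torch.randn(rank, dim_b, dim_out) * 0.02)

    def forward(self, z_a, z_b):
        # z_a, z_b: (B, dim) final hidden vectors of the two [cls] tokens
        proj_a = torch.einsum("bd,rdo->bo", z_a, self.w_a)  # sum over rank factors
        proj_b = torch.einsum("bd,rdo->bo", z_b, self.w_b)
        return proj_a * proj_b                               # fused (B, dim_out) vector

lmf = LMF()
h = lmf(torch.randn(2, 256), torch.randn(2, 256))
print(h.shape)  # torch.Size([2, 256])
```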
The final 256-dimensional TriMF vector was the simple sum of the three BiMF vectors. The advantage of fusing any modality pair into a BiMF vector of the same dimension as that of the final TriMF vector was that the TriMF framework could still work even with a missing modality. In that case, two of the three BiMF modules would stop working, and the remaining one would provide the output of the TriMF model.
### Model training
The training task was the 14-label classification described in section 2.1. The classifier was a linear layer with 256 input neurons for the 256-dimensional TriMF vector and 14 output neurons for the 14 labels.
#### 2.3.1 Loss function for the training process
Although the proposed TriMF model could deal with incomplete modalities in the subsequent inference process, the fusion representation, and in turn the classification performance, would inevitably be affected by a missing modality. For example, the BiMF representation of image and text would be different from the TriMF representation of image, text and tabular data, resulting in different classification results. Therefore, we adopted a fusion representation contrastive loss (FRCL) mechanism in the training process to improve the similarity between TriMF and BiMF representations and minimize the negative effect of a missing modality on the classification performance.
The contrastive loss function was in the form of mean square error:
\[L_{FRCL}\left(F_{1},F_{2}\right)=\sum_{i=1}^{N}\left(F_{1_{i}}-F_{2_{i}}\right)^{2} \tag{2}\]
where \(F_{1}\) and \(F_{2}\) were the fusion vectors from two BiMF modules, and \(N\)=256 was the dimension of these fusion representations. The final loss function being used to optimize the framework's parameters was described as:
\[L=\lambda_{1}L_{clf}+\lambda_{2}L_{FRCL}\left(F_{I,T},F_{I,S}\right)+\lambda_{3}L_{FRCL}\left(F_{I,T},F_{T,S}\right)+\lambda_{4}L_{FRCL}\left(F_{I,S},F_{T,S}\right) \tag{3}\]
where F\({}_{\text{I,T}}\), F\({}_{\text{I,S}}\), and F\({}_{\text{T,S}}\) were the BiMF vectors of modalities image and text, image and tabular, and text and tabular, respectively. L\({}_{\text{clf}}\) was the categorical loss in the form of binary cross-entropy. The weights \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), and \(\lambda_{4}\) were set by trial and error to 1, 3, 3, and 3, respectively. Thus, BiMF representations were encouraged to be as close as possible to the TriMF representation in the embedding space, which could promote the robustness of the model to missing-modal data.
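The multivariate loss of Eqs. (2)-(3) can be sketched as follows; treating Eq. (2) as a sum of squared differences averaged over the batch and using the logits form of binary cross-entropy are implementation assumptions not stated above.

```python
import torch
import torch.nn.functional as F

# Sketch of the multivariate training loss of Eqs. (2)-(3) (not the authors' code).
def frcl(f1, f2):
    # Eq. (2): sum of squared differences over the 256 dimensions, batch-averaged.
    return ((f1 - f2) ** 2).sum(dim=-1).mean()

def multivariate_loss(logits, targets, f_it, f_is, f_ts,
                      lambdas=(1.0, 3.0, 3.0, 3.0)):
    l_clf = F.binary_cross_entropy_with_logits(logits, targets)  # 14-label BCE
    l1, l2, l3, l4 = lambdas                                     # reported weights 1, 3, 3, 3
    return (l1 * l_clf
            + l2 * frcl(f_it, f_is)
            + l3 * frcl(f_it, f_ts)
            + l4 * frcl(f_is, f_ts))

# Example with random tensors: 14 labels, 256-dimensional BiMF vectors.
loss = multivariate_loss(torch.randn(8, 14), torch.randint(0, 2, (8, 14)).float(),
                         torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```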
#### 2.3.2 Training details
We used an Adam optimizer, with a weight decay of 5e-4 and a batch size of 16 for a maximum of 100 epochs of parameter optimization. The learning rate started at 0.001 and decayed at an exponential rate of 0.8 when the validation loss had stopped decreasing for 2 epochs. Training was stopped when there was no improvement in validation loss for six consecutive epochs. For each
model, we selected the optimal parameter set that produced the least validation loss. All models were implemented in Pytorch 1.10 and trained on a workstation equipped with an Intel Xeon Gold 5218, 512 GB RAM, and a 16G NVIDIA Tesla T4 GPU.
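A sketch of this optimisation setup is given below; the plateau-based learning-rate decay is one way to realise the reported schedule, and the training loop and data loading are elided.

```python
import torch

# Sketch of the reported optimisation setup (model and data pipeline are placeholders).
model = torch.nn.Linear(256, 14)                      # stand-in for the TriMF model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.8, patience=2)    # decay lr by 0.8 after 2 stalled epochs

best, stale = float("inf"), 0
for epoch in range(100):                              # at most 100 epochs
    # ... one pass over the training loader (batch size 16) would go here ...
    val_loss = 1.0 / (epoch + 1)                      # placeholder validation loss
    scheduler.step(val_loss)
    if val_loss < best:
        best, stale = val_loss, 0
        torch.save(model.state_dict(), "best.pt")     # keep the lowest-validation-loss weights
    else:
        stale += 1
        if stale >= 6:                                # early stopping after 6 stalled epochs
            break
```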
### Model Evaluation
The proposed multi-modal fusion framework was evaluated on a 14-label classification task. Two common performance metrics, area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC), were used for each of the 14 labels, and the average AUROC and AUPRC were used for the overall performance evaluation. The training, validation, and test sets were randomly divided at a ratio of 8:1:1.
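Per-label AUROC and AUPRC, and their averages over the 14 labels, can be computed as sketched below, using average precision as the usual estimate of AUPRC; the arrays are placeholders for the model outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Placeholder ground-truth labels and predicted scores for 14 labels.
y_true = np.random.randint(0, 2, size=(1000, 14))
y_score = np.random.rand(1000, 14)

aurocs = [roc_auc_score(y_true[:, k], y_score[:, k]) for k in range(14)]
auprcs = [average_precision_score(y_true[:, k], y_score[:, k]) for k in range(14)]
print(f"mean AUROC = {np.mean(aurocs):.3f}, mean AUPRC = {np.mean(auprcs):.3f}")
```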
#### 2.4.1 Comparative Experiments
The comparative experiments were conducted to compare the proposed TriMF framework with other models using the same dataset, and with other models using one or two modalities to show the advantages of the fusion of three modalities.
The proposed model was first compared with MedViLL[26], a Transformer-based multi-modal fusion model. The MedViLL model was originally pre-trained on the MIMIC-CXR dataset and then used to fuse chest radiographs and reports containing both "Findings" and "Impression" sections. In this study, radiology reports containing only the "Findings" section were fused with chest radiographs. Therefore, in the current experiment, the MedViLL model was first fine-tuned and then tested using the chest radiographs and reports in our sample set to ensure that the dataset was the same as that used for the proposed model.
The proposed multi-modal fusion model was then compared with the PubMedBERT model trained and tested with text modality only, ResNet-50 with image modality only, shallow neural network with tabular data modality only, and the proposed BiMF models with two modalities of image and text, image and tabular data, and text and tabular data, respectively.
#### 2.4.2 Experiments on the robustness of the model
One of the three modalities of the test set was separately masked out to construct three incomplete modality test sets. For each incomplete modality test set, the proposed multi-modal fusion framework was trained on the training set containing A) all three modalities (the TriMF framework
was used), and B) two modalities as in the incomplete modality test set (the BiMF module was used), respectively. Both were tested on the incomplete modality test set. The classification performance of Model A was compared to that of Model B to evaluate the performance improvement of the model trained with the complete versus incomplete modality training sets when tested on the incomplete modality test set. To evaluate the model's robustness to missing modalities, the classification performance of Model A (trained on the complete and tested on the incomplete modality samples) was also compared to that of the model trained and tested all on the complete modality samples.
#### 2.4.3 Ablation Experiments
In the proposed multi-modal fusion architecture, we introduced SA units and the LMF mechanism in the BiMF module, and the FRCL function in the training process, which were considered critical to the architecture. To identify the effect of these key components on the classification performance, we conducted ablation experiments focusing on SA units, LMF, and contrastive loss function, respectively. When SA units were ablated, they were replaced by CA units. When the LMF was ablated, all output [cls] tokens from the BiMF modules were simply concatenated and fed into a classifier. When the multi-modal architecture was not trained under the proposed contrastive loss function, the binary cross-entropy for classification was used as a substitute. Samples with all three modalities were used in these ablation experiments.
## 3 Results
### Comparative Experiments
When trained and tested on the same training and test sample sets, the proposed TriMF model outperformed the MedViLL model, regardless of whether AUROC or AUPRC was used in the performance comparisons. The performance of the proposed TriMF model was higher than that of MedViLL for 11 (AUROC) and 9 (AUPRC) of the 14 labels, which contributed a higher average performance (AUROC 0.914 vs. 0.870 and AUPRC 0.552 vs. 0.484) to the proposed model (Figure 3 and Table S4).
The performance of classification models based on different single modalities or combinations of modalities varied. The more modalities involved, the better the model, as shown in Tables 1 and S5. The proposed TriMF model trained and tested on all three modalities (i.e., image, text, and tabular data) achieved a higher performance than any of those trained and tested on one or two modalities on average (AUROC 0.914 and AUPRC 0.552) as well as for 10 out of 14 labels. When two modalities were fused using the proposed BiMF module, the bi-modal combination of the image and text (AUROC 0.879) outperformed both the single image and single text modalities (0.732 and 0.835, respectively). This was also the case for the bi-modal combinations of image and tabular data (AUROC 0.815 vs. 0.732 and 0.568) and text and tabular data (0.875 vs. 0.835 and 0.568). It is worth mentioning that the BiMF module trained and tested on the modality combination of image and text showed a higher performance (AUROC 0.879 and AUPRC 0.502) than MedViLL (0.870 and 0.484) with the same data set used.
In addition, the text modality tended to play the most important role among the three modalities. Among the three unimodal-based models, the model based on the text modality alone achieved the best performance on eight labels, and the model based on the image modality alone performed best on the remaining six labels (average AUROC, 0.869 and 0.819, respectively). For the three bi-modal based models, the models based on the bi-modal combination of image and text, image and tabular
Fig. 3: Performance comparison between the proposed TriMF model and the MedViLL model[26] in a classification task for 14 lung related diseases. Meanings of the 14 labels are listed in Table S1.
data, and text and tabular data showed the best performance on eight, two, and four labels, with average AUROCs of 0.882, 0.902, and 0.930, respectively. However, the model based on the fusion of image and text outperformed the TriMF model on three labels, indicating a certain negative effect of tabular data on the fusion of three modalities.
### The robustness of the TriMF framework
Tables 2 and S6 present the performance of our TriMF model to classify the 14 labels when one modality was missing. After the proposed classification model was built on training samples with the complete modalities, the model showed only a slight degradation in classification accuracy when tested on samples with one missing modality, compared to when tested on samples with the complete modalities. The average AUROCs decreased by 0.002, 0.071, and 0.003 from 0.914 when tabular data, text, and image were missing, respectively. The result suggests that our architecture
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \multirow{2}{*}{AUROC} & \multicolumn{3}{c}{Single modality} & \multicolumn{3}{c}{Two modalities} & \multirow{2}{*}{All three modalities} \\ \cline{2-7} & Image only & Text only & Tabular data only & Image and text & Image and tabular data & Text and tabular data & \\ \hline
Atel & 0.764 & 0.729 & 0.548 & **0.882** & 0.712 & 0.865 & 0.867 \\
Card & 0.652 & 0.742 & 0.531 & 0.860 & 0.714 & 0.829 & **0.865** \\
Cons & 0.698 & 0.845 & 0.589 & 0.881 & 0.821 & 0.860 & **0.902** \\
Edem & 0.882 & 0.856 & 0.599 & 0.901 & 0.845 & 0.925 & **0.939** \\
EC & 0.852 & 0.889 & 0.502 & 0.869 & 0.845 & 0.890 & **0.967** \\
Frac & 0.507 & 0.926 & 0.567 & 0.807 & 0.899 & 0.876 & **0.927** \\
LL & 0.691 & 0.917 & 0.605 & 0.879 & 0.886 & 0.920 & **0.939** \\
LO & 0.839 & 0.788 & 0.542 & 0.891 & 0.786 & 0.884 & **0.893** \\
NF & 0.835 & 0.825 & 0.547 & **0.894** & 0.817 & 0.871 & 0.878 \\
PE & 0.855 & 0.819 & 0.553 & 0.918 & 0.787 & 0.909 & **0.924** \\
PO & 0.307 & 0.932 & 0.664 & 0.832 & 0.905 & 0.723 & **0.990** \\
Pneu1 & 0.737 & 0.718 & 0.572 & 0.833 & 0.702 & 0.825 & **0.838** \\
Pneu2 & 0.881 & 0.938 & 0.574 & 0.965 & 0.931 & **0.986** & 0.984 \\
SD & 0.743 & 0.763 & 0.562 & **0.900** & 0.759 & 0.888 & 0.882 \\ \hline
Average & 0.732 & 0.835 & 0.568 & 0.879 & 0.815 & 0.875 & **0.914** \\ \hline \end{tabular}
Note: The largest values are bolded for each label. Meanings of the 14 labels are listed in Table S1. AUROC: area under the receiver operating characteristic curve.
\end{table}
Table 1: Performance comparison among different modality combinations used in the proposed TriMF model in a classification task for 14 lung related diseases.
successfully promoted the robustness to missing modalities. In addition, the classification accuracy decreased the most in the absence of the text modality and the least in the absence of the tabular data modality, indicating that the tabular data modality had the least impact on the model's robustness, while text had the most.
On the other hand, when the classification task was performed with two modalities, the classification model built on training samples containing all three modalities outperformed those built on training samples with the same two modalities as in the test set on average and for most of the labels. For example, our TriMF model built on the image, text, and tabular data modalities outperformed the BiMF model built on the image and text modalities when tested on the test samples without the tabular data modality on average (AUROC 0.912 vs. 0.879, and AUPRC 0.540 vs. 0.502) and for 10 out of 14 labels. This indicates that even if a specific modality was not used in the inference process, the TriMF model was still able to effectively fuse the modality with other modalities in the training process, thus improving the classification performance of the model. Taken together, these results indicate that our proposed architecture not only promotes robustness to missing modalities, but also enhances the multi-modal fusion effect.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \multirow{2}{*}{Labels} & \multicolumn{2}{c}{Test modality: Im\_Tx} & \multicolumn{2}{c}{Test modality: Im\_Ta} & \multicolumn{2}{c}{Test modality: Tx\_Ta} \\ \cline{2-7} & Im\_Tx\_Ta & Im\_Tx & Im\_Tx\_Ta & Im\_Ta & Im\_Tx\_Ta & Tx\_Ta \\ \hline Atel & 0.866 & 0.882 & 0.730 & 0.712 & 0.861 & 0.865 \\ Card & 0.865 & 0.860 & 0.788 & 0.714 & 0.862 & 0.829 \\ Cons & 0.902 & 0.881 & 0.861 & 0.821 & 0.906 & 0.860 \\ Edem & 0.937 & 0.901 & 0.873 & 0.845 & 0.936 & 0.925 \\ EC & 0.966 & 0.869 & 0.840 & 0.845 & 0.960 & 0.890 \\ Frac & 0.926 & 0.807 & 0.896 & 0.899 & 0.925 & 0.876 \\ LL & 0.921 & 0.879 & 0.926 & 0.886 & 0.935 & 0.920 \\ LO & 0.890 & 0.891 & 0.787 & 0.786 & 0.891 & 0.884 \\ NF & 0.876 & 0.894 & 0.850 & 0.817 & 0.882 & 0.871 \\ PE & 0.923 & 0.918 & 0.766 & 0.787 & 0.918 & 0.909 \\ PO & 0.990 & 0.832 & 0.940 & 0.905 & 0.986 & 0.723 \\ Pneu1 & 0.834 & 0.833 & 0.775 & 0.702 & 0.831 & 0.825 \\ Pneu2 & 0.984 & 0.965 & 0.976 & 0.931 & 0.987 & 0.986 \\ SD & 0.882 & 0.900 & 0.794 & 0.759 & 0.877 & 0.888 \\ \hline \end{tabular}
\end{table}
Table 2: Performance (in AUROC) of the classification models tested on incomplete modality sets.
### Ablation Experiments
The results of the ablation study are reported in Tables 3 and S7. Due to the absence of SA, LMF, and FRCL function, the average AUROC of our model decreased by 0.030, 0.029, 0.034, and the average AUPRC decreased by 0.053, 0.056, 0.060, respectively. It could be seen that all three components improved the overall performance of the model. In particular, the complete model outperformed all three ablation models in terms of both AUROC and AUPRC, on four labels (enlarge cardiomediastinum [EC], fracture [Frac], lung lesion [LL], and pleural other [PO]) with the lowest positive rates (3.2%, 0.9%, 1.6%, and 0.4%, respectively). Taken together, all three components improved the average performance, but mainly improved the classification of extremely unbalanced labels and slightly sacrificed the classification of other labels. Specifically, ablation of SA led to an increase in AUROC and a decrease in AUPRC for two labels (cardiomegaly [Card] and pleural effusion [PE]), similar to ablation of LMF for three labels (lung opacity [LO], no finding [NF], and support devices [SD]) and ablation of FRCL function for two labels (lung opacity [LO] and no finding [NF]). This implied that the addition of any of these components would boost the true positive rate of the classification task for some labels, which is beneficial for a disease diagnostic model.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{Labels} & \multicolumn{1}{c}{Atel} & \multicolumn{1}{c}{Card} & \multicolumn{1}{c}{Cons} & \multicolumn{1}{c}{Edem} & EC & Frac & LL & LO & NF & PE & PO & Pneu1 & Pneu2 & SD & Mean \\ \hline \multirow{6}{*}{AUROC} & Proposed & 0.867 & 0.865 & 0.902 & 0.939 & 0.967 & 0.927 & 0.939 & 0.893 & 0.878 & 0.924 & 0.990 & 0.838 & 0.984 & 0.882 & 0.914 \\ & w/o SA & 0.878 & 0.869 & 0.878 & 0.938 & 0.895 & 0.819 & 0.899 & 0.902 & 0.894 & 0.932 & 0.756 & 0.832 & 0.981 & 0.897 & 0.884 \\ & w/o LMF & 0.875 & 0.831 & 0.863 & 0.937 & 0.890 & 0.878 & 0.916 & 0.895 & 0.887 & 0.931 & 0.792 & 0.825 & 0.986 & 0.888 & 0.885 \\ & w/o FRCL & 0.875 & 0.829 & 0.860 & 0.935 & 0.890 & 0.876 & 0.920 & 0.894 & 0.891 & 0.929 & 0.723 & 0.825 & 0.986 & 0.888 & 0.880 \\ \hline \multirow{6}{*}{AUPRC} & Proposed & 0.541 & 0.319 & 0.314 & 0.793 & 0.479 & 0.221 & 0.415 & 0.701 & 0.807 & 0.794 & 0.444 & 0.328 & 0.828 & 0.751 & 0.552 \\ & w/o SA & 0.570 & 0.298 & 0.367 & 0.802 & 0.391 & 0.030 & 0.111 & 0.732 & 0.828 & 0.803 & 0.036 & 0.360 & 0.883 & 0.769 & 0.499 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Model performance in the ablation experiments around SA, LMF and FRCL components.
## 4 Discussion
Multi-modal medical data fusion models based on deep learning have achieved remarkable results. Many researchers have attempted to improve the performance of clinical tasks by fusing multi-modal data, including images, free-text reports, clinical audio, biological signals, laboratory tests, and so on. Similar to studies in the general domain, multi-modal fusion studies in the medical domain have mostly focused on bi-modal fusion of the image and text modalities[9-12]. However, many types of modalities are generated in clinical practice [27], which may facilitate the use of deep learning models to learn better patient representations[3]. Therefore, we designed a delicate multi-modal fusion architecture with the goal of effectively fusing not only the image and text modality, but also the structured data modality. Through validation experiments from multiple perspectives, we showed that the proposed multi-modal fusion architecture effectively fused the three modalities and improved the classification performance of models based on the fused information.
Missing modality is a common problem for medical data, which can lead to a dramatic performance degradation of multi-modal fusion models in real-world clinical application[28, 29]. To solve this problem, the generation of missing modalities based on other modalities using e.g. autoencoder[14] and GAN[16] is the prevalent idea of current research. However, these methods have constraints on the type of modality and require a large number of samples. In this paper, instead of generating the missing modality, we concentrated on constructing a model that can accept the missing modality data as input by combining several separate BiMF modules. To reduce the negative impact of missing modalities on the classification performance, we proposed a multivariate loss function containing FRCL and classification loss in the training of the multi-modal fusion model to make the fused patient representation with and without missing modalities as similar as possible. The subtle design of the BiMF module and the multivariate loss function improved the robustness of the multi-modal fusion framework to missing-modal data, as well as its scalability, allowing us to fuse more
modalities by adding more BiMF modules.
Feature concatenation, autoencoder, attention mechanism, etc. were previously the most prevalent multi-modal deep learning methods on medical data[30-32]. With the success of Transformer-based models on natural texts and images[33, 34], Transformer blocks have been used in multi-modal fusion[35-37]. The current Transformer-based medical multi-modal fusion models were mostly based on SA[10, 26]. According to Li et al.[38], SA and CA backbones were good at aligning low-level and high-level semantics, respectively. Yu et al.[36] proved that alternate stacking of SA and CA could improve the performance of Transformer-based multi-modal models, which was also confirmed by our study. When SA units were incorporated into the dual-stream architecture based on CA, the average AUROC and AUPRC increased from 0.884 to 0.914, and 0.499 to 0.552, respectively. In particular, MedViLL is an SA-based BiMF model pre-trained on MIMIC-CXR, which performed worse than not only the proposed TriMF model (AUROC 0.870 vs. 0.914), but also our BiMF module (AUROC 0.870 vs. 0.879). These results indicate that the Transformer-based fusion framework combining SA and CA can effectively fuse two or even three modalities on medical data.
There were still some limitations in this study. First, although this architecture could fuse more modalities by using more BiMF modules, this would lead to a massive increase in the number of parameters. Second, the modalities included in the current study were still limited. In addition, there are various forms of submodalities for the medical modalities, such as X-ray, CT, and MRI for the image modality, and radiology reports, pathology reports, and hospital admission notes for the text modality. More modalities and more submodalities would introduce a more severe missing-modality problem. Whether the proposed architecture remains robust when more modalities are missing needs to be verified. Finally, this study only conducted experiments on an English public dataset. The proposed architecture should be validated and evaluated on a Chinese dataset, and hopefully applied to a real clinical scenario in China.
## 5 Conclusions
In this study, we proposed a multi-modal fusion architecture based on Transformer. This architecture could effectively fuse three medical modalities and improve the diagnosis performance, while remaining robust to modality-incomplete data. This study provided a novel idea for dealing with missing modalities in multi-modal medical data fusion. It has the potential to be scaled to more modalities with enhanced clinical practicality.
## Acknowledgments
This work was supported by the National Natural Science Foundation of China (grant number 81971707) and the Beijing Natural Science Foundation (grant number L222006).
## Authors' Contributions
Hui Chen: Conceptualization, Writing- Reviewing and Editing. Muyu Wang: Data curation, Methodology, Writing- Original draft preparation. Shiyu Fan: Data curation. Yichen Li: Data curation.
## Conflicts of Interest
None declared.
## Abbreviations
AUPRC: area under the precision-recall curve
AUROC: area under the receiver operating characteristic curve
BiMF: bi-modal fusion
CA: co-attention
FRCL: fusion representation contrastive loss
GAN: generative adversarial network
LMF: Low-rank Multi-modal Fusion
MIMIC: Medical Information Mart for Intensive Care
MRI: magnetic resonance imaging
SA: self-attention
TriMF: tri-modal fusion
## References
* [1] R.T. Sutton, D. Pincock, D.C. Baumgart, D.C. Sadowski, R.N. Fedorak and K.I. Kroeker, "An overview of clinical decision support systems: benefits, risks, and strategies for success," npj Digit. Med., vol. 3, no. 1, pp. 17, 2020, doi: 10.1038/s41746-020-0221-y.
* [2] E.J. Topol, "High-performance medicine: the convergence of human and artificial intelligence," Nat. Med., vol. 25, no. 1, pp. 44-56, 2019, doi: 10.1038/s41591-018-0300-7.
* [3] Y. Huang, C. Du, Z. Xue, X. Chen, H. Zhao and L. Huang, "What Makes Multi-modal Learning Better than Single (Provably)," arXiv:2106.04538, 2021.
* [4] C. Mao, L. Yao and Y. Luo, "ImageGCN: Multi-Relational Image Graph Convolutional Networks for Disease Identification with Chest X-rays," IEEE Trans. Med. Imaging, vol. 41, no. 8, pp. 1990-2003, 2022, doi: 10.1109/TMI.2022.3153322.
* [5] U. Kamal, M. Zunaed, N.B. Nizam and T. Hasan, "Anatomy-XNet: An Anatomy Aware Convolutional Neural Network for Thoracic Disease Classification in Chest X-Rays," IEEE J. Biomed. Health Inform., vol. 26, no. 11, pp. 5518-5528, 2022, doi: 10.1109/JBHI.2022.3199594.
* [6] A. Casey et al., "A systematic review of natural language processing applied to radiology reports," BMC Med. Inform. Decis. Mak., vol. 21, no. 1, pp. 179, 2021, doi: 10.1186/s12911-021-01533-7.
* [7] S. Jeon, Z. Colburn, J. Sakai, L. Hung and K.Y. Yeung, "Application of Natural Language Processing and Machine Learning to Radiology Reports," in Proc. 12th ACM Conf. Bioinf. Comput. Biol. Health Informat., pp. 1-9, 2021.
* [8] A. Kline et al., "Multimodal machine learning in precision health: A scoping review," npj Digit. Med., vol. 5, no. 1, pp. 171, 2022, doi: 10.1038/s41746-022-00712-8.
* [9] D. Grant, B.W. Papiez, G. Parsons, L. Tarassenko and A. Mahdi, "Deep Learning Classification of Cardiomegaly Using Combined Imaging and Non-imaging ICU Data," in Machine Learning in Medical Imaging, Springer, pp. 547-558, 2021.
* [10] G. Jacenkow, A.Q. O'Neil and S.A. Tsaftaris, "Indication as Prior Knowledge for Multimodal Disease Classification in Chest Radiographs with Transformers," in 2022 IEEE 19th
International Symposium on Biomedical Imaging (ISBI), IEEE, pp. 1-5, 2022.
* [11] S. Qiu et al., "Multimodal deep learning for Alzheimer' s disease dementia assessment," Nat. Commun., vol. 13, no. 1, pp. 3404, 2022, doi: 10.1038/s41467-022-31037-5.
* [12] Y. Wang, Y. Zhao, R. Callcut and L. Petzold, "Integrating Physiological Time Series and Clinical Notes with Transformer for Early Prediction of Sepsis," arXiv:2203.14469, 2022.
* [13] M. Song, X. Shi, Y. Zhang and B. Li, "Multimodal Breast Cancer Diagnosis Based on Multi-level Fusion Network," in ISAIR 2022: Artificial Intelligence and Robotics, pp. 224-239, 2022.
* [14] Y. Xu et al., "Explainable Dynamic Multimodal Variational Autoencoder for the Prediction of Patients with Suspected Central Precocious Puberty," IEEE J. Biomed. Health Inform., vol. 26, no. 3, pp. 1362-1373, 2021, doi: 10.1109/JBHI.2021.3103271.
* [15] J. Yoon, J. Jordon and M. van der Schaar, "GAIN: Missing Data Imputation using Generative Adversarial Nets," in International Conference on Machine Learning, PMLR, pp. 5689-5698, 2018.
* [16] T. Zhou, S. Canu, P. Vera and S. Ruan, "Feature-enhanced generation and multi-modality fusion based deep neural network for brain tumor segmentation with missing MR modalities," Neurocomputing, vol. 466, pp. 102-112, 2021, doi: 10.1016/j.neucom.2021.09.032.
* [17] Y. Liu, H. Ishibuchi, G.G. Yen, Y. Nojima and N. Masuyama, "Handling Imbalance Between Convergence and Diversity in the Decision Space in Evolutionary Multi-Modal Multi-Objective Optimization," IEEE Trans. Evol. Comput., vol. 24, no. 3, pp. 551-565, 2020, doi: 10.1109/TEVC.2019.2938557.
* [18] A.E.W. Johnson et al., "MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports," Sci. Data, vol. 6, pp. 317, 2019, doi: 10.1038/s41597-019-0322-0.
* [19] A.E. Johnson et al., "MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs," arXiv:1901.07042, 2019.
* [20] A. Johnson, L. Bulgarelli, T. Pollard, S. Horng, L.A. Celi and R. Mark, "Mimic-iv (version 1.0)," PhysioNet, 2020.
* [21] G. Huang, Z. Liu, L. van der Maaten and K.Q. Weinberger, "Densely Connected Convolutional Networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., IEEE, pp. 4700-4708, 2017.
* [22] Y. Gu et al., "Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing," ACM Transactions on Computing for Healthcare, vol. 3, no. 1, pp. 1-23, 2022, doi: 10.1145/3458754.
* [23] A. Vaswani et al., "Attention Is All You Need," in Adv. Neural Inf. Process. Syst., pp. 5998-6008, 2017.
* [24] Z. Liu, Y. Shen, V.B. Lakshminarasimhan, P.P. Liang, A. Zadeh and L. Morency, "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors," arXiv:1806.00064, 2018.
* [25] A. Zadeh, M. Chen, S. Poria, E. Cambria and L. Morency, "Tensor Fusion Network for Multimodal Sentiment Analysis," arXiv:1707.07250, 2017.
* [26] J.H. Moon, H. Lee, W. Shin and E. Choi, "Multi-modal Understanding and Generation for Medical Images and Text via Vision-Language Pre-Training," IEEE J. Biomed. Health Inform., vol. 26, no. 12, pp. 6070-6080, 2021.
* [27] A. Haque, A. Milstein and L. Fei-Fei, "Illuminating the dark spaces of healthcare with ambient intelligence," Nature, vol. 585, no. 7824, pp. 193-202, 2020, doi: 10.1038/s41586-020-2669-y.
* [28] Q. Suo, Z. Weida, M. Fenglong, Y. Ye, G. Jing and A. Zhang, "Metric Learning on Healthcare Data with Incomplete Modalities," in Int. Joint Conf. Artif. Intell., pp. 3534-3540, 2019.
* [29] C. Zhang et al., "M3Care: Learning with Missing Modalities in Multimodal Healthcare Data," in Proceedings of the ACM international conference on knowledge discovery and data mining, pp. 2418-2428, 2022.
* [30] W. Ning et al., "Open resource of clinical data from patients with pneumonia for the prediction of COVID-19 outcomes via deep learning," Nat. Biomed. Eng., vol. 4, no. 12, pp. 1197-1207, 2020, doi: 10.1038/s41551-020-00633-5.
* [31] T. van Sonsbeek and M. Worring, "Towards Automated Diagnosis with Attentive Multi-modal Learning Using Electronic Health Records and Chest X-Rays," in ML-CDS and CLIP (MICCAI), Springer, pp. 106-114, 2020.
* [32] V. Singh, N.K. Verma, Z. Ul Islam and Y. Cui, "Feature Learning Using Stacked Autoencoder
for Shared and Multimodal Fusion of Medical Images," in Computational Intelligence: Theories Applications and Future Directions, Springer, vol. 1, pp. 53-66, 2019.
* [33] A. Dosovitskiy et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," in Proc. Int. Conf. Learn. Represent., 2021.
* [34] J. Devlin, M. Chang, K. Lee and T. Kristina, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," arXiv:1810.04805, 2018.
* [35] W. Kim, B. Son and I. Kim, "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision," in Int. Conf. Mach. Learn. (ICML), pp. 5583-5594, 2021.
* [36] Z. Yu, J. Yu, Y. Cui, D. Tao and Q. Tian, "Deep Modular Co-Attention Networks for Visual Question Answering," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 6281-6290, 2019.
* [37] Y.H. Tsai, S. Bai, L.P. Pu, J.Z. Kolter, L.P. Morency and R. Salakhutdinov, "Multimodal Transformer for Unaligned Multimodal Language Sequences," in Proc. Conf. Assoc. Comput. Linguistics, vol. 2019, pp. 6558-6569, 2019.
* [38] C. Li et al., "SemVLP: Vision-Language Pre-training by Aligning Semantics at Multiple Levels," arXiv:2103.07829, 2022.
|
2309.13158 | Size and Albedo Constraints for (152830) Dinkinesh Using WISE Data | Probing small main-belt asteroids provides insight into their formation and
evolution through multiple dynamical and collisional processes. These asteroids
also overlap in size with the potentially hazardous near-earth object
population and supply the majority of these objects. The Lucy mission will
provide an opportunity for study of a small main-belt asteroid, (152830)
Dinkinesh. The spacecraft will perform a flyby of this object on November 1,
2023, in preparation for its mission to the Jupiter Trojan asteroids. We
employed aperture photometry on stacked frames of Dinkinesh obtained by the
Wide-field-Infrared Survey Explorer and performed thermal modeling on a
detection at 12 $\mu$m to compute diameter and albedo values. Through this
method, we determined Dinkinesh has an effective spherical diameter of
$0.76^{+0.11}_{-0.21}$ km and a visual geometric albedo of
$0.27^{+0.25}_{-0.06}$ at the 16th and 84th percentiles. This albedo is
consistent with typical stony (S-type) asteroids. | Kiana D. McFadden, Amy K. Mainzer, Joseph R. Masiero, James M. Bauer, Roc M. Cutri, Dar Dahlen, Frank J. Masci, Jana Pittichová, Akash Satpathy, Edward L. Wright | 2023-09-22T19:48:49Z | http://arxiv.org/abs/2309.13158v1 | # Size and Albedo Constraints for (152830) Dinkinesh Using WISE Data
###### Abstract
Probing small main-belt asteroids provides insight into their formation and evolution through multiple dynamical and collisional processes. These asteroids also overlap in size with the potentially hazardous near-earth object population and supply the majority of these objects. The Lucy mission will provide an opportunity for study of a small main-belt asteroid, (152830) Dinkinesh. The spacecraft will perform a flyby of this object on November 1, 2023, in preparation for its mission to the Jupiter Trojan asteroids. We employed aperture photometry on stacked frames of Dinkinesh obtained by the Wide-field-Infrared Survey Explorer and performed thermal modeling on a detection at 12 \(\mu\)m to compute diameter and albedo values. Through this method, we determined Dinkinesh has an effective spherical diameter of \(0.76^{+0.11}_{-0.21}\) km and a visual geometric albedo of \(0.27^{+0.25}_{-0.06}\) at the 16th and 84th percentiles. This albedo is consistent with typical stony (S-type) asteroids.
WISE -- NEOWISE -- Main-Belt Asteroid -- IRSA
## 1 Overview
Our solar system formed from a molecular cloud of dust and gas, and as the solar nebula flattened into a disk and the protosun formed, dust grains began to condense (Weidenschilling, 1977). These grains eventually formed planetesimals through accretion; some planetesimals formed the planets we know today while others may have stopped growing at smaller sizes. The main-belt asteroids' compositions depend on where they were formed within the protoplanetary disk and how they were mixed in the early solar system (Morbidelli et al., 2009; DeMeo et al., 2015). Mixing may have occurred through planetary migration or through streaming instabilities (Carrera et al., 2015; Tsiganis et al., 2005). In addition, collisional cascades resulted in showers of smaller fragments that could migrate more rapidly due to non-gravitational forces (i.e. Yarkovsky drift; Bottke et al., 2005a, b; Gomes et al., 2005). These smaller fragments can also re-accrete into rubble piles like Bennu, Ryugu, and Itokawa (e.g., Nakamura et al., 2023). The small main-belt asteroids we observe today in our solar system are likely the result of a combination of these processes (Bottke et al., 2015). By studying small main-belt asteroids we can gain insight into their formation and subsequent evolution.
Small main-belt asteroids are also important to study because they feed the current near-earth object (NEO) population, some of which have the potential to impact the Earth, and because their size scale overlaps with the largest NEOs (Alvarez et al., 1980). However, observational effects make it hard to study small main-belt asteroids because they are often too faint, far away, or small. On its way to explore the Jupiter Trojan asteroids, NASA's Lucy spacecraft will fly by the small main-belt asteroid (152830) Dinkinesh on November 1, 2023. The Lucy mission will collect observations using its thermal infrared spectrometer, high-resolution panchromatic imager, infrared imaging spectrometer, and color camera (Levison et al., 2021). These instruments will provide a diameter of the asteroid as well as a resolved shape model. Lucy will also obtain information on the visual geometric albedo, spin state, color maps,
and constraints on volume and densities. Dinkinesh will become the smallest main-belt asteroid to have detailed fly-by data.
Dinkinesh was discovered by the Lincoln Near-Earth Asteroid Program (LINEAR; Stokes et al., 2000) survey in 1999. It has multi-epoch observations of sufficient quantity to provide a well constrained orbit, which makes Dinkinesh a suitable spacecraft fly-by target. Bolin et al. (2023) were able to obtain spectroscopic observations using the Keck and Gemini-South telescopes that indicated that Dinkinesh is a S-type/Sq-type asteroid, and using the albedo range from Mainzer et al. (2011) and the absolute visible magnitude of \(H_{V}=17.62\pm 0.04\) mag from Mottola et al. (2023), Bolin et al. (2023) calculated an effective diameter range of \(0.67-0.96\) km based upon the relationship \(D_{eff}=1329p_{V}^{-1/2}10^{-H_{V}/5}\) km (Fowler and Chillemi, 1992). These results are in good agreement with de Leon et al. (2023), who obtained \(H_{V}=17.48\pm 0.05\) mag and \(0.16<G_{V}<0.23\) mag, giving them an effective spherical diameter range of \(0.542-1.309\) km, and with Mottola et al. (2023) who found \(G=0.378\) mag giving a diameter range of \(0.66-1.36\) km at \(2\sigma\). Mottola et al. (2023) also obtained light curve photometry of Dinkinesh and found a rotational period of \(P=52.67\pm 0.04\) hours, indicating a slowly rotating object.
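For reference, the \(D(H_{V},p_{V})\) relation quoted above can be evaluated directly; the albedo values in the short sketch below are illustrative only, not the ranges used by the cited studies.

```python
# Direct evaluation of D_eff = 1329 * p_V**-0.5 * 10**(-H_V/5) km (Fowler & Chillemi 1992).
def diameter_km(h_v, p_v):
    return 1329.0 * p_v**-0.5 * 10.0**(-h_v / 5.0)

h_v = 17.62                          # Mottola et al. (2023)
for p_v in (0.10, 0.20, 0.30):       # illustrative albedo values only
    print(f"p_V = {p_v:.2f}: D_eff = {diameter_km(h_v, p_v):.2f} km")
```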
In this paper, we report an independent measurement of size and visual geometric albedo of Dinkinesh that we derived from thermal modeling based on 12 \(\mu\)m stacked photometry from the Wide-field Infrared Survey Explorer taken in 2010 (Wright et al., 2010; Mainzer et al., 2011).
## 2 Observations
The Wide-field Infrared Survey Explorer (WISE; Wright et al., 2010) is a NASA mission that created an all-sky map at four wavelengths spanning 3.4, 4.6, 12 and 22 \(\mu\)m (denoted W1, W2, W3, and W4, respectively). It launched on December 14, 2009, at which point all four bands were available. WISE completed its fully cryogenic primary mission on August 6, 2010. Although its primary scientific objective was not observing asteroids, it was able to detect \(\sim\)190,000 of them using automated detection software (Mainzer et al., 2011). Following the successful completion of its prime mission, the WISE spacecraft was reactivated in late 2013 and renamed the Near-Earth-Object Wide-field Infrared Survey Explorer (NEOWISE; Mainzer et al., 2014) with its primary focus being characterization of NEOs using the two short-wavelength channels that were still operational, W1 and W2. Observations at thermal wavelengths allow for determination of asteroid diameters using thermal models, and if visible light data are available in addition to thermal measurements, it is possible to determine visible albedo as well as diameter.
During the primary fully cryogenic survey, WISE obtained 19 independent sets of exposures covering the position of Dinkinesh, 17 of which were of good quality. Table 1 summarizes our observations. The asteroid was generally too faint to be detected in the individual exposures, but we coadded the good quality frames by registering them to the position of the asteroid at the time of each exposure and coadding the shifted images. Exposures in which the predicted position of the asteroid coincided with a background star or galaxy were also eliminated prior to the coadd, based on examination of the AllWISE Source Catalog (Cutri, 2014). Exposures taken within 20\({}^{\circ}\) of the Moon were also excluded. The remaining 17 exposures were combined using the Image Co-addition with Optional Resolution Enhancement (ICORE; Masci and Fowler, 2009; Masci, 2013) algorithm.1 The resulting coadded images in the four WISE bands are shown in Figure 1. The object is detected with a signal-to-noise ratio (SNR) of \(\sim\)5 at 12 \(\mu\)m. We searched NEOWISE W1 and W2 exposures taken after the fully cryogenic mission and did not find any detections.
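The moving-object coaddition can be illustrated schematically as below; this is a toy shift-and-average example, not the ICORE algorithm actually used.

```python
import numpy as np
from scipy.ndimage import shift

# Toy illustration of moving-object coaddition (not the ICORE algorithm):
# each exposure is shifted so the asteroid's predicted position falls on a
# common reference pixel, and the registered frames are then averaged.
def coadd_on_object(frames, xy_positions, ref_xy):
    registered = []
    for frame, (x, y) in zip(frames, xy_positions):
        dy, dx = ref_xy[1] - y, ref_xy[0] - x
        registered.append(shift(frame, (dy, dx), order=1, mode="constant", cval=np.nan))
    return np.nanmean(registered, axis=0)

rng = np.random.default_rng(0)
frames, positions = [], []
for k in range(17):                      # 17 good-quality exposures
    img = rng.normal(0.0, 1.0, (64, 64))
    x, y = 20 + k, 32                    # source drifts one pixel per exposure
    img[y, x] += 2.0                     # signal comparable to the per-frame noise
    frames.append(img)
    positions.append((x, y))

coadd = coadd_on_object(frames, positions, ref_xy=(32, 32))
print(coadd[32, 32])                     # the source stands out after stacking
```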
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Minimum MJD & Median MJD & Maximum MJD & RA & Dec & 12 \(\mu\)m Magnitude & SNR & \(R_{helio}\) & \(R_{obs}\) \\ Days & Days & Days & Deg & Deg & & & au & au \\ \hline
55274.2502 & 55275.0441 & 55275.7720 & 95.6132 & 25.8412 & 11.50 \(\pm\) 0.23 & 4.6 & 1.98 & 1.62 \\ \hline \end{tabular}
\end{table}
Table 1: Stacked image photometry of Dinkinesh for aperture three. MJD = Modified Julian Date. \(R_{helio}\) = radial heliocentric distance to the object. \(R_{obs}\) = radial distance to the object from the observer.
## 3 Methods
We performed aperture photometry on the coadded images of Dinkinesh using a set of nested circular apertures in band W3 with radii of 11, 12, 14, 18, 22, 26, and 33 arcsec and a large annular region to estimate the level of the sky signal. We did not perform photometry on the W1 and W2 images because they were contaminated with stars due to low galactic latitude (Dinkinesh was 5\({}^{\circ}\) off the galactic plane at the time of observations). The object was too faint to produce a significant detection in the stacked W4 frames, although these data can be used to set an upper brightness limit.
We calibrated the aperture source photometry back to a well-established photometric standard using the profile-fit photometry from WISE. We made stacks of 280 bright and unsaturated asteroids (SNRs \(>30.0\) and W3 \(>4.0\) magnitude) using the ICORE algorithm. Next, we computed aperture photometry for each object's stacked image and compared it to that object's profile-fit photometry. We employed an algorithm that flagged any extraneous sources such as stars, cosmic rays, diffraction spikes, optical artifacts, and smeared images from the profile-fit photometry. The rejection criteria included _danneal_\(>2000\) sec, _cc_flags_\(=0\), _qi_fact_\(\neq 0\), _rchi2_\(<5\), and _moon_sep_\(>20\)\({}^{\circ}\). Any detections from AllWISE that were within 5 arcsec of the asteroid's predicted position in a frame and within 1 mag of the asteroid's signal were excluded. For each of the 280 objects, we computed the offset between the profile fit W3 magnitude and all seven stacked aperture magnitudes. We then computed the average of the offsets for each aperture size to obtain the curve of growth, shown in Figure 2. We sought to minimize the background noise while also avoiding an overly small aperture size that would induce quantization effects; we selected aperture three with a 14 arcsec radius as the best balance. The limiting photometric calibration uncertainty is 0.03 mag for WISE (Cutri et al., 2012).2 We derived the aperture correction by using enough bright asteroids to reduce the correction term's uncertainty (\(0.23\pm 0.05\) mag; Figure 2) to much less than the photometric measurement uncertainty from Dinkinesh,
Figure 1: Stacked image photometry of Dinkinesh in WISE bands W1, W2, W3, and W4.
which was \(\sim\pm 0.2\) mag. We applied the aperture correction term to Dinkinesh's aperture magnitude to correct back to a calibrated profile fit W3 magnitude (Table 1). We tested apertures two and three (12 and 14 arcsec, respectively), and the results were the same within the statistical uncertainty.
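The curve-of-growth calibration can be sketched as follows; the arrays are placeholders standing in for the 280 calibrator asteroids' profile-fit and stacked aperture magnitudes.

```python
import numpy as np

# Schematic curve-of-growth calibration with placeholder photometry.
apertures_arcsec = np.array([11, 12, 14, 18, 22, 26, 33])
n_obj = 280
rng = np.random.default_rng(1)
profile_mag = rng.uniform(4.0, 8.0, n_obj)                  # stand-in profile-fit W3 mags
aper_mag = profile_mag[:, None] + 0.2 + 0.05 * rng.normal(size=(n_obj, len(apertures_arcsec)))

offsets = profile_mag[:, None] - aper_mag                   # profile-fit minus aperture mags
correction = offsets.mean(axis=0)                           # curve of growth (one value per aperture)
corr_err = offsets.std(axis=0) / np.sqrt(n_obj)             # uncertainty of each mean offset

i = int(np.where(apertures_arcsec == 14)[0][0])             # aperture three (14 arcsec radius)
print(f"aperture-3 correction: {correction[i]:+.2f} +/- {corr_err[i]:.2f} mag")
# Adding this correction to the object's stacked aperture magnitude recovers a
# calibrated, profile-fit-equivalent magnitude.
```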
### Thermal Modeling
The Near-Earth Asteroid Thermal Model, NEATM, is a simple thermal model that improves upon earlier versions such as the standard thermal model and the fast-rotating model (Harris, 1998). The standard thermal model approximates an asteroid as a non-rotating sphere while solar insolation and the surface temperatures are in instantaneous equilibrium (Lebofsky et al., 1986). The fast-rotating model (Lebofsky and Spencer, 1989) assumes that the temperature is distributed in uniform bands in latitude all around the asteroid. Our study uses NEATM because previous work (Mainzer et al., 2011; Wright et al., 2018; Masiero et al., 2021) has demonstrated that the diameter measurements determined from NEOWISE using NEATM are generally reliable to within \(\sim\)10%, and it is computationally efficient. NEATM makes a number of assumptions to model the temperature distribution across the asteroid's surface. The NEATM model assumes that the asteroid is not rotating and that the night-side has a temperature of 0 K. Our implementation of NEATM approximates a sphere by using a sufficient number of triangular facets arranged in a Fibonacci lattice.3 This model also incorporates a beaming parameter \(\eta\) which is allowed to vary to account for additional thermal parameters such as thermal inertia and surface roughness, when multiple thermally dominated bands are available. We find the best fit solution using the Python _basinhopping_ (Wales and Doye, 1997; Virtanen et al., 2020)
Figure 2: We computed the average profile fit magnitude for 280 bright, unsaturated objects minus the stacked aperture photometry for each object. We then averaged over the 280 objects’ offsets to compute the curve of growth for band W3. The offset for aperture two was \(-0.33\) mag, and the offset for aperture three was \(-0.23\) mag.
package and find the uncertainties using the Markov chain Monte Carlo approach implemented by the Python _emcee_ package (Foreman-Mackey et al., 2013).
We apply NEATM to Dinkinesh to obtain its effective spherical diameter and albedo values. We set the emissivity value to 0.9 based on meteoritical studies (Bates et al., 2021; Ostrowski & Bryson, 2020). We adopt \(H_{V}=17.62\pm 0.04\) mag based on the photometric measurements of Mottola et al. (2023). Similarly, we adopted \(G=0.378\) mag from Mottola et al. (2023) but tested G over the full range 0.08 to 0.38 mag and found that this did not make a statistically significant difference in the results because diameter is only weakly dependent on G. Since we do not have multiple measurements at more than one thermally dominated band, we assumed a beaming value \(\eta\) of 1.00 \(\pm 0.1\) based on Masiero et al. (2011, 2014).
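The sketch below illustrates the structure of such a NEATM fit: a Fibonacci-lattice sphere, the beaming-modified subsolar temperature, a 12 \(\mu\)m flux sum over the illuminated facets, and a basinhopping search over the diameter. The facet count, the zero-phase geometry, the heliocentric and observer distances (2.2 and 1.8 au) and the synthetic "measured" flux are illustrative assumptions, not the values used in our analysis.

```python
import numpy as np
from scipy.optimize import basinhopping

H_PLANCK, C_LIGHT, K_B, SIGMA_SB = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8
S_SUN, AU = 1361.0, 1.496e11               # solar constant at 1 au [W m^-2]; metres per au

def fibonacci_sphere(n):
    """Outward unit normals of n facets arranged in a Fibonacci lattice."""
    i = np.arange(n) + 0.5
    z = 1.0 - 2.0 * i / n
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def planck(wav, temps):
    """Spectral radiance B_lambda(T); facets colder than ~10 K contribute nothing here."""
    out = np.zeros_like(temps)
    warm = temps > 10.0
    x = H_PLANCK * C_LIGHT / (wav * K_B * temps[warm])
    out[warm] = 2.0 * H_PLANCK * C_LIGHT**2 / wav**5 / np.expm1(x)
    return out

def neatm_flux(d_km, h_v, eta, emissivity, g_slope, r_h_au, delta_au, wav, n_facets=2000):
    p_v = (1329.0 * 10.0 ** (-h_v / 5.0) / d_km) ** 2                 # geometric albedo from H, D
    bond_albedo = np.clip((0.29 + 0.684 * g_slope) * p_v, 0.0, 0.99)   # q * p_V, clipped for safety
    t_ss = (S_SUN * (1.0 - bond_albedo) / (eta * emissivity * SIGMA_SB * r_h_au**2)) ** 0.25
    normals = fibonacci_sphere(n_facets)
    sun_dir = obs_dir = np.array([1.0, 0.0, 0.0])                      # zero phase angle for simplicity
    mu_sun = np.clip(normals @ sun_dir, 0.0, None)
    temps = t_ss * mu_sun ** 0.25                                       # night side stays cold
    facet_area = 4.0 * np.pi * (0.5e3 * d_km) ** 2 / n_facets           # m^2 per facet
    mu_obs = np.clip(normals @ obs_dir, 0.0, None)
    return emissivity * np.sum(planck(wav, temps) * mu_obs) * facet_area / (delta_au * AU) ** 2

# Synthetic "observation": the flux a 0.76 km NEATM sphere would emit at 11.56 microns
# for the adopted H_V, eta, emissivity and G, at placeholder distances of 2.2 au (Sun)
# and 1.8 au (observer). The fit should then recover the input diameter.
args = (17.62, 1.0, 0.9, 0.378, 2.2, 1.8, 11.56e-6)
measured_flux = neatm_flux(0.76, *args)
flux_err = 0.1 * measured_flux

chi2 = lambda d: ((neatm_flux(abs(d[0]), *args) - measured_flux) / flux_err) ** 2
best = basinhopping(chi2, x0=[0.8], niter=25)
print("best-fit diameter [km]:", abs(best.x[0]))
```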
## 4 Results/Discussion
We observed Dinkinesh over the course of 36.5 hours compared to its rotational period of 52.67 hours (Mottola et al., 2023). The long duration of our observations with WISE allows us to better constrain its effective spherical diameter by observing over most of its full rotational cycle. Through our thermal models, we determined that Dinkinesh has an effective spherical diameter of \(0.76^{+0.11}_{-0.21}\) km and a visual geometric albedo of \(0.27^{+0.25}_{-0.06}\) with the uncertainties specified at the 16th and 84th percentiles. Figure 3 shows the best fit solution from the thermal models. The albedo is consistent with the range of typical values for S-type asteroids (Mainzer et al., 2011).
We also used a spherical thermophysical model (TPM) as described in Wright (2007) and Koren et al. (2015) to determine the diameter of Dinkinesh, which gave a smaller estimate of \(621^{+92}_{-117}\) m. Although this is smaller than our NEATM-fit size, the two values are consistent within uncertainties. Most of the diameter difference between the TPM and NEATM is caused by the unusually long rotation period (\(P=52.67\) hours) of Dinkinesh, which is an order of magnitude longer than the median for typical objects with diameters near 700 m. We prefer NEATM due to the limited data available.
### Potential Asteroid Family Association
Asteroid family associations can be made by comparing orbital precession rates or proper orbital elements (Hirayama, 1918; Milani & Knezevic, 1994; Knezevic et al., 2002). Asteroid (152830) Dinkinesh is not formally part of any asteroid family.4 However, the synthetic proper orbital elements (Knezevic & Milani, 2003) for Dinkinesh show a very close match to those for asteroid (8) Flora in proper semi-major axis (2.191 vs 2.201 au), proper eccentricity (0.1482 vs 0.1449), rate of perihelion precession \(g\) (32.810 vs 32.017 arcsec/year), and rate of ascending node precession \(s\) (-35.502 vs -35.511 arcsec/year), deviating only in proper inclination (1.600\({}^{\circ}\) vs 5.574\({}^{\circ}\)).
Footnote 4: see AstDys: [https://newton.spacedys.com/astdys/index.php?pc=5](https://newton.spacedys.com/astdys/index.php?pc=5)
This implies that there is a potential link between Dinkinesh and the Flora family, with the large deviation in inclination being a result of the initial velocity due to the family-forming impact, or due to the seasonal Yarkovsky effect (Vokrouhlicky & Farinella, 1999) that has a large out-of-plane component for objects with obliquities that are neither parallel nor orthogonal to their orbital plane (Bottke et al., 2002). This link is further supported by the consistency between the spectral taxonomy of Dinkinesh and Flora.
## 5 Conclusions
We obtained photometry at 12 \(\mu\)m of the small main-belt asteroid (152830) Dinkinesh by coadding multiple independent exposures obtained by WISE in March 2010. We used two thermal models, NEATM and TPM, to obtain diameter and visual geometric albedo values. These values are in good agreement with the predicted diameter and albedo ranges from Bolin et al. (2023); de Leon et al. (2023), and Mottola et al. (2023). Based on the axial ratio of \(a/b\sim 1.43\) derived from the light curve amplitude by Mottola et al. (2023), our spherical-equivalent size of \(0.76^{+0.11}_{-0.21}\) km would correspond to axial sizes of \(2a=0.96\) km and \(2b=2c=0.67\) km, assuming that our measurements fully sample a half-rotation of Dinkinesh. From the WISE data, we determined that the visual geometric albedo is \(0.27^{+0.25}_{-0.06}\), which is consistent with typical S-type asteroids. The data that will be obtained from Lucy's flyby of Dinkinesh will provide us with a detailed look at a very small main-belt asteroid. That, in turn, will help us to better understand the provenance of such objects.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This publication also makes use of data products from NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the Planetary Science Division of the National Aeronautics and Space Administration.
This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology.
This research has made use of data and/or services provided by the International Astronomical Union's Minor Planet Center.
The work of J.P. was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
Dataset usage:
WISE All-Sky 4-band Single-Exposure Images
AllWISE Source Catalog
NEOWISE-R Single Exposure (L1b) Source Table
|
2309.10451 | The coupled-cluster self-energy | Coupled-cluster and Green's function theories are highly successful in
treating many-body electron correlation and there has been significant interest
in identifying and leveraging connections between them. Here we present a
diagrammatic definition of the irreducible coupled-cluster self-energy that
directly embeds coupled-cluster theory within the framework of many-body field
theory. The EOM-CC treatment emerges naturally from our definition via the
Dyson and Bethe-Salpeter equations, providing a unified description of RPA,
$GW$-BSE and CC theory for ground state and excitation energies. This clarifies
the origin of previously established connections between RPA, $GW$-BSE and
coupled-cluster theory, and exposes the relationship between vertex corrections
and the coupled-cluster amplitude equations. | Christopher J. N. Coveney, David P. Tew | 2023-09-19T09:12:15Z | http://arxiv.org/abs/2309.10451v2 | # The coupled-cluster self-energy
###### Abstract
An improved description of electronic correlation in molecules and materials can only be achieved by uncovering connections between different areas of electronic structure theory. A general unifying relationship between the many-body self-energy and coupled-cluster theory has remained hitherto unknown. Here, we present a formalism for constructing the coupled-cluster self-energy from the coupled-cluster ground state energy. Our approach illuminates the fundamental connections between the many-body self-energy and the coupled-cluster equations. As a consequence, we naturally arrive at the coupled-cluster quasiparticle and Bethe-Salpeter equations describing correlated electrons and excitons. This deep underlying structure explains the origin of the connections between RPA, \(GW\)-BSE and coupled-cluster theory, whilst also elucidating the relationship between vertex corrections and the amplitude equations.
The behaviour of electrons in materials and molecules has been of central importance to the scientific community since the birth of quantum mechanics [1]. From classifying photovoltaic devices and solar cells to accurately predicting a binding affinity of a drug to a protein, a theoretical description of the correlated motion of electrons is key [2; 3; 4]. Therefore, it is necessary to continue to develop a holistic understanding of the connections between different areas of electronic structure theory to generate increasingly accurate and scalable correlation methods.
The many-body self-energy is a functional of the exact single-particle Green's function, \(\Sigma\equiv\Sigma[G]\)[5]. Hedin's equations define a formally exact procedure for generating the exact many-body self-energy [6]. However, there exists no known functional form for the vertex and hence no closed functional form for the self-energy. This leads to difficulty generating well defined procedures for finding exact solutions of Hedin's equations even for simple model systems [7; 8; 9; 10; 11]. The inclusion of vertex corrections to include higher-order many-body correlation effects is neither rigorous nor trivial. The Luttinger-Ward functional is another formally exact procedure for constructing the self-energy functional [5; 6; 12; 13; 14; 15; 16]. In this case, the self-energy is obtained from the functional derivative of the Luttinger-Ward functional with respect to the exact single-particle Green's function. This generates the same infinite-order Feynman diagram perturbation series for the self-energy as described by Hedin's equations, with the additional property that self-energies which are derivable from the Luttinger-Ward functional must obey certain conservation laws such as particle number [6]. However, again, no known closed form expression for this functional exists [6; 17; 18]. The lack of a functional parametrization for the self-energy arises due to the fact that neither Hedin's equations nor the Luttinger-Ward functional assume a closed form expression for the exact many-body ground state wavefunction. However, the 'gold standard' coupled-cluster (CC) parametrization provides us with exactly this. We will show how this allows for a systematically improvable expression for an approximate self-energy.
Recent work on the supermatrix frequency-independent formulation of the quasiparticle equation within the \(GW\) and GF2 approximations have led to significant advances in our understanding of the nature of the many-body self-energy [19; 20; 21; 22; 23; 24]. The relationship between the random-phase approximation (RPA), coupled-cluster doubles (CCD) and equation-of-motion coupled-cluster theory (EOM-CC) is also now well established [25; 26; 27; 22]. The recent work on uncovering formal and numerical similarities between the \(GW\) supermatrix and an _approximate_ algebraic self-energy from ionization potential (IP)/electron affinity (EA) equation-of-motion coupled-cluster singles and doubles (IP/EA-EOM-CCSD) has helped to shed light on the nature of the \(GW\) self-energy [28]. Proof of an exact equivalence between the single-shot G\({}_{0}\)W\({}_{0}\) approximation and quasi-boson equation-of-motion ring unitary coupled-cluster doubles theory has also recently been presented [24]. As recently shown in Ref. [29], uncovering the connections between coupled-cluster theory and self-energy approaches is essential for the development of non-divergent and scalable electronic correlation theories.
No general relationship has been introduced to define the coupled-cluster self-energy functional and its relationship to standard Green's function theory. This is the main subject of this work. By taking inspiration from the connections uncovered between RPA and EOM-CCD, the supermatrix representation of the quasiparticle equation and the Luttinger-Ward functional, we reveal the structure of the coupled-cluster self-energy and hence its relationship to EOM-CC, the Green's function formalism and Kohn-Sham density functional theory (KS-DFT). The power of this approach rests in the formally exact coupled-cluster ansatz for the ground state wavefunction. This approach to the coupled-cluster self-energy provides a fundamental explanation of the equivalence between EOM-CC theory in the minimal space of determinants and Green's function theory [26; 28]. It can be shown to give the exact ground state energy as well as higher-order interaction kernels, and therefore quasiparticle states, from Green's function theory.
In the following, indices \(i,j,k,...\) denote occupied (valence band) spin-orbitals, \(a,b,c,...\) virtuals (conduction band) and \(p,q,r,...\) general spin-orbitals. The single-particle Green's function is defined as
\[iG_{pq}(t_{1}-t_{2})=\left\langle\Psi_{0}\right|\mathrm{T}\left[a_{p}(t_{1})a _{q}^{\dagger}(t_{2})\right]\left|\Psi_{0}\right\rangle, \tag{1}\]
where \(a_{p}^{\dagger}/a_{p}\) create/annihilate electrons in spin-orbital \(\left|\phi_{p}\right\rangle\), \(\mathrm{T}\) is the time-ordering operator and \(\left|\Psi_{0}\right\rangle\) is the normalized, exact \(N\)-electron ground state [30]. The poles of the exact Green's function correspond to the exact ionization potentials and electron affinities of the system. From the equation of motion of the Green's function, the exact ground state energy can be obtained from the Galitskii-Migdal formula [31]. Therefore, the Green's function simultaneously contains both the exact single-particle spectrum and ground state energy. From the Dyson equation, it can be shown that the IPs, EAs and Dyson orbitals can be obtained by solution of the quasiparticle equations [32; 30]
\[\left[f+\Sigma_{c}(\omega=\varepsilon_{p})\right]\left|\psi_{p}\right\rangle= \varepsilon_{p}\left|\psi_{p}\right\rangle \tag{2}\]
where \(f\) is the Fock operator and \(\Sigma_{c}\) is the many-body self-energy. Whilst the self-energy is formally frequency-dependent, this frequency-dependence is actually present in order to capture higher-order interactions whilst remaining in the single-particle spin-orbital basis. This is confirmed by the fact that the same quasiparticle equation may be cast in terms of a static, upfolded supermatrix representation. The exact self-energy may be defined by the Dyson equation [33; 30] but this is not a universal functional relationship. We note similarity between the quasiparticle and KS-DFT equations in Eq. 2.
The coupled-cluster Green's function (CCGF) is obtained by observing that the Green's function contains the exact single-particle spectrum. As first proposed in Refs [34; 35], one may construct the CCGF from the corresponding EOM-CC eigenvalue problem. In CC theory, the many-body ground state is expressed as \(\left|\Psi_{0}^{\mathrm{CC}}\right\rangle=e^{T}\left|\Phi\right\rangle\), where \(T\) creates all excitations with respect to a reference determinant \(\left|\Phi\right\rangle\) as
\[T=\sum_{ai}t_{i}^{a}a_{a}^{\dagger}a_{i}+\frac{1}{4}\sum_{abij}t_{ij}^{ab}a_{a}^{\dagger}a_{b}^{\dagger}a_{j}a_{i}+... \tag{3}\]
where \(\{t_{i}^{a},t_{ij}^{ab},...\}\) are the cluster amplitudes. The cluster amplitudes and ground state energy are obtained by projection onto the manifold of Slater determinants, \(\left\langle\Phi_{0}|\bar{H}|\Phi_{0}\right\rangle=E_{0}^{\mathrm{CC}};\left\langle\Phi_{i}^{a}|\bar{H}|\Phi_{0}\right\rangle=0;\left\langle\Phi_{ij}^{ab}|\bar{H}|\Phi_{0}\right\rangle=0\) and so on, where \(\bar{H}=e^{-T}He^{T}\) is the similarity transformed Hamiltonian. The manifold of excited Slater determinants is given by \(\left|\Phi_{i}^{a}\right\rangle=a_{a}^{\dagger}a_{i}\left|\Phi_{0}\right\rangle\) and so on. The ground state energy obtained from this procedure is formally exact. Importantly, the similarity transformed Hamiltonian, \(\bar{H}\), is non-Hermitian and possesses different left and right eigenstates.
The IPs and EAs are obtained by diagonalization of \(\bar{H}\) in the basis of all Slater determinants containing \((N\pm 1)\)-electrons. In EOM-CC theory, the supermatrix splits into separate IP and EA parts. For example, the IP-EOM-CC eigenvalue problem can be written as the following eigenvalue problem
\[\bar{H}R_{k}\left|\Phi\right\rangle=\Omega_{k}^{N-1}R_{k}\left|\Phi\right\rangle \tag{4}\]
where the action \(R_{k}\left|\Phi\right\rangle\) takes us to the target excited eigenstate, \(\left|\Psi_{k}\right\rangle\). In IP-EOM-CC theory, we take the operator to be that which creates all determinants consisting of \((N-1)\) electrons. This corresponds to the choice
\[R_{k}=\sum_{i}r_{i}(k)a_{i}+\frac{1}{2}\sum_{ija}r_{ij}^{a}(k)a_{a}^{\dagger}a _{j}a_{i}+... \tag{5}\]
The eigenvalues, \(\Omega_{k}^{N-1}\), give the exact ionization potentials of the system, corresponding to the poles of the exact Green's function [34; 35; 28]. The left and right eigenstates form a complete biorthogonal set in the \((N\pm 1)\)-electron space, and can be used to construct the CCGF. However, the exact ground state correlation energy is _not_ recovered from the CCGF and the associated self-energy is found by inverting the Dyson equation [34; 35]. This definition of the coupled-cluster self-energy does not reveal its relationship to the Bethe-Salpeter kernel and neutral excitation energies. Motivated by the fact that the CCGF does not give the exact ground state correlation energy, we introduce an alternative definition for the coupled-cluster self-energy based on the relations from many-particle quantum theory [5; 12; 13].
The Brueckner formulation of coupled-cluster theory (BCC) employs the same exponential ansatz for the many-body ground state wavefunction. The difference is that the singles amplitudes, \(t_{i}^{a}\), are eliminated by construction. The corresponding expression for the exact Brueckner ground state correlation energy is given by
\[E_{c}^{\mathrm{BCC}}=\frac{1}{4}\sum_{ijab}\left\langle ij||ab\right\rangle t_{ ij}^{ab}\, \tag{6}\]
where \(\left\langle ij||ab\right\rangle\equiv v_{ij,ab}-v_{ij,ba}\) are the antisymmetrized two-electron Coulomb integrals. Clearly, the doubles amplitudes, \(t_{ij}^{ab}\), depend on all higher order amplitudes due to the structure of the projected CC equations. It has been shown that the BCC formalism represents an effective Hamiltonian theory as the total energy can be written in terms of an extended Fock operator, \(F\)[36; 37; 38]. The occupied-virtual block of this extended Fock operator is given by the \(T_{1}\) equation such that convergence of the Brueckner equations requires \(F_{ia}=0\)[37].
At zero temperature, the Luttinger-Ward theorem states that the self-energy is given by the functional
derivative of the ground state correlation energy with respect to the Green's function [12; 13; 14; 15; 39; 40; 41; 6]
\[\Sigma^{c}(1,2)=i\frac{\delta E_{c}}{\delta G(2,1)}\, \tag{7}\]
where \(1\equiv(\mathbf{r}_{1}\sigma_{1},t_{1})\) is the composite spin-space-time coordinate. Setting the Green's function to equal time in this expression, the explicitly static self-energy is simply the correlation potential from KS-DFT.
Within coupled-cluster theory, the ground state energy is expressed exactly in terms of the non-interacting Green's function as the interaction vertex is renormalized by the excitation amplitudes. As a result, the coupled-cluster self-energy functional takes the form, \(\tilde{\Sigma}\equiv\tilde{\Sigma}[G_{0}]\). We choose to define the coupled-cluster self-energy as
\[\tilde{\Sigma}_{ij}[G_{0}]=i\frac{\delta E_{c}^{\rm BCC}[\mathbf{t},G_{0}]}{ \delta G_{ji}^{0}}\Bigg{|}_{\mathbf{t}=t_{\mu}}=i\frac{\delta E_{c}^{\rm BCC} }{\delta\tilde{G}_{ji}}\, \tag{8}\]
where \(\tilde{G}_{ji}\) is the equal-time, 'non-interacting' Green's function in the basis of the Brueckner occupied orbitals and the functional derivative is evaluated for the exact doubles amplitudes. Diagrammatically, this functional derivative corresponds to cutting the hole lines in the Goldstone diagram of the Brueckner correlation energy (Figure 1). The self-energy in Eq. 8 is 'static', originating from the 'upfolded' supermatrix representation [42]. Likewise, we may define the self-energy for the virtual states as the functional derivative
\[\tilde{\Sigma}_{ab}=i\frac{\delta E_{c}^{\rm BCC}}{\delta\tilde{G}_{ba}}\, \tag{9}\]
where \(\tilde{G}_{ba}\) is the equal-time, 'non-interacting' Green's function in the basis of Brueckner virtuals. As the non-interacting propagator does not contain virtual-occupied/occupied-virtual contributions, the self-energy for the occupied-virtual block vanishes identically, \(\tilde{\Sigma}_{ia}=\tilde{\Sigma}_{ai}=0\). Taking the functional derivative of the BCC correlation energy as per Eq. 8, we get (see SI)
\[i\frac{\delta E_{c}^{\rm BCC}}{\delta\tilde{G}_{ji}}=\frac{1}{2}\sum_{kab} \left\langle ik||ab\right\rangle t_{jk}^{ab}=\tilde{\Sigma}_{ij}\ \ . \tag{10}\]
This is exactly the correlation part of the generalized Fock operator introduced in Brueckner theory for the occupied states [37; 28; 38]. Importantly, the doubles amplitudes appearing here are formally exact as they are determined from the projected coupled-cluster equations. Similarly, the self-energy of the virtual states is given by
\[i\frac{\delta E_{c}^{\rm BCC}}{\delta\tilde{G}_{ba}}=-\frac{1}{2}\sum_{ijc} \left\langle ij||bc\right\rangle t_{ij}^{ac}=\tilde{\Sigma}_{ab}. \tag{11}\]
Using these results, one may write the coupled-cluster quasiparticle equations for the occupied and virtual states in terms of the extended Fock operator
\[F_{ij}\equiv f_{ij}+\tilde{\Sigma}_{ij}\ \ ;\ \ F_{ab}\equiv f_{ab}+\tilde{ \Sigma}_{ab}. \tag{12}\]
This definition of the self-energy decouples the IP and EA sectors. From Eq. 10, we obtain the exact ground state correlation energy by taking the trace of the self-energy
\[\frac{1}{2}\sum_{i}\tilde{\Sigma}_{ii}=\frac{1}{4}\sum_{ij,ab}\left\langle ij ||ab\right\rangle t_{ij}^{ab}=E_{c}^{\rm BCC}. \tag{13}\]
This is identical to the form of the RPA and \(GW\)-BSE correlation energy, thereby unifying both approaches [22; 25; 38]. From Eq. 12, we may find the quasiparticle solutions by diagonalization of the extended Fock operator, where the \(N\) principal solutions give the quasiparticle energies and wavefunctions. The advantage of this approach resides in the separation of the IP and EA sectors, yielding a structure similar to equation-of-motion coupled-cluster theory (Figure 2). These eigenvalues automatically correspond to ionization energies and electron affinities, providing us with a correlated generalized Koopmans' theorem [43]. The extended Fock operator exactly corresponds to the \(T_{2}\) transformed Hamiltonian in the minimal space of one hole/particle (1h)/(1p) Slater determinants
\[F_{ij}=\left\langle\Phi_{i}|e^{-T_{2}}H_{N}e^{T_{2}}|\Phi_{j}\right\rangle\ \ ;\ \ F_{ab}=\left\langle\Phi^{a}|e^{-T_{2}}H_{N}e^{T_{2}}|\Phi^{b}\right\rangle, \tag{14}\]
where the \(T_{2}\) amplitudes are exact. The dynamical degrees of freedom of the self-energy are simply re-expressed in terms of the static coupled-cluster amplitudes. In the supermatrix representation, the \(GW\) and GF2 approximations couple the IP and EA sectors, implicitly containing orbital relaxation despite giving an inexact ground state energy and spectrum [24; 28; 19]. It can be shown that the extended Fock matrix may also be related to KS-DFT using the similarity transformed Hamiltonian.
The two-particle Bethe-Salpeter interaction kernel is given by the functional derivative [5; 44]
\[\Xi(1,2;3,4)=i\frac{\delta\Sigma(1,3)}{\delta G(4,2)}. \tag{15}\]
Figure 1: The series of coupled-cluster functional derivatives obtained by cutting lines in the Goldstone diagrams.
Depending on the relative time-ordering of the field operators, the two-particle Green's function may describe particle-hole or particle-particle correlations [26; 38]. We similarly define the analogous particle-hole interaction kernel from the CC self-energy as
\[\Xi^{c}_{ia,jb}=i\frac{\delta\tilde{\Sigma}_{ij}}{\delta\tilde{G}_{ba}}. \tag{16}\]
This functional derivative gives the coupled-cluster two-particle interaction kernel from the Bethe-Salpeter equation (BSE) in the space of singly excited determinants
\[i\frac{\delta\tilde{\Sigma}_{ij}}{\delta\tilde{G}_{ba}}=\sum_{kc}\left\langle ik||bc\right\rangle t^{ca}_{jk}=\Xi^{c}_{ia,jb}\ \ . \tag{17}\]
This is exactly the form of the correlated part of the \(T_{2}\) transformed interaction in the space of singly excited determinants [26; 28; 38]. It should be noted that this kernel retains the fermionic symmetry of the many-body wavefunction via the doubles amplitudes. Combining the functional derivative of the HF self-energy together with Eq. 17, we generate the full kernel and write the effective CC-BSE Hamiltonian in terms of the coupled-cluster doubles amplitudes as (see SI) [22; 25; 26]
\[\bar{H}^{\text{BSE}}_{ia,jb}=F_{ab}\delta_{ij}-F_{ij}\delta_{ab}+\left\langle ia||jb\right\rangle+\sum_{kc}\left\langle ik||bc\right\rangle t^{ca}_{jk} \tag{18}\]
whose eigenvalues give the exciton energies. This effective Hamiltonian is exactly the \(T_{2}\) transformed Hamiltonian in the space of singly excited determinants [26; 27; 28]
\[\bar{H}^{\text{BSE}}_{ia,jb}\equiv\bra{\Phi^{a}_{i}}e^{-T_{2}}H_{N}e^{T_{2}} |\Phi^{b}_{j}\, \tag{19}\]
where the \(T_{2}\) amplitudes are solved for exactly. This is equivalent to the upper-left block from the excitation energy (EE)-EOM-CC treatment (Figure 2). Through the identities derived above (Eqs 17 and 18), the relationship between IP- and EE-EOM-CC theory is clear.
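A corresponding sketch for Eqs. 18-19 assembles the CC-BSE effective Hamiltonian in the space of singly excited determinants and diagonalizes it for the exciton energies; again, the tensors are random placeholders rather than converged coupled-cluster quantities.

```python
import numpy as np

# Eq. 18 in the 1p1h space, with random placeholder tensors; F_oo, F_vv, v_oovv
# and t2 have the same meaning as in the previous sketch, and v_ovov holds the
# antisymmetrized integrals <ia||jb>.
no, nv = 4, 6
rng = np.random.default_rng(1)
F_oo = np.diag(rng.uniform(-1.0, -0.3, no))
F_vv = np.diag(rng.uniform(0.2, 1.0, nv))
v_ovov = rng.normal(scale=0.1, size=(no, nv, no, nv))
v_oovv = rng.normal(scale=0.1, size=(no, no, nv, nv))
t2 = rng.normal(scale=0.05, size=(no, no, nv, nv))

h_bse = (np.einsum('ab,ij->iajb', F_vv, np.eye(no))        # + F_ab delta_ij
         - np.einsum('ij,ab->iajb', F_oo, np.eye(nv))       # - F_ij delta_ab
         + v_ovov                                            # + <ia||jb>
         + np.einsum('ikbc,jkca->iajb', v_oovv, t2))         # + sum_kc <ik||bc> t_jk^ca

# Diagonalize in the basis of singly excited determinants; since h_bse is
# non-Hermitian, its eigenvalues are in general complex.
omega = np.linalg.eigvals(h_bse.reshape(no * nv, no * nv))
```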
The random phase approximation (RPA) and \(GW\)-BSE approximation are formulated in the space of singly excited determinants and the neutral excitation energies are obtained from the eigenvalue problem [22; 25; 45]
\[\left(\begin{array}{cc}\mathbf{A}&\mathbf{B}\\ -\mathbf{B}^{*}&-\mathbf{A}^{*}\end{array}\right)\left(\begin{array}{c} \mathbf{X}\\ \mathbf{Y}\end{array}\right)=\left(\begin{array}{c}\mathbf{X}\\ \mathbf{Y}\end{array}\right)\mathbf{\Omega}\, \tag{20}\]
where \(A_{ia,jb}=\Delta^{a}_{i}\delta_{ab}\delta_{ij}+V_{ia,jb}\) and \(B_{ia,jb}=V_{ij,ab}\). Within the RPA, \(\Delta^{a}_{i}=\epsilon_{a}-\epsilon_{i}\), \(V_{ia,jb}=\langle ia||jb\rangle\) and \(B_{ia,jb}=\langle ij||ab\rangle\). In the \(GW\)-BSE approach, \(\Delta^{a}_{i}=\epsilon^{GW}_{a}-\epsilon^{GW}_{i}\), \(V_{ia,jb}=\langle ia|jb\rangle-W_{ia,jb}\) and \(B_{ia,jb}=\langle ij|ab\rangle-W_{ij,ab}\), where \(W\) is the static screened interaction [32; 44]. The corresponding eigenstates of Eq. 20 are known as excitons, namely correlated electron-hole pairs. It has been shown that RPA reduces to a form of CCD where the doubles amplitudes are solved for by keeping only the so-called 'ring' contractions (rCCD), and recently a similar structure for effective doubles amplitudes has been uncovered for \(GW\)-BSE [22; 25]. The relationship between RPA, \(GW\)-BSE and the doubles amplitudes is obtained by identifying that the doubles amplitude is given by \(t^{ab}_{ij}\equiv\mathbf{T}=\mathbf{Y}\mathbf{X}^{-1}\)[25; 45]. Using this relation, Eq. 20 yields the following Riccati equation for the doubles amplitude [25]
\[\mathbf{B}^{*}+\mathbf{A}^{*}\mathbf{T}+\mathbf{T}\mathbf{A}+\mathbf{T} \mathbf{B}\mathbf{T}=0. \tag{21}\]
These are exactly the rCCD amplitude equations. The resulting RPA/\(GW\)-BSE equations can therefore be written as [22; 26]
\[\mathbf{H}^{\text{RPA}}=\mathbf{A}+\mathbf{B}\mathbf{T}\, \tag{22}\]
which, using the definition of the matrix elements defined above, gives exactly the same structure as in Eq. 18
\[H^{\text{RPA}}_{ia,jb}=\Delta^{a}_{i}\delta_{ij}\delta_{ab}+\langle ia||jb\rangle+\sum_{kc}\left\langle ik||bc\right\rangle t^{ca}_{jk}\, \tag{23}\]
remembering that the doubles amplitudes are determined from the rCCD equations (Eq. 21).
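The algebra of Eqs. 20-23 can be checked numerically with a small sketch: for a random but stable \(\mathbf{A}\)/\(\mathbf{B}\) pair (real orbitals assumed, so \(\mathbf{A}^{*}=\mathbf{A}\) and \(\mathbf{B}^{*}=\mathbf{B}\)), the amplitudes \(\mathbf{T}=\mathbf{Y}\mathbf{X}^{-1}\) satisfy the Riccati equation to machine precision and \(\mathbf{A}+\mathbf{B}\mathbf{T}\) reproduces the positive RPA roots.

```python
import numpy as np

n = 8                                           # number of ia pairs (placeholder size)
rng = np.random.default_rng(2)
coupling = 0.05 * rng.normal(size=(n, n))
A = np.diag(rng.uniform(1.0, 2.0, n)) + 0.5 * (coupling + coupling.T)
B = 0.05 * rng.normal(size=(n, n))
B = 0.5 * (B + B.T)                             # keep A, B symmetric and the problem stable

M = np.block([[A, B], [-B, -A]])                # Eq. 20 supermatrix (real orbitals)
evals, evecs = np.linalg.eig(M)
pos = np.argsort(evals.real)[n:]                # the n positive (excitation) roots
X, Y = evecs[:n, pos].real, evecs[n:, pos].real

T = Y @ np.linalg.inv(X)                        # ring-CCD doubles amplitudes
riccati = B + A @ T + T @ A + T @ B @ T         # Eq. 21 residual
assert np.allclose(riccati, 0.0, atol=1e-8)

H_rpa = A + B @ T                               # Eq. 22
assert np.allclose(np.sort(np.linalg.eigvals(H_rpa).real), np.sort(evals.real[pos]))
```

The residual vanishes because eliminating \(\mathbf{\Omega}\) between the two block rows of Eq. 20 gives precisely Eq. 21, so any stable solution of the eigenvalue problem furnishes a solution of the rCCD amplitude equations.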
Here, we see the connection between our formalism and RPA in a simple and clear way. When higher-order excitation processes are neglected, as is the case for RPA where \(R_{k}=\sum_{ai}X^{a}_{i}(k)a^{\dagger}_{a}a_{i}-Y^{i}_{a}(k)a^{\dagger}_{i}a_{a}\), we see that the structure of our derived CC-BSE effective Hamiltonian, \(\bar{H}^{\text{BSE}}\), is identical to the RPA/\(GW\)-BSE treatment. Our general formalism reduces to the RPA eigenvalue problem when the effects of the extended Fock operator are neglected and the doubles amplitude are solved via the rCCD equations [26; 27; 38]. Likewise, the \(GW\)-BSE approximation can be obtained by using the \(GW\) eigenvalues for the valence and conduction bands and
Figure 2: Schematic of the relationship between the coupled-cluster self-energy and EOM-CC supermatrices.
the screened instead of bare interaction in the rCCD amplitude equations [19; 20; 21]. Therefore, in RPA and \(GW\)-BSE, the rCCD amplitudes miss important additional contributions from the full CC equations (Figure 3). Our formalism provides a simple and well-defined way to improve upon both the RPA and \(GW\)-BSE approximations by including three body terms via perturbative triples amplitudes (or better) as well as the effects of the extended Fock operator. It is clear from this analysis that the presence of the extended Fock operator is a result of the dressing of the single-particle Green's function. Likewise, the doubles amplitude equations can be likened to an iterative approach to including vertex corrections to the self-energy [46; 47; 48; 49; 50; 51; 52; 53]. This correspondence can be identified from the structure of the doubles amplitude equations which include higher-order correlation effects depending on the truncation of the cluster operator, \(T\). Inclusion of higher-order excitations in the cluster operator and iterative solution of the resulting doubles amplitude equations systematically includes higher-order correlation effects analogous to the vertex function appearing in Hedin's equations.
Likewise, if we want to describe the particle-particle (2p) or hole-hole (2h) two-particle correlations, via a different time-ordering of the two-particle Green's function, we may define these kernels as
\[\Xi_{ab,cd}^{e}=i\frac{\delta\tilde{\Sigma}_{ac}}{\delta\tilde{G}_{db}}\ \ ;\ \ \Xi_{ij,kl}^{e}=i\frac{\delta\tilde{\Sigma}_{ik}}{\delta\tilde{G}_{lj}}. \tag{24}\]
For the particle-particle and hole-hole effective interaction we have
\[i\frac{\delta\tilde{\Sigma}_{ac}}{\delta\tilde{G}_{db}}=\sum_{i<j}\left\langle ij||cd\right\rangle t_{ij}^{ab}\ \ ;\ \ \ i\frac{\delta\tilde{\Sigma}_{ik}}{\delta\tilde{G}_{lj}}=\sum_{a<b}\left\langle ij||ab\right\rangle t_{kl}^{ab}. \tag{25}\]
These are exactly the relationships found when solving the particle-particle RPA problem [26] and correspond to the projection of the \(T_{2}\) similarity transformed interaction in the minimal basis of 2p or 2h determinants [54]. The equations derived here demonstrate the equivalence between the BSE, ground state CC and EOM-CC theory in a minimal basis of determinants. It is now clear why only the \(T_{2}\) transformed Hamiltonian has appeared in previous work on connecting coupled-cluster to RPA [22; 25].
We see that the exact ground state correlation energy is obtained by taking the trace over the interaction kernels
\[\frac{1}{2}\operatorname{tr}\boldsymbol{\Sigma}=\frac{1}{4} \operatorname{tr}\boldsymbol{\Xi}^{e}=\frac{1}{4}\sum_{ijab}\left\langle ij \right||ab\rangle t_{ij}^{ab}=E_{c}^{\text{BCC}}. \tag{26}\]
This is similar to the elegant proof of the relationship between the RPA and CCD ground state correlation energies [25]. However, here we have shown that this relationship holds more generally. Importantly, it is the CC ansatz which provides us with a master equation for the doubles amplitudes and therefore the _exact_ ground state correlation energy.
The relationship to EOM-CC theory can now be made explicit (Figure 2). In regular IP-EOM-CC, the similarity transformed Hamiltonian is diagonalized in the basis of 1h and 2h1p determinants. By neglecting the off-diagonal couplings between the 1h and 2h1p sectors in the IP-EOM-CC approach, we are left with exactly the resulting expressions obtained for the self-energy and its higher-order functional derivatives. However, in our approach, formally the doubles amplitudes are solved for exactly. Similarly, we see the equivalency for neutral excitations from EE-EOM-CC theory. By neglecting the off-diagonal couplings between the singly (1p1h) and doubly (2p2h) excited determinants, we are again left with exactly the expressions obtained from the CC-BSE and third-order functional derivative of the self-energy. The higher-order relaxation effects encoded in the off-diagonal elements of the EOM-CC supermatrices are not captured in this framework.
In summary, we have introduced a natural definition of the CC self-energy that unifies elements of CC theory with the Green's function formalism. By appealing to the CC similarity transformed Hamiltonian, we present a closed form functional expression for the self-energy which is systematically improvable via the CC amplitude equations. This allows us to derive a correlated extension of Koopmans' theorem and correlated exciton states from the formally exact ground state energy. We have discovered that the EOM-CC treatment naturally emerges as a consequence of our formalism, providing us with a simple way to improve over \(GW\)-BSE and RPA for ground state and excitation energies. The connections uncovered in this work directly connect IP- and EE-EOM-CC theory by identifying the relationship between the single-particle self-energy and Bethe-Salpeter kernel. We hope that our contribution will stimulate renewed work on novel electronic structure theories for correlated excited state phenomena.
Figure 3: Relationship between the coupled-cluster self-energy, RPA, \(GW\)-BSE and CC-BSE. |
2305.00597 | Incremental procedural and sensorimotor learning in cognitive humanoid
robots | The ability to automatically learn movements and behaviors of increasing
complexity is a long-term goal in autonomous systems. Indeed, this is a very
complex problem that involves understanding how knowledge is acquired and
reused by humans as well as proposing mechanisms that allow artificial agents
to reuse previous knowledge. Inspired by Jean Piaget's theory's first three
sensorimotor substages, this work presents a cognitive agent based on CONAIM
(Conscious Attention-Based Integrated Model) that can learn procedures
incrementally. Throughout the paper, we show the cognitive functions required
in each substage and how adding new functions helps address tasks previously
unsolved by the agent. Experiments were conducted with a humanoid robot in a
simulated environment modeled with the Cognitive Systems Toolkit (CST)
performing an object tracking task. The system is modeled using a single
procedural learning mechanism based on Reinforcement Learning. The increasing
agent's cognitive complexity is managed by adding new terms to the reward
function for each learning phase. Results show that this approach is capable of
solving complex tasks incrementally. | Leonardo de Lellis Rossi, Leticia Mara Berto, Eric Rohmer, Paula Paro Costa, Ricardo Ribeiro Gudwin, Esther Luna Colombini, Alexandre da Silva Simoes | 2023-04-30T22:51:31Z | http://arxiv.org/abs/2305.00597v1 | # Incremental procedural and sensorimotor learning in cognitive humanoid robots
###### Abstract
The ability to automatically learn movements and behaviors of increasing complexity is a long-term goal in autonomous systems. Indeed, this is a very complex problem that involves understanding how knowledge is acquired and reused by humans as well as proposing mechanisms that allow artificial agents to reuse previous knowledge. Inspired by Jean Piaget's theory's first three sensorimotor substages, this work presents a cognitive agent based on CONAIM (_Conscious Attention-Based Integrated Model_) that can learn procedures incrementally. Throughout the paper, we show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent. Experiments were conducted with a humanoid robot in a simulated environment modeled with the Cognitive Systems Toolkit (CST) performing an object tracking task. The system is modeled using a single procedural learning mechanism based on Reinforcement Learning. The increasing agent's cognitive complexity is managed by adding new terms to the reward function for each learning phase. Results show that this approach is capable of solving complex tasks incrementally.
Cognitive Robotics, Cognitive Architectures, Reinforcement Learning, Incremental Learning, Developmental Robotics.
## I Introduction
Advancements in artificial intelligence and robotics have increased the interest in introducing robots into daily activities that involve interaction with other agents, both robots and humans. These robots should operate autonomously in complex, partially unknown, unpredictable, and unstructured scenarios, which makes pre-programming impossible and demands far greater task-performing capabilities from the robots. This challenge raises questions such as how robots can acquire new knowledge and skills through interactions with the world, giving rise to the research area of Cognitive Robotics. Cognitive Robotics is intrinsically related to Cognitive Architectures (CA), comprehensive computational models that provide theoretical frameworks for modeling the cognitive processes underlying complex behavior.
Cognitive architectures are systems that can reason in different domains, develop different views, adapt to new situations, and reflect on themselves [1, 2]. They are general control systems inspired by scientific theories developed to explain cognition in humans and other animals, comprising modules responsible for implementing different cognitive abilities, such as perception, attention, memory, reasoning, and learning.
Inspired by how humans build knowledge through interactions with the world, cognitive architecture researchers seek to reproduce this behavior with artificial creatures [3]. However, the development of cognitive skills in machines requires the coordination of complex mechanisms that depend on each other. According to Piaget [4], the process of developing these skills is incremental and evolutionary.
In this work, a cognitive agent based on the CONAIM model (_Conscious Attention-Based Integrated Model_) [2] was proposed and implemented with the _Cognitive Systems Toolkit_[3]. A humanoid robot was designed to incrementally learn procedures to perform object tracking experiments inspired by the first three sensorimotor substages of Jean Piaget's Theory [4]. Throughout the work, we present the cognitive functions necessary to form circular reactions in each substage using a Reinforcement Learning (RL) [5] environment and how new functions can be added to the reward function allowing the agent to solve complex tasks, previously unresolved.
As the main contributions of this work, we can list the following:
1. The proposition of a cognitive architecture based on CONAIM with attention, memories, and learning modules focused on sensorimotor and procedural learning;
2. The design and implementation of CONAIM's top-down pathway in CST that can be incorporated into any agent implemented with CST;
3. The design and implementation of a single procedural learning mechanism in CST that can incrementally learn and reuse schemas for the first three sensorimotor substages of Piaget's Theory;
4. The modeling of a set of environments for sensorimotor experiments for the movements learning in humanoid robots;
5. The implementation and evaluation of sensorimotor experiments for object tracking in the first three sensorimotor substages of Piaget's Theory as proposed by [6].
The code used to implement the architecture is available at: [https://github.com/CST-Group/cst](https://github.com/CST-Group/cst).
## II Cognitive Architectures
Cognitive architectures are systems that can reason in different domains, develop different views, adapt to new situations, and reflect on themselves [7]. They are general control systems inspired by the cognition of humans and other animals, comprising modules responsible for implementing different cognitive abilities, such as perception, attention, memory, reasoning, and learning [3].
A cognitive architecture plays the role of cognition in computational modeling, making explicit the set of processes and assumptions on which this cognitive model is based [8]. It consists of processing units that represent, extract, select and combine knowledge and memories to produce behavior [9, 10, 11].
Next, the main aspects of the reference cognitive architecture used in this work and the computational toolkit used to program the cognitive agent are described. Both were proposed in previous work by the working group.
### _CONAIM_
The CONAIM model (_Conscious Attention-Based Integrated Model_) [12, 13, 2] is a formal attention-based model for machine consciousness. CONAIM incorporates several relevant aspects for a cognitive agent (memories, body schema, motivation, attention, among others) and is capable of dealing with multiple sensory systems, multiple processes of feature extraction, decision making, and learning [13]. The model provides a consciousness-based agent that performs calculations on attention-directed schemas, significantly reducing the space of the model's input dimensions. In its cognitive cycle, **top-down** and **bottom-up** mechanisms are used. During the **bottom-up** cycle, the sensors provide external data. The data is stored in the sensory memory to form feature, attentional and salience maps [13]. The complete modeling is detailed in [13]. In the **top-down** cycle, the attentional modulation of the system will depend on the global state of attention (for example, the robot's battery level). It will also depend on the agent's objective, and the attentional dynamics of the current state [14].
### _CST_
The _Cognitive Systems Toolkit_[3] is an open-source Java-based [15] toolkit for building cognitive architectures. The core of CST consists of a set of basic concepts that can be generalized within any cognitive architecture built with it. CST tools allow the creation of multi-agent systems running asynchronously and in parallel. The CST architecture is **codelet** oriented. Codelets are small pieces of code, implemented as asynchronous functions, that run in parallel and perform simple, well-defined tasks [3]. A **memory object** (MO) is a signal or representation used, with other MOs, by codelets to store and access data [3].
## III Developmental Robotics
The development of artificial agents with autonomy, adaptive behavior and incremental learning capabilities are research goals in Cognitive Robotics and Developmental Robotics (DevRobotics) [16, 17]. The area emerged due to the need for robots to perform tasks that require comparable levels of human intelligence in complex and unpredictable environments involving adaptation and evolution [18]. The models and experiments in the area are inspired by the principles and mechanisms of development observed in early life, involving robots performing the same cognitive abilities as children, as in the experiments proposed by Jean Piaget.
### _Piaget's Theory_
A relevant concept in Piaget's theory [4] are the **schemas**, which represent networks of mental structures that help remember specific concepts and understand the environment. When simple mental processes become more sophisticated, new schemas are developed, and behavior becomes more complex and suited to the environment [19].
In mental development, according to Piaget, **adaptation** - or learning - is the tendency to adjust mental processes to the environment by changing cognitive structures [4]. The adaptation process involves balancing the processes of assimilation and accommodation. Assimilation and Accommodation are inseparable, complementary and simultaneous processes [4].
**Assimilation** is the creation of new schemas following the same cycle or sequence of existing schemas for interpreting experiences and making decisions. **Accommodation** is the complementary process that involves altering existing schemas as a result of new information acquired through assimilation. Assimilation can originate **circular reactions**, repetitions of cycles acquired or in the process of acquisition [4, 20]. The circular reaction results from the assimilation of an interesting result unknown to the subject, which was produced by the rediscovery or repetition of the action. Circular reactions can be primary, secondary, or tertiary:
* **Primary Circular Reactions** are behaviors derived from reflexes, activities of the body itself that form new schemes through the coordination of the senses;
* **Secondary Circular Reactions** are derived from intentional behaviors that direct interest to external outcomes rather than the baby's body;
* **Tertiary Circular Reactions** are the subject's effort to seek new experiences.
### _Sensorimotor Experiments for Incremental Learning_
Experiments with cognitive architectures in the field of DevRobotics are heavily based on theories of childhood development. However, a common point in these experiments is the lack of standardization to conduct and evaluate agent development. To help resolve these issues, we proposed in previous work [6] a set of incremental experiments for the robotic scenario according to Piaget's sensorimotor stages of development, along with the expected results. These experiments are based on Piaget's studies [4] on the sensorimotor period, on the Bayley Child Development Scale [21], in different scenarios described in the literature for the assessment of learning development in infants [22, 23, 24], and in the parameters and levels of ConsScale [25]. The computational
scenarios focused on incorporating human behaviors in robots and evaluating their cognitive development. Experiments are classified according to the type of skill to be learned by the agent in many scenarios.
## IV Proposed Approach
In the current work, we addressed a subset of the theoretical scenarios proposed by [6], focusing on building an agent to perform exclusively those experiments related to the **object tracking** task, as shown in Table I. In those experiments, we expect a robot to incrementally learn the skills of tracking objects in a scene using RGB-D sensors. The goal is to investigate incremental computational processes that can allow robots to learn intentional sensorimotor schemes from an initially unintentional perspective, that is, computational processes able to model circular reactions using procedural memory elements.
The experiments proposed for object tracking describe what capabilities are expected from the agent in this task at each developmental stage and what it should not accomplish. These experiments are designed for scenarios with increasing complexity. It also defines which sensors should be employed to achieve such capabilities and any sensor limitations. For example, it considers the child's visual acuity development when vision is employed. Objects are positioned at different distances from the robot but are perceived according to the visual acuity compatible with the presented by a baby at the specified sensorimotor substage. We can expect a less precise associated behavior when considering a less accurate sensor, as discussed later in the experiments. The agent's visual acuity and its distance to the objects are variable in the experiments. We assess the development level of the robot by verifying if it can achieve the expected result, as described in Table I.
## V Methodology
### _Robots and Environment_
The humanoid robot Marta was adopted in our experiments. Marta is a teen-size, 1.1 m tall female robot designed and built by our workgroup. Marta has 25 degrees of freedom, and its head - particularly relevant in the present work - can perform _pitch_ and _yaw_ movements. The robot was equipped with an RGB-D camera on its head, inspecting the world through four distinct channels: color (R, G and B) and distance (D). The robot was controlled by a cognitive system detailed in the next sections. Several simulated scenes were also created in CoppeliaSim for the experiments. In the scenes, Marta is sitting in a small space with a wide view of its surroundings. An arena was delimited outside this first space, and colored blocks (blue and green) were randomly distributed. A second robot, a red Pioneer P3DX, randomly navigates the arena as a mobile distractor. This robot was modeled with reactive behaviors using the Braitenberg Algorithm [26]. Both robots and the environment are shown in Figure 1.
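A minimal sketch of a Braitenberg-style reactive controller of the kind used for the distractor is shown below; the number of sonars, the weight vectors and the speeds are illustrative assumptions rather than the values used in the simulation.

```python
import numpy as np

BASE_SPEED = 2.0
W_LEFT = np.array([-0.2, -0.4, -0.6, -0.8, -1.0, -1.2, -1.4, -1.6])   # strongest for right-side sonars
W_RIGHT = W_LEFT[::-1]                                                 # mirror image for the right wheel
MAX_RANGE, SAFE_RANGE = 1.0, 0.5                                       # metres

def braitenberg_step(front_sonar_m):
    """Map eight front sonar distances (left to right, in metres) to wheel speeds."""
    d = np.clip(front_sonar_m, 0.0, MAX_RANGE)
    # Activation grows from 0 (far) to 1 (touching) once inside the safe range.
    activation = np.where(d < SAFE_RANGE, 1.0 - d / SAFE_RANGE, 0.0)
    v_left = BASE_SPEED + float(W_LEFT @ activation)
    v_right = BASE_SPEED + float(W_RIGHT @ activation)
    return v_left, v_right

# Example: an obstacle near the right-most sonars slows the left wheel the most,
# so the robot veers left, away from the obstacle.
v_l, v_r = braitenberg_step(np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.9, 0.3, 0.2]))
```

Because each side's sensors inhibit the opposite wheel, any detected obstacle always steers the robot toward the free side, producing the wandering behavior expected of the distractor.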
### _Robot control overview_
Table II summarizes the key aspects of the humanoid robot control strategies adopted in the proposed approach, focusing on Piaget's schemas created or utilized along developmental sensorimotor stages. It emphasizes the cognitive system functions present, the configuration of the robot's sensors and actuators, and the Reinforcement Learning reward strategy adopted. We will address these aspects in the following sections.
### _Cognitive model_
The cognitive system of the humanoid Marta was modeled according to CONAIM [2]. A schematic of the adopted cognitive model is shown in Figure 2. The first level of the architecture is the **attentional system**[14], responsible for collecting data from the environment, compressing this information and selecting the most relevant points in the scene. The system's inputs receive multiple sensory information from the four camera channels (R, G, B, D) with a previously configured resolution. These observations generate four distinct _bottom-up_ feature maps (\(\mathcal{F}_{R}\), \(\mathcal{F}_{G}\), \(\mathcal{F}_{B}\), \(\mathcal{F}_{D}\)), which carry information about the most discrepant signals in each channel. Three _top-down_ feature maps (\(\mathcal{F}_{color}\), \(\mathcal{F}_{dist}\), \(\mathcal{F}_{reg}\)) were also adopted, which can emphasize particular aspects of color, distance and region of the input data according to the agent's goals (if applicable). All feature maps are weighted and combined into a single _Combined Feature Map_ (\(\mathcal{C}\)) that carries information about the most relevant stimuli considering all input data. The _Attentional Map_ (\(\mathcal{M}\)), which carries the information about the attentional focus at time \(t-1\) modulated by inhibition of return (IOR) effects over previously selected points, is combined with \(\mathcal{C}\) and produces the _Salience Map_ (\(\mathcal{L}\)), which contains the most relevant points of the visual field at the present time \(t\).
As a result, all exogenous stimuli will compete against each other to be the _winner_ of this _bottom-up_ competitive process, i.e., the most relevant point of the scene. Besides the _bottom-up_ process, any particular feature can also be emphasized via an endogenous _top-down_ process. Via the top-down pathway, a specific scene region or a desired color can receive the attentional focus depending on the agent's goal.
The second level of the architecture is the **cognitive system**, responsible for modulating the relationship between the robot and the environment, as well as for the cognitive evolution of the agent. In the present experiments, we considered only some modules of the CONAIM [2] cognitive system. These modules were activated incrementally during successive experiments. Each procedure/schema \(m_{p}\in M_{P}\) represents learned knowledge stored in procedural memory \(\mathbb{M}_{\mathbb{P}}\). Initially, the working memory (\(\mathbb{M}_{\mathbb{W}}\)) receives the salience map (\(\mathcal{L}\)) - used as a state in reinforcement learning - emerging from the attentional system. As the agent has no knowledge at the beginning, a new procedure \(m_{p}\) is created in \(\mathbb{M}_{\mathbb{P}}\), and the cognitive agent can gradually learn something about it. If the agent has some prior knowledge stored in \(\mathbb{M}_{\mathbb{P}}\) that fits the current state, a _recall_ of procedures (\(\mathbb{R}_{\mathbb{P}}\)) takes place. The Decision Maker (\(\mathbb{D}\)) will consider this knowledge. In some experiments, a set of motivations \(mv_{i}\in\mathbb{M}_{\mathbb{V}}\) was also modeled to explore the use of new actions in some states. The volition \(\mathbb{V}\mathbb{O}\) is the function responsible for transforming the agent's motivations into tasks that the decision process will also consider. A procedural learning function (\(\mathbb{L}_{\mathbb{P}}\)) is responsible for creating or updating the content of \(m_{p}\in M_{P}\), in this case acting respectively in an analogous way to assimilation and accommodation in Piaget's theory. The cognitive model was fully implemented in _Java_
Fig. 1: Simulation environment. From Left to Right. a) Marta robot equipped with an RGB-D camera; b) Environment with distributed colored blocks and a Pioneer P3DX robot acting as a distractor. Marta’s view of the scene is shown at left. (c) Agent’s degrees of freedom. Motors used in red. (d) Division of the agent’s visual space for the virtual actuator (named as “eye”, “look” or “fovea”).
using CST [3]. Figure 3 shows the implementation scheme of the CONAIM+CST architecture for the proposed incremental learning.
### _Reinforcement Learning (RL) in substages_
Due to the trial and error nature of learning at early child developmental stages [27], we adopted RL as the primary paradigm to learn state-action pairs, that is, the agent's procedures in the procedural memory. This section details (1) states, (2) actions, and (3) learning in this approach.
#### Iv-D1 States
Before we can compute the agent State (\(\mathbb{S}\)), the input of our RL algorithm, we must go back to the sensor data observation (\(\mathcal{O}\)). Our approach gradually increases the robot's visual acuity, refining the agent's perception of the world. Three different image resolutions were adopted among the experiments: 64x64 pixels (1st Substage), 128x128 pixels (2nd Substage), and 256x256 pixels (3rd Substage). The _bottom-up_ maps of the RGB-D channels (\(\mathcal{F}_{\mathcal{R}}\), \(\mathcal{F}_{\mathcal{G}}\), \(\mathcal{F}_{\mathcal{B}}\) and \(\mathcal{F}_{\mathcal{D}}\)) were computed using an _average pool_ over the observation of each channel at time \(t\), followed by the difference between each region mean and the image mean. Since the resolution of the image changes among the experiments, we computed the necessary kernel size and stride to reduce each feature map to a final size of 16x16. In other words, the most discrepant elements of each map at each time \(t\) are highlighted. _Top-down Feature Maps_ (\(\mathcal{F}_{color}\), \(\mathcal{F}_{dist}\) and \(\mathcal{F}_{reg}\)) allow the agent to target its attention to desired elements deliberately. We compare each pixel value to a particular (color, distance, or spatial region) goal to build these maps. The closer these elements are to the target values according to predefined percentage ranges, the higher the map activation in that region. Our experiments adopted 20%, 40%, 60%, and 80% proximity ranges, respectively. The corresponding attentional values are \(1\), \(0.75\), \(0.5\), and \(0.25\). Particularly, in the _regions top-down Feature Map_ (\(\mathcal{F}_{reg}\)), the visual space was divided into \(5\) distinct regions, as shown in Figure 1 (d). These regions define particular regions of interest for the agent. The _Combined Feature Map_ (\(\mathcal{C}\)) computes an element-wise weighted mean of the \(i\) enabled _Feature Maps_. An element-wise multiplication of the _Attentional Map_ (\(\mathcal{M}\)) and (\(\mathcal{C}\)) results in the _Salience Map_ (\(\mathcal{L}\)). Finally, we compute the State (\(\mathbb{S}\)) vector that will be used as input for the learning algorithm. \(\mathbb{S}\) is computed in the _Working Memory_ (\(\mathbb{M}_{\mathbb{W}}\)) using a _MaxPool_ operator with a 4x4 kernel and a stride of 4 over the Salience Map (\(\mathcal{L}\)), generating a 4x4 matrix with a 2-level discretization per element obtained with a threshold. This process results in a 16-element binary state vector and a state space of 65,536 (\(2^{16}\)) possible states.
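The sketch below illustrates this pipeline from a raw observation to a discrete state; the equal channel weights, the absolute-value discrepancy, the binarization threshold and the omission of the _top-down_ maps are simplifying assumptions made only for illustration.

```python
import numpy as np

def avg_pool(img, k):
    """Non-overlapping k x k average pooling (image side must be divisible by k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def bottom_up_map(channel, out_size=16):
    """Discrepancy map: |region mean - global mean|, reduced to out_size x out_size."""
    pooled = avg_pool(channel, channel.shape[0] // out_size)
    return np.abs(pooled - channel.mean())

def state_from_observation(rgbd, weights=(1, 1, 1, 1), attentional_map=None, thr=0.1):
    """Observation (H, W, 4) with R, G, B, D channels -> discrete state index."""
    feats = [bottom_up_map(rgbd[..., c]) for c in range(4)]            # F_R, F_G, F_B, F_D
    combined = np.average(np.stack(feats), axis=0, weights=weights)     # combined feature map C
    if attentional_map is None:
        attentional_map = np.ones_like(combined)                        # no IOR suppression
    salience = combined * attentional_map                                # salience map L
    coarse = salience.reshape(4, 4, 4, 4).max(axis=(1, 3))               # 4x4 MaxPool, stride 4
    bits = (coarse > thr).astype(int).ravel()                             # 16 binary elements
    return int("".join(map(str, bits)), 2)                                # one of 2**16 states

# A 64x64 observation corresponds to the visual acuity of the 1st substage.
state = state_from_observation(np.random.rand(64, 64, 4))
```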
#### Iv-D2 Actions
In analogy to the typical actions performed by children in each substage, the robot was allowed to perform 17 possible actions (\(\mathbb{A}\)), divided into three groups: motor, virtual and attentional actions. The Motor Actions (\(\mathbb{A}_{m}\)) are the actions on the physical actuators on the robot's neck, capable of turning the head motors in _pitch_ and _yaw_. Virtual Actions (\(\mathbb{A}_{v}\)) are internal to the agent and simulate eye movement. The virtual actuator selects a point in the visual space where the agent focuses (eye). Attention Actions (\(\mathbb{A}_{a}\)) are divided into two subgroups. The first group of actions involves directing the robot's head toward the most salient point in the image (winner). The second group refers to _top-down_ actions, which can emphasize specific colors, distances, or regions in the feature maps. Some of these possible actions have been enabled or disabled for each of the three sets of experiments. Figure 4 presents the actions available to each distinct substage and experiment.
#### IV-D3 Learning
The _Learning Process_ (\(\mathbb{L}_{\mathbb{P}}\)) has a central role in the current investigation. We selected a _Reinforcement Learning_ (RL) algorithm, Q-learning [5], for the cognitive agent's learning. The memory elements \(m_{p}\in\mathbb{M}_{\mathbb{P}}\) were modeled as _QTables_ capable of storing State-Action pairs (\(\mathbb{S}\rightarrow\mathbb{A}\)) for particular procedures. The states (\(\mathbb{S}\)) were modeled from the saliency maps (\(\mathcal{L}\)) that represent the environment. Reinforcement positively rewards the robot if there is space-time synchronization between the visual stimulus (the most salient point of the image) and the robot's current focus (motor or virtual). There is no reward if there is no such synchronization, and the reward is strongly negative if the robot loses balance. The learning mechanism remains unchanged across all experiments. Figure 5 details the reinforcement policy for some states of the agent.
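For concreteness, a minimal tabular Q-learning sketch of this process follows; it collapses the per-procedure QTables of \(\mathbb{M}_{\mathbb{P}}\) into a single dictionary, and the class, function, and argument names are illustrative assumptions.

```python
import random
from collections import defaultdict

class ProceduralMemory:
    """Tabular procedural memory: one row of Q values per visited state.
    (The paper stores per-procedure QTables; here they are collapsed into one dict.)"""
    def __init__(self, n_actions=17, alpha=0.9, gamma=0.99):
        # New states get a QTable row initialised with small random values.
        self.q = defaultdict(lambda: [random.uniform(-0.01, 0.01) for _ in range(n_actions)])
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next):
        """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        td_target = r + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (td_target - self.q[s][a])

def reinforcement(synchronized, lost_balance):
    """Reward policy as described in the text: +1 when the focused actuator matches
    the most salient stimulus, -10 when the robot loses balance, 0 otherwise."""
    if lost_balance:
        return -10.0
    return 1.0 if synchronized else 0.0
```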
Fig. 2: Schematic model of the cognitive-attentive system adopted. a) Full view; b) Details of the Cognitive System in 1st Substage, with some of the modules (\(\mathsf{MO},\mathsf{VO},\mathsf{MV},\mathsf{G},\mathbf{t}\), _top-down_) disabled (painted in grey); c) Details of the Cognitive System in 2nd Substage and 3rd Substage.
Fig. 4: Possible actions for the cognitive robot in the experiments for the 1st, 2nd and 3rd Substages. **Motor actions (\(\mathbb{A}_{m}\))**: 1. No-action; 2-5. Move neck pitch/yaw actuators with low discretization; 6-7. Move neck pitch/yaw actuators with high discretization; **Virtual actions (\(\mathbb{A}_{v}\))**: 8-10. Move virtual actuators (eyes) to particular image zones; **Attentional actions (\(\mathbb{A}_{a}\))**: 11-14. Move neck pitch/yaw actuators towards the attentional stimulus; _Top-down attentional actions_: 15. Emphasize stimuli of a particular color; 16. Emphasize stimuli at a particular distance; 17. Emphasize stimuli in a particular region of the space.
Fig. 3: Implementation scheme of the CONAIM+CST architecture with the robot Marta that receives attentional stimuli _bottom-up_ and _top-down_.
### _Proposed Experiments_
Three sets of experiments (EXP-01 to EXP-03) were proposed, based on the scenarios for assessing the agent's ability to learn to track other agents or objects proposed in Section IV [6]. During training, for each episode, we initialize the agent in a fixed position in the environment, adding random noise to its actuators.
The episode ends when one of the following conditions is reached: _i)_ the agent reaches the maximum number of steps/actions; _ii)_ the robot falls over or exceeds the limits of its motorized actuators; or _iii)_ the robot has no saliences for several iterations. The following parameters were adopted:
* **Simulation parameters.** Maximum number of episodes: 200; Maximum number of steps: 500; Maximum number of iterations without saliences: 5;
* **Learning parameters.** An \(\epsilon\)-greedy policy was employed, with an exploration rate starting at 0.95 and linearly decaying to 0 in the last episode (see the sketch after this list). Learning rate \(\alpha\): 0.9; Discount factor \(\gamma\): 0.99;
* **Visual acuity / Vision sensor resolutions.** 64x64 for \(1^{st}\) substage; 128x128 for \(2^{nd}\) substage; 256x256 for \(3^{rd}\) substage;
* **Rewards.** +1 reward for each new data inserted in procedural memory; +1 for holding or directing an actuator (motor or virtual) to the attention winner; -10 if the agent falls or exceeds the limits of physical actuators, or if the agent has no saliences in its attentional cycle for several iterations; and, only for \(3^{rd}\) substage, +1 when the agent identifies regions in which a certain desired characteristic is highlighted according to the _top-down_ process.
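A minimal sketch of the \(\epsilon\)-greedy schedule implied by the learning parameters above; the linear-decay formula and the function names are assumptions consistent with the listed values, not details taken from the paper.

```python
import random

MAX_EPISODES = 200
EPS_START = 0.95   # exploration rate in the first episode

def epsilon(episode):
    """Linearly decay the exploration rate from 0.95 in episode 0 to 0 in the last episode."""
    return EPS_START * (1.0 - episode / (MAX_EPISODES - 1))

def select_action(q_values, episode):
    """Epsilon-greedy choice between a random action and the best action for the state."""
    if random.random() < epsilon(episode):
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```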
Table III details the experiments carried out, the cognitive system, available modules, functions and learning.
#### V-E1 1st Substage: Use of Reflexes
The dynamics of the 1st Substage experiments is presented schematically in Figure 6. This set of experiments investigates a computational process proposed to model **reflex reactions**. In this set, there is no intentionality or motivation.
#### V-E2 2nd Substage: Primary Circular Reactions
In this set of experiments, we investigated whether the reflex reactions initiated in the agent in the 1st substage can evolve into behaviors similar to the primary circular reactions proposed by Piaget. As in the previous experiment, only the _bottom-up_ stimuli were considered. **Motivation.** A motivation function (\(\mathsf{MO}\)), related to the agent's curiosity about the effects of its actions, is applied to encourage the agent to explore schemes that have not yet been explored in the current episode (a sketch of this idea follows below). All previously learned contents of \(\mathbb{M}_{\mathbb{P}}\) are preserved.
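A hypothetical reading of the motivation function \(\mathsf{MO}\) as a count-based curiosity bonus; the class, its interface, and the bonus value are illustrative assumptions about the behavior described above, not the paper's implementation.

```python
from collections import defaultdict

class CuriosityMotivation:
    """Favor actions that have not yet been tried for the current state in the
    current episode (a sketch of the curiosity-driven motivation described above)."""
    def __init__(self):
        self.tried = defaultdict(set)   # state -> actions already explored this episode

    def reset_episode(self):
        self.tried.clear()

    def bonus(self, state, action, amount=1.0):
        """Return a one-off exploration bonus the first time (state, action) is tried."""
        if action in self.tried[state]:
            return 0.0
        self.tried[state].add(action)
        return amount
```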
#### V-E3 3rd Substage: Secondary Circular Reactions
In this last set of experiments, we investigate the computational process by which behaviors associated with the secondary circular reactions proposed by Piaget can be observed. We verified whether the agent could intentionally select an action that would allow it to reach a goal. The _top-down_ attention mechanism is used in this phase. All previously learned contents of \(\mathbb{M}_{\mathbb{P}}\) are preserved.
## VI Results and Discussion
We carried out three experiment sets to train and validate the integrated architecture, corresponding to the first three substages of the sensorimotor period in Piaget's Theory [4]. At the end of each training episode, the reward obtained and the number of actions performed were reset, and the robot actuators returned to the starting position. The Pioneer P3DX robot was randomly positioned in the scene. Figure 7 shows the resulting reward and the number of actions performed per episode for each learning experiment. It can be noted from the top graphs that, as the agent reuses knowledge from previous substages, both the
Fig. 5: Reinforcement for the cognitive robot depends on the current state (\(s_{i}\)) and previous action. **Reinforcement for motor actions (Rma)**: The system is positively reinforced if the direction the robot’s head moved matches the emergence of a visual stimulus; The system receives no reinforcement if there is no such space-time synchronicity. The system is strongly negatively reinforced if the robot loses its balance; **Reinforcement for virtual actions (Rva)**: The system is positively reinforced if the direction in which the virtual actuator (eye) moved matches the emergence of a visual stimulus; The system receives no reinforcement if there is no such spatial-temporal synchronicity.
reward and the number of actions are greater for a more developed agent (\(3rd>2nd>1st\)). The bottom images depict the training results when we do not reuse knowledge from prior stages. We can note that, in these cases, either no learning occurs (the reward for the 2nd substage experiment does not increase over episodes) or it results in very low rewards when compared to the scenario where knowledge was reused (3rd substage - reward peak around 200 versus 500).
Fig. 6: System dynamics in 1st Substage. \(t_{1}\): The robot sensors sample the environment; \(t_{2}\): The attentional system generates the saliency map (\(\mathcal{L}_{1}\)); \(t_{3}\): The working memory (\(\mathbb{M}_{\mathbb{W}}\)) identifies this new state of the world (\(\mathcal{S}_{1}\)) based on (\(\mathcal{L}_{1}\)). The recall function (\(\mathbb{R}_{\mathbb{P}}\)) is called to look for any procedure in \(\mathbb{M}_{\mathbb{P}}\) that could be applicable to the current state \(\mathcal{S}_{1}\); \(t_{4}\): Such a procedure is not found in \(\mathbb{M}_{\mathbb{P}}\); \(t_{5}\): The decision maker (\(\mathbb{D}\)) is informed that the robot does not know what to do in the current state; \(t_{6}\): The decision maker (\(\mathbb{D}\)) decides to start a new QTable associated with this state (\(\mathcal{S}_{1}\)) in \(\mathbb{M}_{\mathbb{P}}\); \(t_{7}\): The new QTable is created with random values, enabling the cognitive robot to learn more about this state; \(t_{8}\): The recall function \(\mathbb{R}_{\mathbb{P}}\) returns the possible actions associated with this state and their \(\mathbb{Q}\) values; \(t_{9}\): The current state \(\mathcal{S}_{1}\) and possible actions are sent to the decision maker (\(\mathbb{D}\)); \(t_{10}\): The decision maker (\(\mathbb{D}\)) chooses the next action between a random action and the best action for the current state (\(\mathcal{Q}_{max}\)) according to an \(\epsilon\)-greedy policy. In this example, the action to move the robot's head to the right (\(\mathbb{A}_{2}\)) was selected; \(t_{11}\): A new salience map (\(\mathcal{L}_{2}\)) is generated by the attentional system; \(t_{12}\): The working memory (\(\mathbb{M}_{\mathbb{W}}\)) identifies the new state (\(\mathcal{S}_{2}\)) based on (\(\mathcal{L}_{2}\)); \(t_{13}\): The decision maker (\(\mathbb{D}\)) evaluates the current state (\(\mathcal{S}_{2}\)) and the previous action performed (\(\mathbb{A}_{2}\)). Since there is synchronicity between the resulting position of the robot’s head and the external stimulus, a positive reinforcement \(R_{\mathcal{S}_{1}\mathbb{A}_{2}}\) increases the \(\mathbb{Q}\) value for the action \(\mathbb{A}_{2}\) in the state \(\mathcal{S}_{1}\); \(t_{14}\): The \(\mathbb{Q}\) value is sent to the QTable; \(t_{15}\): The QTable is updated; \(t_{16}\): A new recall \(\mathbb{R}_{\mathbb{P}}\) looks for the possible actions for the current state \(\mathcal{S}_{2}\); \(t_{17}\): The possible actions are found in \(\mathbb{M}_{\mathbb{P}}\); \(t_{18}\): The decision maker (\(\mathbb{D}\)) is informed about the current state \(\mathcal{S}_{2}\) and the possible actions; \(t_{19}\): The decision maker (\(\mathbb{D}\)) decides on the action \(\mathbb{A}_{4}\) (move the head downward).
### _1st Substage: Use of Reflexes._
This experiment demonstrates the attentional selection process for an agent executing only the _bottom-up_ attentional course. The results obtained for Procedural Learning (training) in this experiment are shown in Figure 8. In the first episodes, the agent explores the limits of its actuators due to the high exploration rate, while refining the state-action pairs for the fovea-selection virtual actuator. The agent established its attentional focus on the Pioneer P3DX robot while it moved in regions closer to the humanoid. As the Pioneer P3DX robot moved away, the agent directed its attention to other nearby objects and its own body. The stimuli obtained while exploring the agent's body reinforced the reflexes used. The absence of a motivation system made the agent perform the reinforced actions more frequently, even when the stimuli that promoted this reinforcement were no longer present, and the actions that did not participate in these interactions less frequently.
**Learning Validation - 1st Substage:** For the validation of the agent in this substage, the QTable resulting from the last episode of Procedural Learning was used. To evaluate the learned policy, 100 test episodes of at most 500 actions each were executed for each experiment, without updating the learning parameters.
#### VI-A1 **Experiment A - 1st Substage - Object in fixed position and with primary color**
The humanoid Marta was positioned 80 cm from the Pioneer P3DX robot. The results obtained for this experiment are shown in Figure 9. The agent initially directed its attention to the Pioneer P3DX robot, which remained stationary during this experiment, as suggested by Berto (2020) [6]. However, the action of excitatory and inhibitory cycles promoted by the CONAIM attentional system directed the attentional focus to regions closer to the humanoid. The performance of the reinforced reflexes during the exploration of the agent's body in Procedural Learning resulted in the agent's actuators alternating between the regions closest to the humanoid and the Pioneer P3DX robot. As expected, the robot learned to respond to salient stimuli using reflexes.
#### VI-A2 **Experiment B - 1st Substage - Object moving slowly and of primary color**
Marta was positioned 80cm in front of
Fig. 7: Top: Resulting reward (left) per episode and number of actions (right) for each learning experiment for all substages when incremental learning is in course. Bottom: Resulting reward (left) per episode and number of actions (right) for each learning experiment for all substages with no incremental learning.
the Pioneer P3DX robot, which is controlled by a Braitenberg algorithm and moves at a constant speed of 0.1 m/s. The results of this experiment are shown in Figure 10. The agent directed its attention to the Pioneer P3DX robot while it remained in the regions closest to the agent (frontal region). However, as the Pioneer P3DX moved to the lateral areas, the performance of the reinforced reflexes during Procedural Learning again resulted in directing the attentional focus to the regions closer to the humanoid. Thus, Marta could not track the moving object outside its visual field, as expected for this developmental stage.
### _2nd Substage: Primary Circular Reactions._
In this experiment, Marta continues with only _bottom-up_ perception elements. With the implementation of a motivation model, the agent starts to explore possible actions that do not have defined schemes in the Procedural Memory \(\mathbb{M}_{\mathbb{P}}\). The reflex reactions developed in the 1st substage can generate primary circular reactions, stabilizing the learning of certain actions. The results obtained for Procedural Learning (training) in this experiment are shown in Figure 11. Using the QTable from the previous substage allows the agent to perform better in the object-tracking task. The Attentional System of the 2nd Substage, during Procedural Learning, allowed the agent to establish its attentional focus on the Pioneer P3DX robot while it moved in regions closer to the humanoid, as in the 1st substage. As the Pioneer P3DX robot withdrew, the agent returned to directing its attention to nearby objects and its own body. However, with the action of the motivation system, the agent was encouraged to explore all possible actions for each new scheme not found in the Procedural Memory. This behavior minimized the performance of the reinforced actions developed in the 1st substage, allowing the agent to acquire greater rewards and promoting the formation of primary circular reactions.
**Learning Validation - 2nd Substage**
#### VI-B1 **Experiment A - 2nd Substage - Moving object with primary color.**
The QTable resulting from the end of the last episode of Procedural Learning was used. Marta was positioned with the Pioneer P3DX robot out of its field of view, as illustrated in the scene in Figure 12 (a). The Pioneer P3DX robot is controlled by a Braitenberg algorithm and moves at a constant speed of 0.1 m/s. The agent directed its attention to the regions closest to its body when the Pioneer P3DX robot left its field of vision. The use of primary circular reactions resulted in more actions being performed than in the previous substage. The increased visual acuity and the refinement of actuator movement in this substage also gave the agent greater control over its actuators.
### _3rd Substage: Secondary Circular Reactions._
In this substage, the humanoid Marta has a cognitive-attentional algorithm that includes all the elements shown in Figure 2, with _bottom-up_ and _top-down_ perception elements, and models of intentionality and motivation. The agent can exploit the primary circular reactions developed in the previous steps and develop secondary circular reactions. The results obtained for Procedural Learning (training) in this experiment are shown in Figure 13. The Attentional System of the 3rd Substage, during Procedural Learning, allowed the agent to establish its focus of attention on the P3DX robot. By using attentional actions, the agent could follow the movement of the P3DX even when it was in the regions most distant from the humanoid and out of its field of vision.
**Learning Validation - 3rd Substage**
#### VI-C1 **Experiment A - 3rd Substage - Moving object and primary color.**
The QTable resulting from the last episode of Procedural Learning was used. We positioned Marta with the Pioneer P3DX robot out of its field of view. The Pioneer P3DX robot uses the Braitenberg algorithm and moves at a constant speed of 0.1 m/s. In this experiment, the agent maintained its focus on the Pioneer P3DX robot even when it was far away
Fig. 8: 1st Substage. Sensory data obtained in the 1st episode of Procedural Learning. Left to Right: (a) Overview of the scene in _CoppeliaSim_ (t = 1s); (b) Marta’s camera view (t = 1s); (c) Salience Map (t = 3s); (d) Winner of the attentional cycle (t = 3s); (e) Agents and objects’ positions in scene.
Fig. 9: 1st Substage. Sensory data obtained in validation Experiment A. Left to right: (a) Overview of the scene in _CoppeliaSim_ (t = 40s); (b) Marta’s camera view (t = 40s); (c) Salience Map (t = 43s); (d) Winner of the attentional cycle (t = 43s); (e) Agents and objects’ positions in scene.
because it had learned to track the moving robot, mostly due to its motivation and intention to follow the moving object. The increased visual acuity in this substage and the refinement of actuator movement gave the agent greater control over its actuators than in previous substages. The secondary circular reactions developed during this substage promoted higher performance than in previous substages, as can be seen in Figure 7.
## VII Conclusion
In this work, we proposed and implemented an incremental procedural learning mechanism to create and reuse previously learned schemas inspired by Piaget's sensorimotor development substages. To this end, we employed a simulated humanoid robot and static and moving objects. By building our cognitive agent, we investigated which modules in a cognitive architecture are needed to control a robot interacting with its environment while performing a set of sensorimotor experiments with increasing difficulty. We discussed the importance of motivation and attention to forming primary and secondary circular reactions from reflexes. This approach allowed for employing a single incremental mechanism that evolves over time. We showed that reusing previous knowledge is mandatory for the success of incremental learning. The experiments demonstrated the feasibility of using a cognitive-attentional architecture based on CONAIM and implemented with CST. Furthermore, we successfully implemented experiments corresponding to the first three substages proposed by Berto (2020) [6] for tracking objects. With these experiments, we could show which cognitive functions are required to achieve specific levels of development through object-tracking experiments.
Fig. 11: 2nd Substage. Sensory data obtained in the 1st episode of Procedural Learning. Left to right. (a) Overview of the scene in the simulator _CoppeliaSim_ (t = 40s); (b) Marta’s camera view (t = 1s); (c) Salience Map (t = 3s); (d) Winner of attentional cycle (t = 3s); (e) Agents and objects’ positions in scene.
Fig. 12: 2nd Substage. Sensory data obtained in Experiment A. Left to right. (a) Overview of the scene in the simulator _CoppeliaSim_ (t = 1s); (b) Marta’s camera view (t = 1s); (c) Salience Map (t = 3s); (d) Winner of attentional cycle (t = 3s); (e) Agents and objects’ positions in scene.
Fig. 13: 3rd Substage. Sensory data obtained in the 1st episode of Procedural Learning. Left to right. (a) Overview of the scene in the simulator _CoppeliaSim_ (t = 1s); (b) Marta’s camera view (t = 1s); (c) Salience Map (t = 3s); (d) Winner of attentional cycle (t = 3s); (e) Agents and objects’ positions in scene.
## VIII Acknowledgments
This work was developed within the scope of PPI-Softex with support from MCTI through the Technical Cooperation Term [01245.013778/2020-21].